Search Results

Search found 24132 results on 966 pages for 'non clustered index'.


  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week’s post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>.  Then in the following post we’ll discuss the ConcurrentDictionary<T> and ConcurrentBag<T>.  Finally, we shall close on the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. A brief history of collections In the beginning was the .NET 1.0 Framework.  And out of this framework emerged the System.Collections namespace, and it was good.  It contained all the basic things a growing programming language needs like the ArrayList and Hashtable collections.  The main problem, of course, with these original collections is that they held items of type object which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting. Then came .NET 2.0 and generics and our world changed forever!  With generics the C# language finally got an equivalent of the very powerful C++ templates.  As such, the System.Collections.Generic was born and we got type-safe versions of all are favorite collections.  The List<T> succeeded the ArrayList and the Dictionary<TKey,TValue> succeeded the Hashtable and so on.  The new versions of the library were not only safer because they checked types at compile-time, in many cases they were more performant as well.  So much so that it's Microsoft's recommendation that the System.Collections original collections only be used for backwards compatibility. So we as developers came to know and love the generic collections and took them into our hearts and embraced them.  The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons. Now, if you are only doing single-threaded development you may not care – after all, no locking is required.  Even if you do have multiple threads, if a collection is “load-once, read-many” you don’t need to do anything to protect that container from multi-threaded access, as illustrated below: 1: public static class OrderTypeTranslator 2: { 3: // because this dictionary is loaded once before it is ever accessed, we don't need to synchronize 4: // multi-threaded read access 5: private static readonly Dictionary<string, char> _translator = new Dictionary<string, char> 6: { 7: {"New", 'N'}, 8: {"Update", 'U'}, 9: {"Cancel", 'X'} 10: }; 11:  12: // the only public interface into the dictionary is for reading, so inherently thread-safe 13: public static char? Translate(string orderType) 14: { 15: char charValue; 16: if (_translator.TryGetValue(orderType, out charValue)) 17: { 18: return charValue; 19: } 20:  21: return null; 22: } 23: } Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner.  Looking at  today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds -- we've nearly reached the limits we can push through with today's technologies -- but more because we're adding more cores to the boxes.  
With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing to achieve higher application speeds. So let's look at how to use collections in a thread-safe manner. Using historical collections in a concurrent fashion The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe.  This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations.  Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object: 1: public class OrderAggregator 2: { 3: private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>(); 4: private static readonly object _orderLock = new object(); 5:  6: public void Add(string accountNumber, Order newOrder) 7: { 8: List<Order> ordersForAccount; 9:  10: // a complex operation like this should all be protected 11: lock (_orderLock) 12: { 13: if (!_orders.TryGetValue(accountNumber, out ordersForAccount)) 14: { 15: _orders.Add(accountNumber, ordersForAccount = new List<Order>()); 16: } 17:  18: ordersForAccount.Add(newOrder); 19: } 20: } 21: } Notice how we're performing several operations on the dictionary under one lock.  With the Synchronized() static methods of the early collections, you wouldn't be able to specify this level of locking (a more macro-level).  So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead so that they could provide synchronization as needed. The need for better concurrent access to collections Here's the problem: it's relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, and even harder to make perform efficiently!  For example, what if you have a Dictionary that has frequent reads but infrequent updates?  Do you want to lock down the entire Dictionary for every access?  This would be overkill and would prevent concurrent reads.  In such cases you could use something like a ReaderWriterLockSlim which allows for multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell).  This is all very complex stuff to consider. Fortunately, this is where the Concurrent Collections come in.  The Parallel Computing Platform team at Microsoft went to great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general case multi-threaded use. Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to mean that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application whether it be single-threaded or multi-threaded.
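To make the ReaderWriterLockSlim idea mentioned above concrete, here is a minimal sketch (not from the original article; the PriceCache class and its members are hypothetical) of a read-mostly dictionary guarded by a ReaderWriterLockSlim, so that many readers can proceed in parallel while a writer gets exclusive access:

    using System.Collections.Generic;
    using System.Threading;

    public class PriceCache
    {
        private readonly Dictionary<string, decimal> _prices = new Dictionary<string, decimal>();
        private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

        // Multiple threads may hold the read lock at the same time.
        public bool TryGetPrice(string symbol, out decimal price)
        {
            _lock.EnterReadLock();
            try { return _prices.TryGetValue(symbol, out price); }
            finally { _lock.ExitReadLock(); }
        }

        // A writer takes the lock exclusively, blocking new readers until it finishes.
        public void SetPrice(string symbol, decimal price)
        {
            _lock.EnterWriteLock();
            try { _prices[symbol] = price; }
            finally { _lock.ExitWriteLock(); }
        }
    }

Even this small amount of ceremony hints at why a ready-made set of concurrent collections is attractive.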
General points to consider with the concurrent collections The MSDN points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null.  Thus you should not attempt to use these properties for synchronization purposes. Note also that the concurrent collections may have different operations than the traditional data structures you may be used to.  You may ask why they did this, but it was done out of necessity to keep operations safe and atomic.  For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack's IsEmpty property and then do the Pop() another thread may have come in and made the stack empty!  This is why some of the traditional operations have been changed to make them safe for concurrent use. In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models.  I'll try to point these out as we talk about each collection so you can be aware of any potential performance impacts.  Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration).  Once again I'll highlight how thread-safe enumeration works for each collection. ConcurrentStack<T>: The thread-safe LIFO container The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container.  If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit. The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations.  This means that multi-threaded access to the stack requires no traditional locking and is very, very fast! For the most part, the ConcurrentStack<T> behaves like its Stack<T> counterpart with a few differences: Pop() was removed in favor of TryPop(), which returns true if an item existed and was popped and false if the stack was empty. PushRange() and TryPopRange() were added, allowing you to push multiple items and pop multiple items atomically. Count takes a snapshot of the stack and then counts the items, making it an O(n) operation; if you just want to check for an empty stack, call IsEmpty instead, which is O(1). ToArray() and GetEnumerator() both also take snapshots, which means that iteration over a stack will give you a static view at the time of the call and will not reflect updates. Pushing on a ConcurrentStack<T> works just like you'd expect except for the aforementioned PushRange() method that was added to allow you to push a range of items concurrently. 1: var stack = new ConcurrentStack<string>(); 2:  3: // adding to stack is much the same as before 4: stack.Push("First"); 5:  6: // but you can also push multiple items in one atomic operation (no interleaves) 7: stack.PushRange(new [] { "Second", "Third", "Fourth" }); For looking at the top item of the stack (without removing it) the Peek() method has been removed in favor of a TryPeek().
This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed.  Thus the TryPeek() was created to be an atomic check for empty, and then peek if not empty: 1: // to look at top item of stack without removing it, can use TryPeek. 2: // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does. 3: string item; 4: if (stack.TryPeek(out item)) 5: { 6: Console.WriteLine("Top item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Stack was empty."); 11: } Finally, to remove items from the stack, we have the TryPop() for single, and TryPopRange() for multiple items.  Just like the TryPeek(), these operations replace Pop() since we need to ensure atomically that the stack is non-empty before we pop from it: 1: // to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves) 2: if (stack.TryPop(out item)) 3: { 4: Console.WriteLine("Popped " + item); 5: } 6:  7: // TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned. 8: var poppedItems = new string[2]; 9: int numPopped = stack.TryPopRange(poppedItems); 10:  11: foreach (var theItem in poppedItems.Take(numPopped)) 12: { 13: Console.WriteLine("Popped " + theItem); 14: } Finally, note that as stated before, GetEnumerator() and ToArray() gets a snapshot of the data at the time of the call.  That means if you are enumerating the stack you will get a snapshot of the stack at the time of the call.  This is illustrated below: 1: var stack = new ConcurrentStack<string>(); 2:  3: // adding to stack is much the same as before 4: stack.Push("First"); 5:  6: var results = stack.GetEnumerator(); 7:  8: // but you can also push multiple items in one atomic operation (no interleaves) 9: stack.PushRange(new [] { "Second", "Third", "Fourth" }); 10:  11: while(results.MoveNext()) 12: { 13: Console.WriteLine("Stack only has: " + results.Current); 14: } The only item that will be printed out in the above code is "First" because the snapshot was taken before the other items were added. This may sound like an issue, but it’s really for safety and is more correct.  You don’t want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all.  In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it in the old collections would cause an exception. ConcurrentQueue<T>: The thread-safe FIFO container The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class.  The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays.  Once again, this allows us to do thread-safe operations without the need for heavy locks! The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from the non-concurrent counterpart.  Most notably: Dequeue() was removed in favor of TryDequeue(). Returns true if an item existed and was dequeued and false if empty. Count does not take a snapshot It subtracts the head and tail index to get the count.  This results overall in a O(1) complexity which is quite good.  It’s still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero. ToArray() and GetEnumerator() both take snapshots. 
This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates. The Enqueue() method on the ConcurrentQueue<T> works much the same as the generic Queue<T>: 1: var queue = new ConcurrentQueue<string>(); 2:  3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5: queue.Enqueue("Second"); 6: queue.Enqueue("Third"); For front item access, the TryPeek() method must be used to attempt to see the first item of the queue.  There is no Peek() method since, as you'll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty. 1: // to look at first item in queue without removing it, can use TryPeek. 2: // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does. 3: string item; 4: if (queue.TryPeek(out item)) 5: { 6: Console.WriteLine("First item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Queue was empty."); 11: } Then, to remove items you use TryDequeue().  Once again this is for the same reason we have TryPeek() and not Peek(): 1: // to remove items, use TryDequeue. If queue is empty returns false. 2: if (queue.TryDequeue(out item)) 3: { 4: Console.WriteLine("Dequeued first item " + item); 5: } Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator() which means that subsequent updates to the queue will not be seen when you iterate over the results.  Thus once again the code below will only show the first item, since the other items were added after the snapshot. 1: var queue = new ConcurrentQueue<string>(); 2:  3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5:  6: var iterator = queue.GetEnumerator(); 7:  8: queue.Enqueue("Second"); 9: queue.Enqueue("Third"); 10:  11: // only shows First 12: while (iterator.MoveNext()) 13: { 14: Console.WriteLine("Dequeued item " + iterator.Current); 15: } Using collections concurrently You'll notice in the examples above I stuck to using single-threaded examples so as to make them deterministic and the results obvious.  Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe and with no locking on your part required! For example, say you have an order processor that takes an IEnumerable<Order> and handles each order in a multi-threaded fashion, then groups the responses together in a concurrent collection for aggregation.  This can be done easily with the TPL's Parallel.ForEach(): 1: public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList) 2: { 3: var proxy = new OrderProxy(); 4: var results = new ConcurrentQueue<OrderResult>(); 5:  6: // notice that we can process all these in parallel and put the results 7: // into our concurrent collection without needing any external locking! 8: Parallel.ForEach(orderList, 9: order => 10: { 11: var result = proxy.PlaceOrder(order); 12:  13: results.Enqueue(result); 14: }); 15:  16: return results; 17: } Summary Obviously, if you do not need multi-threaded safety, you don't need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers and the amazing way they achieve thread-safe access in an efficient manner is wonderful to behold.
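As a companion to the ProcessOrders() example above, here is a minimal consumer-side sketch (not from the original post; the drain logic and names are assumptions, and OrderResult is borrowed from the article's example) showing TryDequeue() pulling results out of a shared ConcurrentQueue<OrderResult> while producers are still adding to it. For fully blocking consumption, the BlockingCollection<T> covered in the third post is the better fit.

    using System;
    using System.Collections.Concurrent;
    using System.Threading;
    using System.Threading.Tasks;

    public static class ResultDrainer
    {
        // Drains results until the producers report completion and the queue is empty.
        public static Task StartDraining(ConcurrentQueue<OrderResult> results, Func<bool> producersAreDone)
        {
            return Task.Factory.StartNew(() =>
            {
                OrderResult result;
                while (!producersAreDone() || !results.IsEmpty)
                {
                    if (results.TryDequeue(out result))
                    {
                        Console.WriteLine("Handled result: " + result);
                    }
                    else
                    {
                        Thread.Sleep(1); // nothing available yet, avoid spinning hard
                    }
                }
            });
        }
    }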
Stay tuned next week where we'll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated about how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in Windows Azure platform. Since it’s low-cost, and it provides IIS and IISNode components so that we can host our Node.js application though Git, FTP and WebMatrix without any configuration and component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on worker role. Below are some benefits of using worker role. - WAWS leverages IIS and IISNode to host Node.js application, which runs in x86 WOW mode. It reduces the performance comparing with x64 in some cases. - WACS worker role does not need IIS, hence there’s no restriction of IIS, such as 8000 concurrent requests limitation. - WACS provides more flexibility and controls to the developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides the service configuration features which can be changed when the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site while in WACS we can have up to 20 instances in a subscription. - Since when using WACS worker role we starts the node by ourselves in a process, we can control the input, output and error stream. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started by just having its execution file. This means in Windows Azure, we can have a worker role with the “node.exe” and the Node.js source files, then start it in Run method of the worker role entry class. Let’s create a new windows azure project in Visual Studio and add a new worker role. Since we need our worker role execute the “node.exe” with our application code we need to add the “node.exe” into our project. Right click on the worker role project and add an existing item. By default the Node.js will be installed in the “Program Files\nodejs” folder so we can navigate there and add the “node.exe”. Then we need to create the entry code of Node.js. In WAWS the entry file must be named “server.js”, which is because it’s hosted by IIS and IISNode and IISNode only accept “server.js”. But here as we control everything we can choose any files as the entry code. For example, I created a new JavaScript file named “index.js” in project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu “Add new item”. We have to create a text file, and then rename it to JavaScript extension. After we added these two files we should set their “Copy to Output Directory” property to “Copy Always”, or “Copy if Newer”. Otherwise they will not be involved in the package when deployed. Let’s paste a very simple Node.js code in the “index.js” as below. As you can see I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start “node.exe” with this file when our worker role was started. This can be done in its Run method. I found the Node.js and entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to be exited. 
If everything is OK once our web server was opened the process will be there listening for incoming requests, and should not be terminated. The code in worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the computer emulator UI the worker role started and it executed the Node.js, then Node.js windows appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to azure. But we need some additional steps. First, we need to create an input endpoint. By default there’s no endpoint defined in a worker role. So we will open the role property window in Visual Studio, create a new input TCP endpoint to the port we want our website to use. In this case I will use 80. Even though we created a web server we should add a TCP endpoint of the worker role, since Node.js always listen on TCP instead of HTTP. And then changed the “index.js”, let our web server listen on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in browser we can see our Node.js website was running on WACS worker role. We may encounter an error if we tried to run our Node.js website on 80 port at local emulator. This is because the compute emulator registered 80 and map the 80 endpoint to 81. But our Node.js cannot detect this operation. So when it tried to listen on 80 it will failed since 80 have been used.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install modules we need, and then just publish or upload all files to WAWS. But if we are using WACS worker role, we have to do some extra steps to make the modules work. Assuming that we plan to use “express” in our application. Firstly of all we should download and install this module through NPM command. But after the install finished, they are just in the disk but not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to azure. Hence we need to add them to the project. On solution explorer window click the “Show all files” button, select the “node_modules” folder and in the context menu select “Include In Project”. But that not enough. We also need to make all files in this module to “Copy always” or “Copy if newer”, so that they can be uploaded to azure with the “node.exe” and “index.js”. This is painful step since there might be many files in a module. 
So I created a small tool which can update a C# project file, make its all items as “Copy always”. The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when use this tool. I created only for demo so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, it will set all items as “Copy always”. Then reload this worker role project. Now let’s change the “index.js” to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let’s publish it and have a look in browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WACD) from Node.js as well on worker role hosting. Since we can control the version of Node.js, here we can use x64 version of “node-sqlserver” now. This is better than if we host Node.js on WAWS since it only support x86. Just install the “node-sqlserver” module from NPM, copy the “sqlserver.node” from “Build\Release” folder to “Lib” folder. Include them in worker role project and run my tool to make them to “Copy always”. Finally update the “index.js” to use WASD. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Publish to azure and now we can see our Node.js is working with WASD through x64 version “node-sqlserver”.   Summary In this post I demonstrated how to host our Node.js in Windows Azure Cloud Service worker role. By using worker role we can control the version of Node.js, as well as the entry code. And it’s possible to do some pre jobs before the Node.js application started. It also removed the IIS and IISNode limitation. I personally recommended to use worker role as our Node.js hosting. But there are some problem if you use the approach I mentioned here. The first one is, we need to set all JavaScript files and module files as “Copy always” or “Copy if newer” manually. The second one is, in this way we cannot retrieve the cloud service configuration information. 
For example, we defined the endpoint in the worker role properties, but we also hardcoded the listening port in Node.js. Ideally Node.js should be able to retrieve the endpoint itself, but I can tell you that won't work with the approach shown here. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.   Hope this helps, Shaun. All documents, related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Different Azure blob streams when using .Net client vs. REST interface

    - by knightpfhor
    I have encountered an unusual difference in the way that the .Net client for Azure and the direct REST API bring back streams of binary data. If I use the CloundBlob.DownloadToStream() vs. getting the response stream from the HTTP response, I get streams with the same length, but different content. Specifically the REST response seems to 0 out a series of bytes. I've discovered this issue because I'm trying to use the byte range feature for blobs which is currently not supported in the .Net client (if I'm wrong on this point and someone can point at where I can do this it might make the rest of this question irrelevant). If I upload a binary representation of the first 2k unicode characters with this code: Public Sub WriteFoo() Dim Blob As CloudBlob Dim Stream1 As MemoryStream Dim Container As CloudBlobContainer Dim Builder As StringBuilder Dim NextCharacter As String Dim Formatter As BinaryFormatter Container = CloudStorageAccount.DevelopmentStorageAccount.CreateCloudBlobClient.GetContainerReference("testcontainer") Container.CreateIfNotExist() Blob = Container.GetBlobReference("Foo") Stream1 = New MemoryStream() Builder = New Text.StringBuilder() For Index As Integer = 1 To 2000 Select Case Index Case Is <= 9 NextCharacter = ChrW(9) Case Is <= 31 NextCharacter = Environment.NewLine Case 127 NextCharacter = Environment.NewLine Case Else NextCharacter = ChrW(Index) End Select Builder.Append(NextCharacter) Next Formatter = New BinaryFormatter() Formatter.Serialize(Stream1, Builder.ToString()) Stream1.Position = 0 Blob.UploadFromStream(Stream1) End Sub Then try to access it with the following code: Public Sub ReadFoo() Dim Blob As CloudBlob Dim Request As System.Net.HttpWebRequest Dim Response As System.Net.WebResponse Dim ResponseSize As Integer Dim ResponseBuffer As Byte() Dim ResponseStream As Stream Dim Stream1 As MemoryStream Dim Stream2 As MemoryStream Dim Container As CloudBlobContainer Dim Byte1 As Integer Dim Byte2 As Integer Container = CloudStorageAccount.DevelopmentStorageAccount.CreateCloudBlobClient.GetContainerReference("testcontainer") Container.CreateIfNotExist() Blob = Container.GetBlobReference("Foo") Stream1 = New MemoryStream() Stream2 = New MemoryStream() Blob.DownloadToStream(Stream1) Request = DirectCast(System.Net.WebRequest.Create(Blob.Uri), System.Net.HttpWebRequest) Request.Headers.Add("x-ms-version", "2009-09-19") Request.Headers.Add("x-ms-range", String.Format("bytes={0}-{1}", 0, Integer.MaxValue)) Blob.Container.ServiceClient.Credentials.SignRequest(Request) Response = Request.GetResponse() ResponseStream = Response.GetResponseStream() ResponseSize = CInt(Response.ContentLength) ReDim ResponseBuffer(ResponseSize - 1) ResponseStream.Read(ResponseBuffer, 0, ResponseSize) Stream2.Write(ResponseBuffer, 0, ResponseSize) Stream1.Position = 0 Stream2.Position = 0 If Stream1.Length <> Stream2.Length Then System.Diagnostics.Debug.WriteLine(String.Format("Streams a different length. 1: {0}. 2: {1}", Stream1.Length, Stream2.Length)) Else While Stream1.Position < Stream1.Length Byte1 = Stream1.ReadByte() Byte2 = Stream2.ReadByte() If Byte1 <> Byte2 Then System.Diagnostics.Debug.WriteLine(String.Format("Streams differ at position {0}, 1: {1}. 2: {2}", Stream1.Position - 1, Byte1, Byte2)) End If End While End If End Sub Past all certain point all of the data in Stream2 (the data I've retrieved from the REST api) ends up being 0. To make matters even more confusing, when I reverse the order that I put the characters in the string e.g. 
For Index As Integer = 2000 To 1 rather than For Index As Integer = 1 To 2000, it all works OK. Any help is much appreciated. My computer is sick of me swearing at it.
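One possible explanation to check (an assumption on my part, not something confirmed in the question): Stream.Read() does not guarantee that a single call returns ContentLength bytes, so the lone ResponseStream.Read(ResponseBuffer, 0, ResponseSize) call can stop early and leave the tail of the buffer as zeros. A read loop, sketched here in C# rather than the question's VB.NET, rules that out:

    using System.IO;

    static byte[] ReadFully(Stream responseStream, int expectedSize)
    {
        var buffer = new byte[expectedSize];
        int totalRead = 0;
        // Keep reading until the buffer is full or the stream ends early.
        while (totalRead < expectedSize)
        {
            int read = responseStream.Read(buffer, totalRead, expectedSize - totalRead);
            if (read == 0)
            {
                break; // end of stream before expectedSize bytes arrived
            }
            totalRead += read;
        }
        return buffer;
    }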

    Read the article

  • PHP5.3 is not working with MySQL5.1 IIS7 Times out

    - by Thorn007
    I have set up PHP5.3, MySQL5.1, and IIS7 on Window 7 but php doesn't want to work with MySQL. I'm assuming it is a configuration error or an incomplete install on my part. MySQL5.1 is working PHP5.3 is working, phpinfo() shows info and that i have enabled MySQL IIS is setup and using fastCgiModule to run PHP IIS registers php.ini updates port 3306 is firewall free and open to the world php.ini is configured correctly I have added c:\php to the Windows systems PATH In the past I remember moving a file, libmysql.dll, to System32 but I doesn't look like that come with php5.3.1, as the driver comes built in now http://us3.php.net/manual/en/mysqlnd.install.php. (This has been giving me so much trouble I have been documenting my findings on my blog as http://inteldesigner.com/2010/code/having-problems-getting-php5-3-to-work-with-mysql5-1 ) NEED: I need to install PHP manually, don't want to use the quick installer or an older version I need to get PHP5.3 to work with MySQL5.1 so i can install Wordpress2.9 and Drupal7a Any links or suggestion would be great, I have already done everything on the iis web site, nothing is working. I'm guessing they have not updated for new software. BUGS/SOLUTION: The solution is here: http://bugs.php.net/bug.php?id=50172 thanks go to don.raman on the iis.net forums http://forums.iis.net/p/1164911/1933894.aspx SYMPTOMS: The php function mysql_connect() in conjunction with php5.3 locks up sever and returns error 500. (IPv6 is the problem see above link) TEST CODE: <?php $con = mysql_connect("localhost","root","***"); if (!$con) { die('Could not connect: ' . mysql_error()); } // some code mysql_close($con); ?> ERRORS: From Browser: HTTP Error 500.0 - Internal Server Error C:\php\php-cgi.exe - The FastCGI process exceeded configured activity timeout When i run php -f c:\public_html\index.php from the command line i got: PHP Warning: mysql_connect(): [2002] A connection attempt failed because the co nnected party did not (trying to connect via tcp://localhost:3306) in C:\public _html\index.php on line 10 Warning: mysql_connect(): [2002] A connection attempt failed because the connect ed party did not (trying to connect via tcp://localhost:3306) in C:\public_html \index.php on line 10 PHP Warning: mysql_connect(): A connection attempt failed because the connected party did not properly respond after a period of time, or established connectio n failed because connected host has failed to respond. in C:\public_html\index.php on line 10 Warning: mysql_connect(): A connection attempt failed because the connected part y did not properly respond after a period of time, or established connection fai led because connected host has failed to respond. in C:\public_html\index.php on line 10 Could not connect: A connection attempt failed because the connected party did n ot properly respond after a period of time, or established connection failed bec ause connected host has failed to respond. C:\Users\Kevin>

    Read the article

  • link_to passing paramater and display problem - tag feature - Ruby on Rails

    - by bgadoci
    I have gotten a great deal of help from KandadaBoggu on my last question and very very thankful for that. As we were getting buried in the comments I wanted to break this part out. I am attempting to create a tag feature on the rails blog I am developing. The relationship is Post has_many :tags and Tag belongs_to :post. Adding and deleting tags to posts are working great. In my /view/posts/index.html.erb I have a section called tags where I am successfully querying the Tags table, grouping them and displaying the count next to the tag_name (as a side note, I mistakenly called the column containing the tag name, 'tag_name' instead of just 'name' as I should have) . In addition the display of these groups are a link that is referencing the index method in the PostsController. That is where the problem is. When you navigate to /posts you get an error because there is no parameter being passed (without clicking the tag group link). I have the .empty? in there so not sure what is going wrong here. Here is the error and code: Error You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occurred while evaluating nil.empty? /views/posts/index.html.erb <% @tag_counts.each do |tag_name, tag_count| %> <tr> <td><%= link_to(tag_name, posts_path(:tag_name => tag_name)) %></td> <td>(<%=tag_count%>)</td> </tr> <% end %> PostsController def index @tag_counts = Tag.count(:group => :tag_name, :order => 'updated_at DESC', :limit => 10) @posts=Post.all(:joins => :tags,:conditions=>(params[:tag_name].empty? ? {}: { :tags => { :tag_name => params[:tag_name] }} ) ) respond_to do |format| format.html # index.html.erb format.xml { render :xml => @posts } format.json { render :json => @posts } format.atom end end

    Read the article

  • html widget communicating with server

    - by Nikita Rybak
    I'm making html widget for websites. Let's say, it will display current stock indexes. In short, arbitrary website owner takes code snippet from me and includes it on his webpage http://website.com/index.html. When arbitrary user opens http://website.com/index.html, my code sends request to my server (provider.com), which performs necessary operations and returns information to user's browser. When response has arrived, user will see relevant stock value on http://website.com/index.html. In index.html service could be called like this <script type="text/javascript" src="provider.com/service.js"> </script> <div id="target_area"></div> <script type="text/javascript"> service.show("target_area", options); </script> Now, the problem is in the same origin policy: I can't just send ajax request from website.com to provided.com and return html to embed in client's webpage. I see several solutions, which I list below, but none quite satisfy me. I wonder, if you could suggest something, especially if you had some relevant experience. 1) iframe, plain and simple. Disadvantage: must have fixed dimensions + stupid scroll bars appearing in some browsers. Can be fixed with javascript, but all this browser-specific tinkering doesn't sound good to me. 2) JSONP. Problem: can't return whole chunk of html, must return only data. Then, on browser side, I'll have to use javascript to embed data into html snippet placed statically in index.html. Doesn't sound nice, because data format is not very simple and may even change later. 3) Use hidden iframe to do ajax requests. A bit tricky, but sounds like a way to go. Well, that's my thoughts on the subject. Are there any better ways? BTW, I tried to check some existing widgets too, but didn't find much useful information. All domain names used in this text are fictional and any resemblance is purely coincidental :)

    Read the article

  • CSS: background image does not fill when scrolling

    - by rekindleMyLoveOf
    Hi, working on a very small site which loads in one go, so there is a div which holds all the background images, and on top of that (i.e. higher z-index) there is a content div which holds everything. I can switch backgrounds easily based on what content is selected. Unfortunately, I noticed if you launch in a small window so that scrollbars appear, if you scroll there is no background image in the 'revealed' portions of the page. :-( Page structure: <body> <div id="bg"> <div class="bgone"></div> <div class="bgtwo"></div> </div> <div id="container"> <!-- content panels here --> </div> </body> css: #bg { margin: 0px; position: absolute; top: 0px; left: 0px; width:100%; height: 1024px; z-index:1; } .bgone { margin: 0px; position: absolute; width:100%; height: 1024px; background-image:url(../images/one.jpg); background-position:top; background-repeat:repeat-x; z-index:2; } .bgtwo { margin: 0px; position: absolute; width:100%; height: 1024px; background-image:url(../images/two.jpg); background-position:top; background-repeat:repeat-x; z-index:3; } #container { position:relative; width:900px; padding:0px; margin:0px auto; height:600px; z-index:10; }

    Read the article

  • Handling file renames in git

    - by Greg K
    I'd read that when renaming files in git, you should commit any changes, perform your rename and then stage your renamed file. Git will recognise the file from the contents, rather than seeing it as a new untracked file, and keep the change history. However, doing just this tonight I ended up reverting to git mv. > $ git status # On branch master # Changes to be committed: # (use "git reset HEAD <file>..." to unstage) # # modified: index.html # Rename my stylesheet in Finder from iphone.css to mobile.css > $ git status # On branch master # Changes to be committed: # (use "git reset HEAD <file>..." to unstage) # # modified: index.html # # Changed but not updated: # (use "git add/rm <file>..." to update what will be committed) # (use "git checkout -- <file>..." to discard changes in working directory) # # deleted: css/iphone.css # # Untracked files: # (use "git add <file>..." to include in what will be committed) # # css/mobile.css So git now thinks I've deleted one CSS file, and added a new one. Not what I want, lets undo the rename and let git do the work. > $ git reset HEAD . Unstaged changes after reset: M css/iphone.css M index.html Back to where I began. > $ git status # On branch master # Changes to be committed: # (use "git reset HEAD <file>..." to unstage) # # modified: index.html # Lets use git mv instead. > $ git mv css/iphone.css css/mobile.css > $ git status # On branch master # Changes to be committed: # (use "git reset HEAD <file>..." to unstage) # # renamed: css/iphone.css -> css/mobile.css # # Changed but not updated: # (use "git add <file>..." to update what will be committed) # (use "git checkout -- <file>..." to discard changes in working directory) # # modified: index.html # Looks like we're good. So why didn't git recognise the rename the first time around when I used Finder?

    Read the article

  • Isn't INT more efficient than UNIQUEIDENTIFIER?

    - by ck
    I have a parent table and child table where the columns that join them together are the UNIQUEIDENTIFIER type. The child table has a clustered index on the column that joins it to the parent table (its PK, which is also clustered). I have created a copy of both of these tables but changed the relationship columns to be INTs instead, have rebuilt the indexes so that they are essentially the same structure and can be queried in the same way. When I query for a known 20 records from the parent table, pulling in all the related records from the child tables, I get identical query costs across both, i.e. 50/50 cost for the batches. If this is true, then my giant project to change all of the tables like this appears to be pointless, other than speeding up inserts. Can anyone provide any light on the situation?

    Read the article

  • Advice on logic circuits and serial communications

    - by Spencer Ruport
    As far as I understand the serial port so far, transferring data is done over pin 3. As shown here: There are two things that make me uncomfortable about this. The first is that it seems to imply that the two connected devices agree on a signal speed and the second is that even if they are configured to run at the same speed you run into possible synchronization issues... right? Such things can be handled I suppose but it seems like there must be a simpler method. What seems like a better approach to me would be to have one of the serial port pins send a pulse that indicates that the next bit is ready to be stored. So if we're hooking these pins up to a shift register we basically have: (some pulse pin)-clk, tx-d Is this a common practice? Is there some reason not to do this? EDIT Mike shouldn't have deleted his answer. This I2C (2 pin serial) approach seems fairly close to what I did. The serial port doesn't have a clock you're right nobugz but that's basically what I've done. See here: private void SendBytes(byte[] data) { int baudRate = 0; int byteToSend = 0; int bitToSend = 0; byte bitmask = 0; byte[] trigger = new byte[1]; trigger[0] = 0; SerialPort p; try { p = new SerialPort(cmbPorts.Text); } catch { return; } if (!int.TryParse(txtBaudRate.Text, out baudRate)) return; if (baudRate < 100) return; p.BaudRate = baudRate; for (int index = 0; index < data.Length * 8; index++) { byteToSend = (int)(index / 8); bitToSend = index - (byteToSend * 8); bitmask = (byte)System.Math.Pow(2, bitToSend); p.Open(); p.Parity = Parity.Space; p.RtsEnable = (byte)(data[byteToSend] & bitmask) > 0; s = p.BaseStream; s.WriteByte(trigger[0]); p.Close(); } } Before anyone tells me how ugly this is or how I'm destroying my transfer speeds my quick answer is I don't care about that. My point is this seems much much simpler than the method you described in your answer nobugz. And it wouldn't be as ugly if the .Net SerialPort class gave me more control over the pin signals. Are there other serial port APIs that do?
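    On the closing question about finer pin control: the stock System.IO.Ports.SerialPort does expose the individual modem-control lines, even though it has no raw clock pin. RtsEnable and DtrEnable drive output pins, CtsHolding and DsrHolding read input pins, and the PinChanged event reports transitions. A minimal sketch (the port name and the handler body are assumptions):

    using System;
    using System.IO.Ports;

    class PinDemo
    {
        static void Main()
        {
            using (var port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One))
            {
                port.PinChanged += (sender, e) =>
                {
                    // Fires when CTS, DSR, CD, ring or break state changes on the line.
                    Console.WriteLine("Pin changed: " + e.EventType);
                };

                port.Open();

                // Output pins we can drive directly.
                port.RtsEnable = true;   // assert RTS
                port.DtrEnable = true;   // assert DTR

                // Input pins we can poll.
                Console.WriteLine("CTS high: " + port.CtsHolding);
                Console.WriteLine("DSR high: " + port.DsrHolding);

                Console.ReadLine();
            }
        }
    }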

    Read the article

  • Error in creating template class

    - by Luciano
    I found this vector template class implementation, but it doesn't compile on XCode. Header file: // File: myvector.h #ifndef _myvector_h #define _myvector_h template <typename ElemType> class MyVector { public: MyVector(); ~MyVector(); int size(); void add(ElemType s); ElemType getAt(int index); private: ElemType *arr; int numUsed, numAllocated; void doubleCapacity(); }; #include "myvector.cpp" #endif Implementation file: // File: myvector.cpp #include <iostream> #include "myvector.h" template <typename ElemType> MyVector<ElemType>::MyVector() { arr = new ElemType[2]; numAllocated = 2; numUsed = 0; } template <typename ElemType> MyVector<ElemType>::~MyVector() { delete[] arr; } template <typename ElemType> int MyVector<ElemType>::size() { return numUsed; } template <typename ElemType> ElemType MyVector<ElemType>::getAt(int index) { if (index < 0 || index >= size()) { std::cerr << "Out of Bounds"; abort(); } return arr[index]; } template <typename ElemType> void MyVector<ElemType>::add(ElemType s) { if (numUsed == numAllocated) doubleCapacity(); arr[numUsed++] = s; } template <typename ElemType> void MyVector<ElemType>::doubleCapacity() { ElemType *bigger = new ElemType[numAllocated*2]; for (int i = 0; i < numUsed; i++) bigger[i] = arr[i]; delete[] arr; arr = bigger; numAllocated*= 2; } If I try to compile as is, I get the following error: "Redefinition of 'MyVector::MyVector()'" The same error is displayed for every member function (.cpp file). In order to fix this, I removed the '#include "myvector.h"' on the .cpp file, but now I get a new error: "Expected constructor, destructor, or type conversion before '<' token". A similar error is displayed for every member as well. Interestingly enough, if I move all the .cpp code to the header file, it compiles fine. Does that mean I can't implement template classes in separate files?

    Read the article

  • Heroku Push Problem part 2 - Postgresql - PGError Relations does not exist - Ruby on Rails

    - by bgadoci
    Ok so got through my last problem with the difference between Postgresql and SQLite and seems like Heroku is telling me I have another one. I am new to ruby and rails so a lot of this stuff I can't decipher at first. Looking for a little direction here. The error message and PostsController Index are below. I checked my routes.rb file and all seems well there but I could be missing something. I will post if you need. Processing PostsController#index (for 99.7.50.140 at 2010-04-23 15:19:22) [GET] ActiveRecord::StatementInvalid (PGError: ERROR: relation "tags" does not exist : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull FROM pg_attribute a LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum WHERE a.attrelid = '"tags"'::regclass AND a.attnum > 0 AND NOT a.attisdropped ORDER BY a.attnum ): PostsController#index def index @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 20) conditions, joins = {}, :votes @ugtag_counts = Ugtag.count(:group => :ugctag_name, :order => 'count_all DESC', :limit => 20) conditions, joins = {}, :votes @vote_counts = Vote.count(:group => :post_title, :order => 'count_all DESC', :limit => 20) conditions, joins = {}, :votes unless(params[:tag_name] || "").empty? conditions = ["tags.tag_name = ? ", params[:tag_name]] joins = [:tags, :votes] end @posts=Post.paginate( :select => "posts.*, count(*) as vote_total", :joins => joins, :conditions=> conditions, :group => "votes.post_id, posts.id ", :order => "created_at DESC", :page => params[:page], :per_page => 5) @popular_posts=Post.paginate( :select => "posts.*, count(*) as vote_total", :joins => joins, :conditions=> conditions, :group => "votes.post_id, posts.id", :order => "vote_total DESC", :page => params[:page], :per_page => 3) respond_to do |format| format.html # index.html.erb format.xml { render :xml => @posts } format.json { render :json => @posts } format.atom end end

    Read the article

  • Armchair Linguists: 'code' vs. 'codes'--or why I write 'code' and my manager asks for 'codes'

    - by Ukko
    I wanted to tap into the collective wisdom here to see if I can get some insight into one of my pet peeves, people who treat "code" as a countable noun. Let me also preface this by saying that I am not talking about anyone who speaks English as a second language; this is a native-speaker phenomenon. For those of us who slept through grammar class, there are two classes of nouns which basically refer to things that are countable and non-countable (sometimes referred to as count and noncount). For instance 'sand' is a non-count noun and 'apple' is count. You can talk about "two apples" but "two sands" does not parse. The bright students would then point out a word like "beer" where it looks like this is violated. Beer as a substance is certainly a non-count noun, but I can ask for "two beers" without offending the grammar police. The reason is that there are actually two words tied up in that one utterance: Definition #1 is a yummy golden substance and Definition #2 is a colloquial term for a container of said substance. #1 is non-count and #2 is countable. This gets to my problem with "codes" as a countable noun. In my mind the code that we programmers write is non-count: "I wrote some code today." When used in the plural like "Have you got the codes" I can only assume that you are asking if I have the cryptographically significant numbers for launching a missile or the like. Every time my peer in marketing asks when we will have the new codes ready I have a vision of rooms of code breakers going over the latest Enigma coded message. I corrected the usage in all the documents I am asked to review, but then I noticed that our customer was also using the word "codes" when they meant "code". At this point I have realized that there is a significant sub-population that uses "codes" and they seem to be impervious to what I see as the dominant "correct" usage. This is the part I want some help on: has anyone else noticed this phenomenon? Do you know what group it is associated with, old Fortran programmers perhaps? Is it a regionalism? I have become quick to change my terms when I notice a customer's usage, but it would be nice to know, when I am sending a proposal somewhere, what style they expect. I would hate to get canned with a review of "Ha, these guy's must be morons they don't even know 'code' is plural!"

    Read the article

  • get current selected button cell placed inside tableview using cocoa

    - by Swati
    hi i have a NSTableView i have two columns A and B B contains some data A contains custom button the button is added to column A using this: Below code is placed inside awakeFromNib method NSButtonCell *buttonCell = [[[NSButtonCell alloc] init] autorelease]; [buttonCell setBordered:NO]; [buttonCell setImagePosition:NSImageOnly]; [buttonCell setButtonType:NSMomentaryChangeButton]; [buttonCell setImage:[NSImage imageNamed:@"uncheck.png"]]; [buttonCell setSelectable:TRUE]; [buttonCell setTag:100]; [buttonCell setTarget:self]; [buttonCell setAction:@selector(selectButtonsForDeletion:)]; [[myTable tableColumnWithIdentifier:@"EditIdentifier"] setDataCell:buttonCell]; Some code in display cell of nstableview: -(void)tableView:(NSTableView *)tableView willDisplayCell:(id)cell forTableColumn:(NSTableColumn *)tableColumn row:(NSInteger)rowIndex { if(tableView == myTable) { if([[tableColumn identifier] isEqualToString:@"DataIdentifier"]) { } else if([[tableColumn identifier] isEqualToString:@"EditIdentifier"]) { NSButtonCell *zCell = (NSButtonCell *)cell; [zCell setTag:rowIndex]; [zCell setTitle:@"abc"]; [zCell setTarget:self]; [zCell setAction:@selector(selectButtonsForDeletion:)]; } } } now i want that when i click on the button the image of button cell gets changed as well as i want to do some coding. When button gets clicked then by default the tableView's reference gets passed. How can i get the button cell reference i looked here for similar problem: Cocoa: how to nest a button inside a Table View cell? but i am unable to add button inside column of NSTableView. How i change the image: - (void)tableView:(NSTableView *)tableView setObjectValue:(id)object forTableColumn:(NSTableColumn *)tableColumn row:(NSInteger)row; { if(tableView == myTable) { if([[tableColumn identifier] isEqualToString:@"EditIdentifier"]) { NSButtonCell *aCell = (NSButtonCell *)[[tableView tableColumnWithIdentifier:@"EditIdentifier"] dataCellForRow:row]; NSInteger index = [[aCell title]intValue]; if([self.selectedIndexesArray count]>0) { if(![self.selectedIndexesArray containsObject:[NSNumber numberWithInt:index]]) { [aCell setImage:[NSImage imageNamed:@"check.png"]]; [self.selectedIndexesArray addObject:[NSNumber numberWithInt:index]]; } else { [aCell setImage:[NSImage imageNamed:@"uncheck.png"]]; [self.selectedIndexesArray removeObjectAtIndex:[selectedIndexesArray indexOfObject:[NSNumber numberWithInt:index]]]; } } else { [aCell setImage:[NSImage imageNamed:@"check.png"]]; [self.selectedIndexesArray addObject:[NSNumber numberWithInt:index]]; } } } } I have debugged the code and found that proper tag and titles are passed but image applies on more than one button cell, this is too very irregular. cant understand how its working!!! Any suggestions what am i doing wrong??

    Read the article

  • How to fix this window.open memory leak?

    - by DotnetShadow
    Hi there, I was recently looking at this memory leak tool, sIEve: http://home.orange.nl/jsrosman/ So I decided to test out the tool by creating a main page that will open up a popup window. I started by creating 3 pages: index.html, page1.html and page2.html. The index.html page will open a child window (popup) linking to page1.html. Page1 will have an anchor tag that links to page2.html, while page2 will have a link back to page1.html. PROBLEM So in the tool I entered the index.html page, the popup window opened to page1.html, and I then clicked the page2 link; no leaks were detected yet. While I'm on page2 I click the link back to page1, and that's where the tool claims there is a leak. The leak seems to be happening on the index.html page and I have no idea why it would be doing that. Even more concerning is that I can see elements that the tool detects that aren't even on my page. Does anyone have any experience with this tool, or know if this really is a memory leak? Any samples showing how to achieve what I'm doing without memory leaks? INDEX.HTML <script type="text/javascript"> MYLEAK = function() { var childWindow = null; function showWindow() { childWindow = window.open("page1.html", "myWindow"); return false; } return { init: function() { $("#window-link").bind("click", showWindow); } } }(); </script> </head> <body> <a id="window-link" href="#" on>Open Window</a> <script type="text/javascript"> $(document).ready(function() { MYLEAK.init(); }); </script> </body> </html> PAGE1.HTML <html> <body> <h1>Page 1</h1> <a href="page2.html">Page2</a> </body> </html> PAGE2.HTML <html> <body> <h1>Page 2</h1> <a href="page1.html">Page1</a> </body> </html> Appreciate your efforts.

    Read the article

  • Restoring multiple database backups in a transaction

    - by Raghu Dodda
    I wrote a stored procedure that restores a set of database backups. It takes two parameters - a source directory and a restore directory. The procedure looks for all .bak files in the source directory (recursively) and restores all the databases. The stored procedure works as expected, but it has one issue - if I uncomment the try-catch statements, the procedure terminates with the following error: error_number = 3013 error_severity = 16 error_state = 1 error_message = DATABASE is terminating abnormally. The weird part is that sometimes (it is not consistent) the restore is done even if the error occurs. The procedure: create proc usp_restore_databases ( @source_directory varchar(1000), @restore_directory varchar(1000) ) as begin declare @number_of_backup_files int -- begin transaction -- begin try -- step 0: Initial validation if(right(@source_directory, 1) <> '\') set @source_directory = @source_directory + '\' if(right(@restore_directory, 1) <> '\') set @restore_directory = @restore_directory + '\' -- step 1: Put all the backup files in the specified directory in a table -- declare @backup_files table ( file_path varchar(1000)) declare @dos_command varchar(1000) set @dos_command = 'dir ' + '"' + @source_directory + '*.bak" /s/b' /* DEBUG */ print @dos_command insert into @backup_files(file_path) exec xp_cmdshell @dos_command delete from @backup_files where file_path IS NULL select @number_of_backup_files = count(1) from @backup_files /* DEBUG */ select * from @backup_files /* DEBUG */ print @number_of_backup_files -- step 2: restore each backup file -- declare backup_file_cursor cursor for select file_path from @backup_files open backup_file_cursor declare @index int; set @index = 0 while(@index < @number_of_backup_files) begin declare @backup_file_path varchar(1000) fetch next from backup_file_cursor into @backup_file_path /* DEBUG */ print @backup_file_path -- step 2a: parse the full backup file name to get the DB file name. 
declare @db_name varchar(100) set @db_name = right(@backup_file_path, charindex('\', reverse(@backup_file_path)) -1) -- still has the .bak extension /* DEBUG */ print @db_name set @db_name = left(@db_name, charindex('.', @db_name) -1) /* DEBUG */ print @db_name set @db_name = lower(@db_name) /* DEBUG */ print @db_name -- step 2b: find out the logical names of the mdf and ldf files declare @mdf_logical_name varchar(100), @ldf_logical_name varchar(100) declare @backup_file_contents table ( LogicalName nvarchar(128), PhysicalName nvarchar(260), [Type] char(1), FileGroupName nvarchar(128), [Size] numeric(20,0), [MaxSize] numeric(20,0), FileID bigint, CreateLSN numeric(25,0), DropLSN numeric(25,0) NULL, UniqueID uniqueidentifier, ReadOnlyLSN numeric(25,0) NULL, ReadWriteLSN numeric(25,0) NULL, BackupSizeInBytes bigint, SourceBlockSize int, FileGroupID int, LogGroupGUID uniqueidentifier NULL, DifferentialBaseLSN numeric(25,0) NULL, DifferentialBaseGUID uniqueidentifier, IsReadOnly bit, IsPresent bit ) insert into @backup_file_contents exec ('restore filelistonly from disk=' + '''' + @backup_file_path + '''') select @mdf_logical_name = LogicalName from @backup_file_contents where [Type] = 'D' select @ldf_logical_name = LogicalName from @backup_file_contents where [Type] = 'L' /* DEBUG */ print @mdf_logical_name + ', ' + @ldf_logical_name -- step 2c: restore declare @mdf_file_name varchar(1000), @ldf_file_name varchar(1000) set @mdf_file_name = @restore_directory + @db_name + '.mdf' set @ldf_file_name = @restore_directory + @db_name + '.ldf' /* DEBUG */ print 'mdf_logical_name = ' + @mdf_logical_name + '|' + 'ldf_logical_name = ' + @ldf_logical_name + '|' + 'db_name = ' + @db_name + '|' + 'backup_file_path = ' + @backup_file_path + '|' + 'restore_directory = ' + @restore_directory + '|' + 'mdf_file_name = ' + @mdf_file_name + '|' + 'ldf_file_name = ' + @ldf_file_name restore database @db_name from disk = @backup_file_path with move @mdf_logical_name to @mdf_file_name, move @ldf_logical_name to @ldf_file_name -- step 2d: iterate set @index = @index + 1 end close backup_file_cursor deallocate backup_file_cursor -- end try -- begin catch -- print error_message() -- rollback transaction -- return -- end catch -- -- commit transaction end Does anybody have any ideas why this might be happening? Another question: is the transaction code useful ? i.e., if there are 2 databases to be restored, will SQL Server undo the restore of one database if the second restore fails?
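    On the transaction question: SQL Server does not allow BACKUP or RESTORE to run inside an explicit transaction, which may be why the 3013 error only appears when the commented-out transaction/try-catch block is enabled; it also means a surrounding transaction cannot be used to undo an earlier restore when a later one fails. A minimal sketch of the usual workaround, with hypothetical file and database names (not the original procedure), is to give each restore its own try/catch and drop the surrounding transaction:

        declare @backup_file_path varchar(1000)
        set @backup_file_path = 'C:\Backups\SampleDb.bak'    -- hypothetical path
        begin try
            restore database SampleDb                        -- hypothetical database name
            from disk = @backup_file_path
            with move 'SampleDb_Data' to 'C:\Restore\SampleDb.mdf',
                 move 'SampleDb_Log'  to 'C:\Restore\SampleDb.ldf'
        end try
        begin catch
            -- log the failure and move on to the next .bak file;
            -- restores that already completed stay in place
            print 'Restore failed for ' + @backup_file_path + ': ' + error_message()
        end catch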

    Read the article

  • Remove a tag type from the view (involves alphabetical pagination)

    - by user284194
    I have an index view that lists all of the tags for my Entry and Message models. I would like to only show the tags for Entries in this view. I'm using acts-as-taggable-on. Tags Controller: def index @letter = params[:letter].blank? ? 'a' : params[:letter] @tagged_entries = Tagging.find_all_by_taggable_type('Entry').map(&:taggable) @title = "Tags" if params[:letter] == '#' @data = Tag.find(@tagged_entries, :conditions => ["name REGEXP ?", "^[^a-z]"], :order => 'name', :select => "id, name") else @data = Tag.find(@tagged_entries, :conditions => ["name LIKE ?", "#{params[:letter]}%"], :order => 'name', :select => "id, name") end respond_to do |format| flash[:notice] = 'We are currently in Beta. You may experience errors.' format.html end end tags#index: <% @data.each do |t| %> <div class="tag"><%= link_to t.name.titleize, tag_path(t) %></div> <% end %> I want to show only the taggable type 'Entry' in the view. Any ideas? Thank you for reading my question. SECOND EDIT: Tags Controller: def index @title = "Tags" @letter = params[:letter].blank? ? 'a' : params[:letter] @taggings = Tagging.find_all_by_taggable_type('Entry', :include => [:tag, :taggable]) @tags = @taggings.map(&:tag).sort_by(&:name).uniq @tagged_entries = @taggings.map(&:taggable)#.sort_by(&:id)#or whatever if params[:letter] == '#' @data = Tag.find(@tags, :conditions => ["name REGEXP ?", "^[^a-z]"], :order => 'name', :select => "id, name") else @data = Tag.find(@tags, :conditions => ["name LIKE ?", "#{params[:letter]}%"], :order => 'name', :select => "id, name") end respond_to do |format| format.html end end tags#index: <% @data.each do |t| %> <div class="tag"><%= link_to t.name.titleize, tag_path(t) %></div> <% end %> Max Williams' code works except when I click on my alphabetical pagination links. The error I'm getting [after I clicked on the G link of the alphabetical pagination] reads: Couldn't find all Tags with IDs (77,130,115,...) AND (name LIKE 'G%') (found 9 results, but was looking for 129) Can anyone point me in the right direction?

    Read the article

  • How to call Ajax to run a PHP file while maintaining PHP & Javascript variables.

    - by Umar
    Hi Stackoverflowers. I'm using the Facebook php-sdk to get the user's name and friends; right now the friend-loading part takes about 3+ seconds, so I wanted to do it via Ajax, so that the document can load first and jQuery can then call an external PHP script which loads the friends (their names and their profile pictures). So to do this I did: $(document).ready(function() { var loadUrl = "http://localhost/fb/getFriends.php" ; $("#friends") .html("Hold on, your friends are loading!") .load(loadUrl); }); But I get a PHP error: Fatal error: Call to a member function api() on a non-object. If I do this in the same PHP file (so I don't use Ajax at all to call it), it works fine. Now I think I understand the reason this is happening, but I don't know how to fix it. In my main index.php file I have a bunch of init and session code, e.g. FB.init({ appId : '<?php echo $facebook->getAppId(); ?>', session : <?php echo json_encode($session); ?>, // don't refetch the session when PHP already has it status : true, // check login status cookie : true, // enable cookies to allow the server to access the session xfbml : true // parse XFBML }); So I'm just wondering: what is the best way to treat my new separate PHP file getFriends.php so that it has access to all the PHP/JavaScript session data/variables? If you haven't used the Facebook php-sdk I'll quickly explain what I mean. Let's say I have index.php and getUsername.php, and from index.php I want to retrieve the getUsername.php file via Ajax using .load. Now the problem is that getUsername.php needs to access PHP session data/JavaScript init functions which were created in index.php, so I'm thinking of ways to solve this (I'm new to PHP so sorry if this sounds silly). Maybe I could do a POST in jQuery Ajax and post the session data? Or maybe I could create a PHP class, so something like: class getUsername extends index{} /*Yes I'm a newbie*/ If you have a look at the php-sdk example.php link posted at the top, maybe you'd better understand exactly which variables need to be accessed from the new file. Also, on a different note, I'm using PHP to work out page rendering times, and it seems that fetching the user's name alone: // Session based API call. if ($session) { try { $uid = $facebook->getUser(); $me = $facebook->api('/me'); } catch (FacebookApiException $e) { error_log($e); } } can take a good 4 seconds. Is this normal? Once I get the user's details, is it good to cache them or something? Speed isn't as important right now; for now I'm just trying to figure out this Ajax-and-separate-PHP-files thing. Whoa, this is a long post. Thanks very much for your time.

    Read the article

  • SQL Server Table Partitioning, what is happening behind the scenes?

    - by user404463
    I'm working with table partitioning on an extremely large fact table in a warehouse. I have executed the script a few different ways, with and without non-clustered indexes. With the indexes in place it appears to dramatically expand the log file, while without the non-clustered indexes it does not expand the log file as much, but it takes more time to run due to the rebuilding of the indexes. What I am looking for is any links or information as to what is happening behind the scenes, specifically to the log file, when you split a table partition.
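    As a small, hedged illustration (the partition function and scheme names here are made up, not from the post): splitting at a boundary where the new partition is still empty involves no data movement and touches the log very little, whereas splitting a partition that already holds rows forces SQL Server to physically move those rows into the new partition, and that movement is heavily logged, which is one reason the log can grow so dramatically, especially when non-clustered indexes have to be maintained as well.

        -- point the partition scheme at the next filegroup, then split at a new, still-empty boundary
        alter partition scheme ps_FactDate next used [PRIMARY];
        alter partition function pf_FactDate() split range ('20110101');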

    Read the article

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem 72 child tables, each having a year index and a station index, are defined as follows: CREATE TABLE climate.measurement_12_013 ( -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass), -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL, -- Inherited from table climate.measurement_12_013: taken date NOT NULL, -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL, -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL, -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying, CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7), CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12) ) INHERITS (climate.measurement) CREATE INDEX measurement_12_013_s_idx ON climate.measurement_12_013 USING btree (station_id); CREATE INDEX measurement_12_013_y_idx ON climate.measurement_12_013 USING btree (date_part('year'::text, taken)); (Foreign key constraints to be added later.) The following query runs abysmally slow due to a full table scan: SELECT count(1) AS measurements, avg(m.amount) AS amount FROM climate.measurement m WHERE m.station_id IN ( SELECT s.id FROM climate.station s, climate.city c WHERE -- For one city ... -- c.id = 5182 AND -- Where stations are within an elevation range ... -- s.elevation BETWEEN 0 AND 3000 AND 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 ) AND -- -- Begin extracting the data from the database. -- -- The data before 1900 is shaky; insufficient after 2009. -- extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND -- Whittled down by category ... -- m.category_id = 1 AND m.taken BETWEEN -- Start date. (extract( YEAR FROM m.taken )||'-01-01')::date AND -- End date. Calculated by checking to see if the end date wraps -- into the next year. If it does, then add 1 to the current year. -- (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date GROUP BY extract( YEAR FROM m.taken ) The sluggishness comes from this part of the query: m.taken BETWEEN /* Start date. */ (extract( YEAR FROM m.taken )||'-01-01')::date AND /* End date. Calculated by checking to see if the end date wraps into the next year. If it does, then add 1 to the current year. */ (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables. Question What is the proper way to index the dates to avoid full table scans? Options I have considered: GIN GiST Rewrite the WHERE clause Separate year_taken, month_taken, and day_taken columns to the tables What are your thoughts? Thank you!
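    One possible shape for the "rewrite the WHERE clause" option, sketched with the year bounds hard-coded and the station filter elided (this is an assumption about intent, not a tested fix): comparing against the same expression the child-table indexes were built on, date_part('year', taken), gives the planner a predicate it can match against those btree indexes instead of bounds computed from each row's own taken value.

        SELECT date_part('year', m.taken) AS year_taken,
               count(1)                   AS measurements,
               avg(m.amount)              AS amount
        FROM   climate.measurement m
        WHERE  m.category_id = 1
          -- the station_id IN (...) subquery from the original query goes here
          AND  date_part('year', m.taken) BETWEEN 1900 AND 2009
        GROUP  BY date_part('year', m.taken);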

    Read the article

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this: drop table Data create table Data ( Id BIGINT NOT NULL, Date BIGINT NOT NULL, Value BIGINT, constraint Cx primary key (Date, Id) ) create nonclustered index Ix on Data (Id, Date) There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this: select top 1 Date, Value from Data where Id = @p0 order by Date desc because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return: More than one row, even though I asked for TOP 1? No rows, even though one exists and has been committed? Worst of all, a row that doesn't satisfy the WHERE clause? I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend: Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether! Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?
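    For comparison with the NOLOCK route, here is what the snapshot-based alternatives mentioned at the top look like, as a sketch against a hypothetical database name; both switches are set at the database level, and SNAPSHOT isolation additionally requires each reading transaction to opt in:

        ALTER DATABASE SampleDb SET ALLOW_SNAPSHOT_ISOLATION ON;  -- sessions may request SNAPSHOT
        ALTER DATABASE SampleDb SET READ_COMMITTED_SNAPSHOT ON;   -- plain READ COMMITTED uses row versions

        -- a reader opting in explicitly:
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        DECLARE @p0 BIGINT;
        SET @p0 = 42;  -- hypothetical Id value
        SELECT TOP 1 Date, Value FROM Data WHERE Id = @p0 ORDER BY Date DESC;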

    Read the article

  • Many-To-Many Query with Linq-To-NHibernate

    - by rjygraham
    Ok guys (and gals), this one has been driving me nuts all night and I'm turning to your collective wisdom for help. I'm using Fluent Nhibernate and Linq-To-NHibernate as my data access story and I have the following simplified DB structure: CREATE TABLE [dbo].[Classes]( [Id] [bigint] IDENTITY(1,1) NOT NULL, [Name] [nvarchar](100) NOT NULL, [StartDate] [datetime2](7) NOT NULL, [EndDate] [datetime2](7) NOT NULL, CONSTRAINT [PK_Classes] PRIMARY KEY CLUSTERED ( [Id] ASC ) CREATE TABLE [dbo].[Sections]( [Id] [bigint] IDENTITY(1,1) NOT NULL, [ClassId] [bigint] NOT NULL, [InternalCode] [varchar](10) NOT NULL, CONSTRAINT [PK_Sections] PRIMARY KEY CLUSTERED ( [Id] ASC ) CREATE TABLE [dbo].[SectionStudents]( [SectionId] [bigint] NOT NULL, [UserId] [uniqueidentifier] NOT NULL, CONSTRAINT [PK_SectionStudents] PRIMARY KEY CLUSTERED ( [SectionId] ASC, [UserId] ASC ) CREATE TABLE [dbo].[aspnet_Users]( [ApplicationId] [uniqueidentifier] NOT NULL, [UserId] [uniqueidentifier] NOT NULL, [UserName] [nvarchar](256) NOT NULL, [LoweredUserName] [nvarchar](256) NOT NULL, [MobileAlias] [nvarchar](16) NULL, [IsAnonymous] [bit] NOT NULL, [LastActivityDate] [datetime] NOT NULL, PRIMARY KEY NONCLUSTERED ( [UserId] ASC ) I omitted the foreign keys for brevity, but essentially this boils down to: A Class can have many Sections. A Section can belong to only 1 Class but can have many Students. A Student (aspnet_Users) can belong to many Sections. I've set up the corresponding Model classes and Fluent NHibernate mapping classes, and all of that is working fine. Here's where I'm getting stuck. I need to write a query which will return the sections a student is enrolled in, based on the student's UserId and the dates of the class. Here's what I've tried so far: 1. var sections = (from s in this.Session.Linq<Sections>() where s.Class.StartDate <= DateTime.UtcNow && s.Class.EndDate > DateTime.UtcNow && s.Students.First(f => f.UserId == userId) != null select s); 2. var sections = (from s in this.Session.Linq<Sections>() where s.Class.StartDate <= DateTime.UtcNow && s.Class.EndDate > DateTime.UtcNow && s.Students.Where(w => w.UserId == userId).FirstOrDefault().Id == userId select s); Obviously, 2 above will fail miserably if there are no students matching userId for classes with the current date between their start and end dates...but I just wanted to try. The filters for the Class StartDate and EndDate work fine, but the many-to-many relation with Students is proving to be difficult. Every time I try running the query I get an ArgumentNullException with the message: Value cannot be null. Parameter name: session I've considered going down the path of making the SectionStudents relation a Model class with a reference to Section and a reference to Student instead of a many-to-many. I'd like to avoid that if I can, and I'm not even sure it would work that way. Thanks in advance to anyone who can help. Ryan
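    For reference, the result the LINQ query is trying to express can be written in plain T-SQL as a join through the SectionStudents bridge table; this sketch is only meant to pin down the intended shape of the query (the @userId value is hypothetical), not to answer the Linq-To-NHibernate question itself:

        DECLARE @userId UNIQUEIDENTIFIER;
        SET @userId = '00000000-0000-0000-0000-000000000000';  -- hypothetical student id

        SELECT s.*
        FROM   dbo.Sections s
        JOIN   dbo.Classes c          ON c.Id = s.ClassId
        JOIN   dbo.SectionStudents ss ON ss.SectionId = s.Id
        WHERE  c.StartDate <= GETUTCDATE()
          AND  c.EndDate   >  GETUTCDATE()
          AND  ss.UserId   =  @userId;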

    Read the article

  • SQL Server 2008 BULK INSERT causes more reads than writes. Why?

    - by sh1ng
    I have a huge table (a few billion rows) with a clustered index and two non-clustered indices. A BULK INSERT operation produces 112000 reads and only 383 writes (duration 19948ms). It's very confusing to me. Why do reads exceed writes? How can I reduce them? Update: the query is insert bulk DenormalizedPrice4 ([DP_ID] BigInt, [DP_CountryID] Int, [DP_OperatorID] SmallInt, [DP_OperatorPriceID] BigInt, [DP_SpoID] Int, [DP_TourTypeID] Int, [DP_CheckinDate] Date, [DP_CurrencyID] SmallInt, [DP_Cost] Decimal(9,2), [DP_FirstCityID] Int, [DP_FirstHotelID] Int, [DP_FirstBuildingID] Int, [DP_FirstHotelGlobalStarID] Int, [DP_FirstHotelGlobalMealID] Int, [DP_FirstHotelAccommodationTypeID] Int, [DP_FirstHotelRoomCategoryID] Int, [DP_FirstHotelRoomTypeID] Int, [DP_Days] TinyInt, [DP_Nights] TinyInt, [DP_ChildrenCount] TinyInt, [DP_AdultsCount] TinyInt, [DP_TariffID] Int, [DP_DepartureCityID] Int, [DP_DateCreated] SmallDateTime, [DP_DateDenormalized] SmallDateTime, [DP_IsHide] Bit, [DP_FirstHotelAccommodationID] Int) with (CHECK_CONSTRAINTS) There are no triggers or foreign keys. The clustered index is on DP_ID, and there are two non-unique indexes (with fillfactor = 90%). And one more thing: the DB is stored on RAID50 with a stripe size of 256K.
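    One variation that is commonly tried in this situation, sketched here with made-up index names (the post does not give them): disable the two non-clustered indexes before the bulk load and rebuild them afterwards, so the load itself only has to maintain the clustered index; whether this actually reduces the read count here would have to be measured.

        ALTER INDEX IX_DenormalizedPrice4_1 ON DenormalizedPrice4 DISABLE;   -- hypothetical names
        ALTER INDEX IX_DenormalizedPrice4_2 ON DenormalizedPrice4 DISABLE;

        -- ... run the BULK INSERT / insert bulk operation here ...

        ALTER INDEX IX_DenormalizedPrice4_1 ON DenormalizedPrice4 REBUILD;
        ALTER INDEX IX_DenormalizedPrice4_2 ON DenormalizedPrice4 REBUILD;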

    Read the article

  • How to optimize this SQL query for a rectangular region?

    - by Andrew B.
    I'm trying to optimize the following query, but it's not clear to me what index or indexes would be best. I'm storing tiles in a two-dimensional plane and querying for rectangular regions of that plane. The table has, for the purposes of this question, the following columns: id: a primary key integer world_id: an integer foreign key which acts as a namespace for a subset of tiles tileY: the Y-coordinate integer tileX: the X-coordinate integer value: the contents of this tile, a varchar if it matters. I have the following indexes: "ywot_tile_pkey" PRIMARY KEY, btree (id) "ywot_tile_world_id_key" UNIQUE, btree (world_id, "tileY", "tileX") "ywot_tile_world_id" btree (world_id) And this is the query I'm trying to optimize: ywot=> EXPLAIN ANALYZE SELECT * FROM "ywot_tile" WHERE ("world_id" = 27685 AND "tileY" <= 6 AND "tileX" <= 9 AND "tileX" >= -2 AND "tileY" >= -1 ); QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------- Bitmap Heap Scan on ywot_tile (cost=11384.13..149421.27 rows=65989 width=168) (actual time=79.646..80.075 rows=96 loops=1) Recheck Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2))) -> Bitmap Index Scan on ywot_tile_world_id_key (cost=0.00..11367.63 rows=65989 width=0) (actual time=79.615..79.615 rows=125 loops=1) Index Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2))) Total runtime: 80.194 ms So the world is fixed, and we are querying for a rectangular region of tiles. Some more information that might be relevant: All the tiles for a queried region may or may not be present The height and width of a queried rectangle are typically about 10x10-20x20 For any given (world, X) or (world, Y) pair, there may be an unbounded number of matching tiles, but the worst case is currently around 10,000, and typically there are far fewer. New tiles are created far less frequently than existing ones are updated (changing the 'value'), and that itself is far less frequent that just reading as in the query above. The only thing I can think of would be to index on (world, X) and (world, Y). My guess is that the database would be able to take those two sets and intersect them. The problem is that there is a potentially unbounded number of matches for either for either of those. Is there some other kind of index that would be more appropriate?
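    One other kind of index that could be experimented with, sketched here as an untested assumption rather than a recommendation: PostgreSQL's long-standing GiST support for the geometric box type allows the rectangle to be expressed as a single overlap test against an expression index, instead of four separate range conditions on tileX and tileY.

        CREATE INDEX ywot_tile_box_idx
            ON ywot_tile USING gist (box(point("tileX", "tileY"), point("tileX", "tileY")));

        SELECT *
        FROM   ywot_tile
        WHERE  world_id = 27685
          AND  box(point("tileX", "tileY"), point("tileX", "tileY"))
              && box(point(-2, -1), point(9, 6));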

    Read the article

  • jQuery sortColumns plugin: How to sort correctly with rowspan

    - by Thang Pham
    Following this post, jQuery table sort (github link: https://github.com/padolsey/jQuery-Plugins/blob/master/sortElements/jquery.sortElements.js), I can successfully sort columns; however, it does not work in the case of rowspan. For example, in a case like this Grape 3,096,671M 1,642,721M Apple 2,602,750M 3,122,020M when I click on the second column, it tries to sort it as Apple 2,602,750M 1,642,721M Grape 3,096,671M 3,122,020M which, as you can see, is not correct. Please, any jQuery guru, help me fix this problem. Here is my code var inverse = false; function sortColumn(index){ index = index + 1; var table = jQuery('#resultsTable'); table.find('td').filter(function(){ return jQuery(this).index() == index; }).sortElements(function(a, b){ a = convertToNum($(a).text()); b = convertToNum($(b).text()); return ( isNaN(a) || isNaN(b) ? a > b : +a > +b ) ? inverse ? -1 : 1 : inverse ? 1 : -1; },function(){ return this.parentNode; }); inverse = !inverse; } function convertToNum(str){ if(isNaN(str)){ var holder = ""; for(i=0; i<str.length; i++){ if(!isNaN(str.charAt(i))){ holder += str.charAt(i); } } return holder; }else{ return str; } } Question: 1. How do I sort this with rowspan? THE NUMBER OF ROWSPAN IS NOT ALWAYS THE SAME. In the above example both Grape and Apple have a rowspan of 2, but this is not always the case. 2. Can anyone explain this syntax: return ( isNaN(a) || isNaN(b) ? a > b : +a > +b ) ? inverse ? -1 : 1 : inverse ? 1 : -1; I can see that if either a or b is not a number, then it does a string comparison, otherwise a number comparison, but I don't understand the inverse ? -1 : 1 : inverse ? 1 : -1;

    Read the article

< Previous Page | 208 209 210 211 212 213 214 215 216 217 218 219  | Next Page >