Search Results

Search found 21702 results on 869 pages for 'large objects'.


  • How to store a hierarchical k-means tree for a large number of images, using OpenCV?

    - by AquaAsh
    I am trying to make a program that will find similar images in a dataset of images. The steps are: 1) extract SURF descriptors for all images; 2) store the descriptors; 3) apply KNN on the stored descriptors; 4) match the stored descriptors to the query image's descriptors using KNN. Each image's SURF descriptors will be stored as a hierarchical k-means tree. Should I store each tree as a separate file, or is it possible to build some sort of single tree holding all the images' descriptors that is updated as images are added to the dataset? This is the paper I am basing the program on: www.ijest.info/docs/IJEST10-02-03-13.pdf.
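
    A minimal sketch of the single-index approach, assuming OpenCV 3.x/4.x headers and its FLANN wrapper (cv::flann::Index with KMeansIndexParams); stacking every image's descriptors into one matrix and keeping a separate row-to-image map is our own bookkeeping, not part of the FLANN API:

        #include <opencv2/core.hpp>
        #include <opencv2/flann.hpp>

        // Build one hierarchical k-means index over the stacked SURF descriptors of all
        // images, then fetch the k nearest stored descriptors for each query descriptor.
        // Mapping descriptor rows back to image IDs is bookkeeping kept by the caller.
        void matchAgainstDataset(const cv::Mat& allDescriptors,   // CV_32F, one row per descriptor
                                 const cv::Mat& queryDescriptors, // CV_32F, query image's descriptors
                                 cv::Mat& indices, cv::Mat& dists, int k = 5)
        {
            // branching factor / iteration count are illustrative defaults, not tuned values
            cv::flann::KMeansIndexParams params(32, 11, cvflann::FLANN_CENTERS_KMEANSPP);
            cv::flann::Index index(allDescriptors, params);

            // k nearest stored descriptors per query descriptor; vote on the row-to-image
            // map afterwards to rank candidate images
            index.knnSearch(queryDescriptors, indices, dists, k, cv::flann::SearchParams(64));
        }

    An index built this way is not incrementally updatable, so adding images means restacking the descriptors and rebuilding; index.save() and load() at least avoid rebuilding between runs when the dataset has not changed.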

    Read the article

  • How to animate a menu item into a large div (window) using jQuery's animate?

    - by ijjo
    I'm pretty sure this can be done fairly easily with jQuery's animate API, but I'm not good enough to figure it out. What I want to do is this: I have a menu item at the top of the viewport that the user will click on. When the user clicks it, something that looks like a div "pops" out of the menu and floats to a particular location on the screen. By popping I don't mean anything fancy; it just appears to originate from the menu item and settle somewhere on the screen that I specify. The important part is that this animation happens really fast: fast enough that you don't have to wait for the window to appear, but slow enough that the eye sees the animation start at the menu item and end at the new location where the window actually appears, with a specified height and width. I hope that all made sense.

    Read the article

  • Int: number too large. How do I get the program to fail?

    - by Dave
    Hi. Problem: how do you get a program to fail if a number goes beyond the bounds of its type? The code below gives the wrong answer for the sum of primes under 2 million because I used an int instead of a long.

        [TestMethod]
        public void CalculateTheSumOfPrimesBelow2million()
        {
            int result = PrimeTester.calculateTheSumOfPrimesBelow(2000000); // 2 million
            Assert.AreEqual(1, result);
            // 1,179,908,154 .. this was what I got with an int...
            // correct answer was 142,913,828,922 with a long
        }

        public static class PrimeTester
        {
            public static int calculateTheSumOfPrimesBelow(int maxPrimeBelow)
            {
                // we know 2 is a prime number
                int sumOfPrimes = 2;
                int currentNumberBeingTested = 3;
                while (currentNumberBeingTested < maxPrimeBelow)
                {
                    double squareRootOfNumberBeingTested = (double)Math.Sqrt(currentNumberBeingTested);
                    bool isPrime = true;
                    for (int i = 2; i <= squareRootOfNumberBeingTested; i++)
                    {
                        if (currentNumberBeingTested % i == 0)
                        {
                            isPrime = false;
                            break;
                        }
                    }
                    if (isPrime)
                        sumOfPrimes += currentNumberBeingTested;
                    currentNumberBeingTested += 2; // as we don't want to test even numbers
                }
                return sumOfPrimes;
            }
        }

    Read the article

  • Fast inter-process (inter-thread) communication (IPC) on a large multi-CPU system

    - by IPC
    What would be the fastest portable bi-directional mechanism for inter-process communication, where threads from one application need to communicate with multiple threads in another application on the same computer, and the communicating threads can be on different physical CPUs? I assume it would involve shared memory, a circular buffer, and shared synchronization mechanisms. But shared mutexes are very expensive to synchronize with when threads are running on different physical CPUs (and there is a limited number of them, too).
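
    A minimal sketch of the shared-memory ring-buffer idea, assuming POSIX shm_open/mmap and C++11 atomics; the single-producer/single-consumer design sidesteps a shared mutex entirely, and the names ShmRing and mapRing are illustrative:

        #include <atomic>
        #include <cstddef>
        #include <cstdint>
        #include <new>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        // Lock-free single-producer / single-consumer ring buffer placed in shared memory.
        // Capacity must be a power of two; std::atomic<std::size_t> is assumed lock-free.
        struct ShmRing {
            static constexpr std::size_t kCapacity = 4096;
            std::atomic<std::size_t> head{0};   // advanced by the consumer
            std::atomic<std::size_t> tail{0};   // advanced by the producer
            std::uint64_t slots[kCapacity];

            bool push(std::uint64_t v) {
                std::size_t t = tail.load(std::memory_order_relaxed);
                if (t - head.load(std::memory_order_acquire) == kCapacity) return false; // full
                slots[t & (kCapacity - 1)] = v;
                tail.store(t + 1, std::memory_order_release);
                return true;
            }
            bool pop(std::uint64_t& v) {
                std::size_t h = head.load(std::memory_order_relaxed);
                if (h == tail.load(std::memory_order_acquire)) return false;             // empty
                v = slots[h & (kCapacity - 1)];
                head.store(h + 1, std::memory_order_release);
                return true;
            }
        };

        // Both processes map the same named segment, e.g. "/ipc_ring"; only the creator
        // sizes it and placement-constructs the ring.
        ShmRing* mapRing(const char* name, bool create) {
            int fd = shm_open(name, create ? (O_CREAT | O_RDWR) : O_RDWR, 0600);
            if (fd < 0) return nullptr;
            if (create && ftruncate(fd, sizeof(ShmRing)) != 0) { close(fd); return nullptr; }
            void* p = mmap(nullptr, sizeof(ShmRing), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            close(fd);
            if (p == MAP_FAILED) return nullptr;
            return create ? new (p) ShmRing() : static_cast<ShmRing*>(p);
        }

    One ring per communicating thread pair preserves the single-producer/single-consumer invariant; multi-producer or multi-consumer variants are possible but considerably trickier, and busy-waiting versus blocking on an empty ring is a separate trade-off.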

    Read the article

  • Database design: a table with a large number of columns (50+) or many sub-tables with a small number of columns?

    - by Guillaume
    In our project we already have a lot of tables (100+). Some of them contain a lot of columns (50-100), and we face the need to add more columns from time to time. What do you think is best, from a maintenance and performance point of view: to split these huge tables into smaller entities, or to keep the tables the way they are? We are using an ORM tool, so we don't need to write custom queries.

    Read the article

  • Scalably processing a large amount of complicated database data in PHP, many times a day

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing will involve several (potentially complicated) formulas, the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. Ideally, this needs to occur for all records every three hours. Each new user to the site will add between 50 and 500 records that need to be processed in this fashion, so the number will not be steady. The code hasn't been written yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing records at this scale may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has any experience or tips on similar subjects. I've never worked at this magnitude before, and for all I know this will be trivial for the server and not pose much of an issue. As long as ALL records are processed before the next three-hour period begins, I don't care if they aren't processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch). So I've been wondering: should I process in batches every 5 minutes, 15 minutes, an hour, whatever works? And how best can I approach this (and make it scalable in a way that is fair to all users)?

    Read the article

  • If I use a larger datatype than necessary, will it affect performance in SQL Server?

    - by Shantanu Gupta
    If I use a larger datatype where I know a smaller one would be sufficient for the possible values I will insert into a table, will it affect performance in SQL Server, in terms of speed or in any other way? E.g. IsActive can be (0,1,2,3), never more than 3 in any case. I know I should use tinyint, but for certain reasons (consider it a compulsion) I am making every numeric field bigint and every character field nvarchar(max). Please give statistics if possible, to help me overcome that compulsion. I need some solid analysis that can really make someone rethink before choosing a datatype.

    Read the article

  • Is it a good idea to apply some basic macros to simplify code in a large project?

    - by DoctorT
    I've been working on a foundational C++ library for some time now, and there are a variety of ideas I've had that could really simplify the code writing and managing process. One of these is the concept of introducing some macros to help simplify statements that appear very often but are a bit more complicated than they should need to be. For example, I've come up with this basic macro to simplify the most common type of for loop:

        #define loop(v,n) for(unsigned long v=0; v<n; ++v)

    This would enable you to replace those clunky for loops you see so much of:

        for (int i = 0; i < max_things; i++)

    with something much easier to write, and even slightly more efficient:

        loop (i, max_things)

    Is it a good idea to use conventions like this? Are there any problems you might run into with different types of compilers? Would it just be too confusing for someone unfamiliar with the macro(s)?
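
    For comparison, a sketch of a variant that evaluates the bound only once while keeping the loop variable's scope identical; the name LOOP and the pasted v##_end temporary are purely illustrative:

        // Caches the bound in a pasted-together local so 'n' is evaluated exactly once,
        // which matters when the bound is a function call or has side effects.
        #define LOOP(v, n) \
            for (unsigned long v = 0, v##_end = (unsigned long)(n); v < v##_end; ++v)

        // Usage:
        //   LOOP(i, max_things) { do_something(i); }

    Even so, the usual caveats about macros in a shared library apply: they ignore namespaces, show up poorly in debuggers, and surprise readers who don't already know them.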

    Read the article

  • How can I efficiently retrieve a large number of database settings as PHP variables?

    - by Steven
    Currently all of my script's settings are located in a PHP file which I include. I'm in the process of moving these settings (about 100) to a database table called 'settings'. However, I'm struggling to find an efficient way of retrieving all of them into the file. The settings table has 3 columns: id (auto-incrementing), name, and value. Two example rows might be:

        1  admin_user           john
        2  admin_email_address  [email protected]

    The only way I can think of to retrieve each setting is like this:

        $result = mysql_query("SELECT value FROM settings WHERE name = 'admin_user'");
        $row = mysql_fetch_array($result);
        $admin_user = $row['value'];

        $result = mysql_query("SELECT value FROM settings WHERE name = 'admin_email_address'");
        $row = mysql_fetch_array($result);
        $admin_email_address = $row['value'];

        etc etc

    Doing it this way will take up a lot of code and will likely be slow. Is there a better way? Thanks.

    Read the article

  • How to access a subset of XML data in Java when the XML data is too large to fit in memory?

    - by Michael Jones
    What I would really like is a streaming API that works sort of like StAX, and sort of like DOM/JDOM. It would be streaming in the sense that it would be very lazy and not read things in until needed. It would also be streaming in the sense that it would read everything forwards (but not backwards). Here's what code that used such an API would look like:

        URL url = ...
        XMLStream xml = XXXFactory(url.inputStream());

        // process each <book> element in this document.
        // the <book> element may have subnodes.
        // You get a DOM/JDOM like tree rooted at the next <book>.
        while (xml.hasContent()) {
            XMLElement book = xml.getNextElement("book");
            processBook(book);
        }

    Does anything like this exist?

    Read the article

  • Is it possible to create a C++ factory system that can create an instance of any "registered" object?

    - by chrensli
    Hello, I've spent my entire day researching this topic, so it is with some scattered knowledge that I come to you with this inquiry. Please allow me to describe what I am attempting to accomplish, and maybe you can either suggest a solution to the immediate question or another way to tackle the problem entirely. I am trying to mimic something related to how XAML files work in WPF, where you essentially instantiate an object tree from an XML definition. (If this is incorrect, please let me know. The issue is otherwise unrelated to WPF, C#, or anything managed; I mention it solely because it is a similar concept.) So, I've already created an XML parser class and generated a node tree based on ObjectNode objects. ObjectNode objects hold a string value called type, and they have an std::vector of child ObjectNode objects. The next step is to instantiate a tree of objects based on the data in the ObjectNode tree. This intermediate ObjectNode tree is needed because the same ObjectNode tree might be instantiated multiple times, or its instantiation delayed, as needed. The tree of objects being created is such that the nodes in the tree are descendants of a common base class, which for now we can call MyBase. Leaf nodes can be of any type, not necessarily derived from MyBase. To make this more challenging, I will not know in advance what types of MyBase-derived objects might be involved, so I need to allow new types to be registered with the factory. I am aware of Boost's factory. Its docs have an interesting little design paragraph on this page:

        o we may want a factory that takes some arguments that are forwarded to the constructor,
        o we will probably want to use smart pointers,
        o we may want several member functions to create different kinds of objects,
        o we might not necessarily need a polymorphic base class for the objects,
        o as we will see, we do not need a factory base class at all,
        o we might want to just call the constructor (without new) to create an object on the stack, and
        o finally we might want to use customized memory management.

    I might not be understanding this all correctly, but that seems to say that what I'm trying to do can be accomplished with Boost's factory. However, all the examples I've located seem to describe factories where all objects are derived from a base type. Any guidance on this would be greatly appreciated. Thanks for your time!
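
    A minimal sketch of a string-keyed registry factory, independent of Boost; the Factory and Registrar names are illustrative, and this version assumes the created objects all derive from MyBase (leaf nodes of unrelated types would need a separate mechanism):

        #include <functional>
        #include <map>
        #include <memory>
        #include <string>

        struct MyBase {
            virtual ~MyBase() = default;
        };

        // Maps a type name (as found in ObjectNode::type) to a function that constructs the object.
        class Factory {
        public:
            using Creator = std::function<std::unique_ptr<MyBase>()>;

            static Factory& instance() {
                static Factory f;
                return f;
            }
            void registerType(const std::string& name, Creator c) {
                creators_[name] = std::move(c);
            }
            std::unique_ptr<MyBase> create(const std::string& name) const {
                auto it = creators_.find(name);
                return it == creators_.end() ? nullptr : it->second();
            }
        private:
            std::map<std::string, Creator> creators_;
        };

        // One static Registrar object per class registers it before main() runs.
        template <class T>
        struct Registrar {
            explicit Registrar(const std::string& name) {
                Factory::instance().registerType(name,
                    []() -> std::unique_ptr<MyBase> { return std::make_unique<T>(); });
            }
        };

        // Usage in each translation unit that defines a MyBase-derived type:
        //   struct Button : MyBase { /* ... */ };
        //   static Registrar<Button> registerButton("Button");

    Instantiating an ObjectNode tree then becomes a recursive walk that calls Factory::instance().create(node.type) for each node and attaches the result to its parent.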

    Read the article

  • What's a good way to add a large number of small floats together?

    - by splicer
    Say you have 100,000,000 32-bit floating point values in an array, and each of these floats has a value between 0.0 and 1.0. If you tried to sum them all up like this:

        result = 0.0;
        for (i = 0; i < 100000000; i++) {
            result += array[i];
        }

    you'd run into problems as result gets much larger than 1.0. So what are some of the ways to perform the summation more accurately?
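
    One standard approach (a sketch, not the only option) is Kahan compensated summation, which carries a correction term so the low-order bits of small addends are not thrown away; pairwise summation or simply widening the accumulator are alternatives:

        #include <cstddef>

        // Kahan (compensated) summation: 'c' accumulates the low-order part that would
        // otherwise be lost each time a small value is added to a large running sum.
        double kahan_sum(const float* array, std::size_t n)
        {
            double sum = 0.0;
            double c = 0.0;                        // running compensation
            for (std::size_t i = 0; i < n; ++i) {
                double y = static_cast<double>(array[i]) - c;
                double t = sum + y;                // low-order digits of y are lost here...
                c = (t - sum) - y;                 // ...and recovered here
                sum = t;
            }
            return sum;
        }

    For this particular case, just accumulating into a double already removes most of the error, since every 32-bit float is exactly representable in 64-bit; compensation matters most when the accumulator has to stay in the same precision as the inputs.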

    Read the article

  • How do I make use of multiple cores in large SQL Server queries?

    - by Jonathan Beerhalter
    I have two SQL Servers, one for production and one as an archive. Every night, a SQL job runs and copies the day's production data over to the archive. As we've grown, this process takes longer and longer. When I watch the utilization on the archive server while the archival process runs, I see that it only ever makes use of a single core. Since this box has eight cores, that is a huge waste of resources. The job runs at 3 AM, so it's free to take any and all resources it can find. What I need to do is figure out how to structure SQL Server jobs so they can take advantage of multiple cores, but I can't find any literature on tackling this problem. We're running SQL Server 2005, but I could certainly push for an upgrade if 2008 takes care of this problem.

    Read the article

  • jQuery UI calendar displays too large; how do I get the demo size?

    - by Phill Pafford
    So I downloaded a custom-themed UI for jQuery and added the calendar control to my site (example: link text). The example shows the size I would like, but on my webpage it's about twice the size. Why? I do have a ton of other CSS, but I don't have control over the look and feel of the page (can't touch the current CSS, MEH!!). Is there a way to get the demo look on my site? I think this is the code from jQuery UI that might be complicating things:

        /* Component containers
        ----------------------------------*/
        .ui-widget { font-family: Arial, Helvetica, Verdana, sans-serif; font-size: 1.1em; }
        .ui-widget input, .ui-widget select, .ui-widget textarea, .ui-widget button { font-family: Arial, Helvetica, Verdana, sans-serif; font-size: 1em; }
        .ui-widget-content { border: 1px solid #B9C4CE; background: #ffffff url(../images/ui-bg_flat_75_ffffff_40x100.png) 50% 50% repeat-x; color: #616161; }
        .ui-widget-content a { color: #616161; }
        .ui-widget-header { border: 1px solid #467AA7; background: #467AA7 url(../images/ui-bg_highlight-soft_75_467AA7_1x100.png) 50% 50% repeat-x; color: #fff; font-weight: bold; }
        .ui-widget-header a { color: #fff; }

    It's part of the custom UI CSS.

    Read the article

  • Parallelize or vectorize all-against-all operation on a large number of matrices?

    - by reve_etrange
    I have approximately 5,000 matrices with the same number of rows and varying numbers of columns (20 x ~200). Each of these matrices must be compared against every other in a dynamic programming algorithm. In this question, I asked how to perform the comparison quickly and was given an excellent answer involving a 2D convolution. Serially, iteratively applying that method, like so:

        list = who('data_matrix_prefix*')
        H = cell(numel(list),numel(list));
        for i=1:numel(list)
            for j=1:numel(list)
                if i ~= j
                    eval([ 'H{i,j} = compare(' char(list(i)) ',' char(list(j)) ');']);
                end
            end
        end

    is fast for small subsets of the data (e.g. for 9 matrices, 9*9 - 9 = 72 calls are made in ~1 s). However, operating on all the data requires almost 25 million calls. I have also tried using deal() to make a cell array composed entirely of the next element in data, so I could use cellfun() in a single loop:

        % who(), load() and struct2cell() calls place k data matrices in a 1D cell array called data.
        nextData = cell(k,1);
        for i=1:k
            [nextData{:}] = deal(data{i});
            H{:,i} = cellfun(@compare,data,nextData,'UniformOutput',false);
        end

    Unfortunately, this is not really any faster, because all the time is spent in compare(). Both of these code examples seem ill-suited for parallelization, and I'm having trouble figuring out how to make my variables sliced. compare() is totally vectorized; it uses matrix multiplication and conv2() exclusively (I am under the impression that all of these operations, including the cellfun(), should be multithreaded in MATLAB?). Does anyone see an (explicitly) parallelized solution or better vectorization of the problem?

    Read the article

  • How can I parse free text (Twitter tweets) against a large database of values?

    - by user136416
    Hi there. Suppose I have a database containing 500,000 records, each representing, say, an animal. What would be the best approach for parsing 140-character tweets to identify matching records by animal name? For instance, in this string... "I went down to the woods today and couldn't believe my eyes: I saw a bear having a picnic with a squirrel." ...I would like to flag up the words "bear" and "squirrel", as they appear in my database. This strikes me as a problem that has probably been solved many times, but from where I'm sitting it looks prohibitively intensive: iterating over every db record checking for a match in the string is surely a crazy way to do it. Can anyone with a comp sci degree put me out of my misery? I'm working in C# if that makes any difference. Cheers!
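
    The asker is working in C#, but the usual trick is language-agnostic: load the names into an in-memory hash set once, then tokenize each tweet and probe the set per token instead of scanning the database. A minimal C++ sketch of that idea (single-word names only; multi-word names would need n-gram windows or an Aho-Corasick automaton):

        #include <algorithm>
        #include <cctype>
        #include <sstream>
        #include <string>
        #include <unordered_set>
        #include <vector>

        // 'animalNames' holds the 500k names, loaded once from the database at startup.
        // Each tweet is tokenized and every token is checked with an O(1) average lookup.
        std::vector<std::string> matchAnimals(const std::string& tweet,
                                              const std::unordered_set<std::string>& animalNames)
        {
            std::vector<std::string> hits;
            std::istringstream words(tweet);
            std::string w;
            while (words >> w) {
                // crude normalisation: strip punctuation and lowercase
                w.erase(std::remove_if(w.begin(), w.end(),
                                       [](unsigned char c) { return std::ispunct(c); }),
                        w.end());
                std::transform(w.begin(), w.end(), w.begin(),
                               [](unsigned char c) { return std::tolower(c); });
                if (animalNames.count(w)) hits.push_back(w);
            }
            return hits;
        }

    For multi-word names ("polar bear"), sliding a one- and two-token window over the tweet and probing the same set, or building an Aho-Corasick automaton over the full name list, extends the same idea.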

    Read the article

  • To use an API or to store a large dataset in a Rails app?

    - by Dave
    Hi all. I am working on a site that has the potential to need a LOT of space. Basically, we hope to have every video game ever created stored in a database, along with an image of the cover. There are some APIs out there that might be able to help, like GiantBomb's (www.giantbomb.com). We are trying to decide whether to store the data locally (and if so, where to find that comprehensive a list) or to make calls to the API on demand. The problem with the latter is likely latency, and also downtime. Assuming we want to store it locally, here are the questions: 1) Where can we find this kind of data? (Yes, I looked on Google, and no, I couldn't find anything. :)) 2) What is the most efficient way to encode and store the images? Thanks!

    Read the article

  • CMS for a fairly large mobile website: please help me select one

    - by Vinod
    I am looking for:

      - a mature, scalable, and proven CMS solution
      - support for mobilization (Android and iPhone)
      - a good amount of customization using Java / .NET
      - lots of out-of-the-box components to choose from

    Please help with recommendations. P.S. Are there any mobile CMS providers that work in a SaaS model?

    Read the article

  • Optimal diff between object lists in Java

    - by Philipp
    I have a List of Java objects on my server which is sent to the client through some serialization mechanism. Once in a while the List of objects gets updated on the server; that is, some objects get added, some get deleted, and others just change their place in the List. I want to update the List on the client side as well, but send the least possible data. In particular, I don't want to resend objects which are already available on the client. Is there a library available which will produce some sort of diff of the two lists, so that I can send only the difference and the new objects across the wire? I have found several Java implementations of the Unix diff command, but this algorithm is impractical for order changes; i.e. [A,B,C] -> [C,B,A] could be sent as only the place changes [1-3] [3-1], while diff will want to resend the whole A and C objects (as far as I understand).

    Read the article

  • What is the best way to partition large tables in SQL Server?

    - by RyanFetz
    In a recent project, the "lead" developer designed a database schema where "larger" tables would be split across two separate databases, with a view on the main database that unioned the two separate databases' tables together. The main database is what the application was driven off of, so these tables looked and felt like ordinary tables (except for some quirky things around updating). This seemed like a HUGE performance problem. We do see problems with performance around these tables, but nothing to make him change his mind about his design. I'm just wondering what the best way to do this is, or if it is even worth doing.

    Read the article

  • (PHP) How do I get XMLReader to behave like SimpleXML with XPath? (Large directory-like XML file)

    - by AESM
    So, I've got this huge XML file (10 MB and up) that I want to parse, and I figured that instead of using SimpleXML I'd better use XMLReader, since the performance should be way better, right? But XMLReader doesn't work with XPath... The XML is like this:

        <root name="bookmarks">
            <dir name="A directory">
                <link name="blablabla">
                <dir name="Sub directory">
                    ...
                </dir>
            </dir>
            <link name="another link">
        </root>

    With SimpleXML combined with XPath, I would simply do:

        $xml = simplexml_load_file('/xmlFile.xml');
        $xml->xpath('/root[@name]/dir[@name="A directory"]/dir[@name="Sub directory"]');

    which is so simple. But how do I do this using just XMLReader? P.S. I'm converting the resulting node to DOM/SimpleXML to get its inner contents, like this:

        $node = $xr->expand();
        $dom = new DomDocument();
        $n = $dom->importNode($node, true);
        $dom->appendChild($n);
        $selectedRoot = simplexml_import_dom($dom);

    Is this OK? ... Thanks!

    Read the article
