Search Results

Search found 16914 results on 677 pages for 'single threaded'.


  • Update on: How to model random non-overlapping spheres of non-uniform size in a cube using Matlab?

    - by user3838079
    I am trying to use MATLAB to generate random locations for non-uniformly sized, non-overlapping spheres in a cube. The loop in the code below never seems to end, and I don't know what I am missing. I have run the code for a number of spheres (n) of 10 and dims = [10 10 10].

        function [ c r ] = randomSphere( dims )
        % creating one sphere at random inside [0..dims(1)]x[0..dims(2)]x...
        % radius and center coordinates are sampled from a uniform distribution
        % over the relevant domain.
        % output: c - center of sphere (vector cx, cy, ...)
        %         r - radius of sphere (scalar)
        r = rand(1); % you might want to scale this w.r.t. dims or other considerations
        c = r + rand( size(dims) )./( dims - 2*r ); % make sure sphere does not exceed boundaries

        function ovlp = nonOverlapping( centers, rads )
        % check if several spheres with centers and rads overlap or not
        ovlp = false;
        if numel( rads ) == 1
            return; % nothing to check for a single sphere
        end
        dst = sqrt( sum( bsxfun( @minus, permute( centers, [1 3 2] ), ...
                                 permute( centers, [3 1 2] ) ).^2, 3 ) );
        ovlp = dst >= bsxfun( @plus, rads, rads.' ); % all distances must be smaller than r1+r2
        ovlp = any( ovlp(:) ); % all must not overlap

        function [centers rads] = sampleSpheres( dims, n )
        % dims is assumed to be a row vector of size 1-by-ndim
        % preallocate
        ndim = numel(dims);
        centers = zeros( n, ndim );
        rads = zeros( n, 1 );
        ii = 1;
        while ii <= n
            [centers(ii,:), rads(ii)] = randomSphere( dims );
            if nonOverlapping( centers(1:ii,:), rads(1:ii) )
                ii = ii + 1; % accept and move on
            end
        end
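
    One thing worth checking in the loop above: for ii = 1, nonOverlapping returns false (the early return leaves ovlp at its initial value), so the accept branch never fires and the very first sphere is never accepted. For comparison, here is a minimal sketch of the intended accept/reject loop in Python, with the overlap test written the way the comment describes (distances must be at least r1 + r2). Names and the radius scale are illustrative, not the asker's API:

        import math
        import random

        def sample_spheres(dims, n):
            # Accept/reject sampling of n non-overlapping spheres in a box.
            centers, radii = [], []
            while len(centers) < n:
                r = random.uniform(0.1, 1.0)                  # assumed radius scale
                c = [random.uniform(r, d - r) for d in dims]  # keep the sphere inside the box
                # accept only if c keeps its distance to every sphere placed so far
                if all(math.dist(c, c2) >= r + r2 for c2, r2 in zip(centers, radii)):
                    centers.append(c)
                    radii.append(r)
            return centers, radii

        # 10 spheres in a 10x10x10 box terminates quickly; a crowded box can stall
        centers, radii = sample_spheres([10, 10, 10], 10)
        print(len(centers), "spheres placed")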

    Read the article

  • Why does Ruby have Rails while Python has no central framework?

    - by yar
    This is a historical question, not a comparison-between-languages question: This article from 2005 talks about the lack of a single, central framework for Python. For Ruby, this framework is clearly Rails. Why, historically speaking, did this happen for Ruby but not for Python? (Or did it happen, and that framework is Django?) Also, the hypothetical questions: would Python be more popular if it had one good framework? Would Ruby be less popular if it had no central framework? [Please avoid discussions of whether Ruby or Python is better, which is just too open-ended to answer.] Edit: Though I thought this was obvious, I'm not saying that other frameworks do not exist for Ruby, but rather that the big one in terms of popularity is Rails. Also, I should mention that I'm not saying that frameworks for Python are not as good as (or better than) Rails. Every framework has its pros and cons, but Rails seems, as Ben Blank says in one of the comments below, to have surpassed Ruby itself in terms of popularity. There are no examples of that on the Python side. WHY? That's the question.

    Read the article

  • MySQL: Limit Word Length for MySQL Insert

    - by elmaso
    Hi, every search query is saved in my database, but I want to limit the character length of a single word: odisafuoiwerjsdkle -- too long -- should not be written to the database. My actual code is:

        $search = $_GET['q'];

        if (!($sql = mysql_query("SELECT * FROM `history` WHERE `Query` = '" . $search . "'"))) {
            exit('<b>SQL ERROR:</b> 102, Cannot write history.');
        }
        while ($row = mysql_fetch_array($sql)) {
            $ID = $row['ID'];
        }
        if ($ID == '') {
            mysql_query("INSERT INTO history (Query) values ('" . $search . "')");
        }
        if (!($sql = mysql_query('SELECT * FROM `history` ORDER BY `ID` ASC LIMIT 1'))) {
            exit('<b>SQL ERROR:</b> 102, Cannot write history.');
        }
        while ($row = mysql_fetch_array($sql)) {
            $first_id = $row['ID'];
        }
        if (!($sql = mysql_query('SELECT * FROM `history`'))) {
            exit('<b>SQL ERROR:</b> 102, Cannot write history.');
        }

    Read the article

  • Tracking down slow managed DLL loading

    - by Alex K
    I am faced with the following issue, and at this point I feel like I'm severely lacking some sort of tool; I just don't know what that tool is or what exactly it should be doing. Here is the setup: I have a 3rd-party DLL that has to be registered in the GAC. This all works fine and good on pretty much every machine our software was deployed on before. But now we have 2 machines, seemingly identical to the ones we know work (they are cloned from the same image and stuffed with the same hardware, so pretty much the only difference is software settings, which I have gone over and over, and they seem fine). Now the problem: the DLL in the GAC takes a very long time to load. At least I believe this is the issue; what I can say definitively is that instantiating a single class from that DLL is the slow part. Once it is loaded, things fly as they always have. But while on known-good machines the DLL loads so fast that a timestamp in the log doesn't even change, on these 2 machines it takes over a minute to load. Knowns: I have no access to the source, so I can't debug through the DLL. Our app is the only one that uses it (so there shouldn't be simultaneous-access issues). There is only one version of this DLL in existence, so it shouldn't be a matter of version conflict. The GAC reference is being used (if I uninstall the DLL from the GAC, an exception is thrown about the missing GAC reference). Could someone with greater skill in debug-fu suggest what I can do to track down the root cause of this issue?

    Read the article

  • What is the complexity of this specialized sort

    - by ADB
    I would like to know the complexity (as in O(...)) of the following sorting algorithm:

    - there are B barrels that contain a total of N elements, spread unevenly across the barrels
    - the elements in each barrel are already sorted

    The sort combines all the elements from each barrel into a single sorted list:

    - use an array of size B to store the last sorted element of each barrel (starting at 0)
    - check each barrel (at the last stored index) and find the smallest element
    - copy the element into the final sorted array, increment the array counter
    - increment the last sorted index for the barrel we picked from
    - perform those steps N times

    Or in pseudocode:

        for i from 0 to N
            smallest = MAX_ELEMENT
            foreach b in B
                if bIndex[b] < b.length && b[bIndex[b]] < smallest
                    smallest_barrel = b
                    smallest = b[bIndex[b]]
            result[i] = smallest
            bIndex[smallest_barrel] += 1

    I thought that the complexity would be O(n), but the problem I have with finding the complexity is that if B grows, it has a larger impact than if N grows, because it adds another round in the B loop. But maybe that has no effect on the complexity?
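
    For reference, the loop as written scans all B barrel heads once per output element, i.e. O(N*B) in the worst case, so B does matter. Keeping the current heads in a heap drops that to O(N log B). A short Python illustration of the same B-way merge (assuming list-of-lists barrels):

        import heapq

        # B sorted barrels holding N elements in total
        barrels = [[1, 4, 9], [2, 3], [0, 5, 6, 7, 8]]

        # heapq.merge keeps the B current heads in a heap, so each of the
        # N outputs costs O(log B) instead of a full O(B) scan
        merged = list(heapq.merge(*barrels))
        print(merged)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]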

    Read the article

  • What's the best technique to protect my framework from visitors who are not logged in?

    - by Hermet
    First of all, I would like to say that I have used the search box looking for a similar question and was unsuccessful, maybe because of my poor English skills. I have a 'homemade' framework. I have certain PHP files that must only be visible to the admin. The way I currently do this is to check within every single page whether a session has been opened. If not, the user gets redirected to a 404 page, to make it seem like the requested file doesn't exist. I really don't know if this is guaranteed to work, or if there's a better and safer way, because I'm currently working with kind of confidential data that should never become public. Could you give me some tips? Or leave a link where I could find some? Thank you very much, and again excuse me for kicking the dictionary. EDIT: What I usually write at the top of each file is something like this:

        <?php
        include("sesion.php");
        $rs = comprueba(); // 'check'
        if ($rs == 1) {
        ?>

    And then, at the end:

        <?php } ?>

    It's a botched job, isn't it? EDIT: Let's say I have a customers list in a file named customers.php. That file may currently be at http://www.mydomain.com/admin/customers.php and it must only be visible to the admin user. Once the admin user has logged in, I create a session variable. That variable is what I check at the top of each page, and if it exists, the customers list is shown. If not, the user gets redirected to the 404 page. Thank you for your patience. I really appreciate it.
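
    The per-page include works, but it is easy to forget one file. A common alternative is to centralize the check in a single guard through which every admin page is routed (in PHP terms, a front controller or an auto_prepend_file). The shape of that idea, sketched in Python with made-up names:

        session = {}  # stand-in for the server-side session store

        def requires_admin(page):
            # wrap a page handler so it 404s unless the admin session exists
            def guarded(*args, **kwargs):
                if not session.get("admin"):
                    return "404 Not Found"  # pretend the page does not exist
                return page(*args, **kwargs)
            return guarded

        @requires_admin
        def customers():
            return "customer list"

        print(customers())        # 404 Not Found
        session["admin"] = True
        print(customers())        # customer list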

    Read the article

  • Create a multicolor selectbox on Symfony2

    - by Ninsuo
    I am trying to create the following field:

        <select name="test">
            <option value="1">a</option>
            <option value="2">b</option>
            <option value="3" style="font-weight: bold; color: red;">c</option>
            <option value="4">d</option>
            <option value="5">e</option>
        </select>

    This creates a 2-color selectbox. But in Symfony2, I do not know how to apply a class to a single option. If in my view I do:

        {{ form_widget(myForm.test, { 'attr': { 'class': 'red' } }) }}

    or if in my form I do:

        $builder->add('test', 'choice', array(
            'required' => true,
            'choices'  => array('a', 'b', 'c', 'd', 'e'),
            'attr'     => array('class' => 'red'),
        ));

    the attribute is set on the <select> tag and applies to all of the selectable values. How do I apply a class to a single <option> in Symfony2?

    Read the article

  • How to add Flex 4 spark List Item custom properties

    - by maik3l
    I'm trying to make a simple Flash application providing an interface for taking tests, as a high school assignment. One of the requirements is to use an XML file as the data source. Now, having a List component bound to the XML file of questions, each consisting of data such as question body, question type (i.e. single choice, multiple choice, open, image, etc.) and possible answers (where applicable), I was wondering if I could add some additional data to each question as it is transferred to the List component, and what the best way to do so would be. I am trying to achieve two main goals with this. Firstly, to mark the questions to which an answer has already been given, with code like this in the ItemRenderer class:

        <s:Label color="{data.color}" text="{data.label}"/>

    where data.color would be set whenever the user gives an answer to a question. Secondly, while at it, I thought of this as a great way to store the answers given to particular questions. In this case, the class of the answer object would have to be Object, since there are many types of questions (and the answer could also be a Bitmap, for example). This is a question both of how to do it and of whether it seems a good idea at all (and if not, whether there is a better way), because I am quite new to the whole Flash Builder and Flex thing and not really accustomed to all the possibilities and best practices. Thanks!

    Read the article

  • Best way to migrate export/import from SQL Server to Oracle

    - by matao
    Hi guys! I'm faced with needing access, for reporting, to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle and I'd like some advice on the best way to go about it... The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side, as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting. What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file using the SQL Server equivalent of sqlplus as a scheduled task, dump it into a well-known location, then on the Oracle side have a cron job that copies down the file, loads it with SQL*Loader and rebuilds indexes etc. This is all doable, but very manual. Is there one, or a combination, of FOSS or standard Oracle/SQL Server tools that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love not to have to write the CSV-dumping detail or the SQL*Loader script: just say "dump this view out to CSV" on one side, and "truncate and insert into this table from CSV" on the other, and not worry about mapping column names and all the other arcane sqlldr voodoo... Best practices? Thoughts? Comments? Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map each single column...
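
    For what it's worth, the dump side really can be that small if a scripting language is acceptable. A hedged sketch in Python, with pyodbc on the SQL Server side (the DSN, credentials and view name here are invented), leaving the Oracle load to SQL*Loader or an external table:

        import csv
        import pyodbc  # assumes an ODBC DSN pointing at the SQL Server box

        conn = pyodbc.connect("DSN=reporting;UID=report;PWD=secret")
        cur = conn.cursor()
        cur.execute("SELECT * FROM dbo.vw_report_denormalised")  # the monster query, as a view

        with open("report.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cur.description])  # header row: column names
            writer.writerows(cur)  # stream every row; no per-column mapping needed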

    Read the article

  • How to detect column conflicts with Hibernate?

    - by Slim
    So let's say I have an ArrayList full of Products that need to be committed to the database via Hibernate. There is already a large number of Products in the database. Each product has an ID. Note this is NOT the PK that is autogenerated by Hibernate. My question is: what is the best way to detect conflicts with this ID? I am looking for a relatively efficient method of obtaining, from the database, a list of Products that share an ID with any of the Products in my ArrayList. This is all in a single table called Products, and the ID attribute is in the column ProductID. The way I've done it is by grabbing a list of all Products in the database and comparing each one with each entry in my ArrayList - but that is seriously inefficient and I don't think it would work well with a larger database. How should it be done? Thanks. I say "relatively" efficient because efficiency is not the primary concern, but it shouldn't take noticeably long to test against a table of ~1000-5000 rows. Help? EDIT: I'm very new to Hibernate and below is the best I've come up with. How does this look?

        for (long id : idList) { // idList holds the IDs of each Product in my ArrayList
            Query query = session.createQuery("select product from Product product where product.id = :id");
            query.setLong("id", id);
            for (int i = 0; i < query.list().size(); i++) {
                listOfConflictingProducts.add((Product) query.list().get(i));
            }
        }

    Read the article

  • Mysql partitioning: Partitions outside of date range is included

    - by Sturlum
    Hi, I have just tried to configure partitions based on date, but it seems that MySQL still includes a partition with no relevant data. It will use the relevant partition but also, for some reason, include the oldest. Am I doing it wrong? The version is 5.1.44 (MyISAM). I first added a few partitions based on "day", which is of type "date":

        ALTER TABLE ptest PARTITION BY RANGE (TO_DAYS(day)) (
            PARTITION p1 VALUES LESS THAN (TO_DAYS('2009-08-01')),
            PARTITION p2 VALUES LESS THAN (TO_DAYS('2009-11-01')),
            PARTITION p3 VALUES LESS THAN (TO_DAYS('2010-02-01')),
            PARTITION p4 VALUES LESS THAN (TO_DAYS('2010-05-01'))
        );

    After a query, I find that it uses the "old" partition, which should not contain any relevant data:

        mysql> explain partitions select * from ptest where day between '2010-03-11' and '2010-03-12';
        +----+-------------+-------+------------+-------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | partitions | type  | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------------+-------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | ptest | p1,p4      | range | day           | day  | 3       | NULL |   79 | Using where |
        +----+-------------+-------+------------+-------+---------------+------+---------+------+------+-------------+

    When I select a single day, it works:

        mysql> explain partitions select * from ptest where day = '2010-03-11';
        +----+-------------+-------+------------+------+---------------+------+---------+-------+------+-------+
        | id | select_type | table | partitions | type | possible_keys | key  | key_len | ref   | rows | Extra |
        +----+-------------+-------+------------+------+---------------+------+---------+-------+------+-------+
        |  1 | SIMPLE      | ptest | p4         | ref  | day           | day  | 3       | const |   39 |       |
        +----+-------------+-------+------------+------+---------------+------+---------+-------+------+-------+

    Read the article

  • Using entity framework to connect to multiple similar tables in .net MVC.

    - by Dite
    A relative newcomer to .NET MVC2 and the Entity Framework, I am working on a project which requires a single web application (C# .NET 4) to connect to multiple different databases depending on the route of access (i.e. subdomain). No problem with this in principle, and all the logic is written to transform the subdomain into an entity connection and pass it through to the entity model. The problem comes from the fact that the different databases, whilst largely similar in structure, each contain 3 or 4 unique tables bespoke to that instance. To my mind there are two ways to solve this issue, neither of which I am sure will be possible.

    1. Use a separate entity model for each database. Attempts down this route have thrown up conflicts where table/SP names are the same across different databases, or implicit conversion errors when I try to put the different models in different namespaces.
    2. Override the classes which refer to the changeable database objects based on the value of a base controller property. I have found nothing to suggest I can even do this.

    My question is whether either of these routes can ever work in principle, or whether I should just give up on the EF and connect to the databases directly using ADO. Perhaps there is another way to solve this problem I haven't thought of? Thanks for any help...

    Read the article

  • Unusual heap size limitations in VS2003 C++

    - by Shane MacLaughlin
    I have a C++ app that uses large arrays of data, and I have noticed while testing that it runs out of memory while there is still plenty of memory available. I have reduced the code to a sample test case as follows:

        void MemTest()
        {
            size_t Size = 500*1024*1024; // 512mb
            if (Size > _HEAP_MAXREQ)
                TRACE("Invalid Size");
            void * mem = malloc(Size);
            if (mem == NULL)
                TRACE("allocation failed");
        }

    If I create a new MFC project, include this function, and run it from InitInstance, it works fine in debug mode (memory allocated as expected), yet fails in release mode (malloc returns NULL). Single-stepping through release into the C runtimes, my function gets inlined and I get the following:

        // malloc.c
        void * __cdecl _malloc_base (size_t size)
        {
            void *res = _nh_malloc_base(size, _newmode);
            RTCCALLBACK(_RTC_Allocate_hook, (res, size, 0));
            return res;
        }

    calling _nh_malloc_base:

        void * __cdecl _nh_malloc_base (size_t size, int nhFlag)
        {
            void * pvReturn;

            // validate size
            if (size > _HEAP_MAXREQ)
                return NULL;
            ...

    Here (size > _HEAP_MAXREQ) returns true and hence my memory doesn't get allocated. Putting a watch on size comes back with the expected 512MB, which suggests the program is linking into a different runtime library with a much smaller _HEAP_MAXREQ. Grepping the VC++ folders for _HEAP_MAXREQ shows the expected 0xFFFFFFE0, so I can't figure out what is happening here. Does anyone know of any CRT changes or versions that would cause this problem, or am I missing something way more obvious?

    Read the article

  • Is there a programmatic way to transform a sequence of image files into a PDF?

    - by Salim Fadhley
    I have a sequence of JPG images. Each scan is already cropped to the exact size of one page. They are sequential pages of a valuable and out-of-print book. The publishing application requires that these pages be submitted as a single PDF file. I could take each of these images and just paste them into a word processor (e.g. OpenOffice); unfortunately the problem here is that it's a very big book and I've got quite a few of these books to get through, so it would obviously be time-consuming. This is volunteer work! My second idea was to use LaTeX: I could make a very simple document that consists of nothing more than a series of inline image includes. I'm sure this approach could be made to work; it's just a little on the complex side for something which seems like a very simple job. It occurred to me that there must be a simpler way, so any suggestions? I'm on Ubuntu 9.10, my primary programming language is Python, but if the solution is super-simple I'd happily adopt any technology that works.
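
    One simpler route, sketched in Python since that's the asker's primary language: a current Pillow can append images straight into a single PDF (the filenames here are placeholders). Command-line tools such as img2pdf or ImageMagick's convert do the same job from the shell.

        from PIL import Image  # Pillow

        # open the page scans in order and write them out as one PDF
        pages = [Image.open("page%03d.jpg" % i).convert("RGB") for i in range(1, 4)]
        pages[0].save("book.pdf", save_all=True, append_images=pages[1:])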

    Read the article

  • Managing libraries and imports in a programming language

    - by sub
    I've created an interpreter for a stupid programming language in C++, and the whole core structure is finished (tokenizer, parser, interpreter including symbol tables, core functions, etc.). Now I have a problem with creating and managing the function libraries for this interpreter (I'll explain what I mean by that later). Currently my core function handler is horrible:

        // Simplified version
        myLangResult SystemFunction( name, argc, argv )
        {
            if ( name == "print" ) {
                if ( argc < 1 ) { Error("blah"); }
                cout << argv[0];
            }
            else if ( name == "input" ) {
                if ( argc < 1 ) { Error("blah"); }
                string res;
                getline( cin, res );
                SetVariable( argv[0], res );
            }
            else if ( name == "exit" ) {
                exit( 0 );
            }
        }

    Now imagine each else if being 10 times more complicated, and 25 more system functions. Unmaintainable, feels horrible, is horrible. So I thought: how about creating some sort of libraries that contain all the functions and, when they are imported, initialize themselves and add their functions to the symbol table of the running interpreter? However, this is the point where I don't really know how to go on. What I want to achieve is that there is, e.g., an (external?) string library for my language, and it is imported from within a program in that language, for example:

        import string

        myString = "abcde"
        print string.at( myString, 2 ) # output: c

    My problems: How do I separate the function libs from the core interpreter and load them? How do I get all their functions into a list and add it to the symbol table when needed? What I was thinking of doing: at the start of the interpreter, as all libraries are compiled with it, every single function calls something like

        RegisterFunction( string namespace, myLangResult (*functionPtr) );

    which adds itself to a list. When import X is then called from within the language, the list built with RegisterFunction is added to the symbol table. The disadvantage that springs to mind: all libraries are directly in the interpreter core, so its size grows and it will definitely slow down.
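
    The registration plan in the last paragraph is sound and small enough to sketch. In Python pseudocode (mirroring, not reproducing, the C++ design): each library registers its functions under a namespace at startup, and import merely copies that namespace's entries into the interpreter's symbol table, so unimported libraries cost a little memory but no lookup time.

        REGISTRY = {}      # namespace -> {function name -> implementation}
        SYMBOL_TABLE = {}  # what the running interpreter actually sees

        def register_function(namespace, name, fn):
            REGISTRY.setdefault(namespace, {})[name] = fn

        # a "string" library registering itself at interpreter startup
        register_function("string", "at", lambda s, i: s[i])

        def do_import(namespace):
            # called when the interpreted program executes `import X`
            for name, fn in REGISTRY[namespace].items():
                SYMBOL_TABLE[namespace + "." + name] = fn

        do_import("string")
        print(SYMBOL_TABLE["string.at"]("abcde", 2))  # c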

    Read the article

  • How to hold a queue of messages and have a group of working threads without polling?

    - by Mark
    I have a workflow that I want to look something like this:

                                                  / Worker 1 \
        =Request Channel= - [Holding Queue|||] -    Worker 2    - =Response Channel=
                                                  \ Worker 3 /

    That is:

    - Requests come in and enter a FIFO queue.
    - Identical workers then pick up tasks from the queue.
    - At any given time, each worker may work on only one task.
    - When a worker is free and the holding queue is non-empty, the worker should immediately pick up another task.
    - When a task is complete, the worker places the result on the response channel.

    I know there are QueueChannels in Spring Integration, but these channels require polling (which seems suboptimal); in particular, if a worker can be busy, I'd like the worker to be busy. I've also considered avoiding the queue altogether and simply letting tasks round-robin to all workers, but a single waiting line is preferable, as some tasks may be completed faster than others. Furthermore, I'd like insight into how many jobs remain (which I can get from the queue) and the ability to cancel all or particular jobs. How can I implement this message-queuing/work-distribution pattern while avoiding polling? Edit: It appears I'm looking for the Message Dispatcher pattern. How can I implement this using Spring/Spring Integration?
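
    Outside Spring terms, the polling disappears as soon as workers block on the queue rather than poll it. A minimal sketch of the pattern using Python's standard library (the real system would sit behind the request/response channels; the squaring task is a stand-in):

        import queue
        import threading

        tasks = queue.Queue()    # FIFO holding queue
        results = queue.Queue()  # response channel

        def worker():
            while True:
                job = tasks.get()        # blocks until work arrives: no polling
                results.put(job * job)   # stand-in for the real task
                tasks.task_done()

        for _ in range(3):               # three identical workers
            threading.Thread(target=worker, daemon=True).start()

        for job in range(10):
            tasks.put(job)               # tasks.qsize() shows jobs remaining

        tasks.join()                     # wait until every task is done
        print(sorted(results.get() for _ in range(10)))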

    Read the article

  • Regex expression is too greedy

    - by alastairs
    I'm writing a regular expression to match data from the IMDb soundtracks data file. My regexes are mostly working, although in places they slurp too much text into my named groups. Take the following regex for example:

        "^ Performed by '?(?<performer>.*)('? \(qv\))?$"

    The performer group captures the string ' (qv) as well as the performer's name. Unfortunately, because the records are not consistently formatted, some performers' names are surrounded by single quotation marks whilst others are not, which means the quotes have to be optional as far as the regex is concerned. I've tried marking the last group as a greedy group using the ?> group specifier, but this appeared to have no effect on the results. I can improve the results by changing the performer group to match a small range of characters, but this reduces my chances of parsing the name out correctly. Furthermore, if I were simply to exclude the apostrophe character, I would then be unable to parse, e.g., band names containing apostrophes, such as Elia's Lonely Friends Band, who performed Run For Your Life featured in Resident Evil: Apocalypse.
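
    The usual fix is to make the capture lazy and keep the optional closing quote and (qv) marker outside the group, so they are matched by the tail instead of being slurped into the name. A quick check in Python, which spells the named group (?P<performer>...) where .NET uses (?<performer>...):

        import re

        pattern = re.compile(r"^ Performed by '?(?P<performer>.*?)'?( \(qv\))?$")

        lines = [" Performed by 'The Beatles' (qv)",
                 " Performed by Elia's Lonely Friends Band"]
        for line in lines:
            print(pattern.match(line).group("performer"))
        # The Beatles
        # Elia's Lonely Friends Band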

    Read the article

  • Random forests for short texts

    - by Jasie
    Hi all, I've been reading about Random Forests (1, 2) because I think it'd be really cool to be able to classify a set of 1,000 sentences into pre-defined categories. I'm wondering if someone can explain the algorithm to me better; I think the papers are a bit dense. Here's the gist from 1:

    Overview: We assume that the user knows about the construction of single classification trees. Random Forests grows many classification trees. To classify a new object from an input vector, put the input vector down each of the trees in the forest. Each tree gives a classification, and we say the tree "votes" for that class. The forest chooses the classification having the most votes (over all the trees in the forest). Each tree is grown as follows:

    - If the number of cases in the training set is N, sample N cases at random - but with replacement - from the original data. This sample will be the training set for growing the tree.
    - If there are M input variables, a number m « M is specified such that at each node, m variables are selected at random out of the M and the best split on these m is used to split the node. The value of m is held constant during the forest growing.
    - Each tree is grown to the largest extent possible. There is no pruning.

    So, does this look right? I'd have N = 1,000 training cases (sentences) and M = 100 variables (let's say there are only 100 unique words across all sentences), so the input vector is a bit vector of length 100 corresponding to each word. I randomly sample N = 1,000 cases at random (with replacement) to build trees from. I pick some small number of input variables m « M, let's say 10, to build a tree off of. Do I build tree nodes randomly, using all m input variables? How many classification trees do I build? Thanks for the help!
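
    That reading matches Breiman's description, with two nuances: the m random variables are re-drawn at every node (the split then uses the best of those m, so the nodes are not built "randomly"), and the number of trees is a free parameter you choose, commonly a few hundred, since extra trees mainly cost time. A toy scikit-learn sketch of the sentence setup described above, with tiny invented data and binary word-presence features:

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_extraction.text import CountVectorizer

        sentences = ["cheap pills online", "meeting at noon",
                     "buy pills cheap", "lunch at noon?"]
        labels = ["spam", "ham", "spam", "ham"]

        vec = CountVectorizer(binary=True)   # M variables: one bit per word
        X = vec.fit_transform(sentences)     # N x M bit matrix

        # each tree gets a bootstrap sample of the N cases; at each node a
        # random subset of the M features is considered (sqrt(M) by default)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
        print(clf.predict(vec.transform(["buy cheap pills"])))  # expected: ['spam']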

    Read the article

  • BlackBerry Deployment Strategies

    - by cagreen
    I'm new to large-scale BB app deployment and I'm looking for some clarification on the various methods of deployment. Please bear with me, as I'm sure there is more to it than my naive view would lead me to believe. My app is very targeted to corporate users and requires a subscription to some additional services before it can be used. In other words, it's not targeted towards the consumer market, so I'm not worried about people not being able to easily find it online. What do I need to be aware of when looking at deployment strategies? Any gotchas? From my understanding my choices are:

    - App World:
      - small upfront vendor fee
      - users can easily search for and find my app
      - billing handled by RIM
      - 4 licensing models (static, single, pool, dynamic), though I'm not sure I've seen enough info on the pool and dynamic models to fully appreciate how they might help me
    - Download from my website:
      - billing is handled by me
      - can I enforce the number of licenses in use within an organization?
      - is this easier or harder for a user?
    - What else am I missing?

    Read the article

  • Best Method For High Data Availability for SQL Server

    - by omatase
    I have a web service that runs 24/7. Periodically it needs to refresh its database with data from another web service. There is a lot of data: tens of thousands of rows. (No, I don't mean this is a lot of data for SQL Server; I'm just trying to point out that I expect it to take some time to come down the pipe from the other web service.) The data refresh can take between 5 and 10 minutes; the actual data-update portion of that is between 1 and 2 minutes. This means the service would be down, for all intents and purposes, whenever consumers request this type of data. I would like to implement a system where the data is always available. The only thing that comes to mind is some type of system where I maintain two separate databases: I populate the inactive one, swapping it to active before populating the other one. I'm not sure I know the best way to do this. My current ideas revolve around either two sets of the schema in a single database (using views to access the active set) or two databases, each with the same schema, with the application rotating between the two. Any suggestions from someone who has done something like this before?

    Read the article

  • regexp target last main li in list

    - by veilig
    I need to target the starting tag of the last top-level LI in a list that may or may not contain sublists in various positions, without using CSS or JavaScript. Is there a simple, elegant regexp that can help with this? I'm no guru with them, but it appears that the need for greedy/non-greedy selectors when I'm selecting all the middle text with (.*) / (.+) changes as nested lists are added and moved around in the list, and this is throwing me off.

        $pattern = '/^(<ul>.*)<li>(.+<\/li><\/ul>)$/';
        $replacement = '$1<li id="lastLi">$3';

    Perhaps there is an easier approach? Converting to XML to target the LI and then converting back? The cases, with TARGET marking the LI I need to match:

    Single element:

        <ul>
            <li>TARGET</li>
        </ul>

    Multiple elements:

        <ul>
            <li>foo</li>
            <li>TARGET</li>
        </ul>

    Nested list before the end:

        <ul>
            <li>
                foo
                <ul>
                    <li>bar</li>
                </ul>
            </li>
            <li>TARGET</li>
        </ul>

    Nested list at the end:

        <ul>
            <li>foo</li>
            <li>
                TARGET
                <ul>
                    <li>bar</li>
                </ul>
            </li>
        </ul>
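
    The parse-then-serialize idea at the end is probably the sturdiest of these options; for well-formed fragments it is only a few lines. A sketch with Python's ElementTree (findall('li') matches only direct children, so nested sublists are ignored automatically):

        import xml.etree.ElementTree as ET

        html = "<ul><li>foo</li><li>TARGET<ul><li>bar</li></ul></li></ul>"
        root = ET.fromstring(html)                   # assumes well-formed markup
        root.findall("li")[-1].set("id", "lastLi")   # last top-level <li> only
        print(ET.tostring(root, encoding="unicode"))
        # <ul><li>foo</li><li id="lastLi">TARGET<ul><li>bar</li></ul></li></ul>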

    Read the article

  • Data structure to build and lookup set of integer ranges

    - by actual
    I have a set of uint32 integers; there may be millions of items in the set. 50-70% of them are consecutive, but in the input stream they appear in unpredictable order. I need to:

    1. Compress this set into ranges to achieve a space-efficient representation. I have already implemented this using a trivial algorithm; since the ranges are computed only once, speed is not important here. After this transformation the number of resulting ranges is typically within 5,000-10,000, and many of them are single-item ranges, of course.
    2. Test membership of some integer; information about the specific range it falls in is not required. This one must be very fast: O(1). I was thinking about minimal perfect hash functions, but they do not play well with ranges. Bitsets are very space-inefficient. Other structures, like binary trees, have O(log n) complexity; worse, their implementations make many conditional jumps that the processor cannot predict well, giving poor performance.

    Is there any data structure or algorithm specialized in integer ranges that solves this task?
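
    One structure worth benchmarking before anything exotic: two flat sorted arrays (range starts and range ends) and a single binary search. It is O(log n) rather than O(1), but over 5,000-10,000 ranges that is roughly 13 comparisons against contiguous memory, with none of a tree's pointer chasing. A sketch in Python:

        import bisect

        # disjoint ranges, sorted by start; single items are one-element ranges
        starts = [1, 10, 25]   # inclusive range starts
        ends   = [5, 15, 25]   # inclusive range ends

        def contains(x):
            i = bisect.bisect_right(starts, x) - 1  # last range starting at or before x
            return i >= 0 and x <= ends[i]

        for x in (3, 7, 25, 26):
            print(x, contains(x))  # True, False, True, False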

    Read the article

  • Planning and coping with deadlines in SCRUM

    - by John
    From Wikipedia: During each "sprint", typically a two to four week period (with the length being decided by the team), the team creates a potentially shippable product increment (for example, working and tested software). The set of features that go into a sprint come from the product "backlog", which is a prioritized set of high-level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. After a sprint is completed, the team demonstrates the use of the software.

    I was reading this and two questions immediately popped into my head:

    1. If a sprint is only a couple of weeks, decided in a single meeting, how can you accurately plan what can be achieved? High-level tasks can't be estimated accurately in my experience, and can easily take double what seems reasonable. As a developer, I hate being pushed into committing to what I can deliver in the next month based on a set of customer requirements; this goes against everything I know about generating reliable estimates rather than having to estimate roughly and then double it!
    2. Since the requirements are supposed to be locked and a deliverable product available at the end, what happens when something does take twice as long? What if the feature is only half done at the end of the sprint?

    The wiki article goes on to talk about sprint planning, where things are broken down into much smaller tasks for estimation (< 1 day), but this happens after the sprint features are already planned and the release agreed, doesn't it? Kind of like a salesman promising something without consulting the developers.

    Read the article

  • Linking Apache to Tomcat with multiple domains.

    - by Royce Thigpen
    Okay, so I've been working on this for a while and have been searching, but so far I have not found any answers that actually answer what I want to know. I'm a little bit at the end of my rope with this one, but I'm hoping I can figure it out sometime soon. I have Apache 2 installed and serving up standard web pages, and I also have it linked to a Tomcat instance for one of the domains it currently supports. However, I want to add another domain to the server via Apache that points to a separate code base from the one I already have. I have been coming at this from several different angles, and I have determined that I just don't know enough about setting up these servers to really do what I want to do. A little information on my setup: I am currently running a single Tomcat 5.5 instance with Apache 2, using mod_jk to connect them. I have a worker in workers.properties whose "host" field points to "localhost" with the correct port for my Tomcat instance, so that all works. In my Tomcat server.xml file, I have a host defined as "localhost" that points at the webapp I am currently serving, with that host set as the defaultHost as well. One thought I had was that I could add a new worker with a different host than "localhost" (e.g. host2) and then define a new host in my server.xml file called "host2" to match it, but after reading around on the internet, it seems the "host" field of the worker must point to a server, and not to a hostname in the Tomcat instance. Is this correct? Again, a simple rundown of what I want: a setup in the Apache/Tomcat combo such that www.domain1.com points at "webapp1" and www.domain2.com points at "webapp2".

    Read the article

  • OpenCV. cvFnName() works, but cv::FunName() doesn't work

    - by Innuendo
    I'm using OpenCV to write a plugin for a simulator. I made a standalone (non-plugin) OpenCV project and it works fine. When I added the OpenCV libs to the plugin project, I added all the libs required. Visual Studio 2010 doesn't highlight any code line with red; everything looks fine and compiles fine. But at run time, the program halts with a runtime error on any cv:: function, for example cv::imread or cv::imwrite. If I replace them with cvLoadImage() and cvSaveImage(), it works fine. Why does this happen? I don't want to rewrite the whole script in the old API style (cvFnName); it would mean changing all Mat objects to IplImages, and so on. UPDATE:

        // preparing template
        ifstream ifile(tmplfilename);
        if ( !FILE_LOADED && ifile ) {
            // loading template file
            Mat tmpl = cv::imread(tmplfilename, 1); // << error occurs here
            FILE_LOADED = true;
        }
        Mat src;
        Bmp2Mat(hDC, hBitmap, src);
        TargetDetector detector(src, tmpl);
        detector.detectTarget();

    If I change it to:

        if ( !FILE_LOADED && ifile ) {
            IplImage* tmpl = 0;
            tmpl = cvLoadImage(tmplfilename, 1); // no error occurs
        }

    then no error occurs. Earlier it displayed some runtime error; now, when I want to copy the exact message, it just crashes the application (the simulator I am writing the plugin for) and displays a window asking whether to kill the process or not. (I can't show the exact message, because I'm using a Russian version of Windows.)

    Read the article
