Search Results

Search found 3758 results on 151 pages for 'efficient'.


  • one table is shared between several websites

    - by sami
    I have a static table that's shared by several websites. By static, I mean that the data is read but never updated by the websites. Currently, all websites are served from the same server but that may change. I want to minimize the need for creating/maintaining this table for each of the websites, so I thought about turning it into an XML file that's stored in a shared library that all websites have access to. The problem is I use an ORM and use foreign key constraints to ensure the integrity of the ids used from that table, so by moving that table out of the MySQL database into an XML file, will this affect the integrity of the ids coming from that table? My table looks like this:

        <table name="entry">
            <column name="id" type="INTEGER" primaryKey="true" autoIncrement="true" />
            <column name="title" type="VARCHAR" size="500" required="true" />
        </table>

    and I use it as a foreign key in other tables:

        <table name="refer">
            <column name="id" type="INTEGER" primaryKey="true" autoIncrement="true" />
            <column name="linkto" type="INTEGER"/>
            <foreign-key foreignTable="entry">
                <reference local="linkto" foreign="id" />
            </foreign-key>
        </table>

    So I'm wondering: if I remove that table from the database, is there a way to retain that referential integrity? And of course, are there any other efficient ways to do the same thing? I just don't want to have to repeat that table for several websites.
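
    One option that keeps the constraint enforceable without duplicating the data is to leave the table in MySQL but in a schema of its own that every site's schema references. A minimal sketch, assuming all sites stay on the same MySQL server and use InnoDB (the schema names shared and site_a are made up):

        CREATE TABLE shared.entry (
            id     INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
            title  VARCHAR(500) NOT NULL
        );

        CREATE TABLE site_a.refer (
            id     INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
            linkto INTEGER,
            FOREIGN KEY (linkto) REFERENCES shared.entry (id)
        );

    Once the table moves out of the database entirely (XML file or another server), the foreign key constraint is gone and the ids can only be validated in application code.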

    Read the article

  • SQL Server 2008 - Keyword search using table Join

    - by Aaron Wagner
    OK, I created a stored procedure that, among other things, searches 5 columns for a particular keyword. To accomplish this, I have the keywords parameter split out by a function and returned as a table. Then I do a LEFT JOIN on that table, using a LIKE constraint. So, I had this working beautifully, and then all of a sudden it stopped working. Now it is returning every row, instead of just the rows it needs. The other caveat is that if the keyword parameter is empty, it should be ignored. Given what's below, is there A) a glaring mistake, or B) a more efficient way to approach this? Here is what I have currently:

        ALTER PROCEDURE [dbo].[usp_getOppsPaged]
            @startRowIndex int,
            @maximumRows int,
            @city varchar(100) = NULL,
            @state char(2) = NULL,
            @zip varchar(10) = NULL,
            @classification varchar(15) = NULL,
            @startDateMin date = NULL,
            @startDateMax date = NULL,
            @endDateMin date = NULL,
            @endDateMax date = NULL,
            @keywords varchar(400) = NULL
        AS
        BEGIN
            SET NOCOUNT ON;
            ;WITH Results_CTE AS
            (
                SELECT opportunities.*, organizations.*,
                       departments.dept_name, departments.dept_address,
                       departments.dept_building_name, departments.dept_suite_num,
                       departments.dept_city, departments.dept_state, departments.dept_zip,
                       departments.dept_international_address, departments.dept_phone,
                       departments.dept_website, departments.dept_gen_list,
                       ROW_NUMBER() OVER (ORDER BY opp_id) AS RowNum
                FROM opportunities
                JOIN departments ON opportunities.dept_id = departments.dept_id
                JOIN organizations ON departments.org_id = organizations.org_id
                LEFT JOIN Split(',', @keywords) AS kw
                    ON (title LIKE '%'+kw.s+'%'
                        OR [description] LIKE '%'+kw.s+'%'
                        OR tasks LIKE '%'+kw.s+'%'
                        OR requirements LIKE '%'+kw.s+'%'
                        OR comments LIKE '%'+kw.s+'%')
                WHERE (
                    (@city IS NOT NULL AND (city LIKE '%'+@city+'%' OR dept_city LIKE '%'+@city+'%' OR org_city LIKE '%'+@city+'%'))
                    OR (@state IS NOT NULL AND ([state] = @state OR dept_state = @state OR org_state = @state))
                    OR (@zip IS NOT NULL AND (zip = @zip OR dept_zip = @zip OR org_zip = @zip))
                    OR (@classification IS NOT NULL AND (classification LIKE '%'+@classification+'%'))
                    OR ((@startDateMin IS NOT NULL AND @startDateMax IS NOT NULL) AND ([start_date] BETWEEN @startDateMin AND @startDateMax))
                    OR ((@endDateMin IS NOT NULL AND @endDateMax IS NOT NULL) AND ([end_date] BETWEEN @endDateMin AND @endDateMax))
                    OR (
                        (@city IS NULL AND @state IS NULL AND @zip IS NULL AND @classification IS NULL
                         AND @startDateMin IS NULL AND @startDateMax IS NULL
                         AND @endDateMin IS NULL AND @endDateMin IS NULL)
                    )
                )
            )
            SELECT *
            FROM Results_CTE
            WHERE RowNum >= @startRowIndex
              AND RowNum < @startRowIndex + @maximumRows;
        END
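
    For the "ignore an empty keyword parameter" requirement, one pattern worth comparing against the LEFT JOIN — only a sketch, reusing the question's own Split function and column names, not a verified fix — is an explicit predicate in the WHERE clause that short-circuits when @keywords is NULL or empty:

        AND (@keywords IS NULL OR @keywords = ''
             OR EXISTS (SELECT 1
                        FROM Split(',', @keywords) AS kw
                        WHERE title LIKE '%' + kw.s + '%'
                           OR [description] LIKE '%' + kw.s + '%'
                           OR tasks LIKE '%' + kw.s + '%'))

    Note that a LEFT JOIN by itself never removes rows: a row with no keyword match is still kept (with kw.s as NULL), which is consistent with the "every row comes back" symptom described above.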

    Read the article

  • Determining polygon intersection and containment

    - by Victor Liu
    I have a set of simple (no holes, no self-intersections) polygons, and I need to check that they don't intersect each other (one can be entirely contained in another; that is okay). I can check this by simply checking the per-vertex inside-ness of one polygon versus other polygons. I also need to determine the containment tree, which is the set of relationships that say which polygon contains any given polygon. Since no polygon can intersect any other, then any contained polygon has a unique container; the "next-bigger" one. In other words, if A contains B contains C, then A is the parent of B, and B is the parent of C, and we don't consider A the parent of C. The question: How do I efficiently determine the containment relationships and check the non-intersection criterion? I ask this as one question because maybe a combined algorithm is more efficient than solving each problem separately. The algorithm should take as input a list of polygons, given by a list of their vertices. It should produce a boolean B indicating if none of the polygons intersect any other polygon, and also if B = true, a list of pairs (P, C) where polygon P is the parent of child C. This is not homework. This is for a hobby project I am working on.
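
    A minimal sketch of the per-vertex inside-ness test the question relies on (ray casting), assuming polygons arrive as lists of (x, y) tuples. It is only one building block; edge-to-edge intersection tests and building the containment tree are separate steps:

        def point_in_polygon(pt, poly):
            # Ray casting: count how many polygon edges a horizontal ray from pt
            # crosses; an odd count means pt is inside the simple polygon poly.
            x, y = pt
            inside = False
            n = len(poly)
            for i in range(n):
                x1, y1 = poly[i]
                x2, y2 = poly[(i + 1) % n]
                if (y1 > y) != (y2 > y):                        # edge straddles the ray's y
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x_cross > x:                             # crossing lies to the right of pt
                        inside = not inside
            return inside

        print(point_in_polygon((1, 1), [(0, 0), (4, 0), (4, 4), (0, 4)]))   # True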

    Read the article

  • how to synchronize database table and directory with php

    - by twmulloy
    Hello, I have a directory with files and a database table with what should be the same files. I would like to be able to synchronize the database table with the directory. What would be the most efficient way to do this, or would I realistically only be able to do this in a brute-force manner? Here's my approach:
    1. Retrieve all of the files in the directory as an array.
    2. Retrieve all of the filenames in the database table as an array.
    3. Loop through the file values in the directory array and use in_array() on the database table array to verify the filename is in that array; if not, start building an array of the missing filenames, then run a db query to add each missing file row to the database table.
    4. Loop through the database table array and use in_array() on the directory array; anything not found in the directory array will just be deleted from the table.
    Is there a better way to go about this? Or something better for this in PHP than in_array()?
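
    One way to express steps 3 and 4 without per-element in_array() calls is array_diff(). A sketch only, assuming a PDO connection and made-up table/column names ("files", "filename"):

        <?php
        // filenames on disk and filenames recorded in the table
        $dirFiles = array_diff(scandir('/path/to/dir'), array('.', '..'));
        $dbFiles  = $pdo->query('SELECT filename FROM files')->fetchAll(PDO::FETCH_COLUMN);

        // present on disk but missing from the table -> rows to insert
        $toInsert = array_diff($dirFiles, $dbFiles);

        // in the table but no longer on disk -> rows to delete
        $toDelete = array_diff($dbFiles, $dirFiles);

        $ins = $pdo->prepare('INSERT INTO files (filename) VALUES (?)');
        foreach ($toInsert as $name) {
            $ins->execute(array($name));
        }

        $del = $pdo->prepare('DELETE FROM files WHERE filename = ?');
        foreach ($toDelete as $name) {
            $del->execute(array($name));
        }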

    Read the article

  • help me reason about F# threads

    - by Kevin Cantu
    In goofing around with some F# (via MonoDevelop), I have written a routine which lists files in a directory with one thread:

        let rec loop (path:string) =
            Array.append
                ( path |> Directory.GetFiles )
                ( path
                  |> Directory.GetDirectories
                  |> Array.map loop
                  |> Array.concat )

    And then an asynchronous version of it:

        let rec loopPar (path:string) =
            Array.append
                ( path |> Directory.GetFiles )
                ( let paths = path |> Directory.GetDirectories
                  if paths <> [||] then
                      [| for p in paths -> async { return (loopPar p) } |]
                      |> Async.Parallel
                      |> Async.RunSynchronously
                      |> Array.concat
                  else
                      [||] )

    On small directories, the asynchronous version works fine. On bigger directories (e.g. many thousands of directories and files), the asynchronous version seems to hang. What am I missing? I know that creating thousands of threads is never going to be the most efficient solution -- I only have 8 CPUs -- but I am baffled that for larger directories the asynchronous function just doesn't respond (even after a half hour). It doesn't visibly fail, though, which baffles me. Is there a thread pool which is exhausted? How do these threads actually work?

    Read the article

  • Two radically different queries against 4 mil records execute in the same time - one uses brute force.

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop. The first query uses a brute force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute force to be slow and the indexed one to be fast. Not so. And I have experimented heavily with indexes until I got the best speed. Further, the brute force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you.

        Brute-force query: (execution plan image in the original post)
        Index-based exception query: (execution plan image in the original post)

    My question is, based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.

    Read the article

  • Are there any libraries for parsing "number expressions" like 1,2-9,33- in Java

    - by mihi
    Hi, I don't think it is hard, just tedious to write: some small free (as in beer) library where I can put in a String like 1,2-9,33- and it can tell me whether a given number matches that expression. Just like most programs have in their print range dialogs. Special functions for matching odd or even numbers only, or matching every number that is 2 mod 5 (or something like that), would be nice, but not needed. The only operation I have to perform on this list is whether the range contains a given (nonnegative) integer value; more operations like max/min value (if they exist) or an iterator would be nice, of course. What would be needed is that it does not occupy lots of RAM if anyone enters 1-10000000 but the only number I will ever query is 12345 :-) (To implement it, I would parse the list into several (min/max/value/mod) pairs, like 1,10,0,1 for 1-10, or 11,33,1,2 for 1-33odd, or 12,62,2,10 for 12-62/10 (i.e. 12, 22, 32, ..., 62), and then check each number against all the intervals. Open intervals by using Integer.MaxValue etc. If there are no libs, any ideas to do it better/more efficiently?)
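
    If no library turns up, here is a sketch of the hand-rolled version described in the parentheses above, covering single values and open-ended ranges only (the class name is made up, and the odd/even/mod handling is left out):

        import java.util.ArrayList;
        import java.util.List;

        // Parses expressions like "1,2-9,33-" into (min, max) pairs; open ranges
        // are capped at Integer.MAX_VALUE, so memory stays proportional to the
        // number of ranges, not to the numbers they cover.
        public class RangeExpression {
            private final List<long[]> ranges = new ArrayList<>();

            public RangeExpression(String expr) {
                for (String part : expr.split(",")) {
                    part = part.trim();
                    if (part.isEmpty()) continue;
                    int dash = part.indexOf('-');
                    if (dash < 0) {                                 // "7"   -> [7, 7]
                        long v = Long.parseLong(part);
                        ranges.add(new long[] { v, v });
                    } else {
                        String lo = part.substring(0, dash);
                        String hi = part.substring(dash + 1);
                        long min = lo.isEmpty() ? 0 : Long.parseLong(lo);                  // "-9"  -> [0, 9]
                        long max = hi.isEmpty() ? Integer.MAX_VALUE : Long.parseLong(hi);  // "33-" -> [33, MAX]
                        ranges.add(new long[] { min, max });
                    }
                }
            }

            public boolean contains(long n) {
                for (long[] r : ranges) {
                    if (n >= r[0] && n <= r[1]) return true;
                }
                return false;
            }
        }

        // usage: new RangeExpression("1,2-9,33-").contains(12345)  ->  true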

    Read the article

  • iPad SQLite Push and Pull Data from external MS SQL Server DB

    - by MattyD
    This carries on from my previous post (http://stackoverflow.com/questions/4182664/ipad-app-pull-and-push-relational-data). My plan is that when the iPad application starts I am going to pull data (config data, i.e. Departments, Types, etc. -- relational data that is used across the system) from a web-hosted MS SQL Server DB via a webservice and populate it into a SQLite DB on the iPad. Then when I load a listing I will pull the data over the line again via a webservice and populate it into the SQLite db on the iPad (then just run select commands to populate the listing). My questions are:
    1. What is the most efficient way to transfer data across the line via the web? Everyone seems to do it a different way. My idea is that I will have a webservice for each type of data pull (e.g. RetrieveContactListing) that will query the db and then convert that data into "something" to send across the line. My question really is what is the "something" that it should be converting into?
    2. Everyone talks about OData services. Is this suited for applications where complex reads and writes are needed?
    I've created a simple iPhone app before that talked to a SQL Server db (I just sent my own structured xml across the line), but now with this app the data calls are going to be a lot larger, so efficiency is key.

    Read the article

  • Parsing multiple files at a time in Perl

    - by sfactor
    I have a large data set (around 90GB) to work with. There are data files (tab delimited) for each hour of each day and I need to perform operations on the entire data set. For example, get the share of OSes which are given in one of the columns. I tried merging all the files into one huge file and performing the simple count operation but it was simply too huge for the server memory. So, I guess I need to perform the operation one file at a time and then add up in the end. I am new to Perl and am especially naive about the performance issues. How do I do such operations in a case like this? As an example, two columns of the file are:

        ID  OS
        1   Windows
        2   Linux
        3   Windows
        4   Windows

    Let's do something simple: counting the share of the OSes in the data set. So, each .txt file has millions of these lines and there are many such files. What would be the most efficient way to operate on the entire set of files?
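
    A sketch of the one-file-at-a-time approach, reading line by line so nothing large sits in memory; the glob pattern, the header row, and the column index are assumptions about the real layout:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my %os_count;

        foreach my $file (glob "data/*.txt") {          # hypothetical file layout
            open my $fh, '<', $file or die "Cannot open $file: $!";
            while (my $line = <$fh>) {
                chomp $line;
                my @fields = split /\t/, $line;
                next if $fields[0] eq 'ID';             # skip header rows
                $os_count{ $fields[1] }++;              # OS assumed to be column 2
            }
            close $fh;
        }

        my $total = 0;
        $total += $_ for values %os_count;
        printf "%-10s %10d (%.2f%%)\n", $_, $os_count{$_}, 100 * $os_count{$_} / $total
            for sort keys %os_count;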

    Read the article

  • Data structure to build and lookup set of integer ranges

    - by actual
    I have a set of uint32 integers; there may be millions of items in the set. 50-70% of them are consecutive, but in the input stream they appear in unpredictable order. I need to:
    1. Compress this set into ranges to achieve a space-efficient representation. I have already implemented this using a trivial algorithm; since the ranges are computed only once, speed is not important here. After this transformation the number of resulting ranges is typically within 5,000-10,000, many of them single-item, of course.
    2. Test membership of some integer; information about the specific range in the set is not required. This one must be very fast -- O(1). I was thinking about minimal perfect hash functions, but they do not play well with ranges. Bitsets are very space inefficient. Other structures, like binary trees, have O(log n) complexity; worse, their implementations make many conditional jumps that the processor cannot predict well, giving poor performance.
    Is there any data structure or algorithm specialized in integer ranges to solve this task?
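
    For reference, the baseline any specialised structure has to beat -- binary search over the sorted, merged ranges, which is the O(log n) option the question wants to improve on, not the O(1) answer -- looks like this sketch (assuming the compression step already produced sorted, non-overlapping, inclusive pairs):

        import bisect

        # example output of the compression step
        ranges = [(3, 9), (15, 15), (20, 41)]
        starts = [start for start, _ in ranges]

        def contains(value):
            """True if value falls inside one of the merged ranges (O(log n))."""
            i = bisect.bisect_right(starts, value) - 1   # last range starting <= value
            return i >= 0 and value <= ranges[i][1]

        print(contains(25), contains(10))   # True False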

    Read the article

  • design an extendable database model

    - by wishi_
    Hi! Currently I'm doing a project whose specifications are unclear - well, who doesn't? I wonder what's the best development strategy to design a DB that's going to be extended sooner or later with additional tables and relations. I want to include "changeability". My main concern is that I want to apply design patterns (it's a university project) and I want to separate the constant factors from those that change by choosing appropriate design patterns - in my case MVC and a set of sub-patterns at model level. When it comes to the DB, however, I may have to redesign my model in my MVC approach, because my domain model at a later stage may require a different set of classes representing the DB tables. I use Hibernate as an abstraction layer between DB and application. Would you start with a very minimal DB, just a few tables and relations? And what if I want an efficient DB, too? I wonder what strategies are applied in the real world. Stakeholder analysis, for example, isn't a sufficient planning solution when it comes to changing requirements. I think - at a DB level - my design pattern ends. So there's a breach whose impact I'd like to minimize with a smart strategy.

    Read the article

  • C#. How to pass message from unsafe callback to managed code?

    - by maxima120
    Is there a simple example of how to pass messages from an unsafe callback to managed code? I have a proprietary dll which receives some messages packed in structs, and everything comes in through a callback function. The example of usage is as follows, but it calls unsafe code too. I want to pass the messages into my application, which is all managed code.

    P.S. I have no experience in interop or unsafe code. I used to develop in C++ 8 years ago but remember very little from those nightmarish times :)

    P.P.S. The application is loaded as hell; the original devs claim it processes 2 million messages per second. I need the most efficient solution.

        static unsafe int OnCoreCallback(IntPtr pSys, IntPtr pMsg)
        {
            // Alias structure pointers to the pointers passed in.
            CoreSystem* pCoreSys = (CoreSystem*)pSys;
            CoreMessage* pCoreMsg = (CoreMessage*)pMsg;

            // message handler function.
            if (pCoreMsg->MessageType == Core.MSG_STATUS)
                OnCoreStatus(pCoreSys, pCoreMsg);

            // Continue running
            return (int)Core.CALLBACKRETURN_CONTINUE;
        }

    Thank you.
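
    One common pattern, sketched here with no claim to being the right fit for this dll: copy the few plain values you need into a managed object inside the callback and hand it off through a thread-safe queue. CoreMessage and Core come from the question; every other name below is made up, and at 2 million messages per second you would batch messages rather than enqueue one object each:

        using System.Collections.Concurrent;

        // Hypothetical managed mirror of the fields the application actually needs;
        // only plain values are copied out, never the unmanaged pointers themselves.
        public sealed class CoreStatusMessage
        {
            public int MessageType;
        }

        public static class MessagePump
        {
            public static readonly ConcurrentQueue<CoreStatusMessage> Queue =
                new ConcurrentQueue<CoreStatusMessage>();
        }

        static unsafe int OnCoreCallback(IntPtr pSys, IntPtr pMsg)
        {
            CoreMessage* pCoreMsg = (CoreMessage*)pMsg;

            // Copy what is needed into a managed object before the callback returns,
            // then let a managed consumer thread drain the queue at its own pace.
            MessagePump.Queue.Enqueue(new CoreStatusMessage { MessageType = pCoreMsg->MessageType });

            return (int)Core.CALLBACKRETURN_CONTINUE;
        }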

    Read the article

  • C++ - passing references to boost::shared_ptr

    - by abigagli
    If I have a function that needs to work with a shared_ptr, wouldn't it be more efficient to pass it a reference to it (so as to avoid copying the shared_ptr object)? What are the possible bad side effects? I envision two possible cases:

    1) inside the function a copy is made of the argument, like in

        ClassA::take_copy_of_sp(boost::shared_ptr<foo> &sp)
        {
            ...
            m_sp_member = sp; //This will copy the object, incrementing refcount
            ...
        }

    2) inside the function the argument is only used, like in

        Class::only_work_with_sp(boost::shared_ptr<foo> &sp) //Again, no copy here
        {
            ...
            sp->do_something();
            ...
        }

    In neither case can I see a good reason to pass the boost::shared_ptr by value instead of by reference. Passing by value would only "temporarily" increment the reference count due to the copying, and then decrement it when exiting the function scope. Am I overlooking something? Andrea.

    EDIT: Just to clarify, after reading several answers: I perfectly agree on the premature-optimization concerns, and I always try to first-profile-then-work-on-the-hotspots. My question was more from a purely technical code-point-of-view, if you know what I mean.

    Read the article

  • How do I detect proximity of the mouse pointer to a line in Flex?

    - by Hanno Fietz
    I'm working on a charting UI in Flex. One of the features I want to implement is "snapping" of the mouse pointer to the data points in the diagram. I.e., if the user hovers the mouse pointer over a line diagram and gets close to a data point, I want the pointer to move to the exact coordinates and show a marker (illustrated with a mock-up image in the original post). Currently, the lines are drawn on a Shape, using the Graphics API. The Shape is a child DisplayObject of a custom UIComponent subclass with the exact same dimensions. This means I already get mouseOver events on the parent of the diagram's canvas. Now I need a way to detect if the pointer is close to one of the data points. I.e. I need an answer to the question "Which data points lie within a radius of x pixels from my current position and which of them is closest?" upon each move of the mouse. I can think of the following possibilities:
    - draw the lines not as simple lines in the graphics API, but as more advanced objects that can have their own mouseOver events. However, I want the snapping to trigger before the mouse is actually over the line.
    - check the original data for possible candidates upon each mouse movement. Using binary search, I might be able to reduce the number of items I have to compare sufficiently.
    - prepare some kind of new data structure from the raw data that makes the above search more efficient. I don't know what that would look like.
    I'm guessing this is a pretty standard problem for a number of applications, but probably the actual code usually is inside of some framework. Is there anything I can read about this topic?

    Read the article

  • Titanium TableViewRow classname with custom rows

    - by pancake
    I would like to know in what way the 'className' property of a Ti.UI.TableViewRow helps when creating custom rows. For example, I populate a tableview with custom rows in the following way:

        function populateTableView(tableView, data) {
            var rows = [];
            var row;
            var title, image;
            var i;
            for (i = 0; i < data.length; i++) {
                title = Ti.UI.createLabel({
                    text : data[i].title,
                    width : 100,
                    height: 30,
                    top: 5,
                    left: 25
                });
                image = Ti.UI.createImage({
                    image : 'some_image.png',
                    width: 30,
                    height: 30,
                    top: 5,
                    left: 5
                });
                /* and, like, 5+ more views or whatever */
                row = Ti.UI.createTableViewRow();
                row.add(titleLabel);
                row.add(image);
                rows.push(row);
            }
            tableView.setData(rows);
        }

    Of course, this example of a "custom" row is easily created using the standard title and image properties of the TableViewRow, but that isn't the point. How is the allocation of new labels, image views and other child views of a table view prevented in favour of their reuse? I know in iOS this is achieved by using the method -[UITableView dequeueReusableCellWithIdentifier:] to fetch a row object from a 'reservoir' (so 'className' is 'identifier' here) that isn't currently being used for displaying data, but already has the needed child views laid out correctly in it, thus only requiring to update the data contained within (text, image data, etc). As this system is so unbelievably simple, I have a lot of trouble believing the method employed by the Titanium API does not support this. After reading through the API and searching the web, I do however suspect this is the case. The 'className' property is recommended as an easy way to make table views more efficient in Titanium, but its relation to custom table view rows is not explained in any way. If anyone could clarify this matter for me, I would be very grateful.

    Read the article

  • Deserialize xml which uses attribute name/value pairs

    - by Bodyloss
    My application receives a constant stream of XML files which are more or less a direct copy of the database record:

        <record type="update">
            <field name="id">987654321</field>
            <field name="user_id">4321</field>
            <field name="updated">2011-11-24 13:43:23</field>
        </record>

    And I need to deserialize this into a class which provides nullable properties for all columns:

        class Record
        {
            public long? Id { get; set; }
            public long? UserId { get; set; }
            public DateTime? Updated { get; set; }
        }

    I just can't seem to work out a method of doing this without having to parse the XML file manually and switch on the field's name attribute to store the values. Is there a way this can be achieved quickly using an XmlSerializer? And if not, is there a more efficient way of parsing it manually? Regards and thanks. My main problem is that the field's name attribute needs to map to a property name, with the property's value taken from the contents of the <field>..</field> element.
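
    A sketch of the "parse it manually" route with LINQ to XML rather than XmlSerializer; the field names are the ones shown in the sample record above, and the Record class is the one from the question:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        static class RecordParser
        {
            public static Record Parse(string xml)
            {
                // name attribute -> element text, e.g. "id" -> "987654321"
                var fields = XElement.Parse(xml)
                    .Elements("field")
                    .ToDictionary(f => (string)f.Attribute("name"), f => f.Value);

                return new Record
                {
                    Id      = fields.TryGetValue("id", out var id) ? long.Parse(id) : (long?)null,
                    UserId  = fields.TryGetValue("user_id", out var uid) ? long.Parse(uid) : (long?)null,
                    Updated = fields.TryGetValue("updated", out var up) ? DateTime.Parse(up) : (DateTime?)null
                };
            }
        }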

    Read the article

  • How do people handle foreign keys on clients when synchronizing to master db

    - by excsm
    Hi, I'm writing an application with offline support, i.e., browser/mobile clients sync commands to the master db every so often. I'm using UUIDs on both client and server side. When syncing up to the server, the server will return a map of local uuids (luid) to server uuids (suid). Upon receiving this map, clients update their records' suid attributes with the appropriate values. However, say a client record, e.g. a todo, has an attribute 'list_id' which holds the foreign key to the todo's list record. I use luids in foreign keys on clients. However, when that attribute is sent over to the server, it would dirty the server db with luids rather than the suid the server is using. My current solution is for the master server to keep a record of the mappings of luids to suids (per client id) and, for each foreign key in a command, look up the suid for that particular client and use the suid instead. I'm wondering whether others have come across this problem and if so how they have solved it? Is there a more efficient, simpler way? I took a look at this question "Synchronizing one or more databases with a master database - Foreign keys (5)" and someone seemed to suggest my current solution as one option, composite keys using suids and autoincrementing sequences, and another option using -ve ids for client ids and then updating all negative ids with the suids. Both of these other options seem like a lot more work. Thanks, Saimon
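
    For concreteness, a sketch of the mapping table described above (table and column names are made up; the lookup runs on the server as each synced command's foreign keys are rewritten):

        -- one row per (client, local uuid) pair seen so far
        CREATE TABLE uuid_map (
            client_id CHAR(36) NOT NULL,
            luid      CHAR(36) NOT NULL,
            suid      CHAR(36) NOT NULL,
            PRIMARY KEY (client_id, luid)
        );

        -- rewrite an incoming foreign key (e.g. a todo's list_id) before applying it
        SELECT suid
        FROM uuid_map
        WHERE client_id = :client_id
          AND luid = :list_id;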

    Read the article

  • Is There a Better Way to Feed Different Parameters into Functions with If-Statements?

    - by FlowofSoul
    I've been teaching myself Python for a little while now, and I've never programmed before. I just wrote a basic backup program that writes out the progress of each individual file while it is copying. I wrote a function that determines buffer size so that smaller files are copied with a smaller buffer, and bigger files are copied with a bigger buffer. The way I have the code set up now doesn't seem very efficient, as there is an if statement that then leads to further if statements, creating four branches that all just call the same function with different parameters.

        import os
        import sys

        def smartcopy(filestocopy, dest_path, show_progress = False):
            """Determines what buffer size to use with copy()
            Setting show_progress to True calls back display_progress()"""
            #filestocopy is a list of dictionaries for the files needed to be copied
            #dictionaries are used as the fullpath, st_mtime, and size are needed
            if len(filestocopy.keys()) == 0:
                return None
            #Determines average file size for which buffer to use
            average_size = 0
            for key in filestocopy.keys():
                average_size += int(filestocopy[key]['size'])
            average_size = average_size/len(filestocopy.keys())
            #Smaller buffer for smaller files
            if average_size < 1024*10000:
                #Buffer sizes determined by informal tests on my laptop
                if show_progress:
                    for key in filestocopy.keys():
                        #dest_path+key is the destination path, as the key is the relative path
                        #and the dest_path is the top level folder
                        copy(filestocopy[key]['fullpath'], dest_path+key,
                             callback = lambda pos, total: display_progress(pos, total, key))
                else:
                    for key in filestocopy.keys():
                        copy(filestocopy[key]['fullpath'], dest_path+key, callback = None)
            #Bigger buffer for bigger files
            else:
                if show_progress:
                    for key in filestocopy.keys():
                        copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600,
                             callback = lambda pos, total: display_progress(pos, total, key))
                else:
                    for key in filestocopy.keys():
                        copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600)

        def display_progress(pos, total, filename):
            percent = round(float(pos)/float(total)*100,2)
            if percent <= 100:
                sys.stdout.write(filename + ' - ' + str(percent)+'% \r')
            else:
                percent = 100
                sys.stdout.write(filename + ' - Completed \n')

    Is there a better way to accomplish what I'm doing? Sorry if the code is commented poorly or hard to follow. I didn't want to ask someone to read through all 120 lines of my poorly written code, so I just isolated the two functions. Thanks for any help.
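
    One way to collapse the four branches -- a sketch only, reusing the copy() and display_progress() helpers above and assuming copy() accepts the same arguments -- is to decide the buffer size and the callback once, then make a single call per file:

        def smartcopy(filestocopy, dest_path, show_progress=False):
            if not filestocopy:
                return None

            average_size = sum(int(v['size']) for v in filestocopy.values()) / len(filestocopy)
            # None means "let copy() use its default (small) buffer", as in the original
            buffer_size = None if average_size < 1024 * 10000 else 1024 * 2600

            for key, info in filestocopy.items():
                callback = None
                if show_progress:
                    # name=key pins the current filename for this file's callback
                    callback = lambda pos, total, name=key: display_progress(pos, total, name)
                if buffer_size is None:
                    copy(info['fullpath'], dest_path + key, callback=callback)
                else:
                    copy(info['fullpath'], dest_path + key, buffer_size, callback=callback)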

    Read the article

  • Bulletin board - Database optimisation

    - by andrew
    This question is a follow-on from this Question.

    The project and problem
    The project I am currently working on is a bulletin board for a large non-profit organisation. The bulletin board will be used to allow inter-office communication within the organisation. I am building the application and have been having trouble extracting the results that I need from my database because I don't think it is properly normalized and because of limitations in my knowledge of relational database theory and MySQL. I would appreciate input into the design of the board in general and, in particular, ways that the database structure can be improved to facilitate efficient queries and help me develop this application and future applications faster.

    Business Logic
    The bulletin board will be used in the following way.

    Posting bulletins and responses to bulletins
    Employees or 'users' in offices around the country will be able to post messages to the bulletin board. Bulletins must be posted to a location and categorised - I'll call these "bulletins". Users will be able to post any number of replies to any one bulletin and users will be able to reply to their own bulletin - I'll call these 'replies'.

    Rating bulletins and replies
    Users will be able to either 'like' or 'dislike' a bulletin or a reply, and the total number of likes or dislikes will be shown for each bulletin or reply.

    Viewing the bulletin board and responses
    Bulletins can be displayed chronologically. Users can sort bulletins chronologically or chronologically by the latest reply to that bulletin (let me know if you need more explanation). When a particular bulletin is selected, replies to that bulletin will be displayed chronologically.

    @PerformanceDBA - edited 10:34 EST 28/12/10
    I have begun implementing the data model. I assume that the 6th data model is the physical model because it contains the associative tables. I am going to post any questions that I have below. I will put up a database dump once I am done. I will then put up a list of all the queries that I need to run on the database and begin writing them. I hope you had a good Christmas. I'm in Canada and there's snow!

    Implementation of Physical model (data model diagram in the original post)

    Read the article

  • C# Type conversion between two similar Datatable objects

    - by Ali
    I have a .NET project with sync framework and two separate DataSets for MS SQL and Compact SQL. In my base class I have a generic DataTable object. In my derived classes I assign a typed DataTable to the generic object based on whether the application is operating online or offline, for example:

        if (online)
            _dataTable = new MSSQLDataSet.Customer;
        else
            _dataTable = new CompactSQLDataSet.Customer;

    Now everywhere in my code I have to check and do a cast based on the current network mode, like this:

        public void changeCustomerID(int ID)
        {
            if (online)
                ((MSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
            else
                ((CompactMSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
        }

    But I don't think this is very efficient, and I believe it can be done in a smarter way, using only one line of code, by dynamically getting the type of _dataTable at run time. My problem is at design time: in order to access DataTable properties such as "CustomerID", it has to be cast to either MSSQLDataSet.CustomerDataTable or CompactMSSQLDataSet.CustomerDataTable. Is there a way to have a function or an operator to convert the _dataTable to its runtime type but still be able to use its design-time properties, which are the same between the two types? Something like:

        ((aType)_dataTable)[i].CustomerID = value;
        // or
        GetRuntimeType(_dataTable)[i].CustomerID = value;

    Read the article

  • Invoking a method overloaded where all arguments implement the same interface

    - by double07
    Hello, My starting point is the following: - I have a method, transform, which I overloaded to behave differently depending on the type of arguments that are passed in (see transform(A a1, A a2) and transform(A a1, B b) in my example below) - All these arguments implement the same interface, X I would like to apply that transform method on various objects all implementing the X interface. What I came up with was to implement transform(X x1, X x2), which checks for the instance of each object before applying the relevant variant of my transform. Though it works, the code seems ugly and I am also concerned of the performance overhead for evaluating these various instanceof and casting. Is that transform the best I can do in Java or is there a more elegant and/or efficient way of achieving the same behavior? Below is a trivial, working example printing out BA. I am looking for examples on how to improve that code. In my real code, I have naturally more implementations of 'transform' and none are trivial like below. public class A implements X { } public class B implements X { } interface X { } public A transform(A a1, A a2) { System.out.print("A"); return a2; } public A transform(A a1, B b) { System.out.print("B"); return a1; } // Isn't there something better than the code below??? public X transform(X x1, X x2) { if ((x1 instanceof A) && (x2 instanceof A)) { return transform((A) x1, (A) x2); } else if ((x1 instanceof A) && (x2 instanceof B)) { return transform((A) x1, (B) x2); } else { throw new RuntimeException("Transform not implemented for " + x1.getClass() + "," + x2.getClass()); } } @Test public void trivial() { X x1 = new A(); X x2 = new B(); X result = transform(x1, x2); transform(x1, result); }

    Read the article

  • Refactoring - Speed increase

    - by Michael G
    How can I make this function more efficient? It's currently running at 6 - 45 seconds. I've run the dotTrace profiler on this specific method, and its total time is anywhere between 6,000 ms and 45,000 ms. The majority of the time is spent on the "MoveNext" and "GetEnumerator" calls. An example of the times:

        71.55% CreateTableFromReportDataColumns - 18,533 ms - 190 calls
        -- 55.71% MoveNext - 14,422 ms - 10,775 calls

    What can I do to speed this method up? It gets called a lot, and the seconds add up:

        private static DataTable CreateTableFromReportDataColumns(Report report)
        {
            DataTable table = new DataTable();
            HashSet<String> colsToAdd = new HashSet<String> { "DataStream" };
            foreach (ReportData reportData in report.ReportDatas)
            {
                IEnumerable<string> cols = reportData.ReportDataColumns
                    .Where(c => !String.IsNullOrEmpty(c.Name))
                    .Select(x => x.Name)
                    .Distinct();
                foreach (var s in cols)
                {
                    if (!String.IsNullOrEmpty(s))
                        colsToAdd.Add(s);
                }
            }
            foreach (string col in colsToAdd)
            {
                table.Columns.Add(col);
            }
            return table;
        }
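
    For comparison, a sketch of one possible tightening with the same observable behaviour: flatten the column names with SelectMany and let the HashSet do the de-duplication, dropping the per-ReportData Distinct() pass. Type names are the ones from the question, and whether this actually changes the timings would have to be measured:

        private static DataTable CreateTableFromReportDataColumns(Report report)
        {
            var table = new DataTable();
            var colsToAdd = new HashSet<string> { "DataStream" };

            colsToAdd.UnionWith(
                report.ReportDatas
                      .SelectMany(rd => rd.ReportDataColumns)
                      .Where(c => !string.IsNullOrEmpty(c.Name))
                      .Select(c => c.Name));

            foreach (string col in colsToAdd)
                table.Columns.Add(col);

            return table;
        }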

    Read the article

  • N-gram split function for string similarity comparison

    - by Michael
    As part of an exercise to better understand F#, which I am currently learning, I wrote a function to split a given string into n-grams.
    1) I would like to receive feedback about my function: can this be written more simply or in a more efficient way?
    2) My overall goal is to write a function that returns string similarity (on a 0.0 .. 1.0 scale) based on n-gram similarity. Does this approach work well for short string comparisons, or can this method reliably be used to compare large strings (like articles, for example)?
    3) I am aware of the fact that n-gram comparisons ignore the context of the two strings. What method would you suggest to accomplish my goal?

        //s:string - target string to split into n-grams
        //n:int - n-gram size to split string into
        let ngram_split (s:string, n:int) =
            let ngram_count = s.Length - (s.Length % n)
            let ngram_list =
                List.init ngram_count (fun i ->
                    if( i + n >= s.Length ) then
                        s.Substring(i, s.Length - i) + String.init ((i + n) - s.Length) (fun i -> "#")
                    else
                        s.Substring(i, n))
            let ngram_array_unique =
                ngram_list
                |> Seq.ofList
                |> Seq.distinct
                |> Array.ofSeq
            //produce tuples of ngrams (ngram string, how many occurrences in original string)
            Seq.init ngram_array_unique.Length (fun i ->
                (ngram_array_unique.[i],
                 ngram_list |> List.filter(fun item -> item = ngram_array_unique.[i]) |> List.length))

    Read the article

  • Fast search of a 2 dimensional array

    - by Tim
    I need a method of quickly searching a large 2 dimensional array. I extract the array from Excel, so one dimension represents the rows and the second the columns. I wish to obtain a list of the rows where the columns match certain criteria. I need to know the row number (or index of the array). For example, if I extract a range from Excel, I may need to find all rows where column A = "dog" and column B = 7 and column J "a". I only know which columns and which value to find at run time, so I can't hard-code the column index. I could use a simple loop, but is this efficient? I need to run it several thousand times, searching for different criteria each time.

        For r As Integer = 0 To UBound(myArray, 0) - 1
            match = True
            For c = 0 To UBound(myArray, 1) - 1
                If Not doesValueMeetCriteria(myArray(r, c)) Then
                    match = False
                    Exit For
                End If
            Next
            If match Then addRowToMatchedRows(r)
        Next

    The doesValueMeetCriteria function is a simple function that checks the value of the array element against the query requirement, e.g. column A = dog etc. Is it more efficient to create a DataTable from the array and use the .Select method? Can I use LINQ in some way? Perhaps some form of dictionary or hashtable? Or is the simple loop the most efficient? Your suggestions are most welcome.

    Read the article

  • Internal loop only runs once, containing loop runs endlessly

    - by Mark
    Noob question, I'm afraid. I have a loop that runs and rotates the hand of a clock, and an internal loop that checks whether the angle of the hand is 90, 180, 270 or 360. At these 4 angles the corresponding div is displayed and its siblings removed. The hand loops and loops eternally, which is what I want, but the angle check only runs the loop once through the whole 360. As the hand passes through the angles it is correctly displaying and removing divs, but it doesn't continue after the first revolution of the clock. I've obviously messed up somewhere and there is bound to be a more efficient way of doing all this. I am using jQueryRotate.js for my rotations. Thanks for your time.

        jQuery(document).ready(function(){
            var angle = 0;
            setInterval(function(){
                jQuery("#hand").rotate(angle);
                function movehand(){
                    if (angle == 90) {
                        jQuery("#intervention").fadeIn().css("display","block").siblings().css("display","none");
                    } else if (angle == 180) {
                        jQuery("#management").fadeIn().css("display","block").siblings().css("display","none");
                    } else if (angle == 270) {
                        jQuery("#prevention").fadeIn().css("display","block").siblings().css("display","none");
                    } else if (angle == 360) {
                        jQuery("#reaction").fadeIn().css("display","block").siblings().css("display","none");
                    } else {
                        movehand;
                    }
                };
                movehand();
                angle += 1;
            }, 10);
        });

    Read the article
