Search Results

Search found 16794 results on 672 pages for 'memory usage'.


  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures to implement a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it. My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. It will also be easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing temporal databases almost free, and that is something quite nice to have, especially for the web and for data analysis (e.g. trends).

    All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it. Here's my line of thought:

    - Since all writes are done asynchronously, and a persistent data structure never invalidates the previous - and currently valid - structure, write time isn't really a bottleneck.
    - There is some literature on structures like this designed exactly for disk usage, but it seems to me that these techniques add read overhead to achieve faster writes, whereas exactly the opposite is preferable here. Also, many of these techniques really do end up with multi-versioned trees, but they aren't strictly immutable, which is something very crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be good garbage-collection logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). A delta compression system could also be considered.
    - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they require the fewest rotations.

    But there are some possible pitfalls along the way:

    - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case with web applications, most of the time. When real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could also lead to commit conflicts, though I fail to think of a good example of when that could happen. Commit conflicts can occur in a normal RDBMS too, if two threads are working with the same data, right?
    - The overhead of having an immutable interface like this will grow exponentially and everything is doomed to fail soon, so this all is a bad idea.

    Any thoughts? Thanks!

    edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
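    For readers unfamiliar with the idea: a persistent insert copies only the nodes on the path from the root to the insertion point, so every previous root remains a complete, valid tree and readers of an old version never need a lock. Below is a minimal sketch in Java using a plain (unbalanced) binary search tree - the rebalancing that would make it a red-black tree is deliberately omitted, and the class name is illustrative, not from any library:

        // Persistent BST sketch: insert returns a new root; old roots stay valid.
        final class PNode {
            final int key;
            final PNode left, right;

            PNode(int key, PNode left, PNode right) {
                this.key = key;
                this.left = left;
                this.right = right;
            }

            // Path copying: only nodes on the root-to-leaf path are reallocated;
            // untouched subtrees are shared between versions.
            static PNode insert(PNode root, int key) {
                if (root == null) return new PNode(key, null, null);
                if (key < root.key)
                    return new PNode(root.key, insert(root.left, key), root.right);
                if (key > root.key)
                    return new PNode(root.key, root.left, insert(root.right, key));
                return root; // key already present: share the whole subtree
            }

            public static void main(String[] args) {
                PNode v1 = insert(insert(insert(null, 5), 3), 8);
                PNode v2 = insert(v1, 4); // v1 is still a complete, valid tree
                System.out.println(v1.left.right == null); // true: old version untouched
                System.out.println(v2.left.right.key);     // 4: new version sees the insert
            }
        }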

    Read the article

  • How safe is my safe rethrow?

    - by gustafc
    (Late edit: This question will hopefully be obsolete when Java 7 comes, because of the "final rethrow" feature which seems like it will be added.)

    Quite often, I find myself in situations looking like this:

        do some initialization
        try {
            do some work
        } catch any exception {
            undo initialization
            rethrow exception
        }

    In C# you can do it like this:

        InitializeStuff();
        try {
            DoSomeWork();
        } catch {
            UndoInitialize();
            throw;
        }

    For Java, there's no good substitution, and since the proposal for improved exception handling was cut from Java 7, it looks like it'll take at best several years until we get something like it. Thus, I decided to roll my own: (Edit: Half a year later, final rethrow is back, or so it seems.)

        public final class Rethrow {
            private Rethrow() { throw new AssertionError("uninstantiable"); }

            /** Rethrows t if it is an unchecked exception. */
            public static void unchecked(Throwable t) {
                if (t instanceof Error) throw (Error) t;
                if (t instanceof RuntimeException) throw (RuntimeException) t;
            }

            /** Rethrows t if it is an unchecked exception or an instance of E. */
            public static <E extends Exception> void instanceOrUnchecked(
                    Class<E> exceptionClass, Throwable t) throws E, Error, RuntimeException {
                Rethrow.unchecked(t);
                if (exceptionClass.isInstance(t)) throw exceptionClass.cast(t);
            }
        }

    Typical usage:

        public void doStuff() throws SomeException {
            initializeStuff();
            try {
                doSomeWork();
            } catch (Throwable t) {
                undoInitialize();
                Rethrow.instanceOrUnchecked(SomeException.class, t);
                // We shouldn't get past the above line as only unchecked or
                // SomeException exceptions are thrown in the try block, but
                // we don't want to risk swallowing an error, so:
                throw new SomeException("Unexpected exception", t);
            }
        }

        private void doSomeWork() throws SomeException { ... }

    It's a bit wordy, catching Throwable is usually frowned upon, I'm not really happy about using reflection just to rethrow an exception, and I always feel a bit uneasy writing "this will not happen" comments, but in practice it works well (or seems to, at least). What I wonder is:

    - Do I have any flaws in my rethrow helper methods? Some corner cases I've missed? (I know that the Throwable may have been caused by something so severe that my undoInitialize will fail, but that's OK.)
    - Has someone already invented this? I looked at Commons Lang's ExceptionUtils, but that does other things.

    Edit: finally is not the droid I'm looking for. I'm only interested in doing stuff when an exception is thrown. Yes, I know catching Throwable is a big no-no, but I think it's the lesser evil here compared to having three catch clauses (for Error, RuntimeException and SomeException, respectively) with identical code. Note that I'm not trying to suppress any errors - the idea is that any exceptions thrown in the try block will continue to bubble up through the call stack as soon as I've rewound a few things.
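    For what it's worth, the Java 7 feature referred to above did ship: the compiler performs a more precise analysis of rethrown exceptions, so the helper becomes unnecessary on Java 7+. A sketch of what that looks like, assuming (as in the question) that the try block can only throw SomeException or unchecked exceptions:

        public void doStuff() throws SomeException {
            initializeStuff();
            try {
                doSomeWork();
            } catch (final Throwable t) {
                undoInitialize();
                // Java 7+ precise rethrow: t may be rethrown directly, and the
                // compiler knows the try block can only throw SomeException or
                // unchecked exceptions, so "throws SomeException" still suffices.
                throw t;
            }
        }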

    Read the article

  • Square Peg Web: Gets you the traffic to where it matters most: Your Website!

    - by demetriusalwyn
    Have you decided to start your business online, or is your business not reaching its targeted audience? Come to Square Peg Web, where you will find what you need to make your business reach new heights. The team at Square Peg Web are professionals who understand what you want and make sure you get it right. Our confidence stems from the thousands of satisfied clients who keep referring friends and business associates to us, and we do not let our clients down. Many companies promise the sky, but how far does their work live up to the promises? We do not know about the others; however, we are sure that we strive to put together all our ideas and thoughts to make your website rank among the top.

    Web hosting is something that needs to have a personal touch; Square Peg Web customizes everything to suit your requirements so that you do not have to look further. With Square Peg Web you have a host of features to make your business go viral. Some of the product details offered with Square Peg Web are unlimited product options/variants/properties, giving you an option on price modifiers. You get unlimited customized input fields for your products, and prices can also be customer-defined. Square Peg Web provides you the option of using multiple product images with zoom features, and you can also list a particular product in several categories.

    There are other aspects which make Square Peg Web the best choice for your website needs; every sale of yours is important to you and to us. We make sure that each sale is tracked by product, along with the list of bestsellers that appeal to the audience. Other comprehensive statistics from Square Peg Web include searchable order data, an interface for shipments and order fulfillment, export of sales and customer data for use in a spreadsheet, and the ability to export orders to QuickBooks format.

    With Square Peg Web, the admin panel is a lot simpler. Administrative access is completely password protected and any changes are made in real time. You can have absolute control of the cart from anywhere in the world using your web browser, and the topping on the cake is the unlimited number of admin accounts that can be created for you. Square Peg Web offers you a world of experience, with options ranging from marketing websites to e-commerce and from customized applications to community-oriented sites. Some of the projects which appear in the portfolio of Square Peg Web are online marketing web sites, e-commerce web sites, customized web applications, blog design and programming, video sharing and downloadable web sites, online advertisements, flash animation, customer and product support web sites, web site re-design and planning, and complete information architecture.

    Read the article

  • Recommendations for a C++ polymorphic, seekable, binary I/O interface

    - by Trevor Robinson
    I've been using std::istream and ostream as a polymorphic interface for random-access binary I/O in C++, but it seems suboptimal in numerous ways:

    - 64-bit seeks are non-portable and error-prone due to streampos/streamoff limitations; currently using boost/iostreams/positioning.hpp as a workaround, but it requires vigilance
    - Missing operations such as truncating or extending a file (a la POSIX ftruncate)
    - Inconsistency between concrete implementations; e.g. stringstream has independent get/put positions whereas filestream does not
    - Inconsistency between platform implementations; e.g. behavior of seeking past the end of a file, or usage of failbit/badbit on errors
    - Don't need all the formatting facilities of stream, or possibly even the buffering of streambuf
    - streambuf error reporting (i.e. exceptions vs. returning an error indicator) is supposedly implementation-dependent in practice

    I like the simplified interface provided by the Boost.Iostreams Device concept, but it's provided as function templates rather than a polymorphic class. (There is a device class, but it's not polymorphic and is just an implementation helper class not necessarily used by the supplied device implementations.) I'm primarily using large disk files, but I really want polymorphism so I can easily substitute alternate implementations (e.g. use stringstream instead of fstream for unit tests) without all the complexity and compile-time coupling of deep template instantiation.

    Does anyone have any recommendations of a standard approach to this? It seems like a common situation, so I don't want to invent my own interfaces unnecessarily. As an example, something like java.nio.FileChannel seems ideal. My best solution so far is to put a thin polymorphic layer on top of Boost.Iostreams devices. For example:

        class my_istream {
        public:
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) = 0;
            virtual std::streamsize read(char* s, std::streamsize n) = 0;
            virtual void close() = 0;
        };

        template <class T>
        class boost_istream : public my_istream {
        public:
            boost_istream(const T& device) : m_device(device) {
            }
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) {
                return boost::iostreams::seek(m_device, off, way);
            }
            virtual std::streamsize read(char* s, std::streamsize n) {
                return boost::iostreams::read(m_device, s, n);
            }
            virtual void close() {
                boost::iostreams::close(m_device);
            }
        private:
            T m_device;
        };
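    One attraction of that wrapper is that the substitution happens at construction time only. A hedged usage sketch, reusing the my_istream/boost_istream classes above and assuming the wrapped device models the Seekable concepts that boost::iostreams::seek requires (file_source and array_source are real Boost.Iostreams devices; the file name is invented):

        #include <boost/iostreams/device/file.hpp>
        #include <boost/iostreams/device/array.hpp>

        int main() {
            // Production: a large disk file behind the polymorphic interface.
            boost_istream<boost::iostreams::file_source> disk(
                boost::iostreams::file_source("data.bin"));

            // Unit test: an in-memory buffer behind the same interface.
            const char buf[] = "fake file contents";
            boost_istream<boost::iostreams::array_source> fake(
                boost::iostreams::array_source(buf, sizeof buf - 1));

            my_istream& s = fake;   // client code depends only on my_istream
            char out[4];
            s.read(out, sizeof out);
        }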

    Read the article

  • Failed to obtain JDBC Driver for MySQL under Tomcat environment

    - by Michael Mao
    Hi all: I've been trying to obtain the Driver class for a JDBC connection to MySQL. The workstation is running Linux, Fedora 10. I have manually set up the classpath variable for Java via the CLI like this:

        bash-3.2$ echo $CLASSPATH
        /home/cmao/public_html/jsp/mysql-connector-java-5.1.12-bin.jar

    This shows that I've added the latest MySQL connector jar archive to my CLASSPATH variable. I've created a test JSP page which can be found here. The source code for this page is:

        <%@page language="java"%>
        <%@page import="java.sql.*"%>
        <%@page import="java.util.*"%>
        <html>
        <head>
            <title>UTS JDBC MySQL connection test page</title>
        </head>
        <body>
        <%
            Connection con = null;
            out.print("Java version is : " + System.getProperty("java.version") + "<br />");
            out.print("Tomcat version is : " + application.getServerInfo() + "<br />");
            out.print("Servlet version is: " + application.getMajorVersion() + "<br />");
            out.print("JSP version is : " + JspFactory.getDefaultFactory().getEngineInfo().getSpecificationVersion() + "<br />");
            //out.print("Java classpath is : " + System.getProperty("java.class.path") + "<br />");
            //out.print("JSP classpath is : " + appliaction.getAttribute("org.apache.catalina.jsp_classpath") + "<br />");
            //out.print("Tomcat classpath is : " + System.getProperty("org.apache.tomcat.common.classpath") + "<br />");
            try {
                Class c = Class.forName("com.mysql.jdbc.Driver");
            } catch (Exception e) {
                out.println("Error! Failed to obtain JDBC driver for MySQL... Missing class \"com.mysql.jdbc.Driver\"<br />");
            }
        %>
        </body>
        </html>

    None of the commented-out lines would work; various JasperExceptions would be thrown. You can check those error pages from the following links: classpath error page, catalina error page, tomcat error page.

    It seems, from my limited knowledge of JSP and Servlets, that the Tomcat environment "ignores" my Java CLASSPATH? In that case I cannot configure the MySQL JDBC package to let my Servlets (a JSP is but a Servlet anyway) work. I am not sure how to fix this issue. Would it be better to use an IDE like Eclipse or NetBeans and create a real Java "web app", so that everything can be "self-configured" via a web.xml configuration file? Could I then bypass this Tomcat environment restriction?

    Many thanks for the suggestions in advance.
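    The usual explanation, offered here as a note rather than a confirmed diagnosis: Tomcat's web-application class loaders deliberately ignore the shell's CLASSPATH variable, and driver jars are instead picked up from the web app's own WEB-INF/lib directory (or Tomcat's shared lib directory). A layout along these lines - paths assumed for illustration - is typically all that's needed:

        public_html/jsp/          (web application root)
            test.jsp
            WEB-INF/
                web.xml
                lib/
                    mysql-connector-java-5.1.12-bin.jar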

    Read the article

  • deleted gen folder, eclipse isn't generating it now :(

    - by LuxuryMode
    I accidentally deleted my gen folder and now, predictably, my resources are all messed up. I just created a gen folder myself and tried a project clean - that didn't work. Tried right-clicking the project and going to Android Tools > Fix Project Properties - didn't work. Tried unchecking "build automatically" - didn't work. Cleaned, closed the project, closed Eclipse, restarted, etc., etc. Nothing is working and I keep seeing this error:

        gen already exists but is not a source folder. Convert to a source folder or rename it.

    EDIT - OK, I was able to generate R.java, but now I'm getting crazy stuff in the console:

        [2011-06-14 17:06:11 - fastapp] Conversion to Dalvik format failed with error 1
        [2011-06-14 17:06:42 - fastapp] Dx trouble processing "java/awt/font/NumericShaper.class":

        Ill-advised or mistaken usage of a core class (java.* or javax.*) when not building a core library.

        This is often due to inadvertently including a core library file in your application's project, when using an IDE (such as Eclipse). If you are sure you're not intentionally defining a core class, then this is the most likely explanation of what's going on.

        However, you might actually be trying to define a class in a core namespace, the source of which you may have taken, for example, from a non-Android virtual machine project. This will most assuredly not work. At a minimum, it jeopardizes the compatibility of your app with future versions of the platform. It is also often of questionable legality.

        If you really intend to build a core library -- which is only appropriate as part of creating a full virtual machine distribution, as opposed to compiling an application -- then use the "--core-library" option to suppress this error message.

        If you go ahead and use "--core-library" but are in fact building an application, then be forewarned that your application will still fail to build or run, at some point. Please be prepared for angry customers who find, for example, that your application ceases to function once they upgrade their operating system. You will be to blame for this problem.

        If you are legitimately using some code that happens to be in a core package, then the easiest safe alternative you have is to repackage that code. That is, move the classes in question into your own package namespace. This means that they will never be in conflict with core system classes. JarJar is a tool that may help you in this endeavor. If you find that you cannot do this, then that is an indication that the path you are on will ultimately lead to pain, suffering, grief, and lamentation.

        [2011-06-14 17:06:42 - fastapp] Dx 1 error; aborting
        [2011-06-14 17:06:42 - fastapp] Conversion to Dalvik format failed with error 1

    And Eclipse can't resolve the import of my resources:

        import com.me.fastapp.R;

    Read the article

  • pthread and recursively calling execvp in C

    - by eduke
    To begin, I'm sorry for my English :)

    I'm looking for a way to create a thread each time my program finds a directory, in order to call the program itself but with a new argv[2] argument (which is the current dir). I did it successfully with fork(), but with pthreads I have some difficulties. I don't know if I can do something like this:

        #include <pthread.h>
        #include <unistd.h>
        #include <stdlib.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/stat.h>
        #include <sys/wait.h>
        #include <dirent.h>

        int main(int argc, char **argv)
        {
            pthread_t threadID[10] = {0};
            DIR * dir;
            struct dirent * entry;
            struct stat status;
            pthread_attr_t attr;
            pthread_attr_init(&attr);
            int i = 0;
            char *res;
            char *tmp;
            char *file;

            if (argc != 3) {
                printf("Usage : %s <file> <dir>\n", argv[0]);
                exit(EXIT_FAILURE);
            }

            if (stat(argv[2], &status) == 0) {
                dir = opendir(argv[2]);
                file = argv[1];
            }
            else
                exit(EXIT_FAILURE);

            while ((entry = readdir(dir))) {
                if (strcmp(entry->d_name, ".") && strcmp(entry->d_name, "..")) {
                    tmp = malloc(strlen(argv[2]) + strlen(entry->d_name) + 2);
                    strcpy(tmp, argv[2]);
                    strcat(tmp, "/");
                    strcat(tmp, entry->d_name);
                    stat(tmp, &status);
                    if (S_ISDIR(status.st_mode)) {
                        argv[2] = tmp;
                        pthread_create(&threadID[i], &attr, execvp(argv[0], argv), NULL);
                        printf("New thread created : %d", i);
                        i++;
                    }
                    else if (!strcmp(entry->d_name, file)) {
                        printf(" %s was found - Thread number = %d\n", tmp, i);
                        break;
                    }
                    free(tmp);
                }
            }
            pthread_join(threadID[i], &res);
            exit(EXIT_SUCCESS);
        }

    Actually it doesn't work:

        pthread_create(&threadID[i], &attr, execvp(argv[0], argv), NULL);

    I have no runtime error, but when the file to find is in another directory, the thread is not created, and so execvp(argv[0], argv) is not called...

    Thank you for your help,
    Simon
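    For later readers: the third argument of pthread_create must be a pointer to a start routine, but execvp(argv[0], argv) written there is called immediately and its return value is passed instead - and a successful execvp replaces the whole process rather than starting a thread. A hedged sketch of one way to keep the "recursive program" idea while using threads: the start routine forks a child that execs the program, then waits for it (names and error handling kept minimal, offered as an assumption about the intent, not a definitive design):

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/wait.h>
        #include <unistd.h>

        struct job { char *prog; char *file; char *dir; };

        /* Start routine with the signature pthread_create expects. */
        static void *search_dir(void *arg)
        {
            struct job *j = arg;
            pid_t pid = fork();
            if (pid == 0) {                     /* child: become a fresh search */
                char *child_argv[] = { j->prog, j->file, j->dir, NULL };
                execvp(j->prog, child_argv);
                _exit(EXIT_FAILURE);            /* only reached if execvp failed */
            }
            if (pid > 0)
                waitpid(pid, NULL, 0);          /* parent thread waits for the child */
            return NULL;
        }

        /* Usage sketch inside the readdir loop above; note that j->dir (tmp)
           must stay valid until the thread has used it, so don't free(tmp) here:

            struct job *j = malloc(sizeof *j);
            j->prog = argv[0]; j->file = argv[1]; j->dir = tmp;
            pthread_create(&threadID[i], NULL, search_dir, j);
        */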

    Read the article

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews

    - One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found.
    - Models for hierarchical data: slides with good explanations of tradeoffs and example usage
    - Representing hierarchies in MySQL: very good overview of Nested Set in particular
    - Hierarchical data in RDBMSs: most comprehensive and well organized set of links I've seen, but not much in the way of explanation

    Options

    Ones I am aware of, and their general features:

    Adjacency List:
    - Columns: ID, ParentID
    - Easy to implement.
    - Cheap node moves, inserts, and deletes.
    - Expensive to find level (can store as a computed column), ancestry & descendants (Bridge Hierarchy combined with level column can solve), path (Lineage Column can solve).
    - Use Common Table Expressions in those databases that support them to traverse (see the sketch after this outline).

    Nested Set (a.k.a. Modified Preorder Tree Traversal)
    - First described by Joe Celko - covered in depth in his book Trees and Hierarchies in SQL for Smarties
    - Columns: Left, Right
    - Cheap level, ancestry, descendants
    - Compared to Adjacency List, moves, inserts, and deletes are more expensive.
    - Requires a specific sort order (e.g. created). So sorting all descendants in a different order requires additional work.

    Nested Intervals
    - Combination of Nested Sets and Materialized Path where left/right columns are floating point decimals instead of integers and encode the path information.

    Bridge Table (a.k.a. Closure Table: some good ideas about how to use triggers for maintaining this approach)
    - Columns: ancestor, descendant
    - Stands apart from the table it describes.
    - Can include some nodes in more than one hierarchy.
    - Cheap ancestry and descendants (albeit not in what order)
    - For complete knowledge of a hierarchy, needs to be combined with another option.

    Flat Table
    - A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record.
    - Expensive move and delete
    - Cheap ancestry and descendants
    - Good use: threaded discussion - forums / blog comments

    Lineage Column (a.k.a. Materialized Path, Path Enumeration)
    - Column: lineage (e.g. /parent/child/grandchild/etc...)
    - Limit to how deep the hierarchy can be.
    - Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path')
    - Ancestry tricky (database-specific queries)

    Database Specific Notes

    MySQL
    - Use session variables for Adjacency List

    Oracle
    - Use CONNECT BY to traverse Adjacency Lists

    PostgreSQL
    - ltree datatype for Materialized Path

    SQL Server
    - General summary
    - 2008 offers the HierarchyId data type, which appears to help with the Lineage Column approach and expands the depth that can be represented.
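    As a concrete illustration of the CTE bullet above, here is a hedged sketch of an adjacency-list table and a recursive traversal. Table and column names are invented for the example; the syntax shown is PostgreSQL-flavored (SQL Server omits the RECURSIVE keyword and uses + for string concatenation, and MySQL only gained recursive CTEs in 8.0):

        -- Adjacency List: each row points at its parent.
        CREATE TABLE category (
            id        INT PRIMARY KEY,
            parent_id INT NULL REFERENCES category(id),
            name      VARCHAR(100) NOT NULL
        );

        -- Recursive CTE: walk down from the roots, computing level and path.
        WITH RECURSIVE tree AS (
            SELECT id, parent_id, name,
                   0 AS level,
                   CAST('/' || name AS VARCHAR(1000)) AS path
            FROM category
            WHERE parent_id IS NULL
            UNION ALL
            SELECT c.id, c.parent_id, c.name,
                   t.level + 1,
                   CAST(t.path || '/' || c.name AS VARCHAR(1000))
            FROM category c
            JOIN tree t ON c.parent_id = t.id
        )
        SELECT * FROM tree ORDER BY path;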

    Read the article

  • Cannot figure out how to take in generic parameters for an Enterprise Framework library sql statement

    - by KallDrexx
    I have written a specialized class to wrap up the Enterprise Library database functionality for easier usage. The reasoning for using the Enterprise Library is that my applications commonly connect to both Oracle and SQL Server database systems. My wrapper handles creating connection strings on the fly, connecting, and executing queries, allowing my main code to write only a few lines of code to do database stuff and deal with error handling. As an example, my ExecuteNonQuery method has the following declaration:

        /// <summary>
        /// Executes a query that returns no results (e.g. insert or update statements)
        /// </summary>
        /// <param name="sqlQuery"></param>
        /// <param name="parameters">Hashtable containing all the parameters for the query</param>
        /// <returns>The total number of records modified, -1 if an error occurred </returns>
        public int ExecuteNonQuery(string sqlQuery, Hashtable parameters)
        {
            // Make sure we are connected to the database
            if (!IsConnected)
            {
                ErrorHandler("Attempted to run a query without being connected to a database.",
                             ErrorSeverity.Critical);
                return -1;
            }

            // Form the command
            DbCommand dbCommand = _database.GetSqlStringCommand(sqlQuery);

            // Add all the parameters
            foreach (string key in parameters.Keys)
            {
                if (parameters[key] == null)
                    _database.AddInParameter(dbCommand, key, DbType.Object, null);
                else
                    _database.AddInParameter(dbCommand, key, DbType.Object, parameters[key].ToString());
            }

            return _database.ExecuteNonQuery(dbCommand);
        }

    _database is defined as private Database _database;. The parameters Hashtable is filled via code similar to p.Add("@param", value);.

    The issue I am having is that it seems that with the Enterprise Library database framework you must declare the DbType of each parameter. This isn't an issue when you are calling the database code directly and forming the parameters yourself, but it doesn't work for a generic abstraction class such as mine. To try and get around that, I thought I could just use DbType.Object and let the DB figure it out based on the columns the SQL is working with. Unfortunately, this is not the case, as I get the following error:

        Implicit conversion from data type sql_variant to varchar is not allowed.
        Use the CONVERT function to run this query

    Is there any way to use generic parameters in a wrapper class, or am I just going to have to move all my DB code into my main classes?
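    One common workaround is to infer the DbType from the value's runtime type instead of always passing DbType.Object. A hedged sketch that keeps the Hashtable-based interface above - the type map is illustrative, not exhaustive, and note it passes the value itself rather than ToString():

        // Hypothetical helper: map common .NET runtime types to DbType.
        private static DbType InferDbType(object value)
        {
            switch (Type.GetTypeCode(value.GetType()))
            {
                case TypeCode.Int32:    return DbType.Int32;
                case TypeCode.Int64:    return DbType.Int64;
                case TypeCode.Boolean:  return DbType.Boolean;
                case TypeCode.DateTime: return DbType.DateTime;
                case TypeCode.Decimal:  return DbType.Decimal;
                case TypeCode.Double:   return DbType.Double;
                default:                return DbType.String; // safe fallback for text
            }
        }

        // Inside the parameter loop, replacing the DbType.Object calls:
        foreach (string key in parameters.Keys)
        {
            object value = parameters[key];
            if (value == null)
                _database.AddInParameter(dbCommand, key, DbType.String, null);
            else
                _database.AddInParameter(dbCommand, key, InferDbType(value), value);
        }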

    Read the article

  • Idiomatic use of auto_ptr to transfer ownership to a container

    - by heycam
    I'm refreshing my C++ knowledge after not having used it in anger for a number of years. In writing some code to implement some data structure for practice, I wanted to make sure that my code was exception safe, so I've tried to use std::auto_ptr in what I think is an appropriate way. Simplifying somewhat, this is what I have:

        class Tree {
        public:
            ~Tree() { /* delete all Node*s in the tree */ }
            void insert(const string& to_insert);
            ...
        private:
            struct Node {
                ...
                vector<Node*> m_children;
            };
            Node* m_root;
        };

        template <typename T>
        void push_back(vector<T*>& v, auto_ptr<T> x) {
            v.push_back(x.get());
            x.release();
        }

        void Tree::insert(const string& to_insert) {
            Node* n = ...; // find where to insert the new node
            ...
            push_back(n->m_children, auto_ptr<Node>(new Node(to_insert)));
            ...
        }

    So I'm wrapping the function that would put the pointer into the container, vector::push_back, and relying on the by-value auto_ptr argument to ensure that the Node* is deleted if the vector resize fails. Is this an idiomatic use of auto_ptr to save a bit of boilerplate in my Tree::insert? Any improvements you can suggest? Otherwise I'd have to have something like:

        Node* n = ...; // find where to insert the new node
        auto_ptr<Node> new_node(new Node(to_insert));
        n->m_children.push_back(new_node.get());
        new_node.release();

    which kind of clutters up what would have been a single line of code if I wasn't worrying about exception safety and a memory leak.

    (Actually, I was wondering if I could post my whole code sample (about 300 lines) and ask people to critique it for idiomatic C++ usage in general, but I'm not sure whether that kind of question is appropriate on Stack Overflow.)
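    As a side note for later readers: with C++11 the same exception-safety guarantee falls out of std::unique_ptr stored directly in the container, with no helper at all. A minimal sketch, not part of the original question:

        #include <memory>
        #include <string>
        #include <vector>

        struct Node {
            explicit Node(const std::string& s) : value(s) {}
            std::string value;
            // Owning children: the vector itself manages the lifetimes,
            // so the destructor no longer needs a manual delete walk.
            std::vector<std::unique_ptr<Node>> m_children;
        };

        void insert(Node& parent, const std::string& to_insert) {
            // If push_back throws during reallocation, the unique_ptr
            // temporary still frees the new Node: no leak, no boilerplate.
            parent.m_children.push_back(std::unique_ptr<Node>(new Node(to_insert)));
        }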

    Read the article

  • many-to-many performance concerns with fluent nhibernate.

    - by Ciel
    I have a situation where I have several many-to-many associations - upwards of 12 to 15. Reading around, I've seen that it's generally believed that many-to-many associations are not 'typical', yet they are the only way I have been able to create the associations appropriate for my case, so I'm not sure how to optimize any further. Here is my basic scenario:

        class Page {
            IList<Tag> Tags { get; set; }
            IList<Modification> Modifications { get; set; }
            IList<Aspect> Aspects { get; set; }
        }

    This is one of my 'core' classes, and coincidentally one of my core tables. Virtually half of the objects in my code can have an IList<Page>, and some of them have IList<T> where T has its own IList<Page>. As you can see, from an object-oriented standpoint this is not really a problem. But from a database standpoint this begins to introduce a lot of junction tables.

    So far it has worked fine for me, but I am wondering if anyone has any ideas on how I could improve on this structure. I've spent a long time thinking, and in order to achieve the appropriate level of association required, I cannot think of any way to improve it. The only thing I have come up with is to make intermediate classes for each object that has an IList<Page>, but that doesn't really do anything that HasManyToMany does not already do, except introduce another class. It does not extend the functionality and, from what I can tell, it does not improve performance.

    Any thoughts? I am also concerned about primary key limits in this scenario. Most everything needs to be able to have these properties, but the Pages cannot be unique to each object, because they are going to be frequently shared and joined between multiple objects. All relationships are one-sided. (That is, a Page has no knowledge of what owns it.) Because of this, I also have no Inverse() mapped HasManyToMany collections.

    Also, I have read the similar question: Usage of ORMs like NHibernate when there are many associations - performance concerns. But it really did not answer my concerns.
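    For context, a Fluent NHibernate mapping for the scenario above would look roughly like this - a hedged sketch assuming Page exposes an Id property (which the snippet above omits), with junction table names invented for illustration; each HasManyToMany call is what generates one junction table:

        public class PageMap : ClassMap<Page>
        {
            public PageMap()
            {
                Id(x => x.Id);

                // One junction table per association:
                // PageTag(Page_id, Tag_id), and so on.
                HasManyToMany(x => x.Tags)
                    .Table("PageTag");
                HasManyToMany(x => x.Modifications)
                    .Table("PageModification");
                HasManyToMany(x => x.Aspects)
                    .Table("PageAspect");
            }
        }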

    Read the article

  • IP address numbers in MySQL subquery

    - by Iain Collins
    I have a problem with a subquery involving IPv4 addresses stored in MySQL (MySQL 5.0). The IP addresses are stored in two tables, both in network number format - e.g. the format output by MySQL's INET_ATON(). The first table ('events') contains lots of rows with IP addresses associated with them; the second table ('network_providers') contains a list of provider information for given netblocks.

    events table (~4,000,000 rows):

        event_id (int)
        event_name (varchar)
        ip_address (unsigned 4 byte int)

    network_providers table (~60,000 rows):

        ip_start (unsigned 4 byte int)
        ip_end (unsigned 4 byte int)
        provider_name (varchar)

    Simplified for the purposes of the problem I'm having, the goal is to create an export along the lines of:

        event_id,event_name,ip_address,provider_name

    If I do a query along the lines of either of the following, I get the result I expect:

        SELECT provider_name FROM network_providers
        WHERE INET_ATON('192.168.0.1') >= network_providers.ip_start
        ORDER BY network_providers.ip_start DESC
        LIMIT 1

        SELECT provider_name FROM network_providers
        WHERE 3232235521 >= network_providers.ip_start
        ORDER BY network_providers.ip_start DESC
        LIMIT 1

    That is to say, it returns the correct provider_name for whatever IP I look up (of course I'm not really using 192.168.0.1 in my queries). However, when performing this same query as a subquery, in the following manner, it doesn't yield the result I would expect:

        SELECT event.id, event.event_name,
               (SELECT provider_name FROM network_providers
                WHERE event.ip_address >= network_providers.ip_start
                ORDER BY network_providers.ip_start DESC
                LIMIT 1) as provider
        FROM events

    Instead, a different (incorrect) value for network_provider is returned - over 90% (but curiously not all) of the values returned in the provider column contain the wrong provider information for that IP. Using event.ip_address in a subquery just to echo out the value confirms it contains the value I'd expect and that the subquery can parse it. Replacing event.ip_address with an actual network number also works; just using it dynamically in the subquery in this manner doesn't work for me.

    I suspect the problem is that there is something fundamental and important about subqueries in MySQL that I don't get. I've worked with IP addresses like this in MySQL quite a bit before, but haven't previously done lookups for them using a subquery.

    The question: I'd really appreciate an example of how I could get the output I want, and if someone here knows, some enlightenment as to why what I'm doing doesn't work, so I can avoid making this mistake again.

    Notes: The actual real-world usage I'm trying to do is considerably more complicated (involving joining two or three tables). This is a simplified version, to avoid overly complicating the question. Additionally, I know I'm not using a BETWEEN on ip_start & ip_end - that's intentional (the DBs can be out of date, and in such cases the owner in the DB is almost always in the next specified range, and a 'best guess' is fine in this context). However, I'm grateful for any suggestions for improvement that relate to the question. Efficiency is always nice, but in this case absolutely not essential - any help appreciated.
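    One commonly suggested rewrite, offered as a hedged sketch rather than a diagnosis of the MySQL 5.0 behavior: compute the matching ip_start with an aggregate instead of relying on ORDER BY ... LIMIT 1 inside a correlated subquery, then join back to pick up the provider name:

        SELECT e.event_id,
               e.event_name,
               e.ip_address,
               np.provider_name
        FROM events e
        JOIN network_providers np
          ON np.ip_start = (SELECT MAX(np2.ip_start)
                            FROM network_providers np2
                            WHERE np2.ip_start <= e.ip_address);
        -- assumes ip_start is unique in network_providers,
        -- otherwise duplicate rows are possible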

    Read the article

  • Parse lines of integers in C

    - by Jérôme
    This is a classical problem, but I cannot find a simple solution. I have an input file like:

        1 3 9 13 23 25 34 36 38 40 52 54 59
        2 3 9 14 23 26 34 36 39 40 52 55 59 63 67 76 85 86 90 93 99 108 114
        2 4 9 15 23 27 34 36 63 67 76 85 86 90 93 99 108 115
        1 25 34 36 38 41 52 54 59 63 67 76 85 86 90 93 98 107 113
        2 3 9 16 24 28
        2 3 10 14 23 26 34 36 39 41 52 55 59 63 67 76

    Lines of different numbers of integers separated by a space. I would like to parse them into an array, and separate each line with a marker, let's say -1. The difficulty is that I must handle integers and line returns. Here is my existing code; it loops on the sscanf call (because sscanf cannot begin at a given position):

        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            if (argc != 4) {
                fprintf(stderr, "Usage: %s <data file> <nb transactions> <nb items>\n", argv[0]);
                return 1;
            }

            FILE * file;
            file = fopen(argv[1], "r");
            if (file == NULL) {
                fprintf(stderr, "Error: can not open %s\n", argv[1]);
                fclose(file);
                return 1;
            }

            int nb_trans = atoi(argv[2]);
            int nb_items = atoi(argv[3]);
            int *bdd = malloc(sizeof(int) * (nb_trans + nb_items));

            char line[1024];
            int i = 0;
            while (fgets(line, 1024, file)) {
                int item;
                while (sscanf(line, "%d ", &item)) {
                    printf("%s %d %d\n", line, i, item);
                    bdd[i++] = item;
                }
                bdd[i++] = -1;
            }

            for (i = 0; i < nb_trans + nb_items; i++) {
                printf("%d ", bdd[i]);
            }
            printf("\n");
        }
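    The inner loop above never advances within line, so sscanf keeps re-reading the first integer forever. A hedged sketch of the usual fix - use strtol, which reports where parsing stopped, and move a cursor forward (intended as a drop-in replacement for the inner loop, using the line, bdd and i variables already declared above):

        /* Inner loop: walk the line with a moving cursor. */
        char *cursor = line;
        char *end;
        for (;;) {
            long item = strtol(cursor, &end, 10);
            if (end == cursor)      /* no digits left on this line */
                break;
            bdd[i++] = (int) item;
            cursor = end;           /* continue after the parsed number */
        }
        bdd[i++] = -1;              /* line marker */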

    Read the article

  • Neo4j 1.9.4 (REST Server,CYPHER) performance issue

    - by user2968943
    I have Neo4j 1.9.4 installed on a 24-core, 24GB RAM (CentOS) machine, and for most queries CPU usage spikes to 200% with only a few concurrent requests.

    Domain: some sort of social application, where a few types of nodes (profiles) have 3-30 text/array properties, and there are 36 relationship types with at least 3 properties each. Most nodes currently have ~300-500 relationships.

    Current data set footprint (from console):

        LogicalLogSize=4294907 (32MB)
        ArrayStoreSize=1675520 (12MB)
        NodeStoreSize=1342170 (10MB)
        PropertyStoreSize=1739548 (13MB)
        RelationshipStoreSize=6395202 (48MB)
        StringStoreSize=1478400 (11MB)

    which is IMHO really small. Most queries look like this one (with more or fewer WITH .. MATCH .. statements, and a few queries with variable-length relations, but those are often fast):

        START targetUser=node({id}), currentUser=node({current})
        MATCH targetUser-[contact:InContactsRelation]->n,
              n-[:InLocationRelation]->l,
              n-[:InCategoryRelation]->c
        WITH currentUser, targetUser, n, l, c, contact.fav is not null as inFavorites
        MATCH n<-[followers?:InContactsRelation]-()
        WITH currentUser, targetUser, n, l, c, inFavorites, COUNT(followers) as numFollowers
        RETURN id(n) as id, n.name? as name, n.title? as title, n._class as _class,
               n.avatar? as avatar, n.avatar_type? as avatar_type,
               l.name as location__name, c.name as category__name,
               true as isInContacts, inFavorites as isInFavorites, numFollowers

    It runs in ~1s-3s for the first run and ~1s-70ms for consecutive runs (it depends on the query), and about 5-10 queries run for each impression. Another interesting behavior: when I run a query from the Neo4j console on my local machine many consecutive times (just pressing ctrl+enter for a few seconds) it has an almost constant execution time, but when I do it on the server it gets exponentially slower, and I guess that is somehow related to my problem.

    Problem: So my problem is that Neo4j is very CPU greedy (for a 24-core machine that may not be an issue, but it is obviously overkill for a small project). At first I used an AWS EC2 m1.large instance, but overall performance was bad; during testing, CPU was always over 100%.

    Some relevant parts of the configuration:

        neostore.nodestore.db.mapped_memory=1280M
        wrapper.java.maxmemory=8192

    Note: I have already tried a configuration where all memory-related parameters were HIGH, and it didn't work (no change at all).

    Question: Where to dig? Configuration? Schema? Queries? What am I doing wrong? If you need more info (logs, configs), just ask ;)

    Read the article

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows:

        item_names:    id | name | description | picture | common(BOOL)
        items:         id | item_name_id | picture | price | description | picture
        item_synonyms: id | item_name_id | name | error(BOOL)

    Notes: error indicates a wrong spelling (e.g. "Ericson"). description and picture of the item_names table are "globals" that can optionally be overridden by the "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza").

    I think the bright side of this schema is:

    - Optimized searching & handling synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table (see the sketch after this item). (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10")
    - Autocompletion: Again, a simple query to the item_names table. I can avoid the usage of DISTINCT, and it minimizes the number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson").

    The downside would be:

    - Overhead: When inserting an item, I query item_names to see if this name already exists; if not, I create a new entry. When deleting an item, I count the number of entries with the same name; if this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; this accounts for possible erroneous submissions). Updating is the combination of both.
    - Weird item names: Store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this:

        items: id | name | picture | price | description | picture

    (... with item_names and item_synonyms as utility tables that I could query)

    - Is there a better schema you would suggest?
    - Should item names be normalized for autocomplete? Is this probably what Facebook does for "School", "City" entries?
    - Is the first schema or the second better/optimal for search?

    Thanks in advance!

    References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT
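    To make the search bullet concrete, a hedged SQL sketch of the lookup it describes, using the table and column names from the schema above (the UNION pulls in both misspelled and alternative names):

        SELECT i.id, n.name, i.price
        FROM items i
        JOIN item_names n ON n.id = i.item_name_id
        WHERE n.id IN (
            SELECT id           FROM item_names    WHERE name LIKE '%QUERY%'
            UNION
            SELECT item_name_id FROM item_synonyms WHERE name LIKE '%QUERY%'
        );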

    Read the article

  • Help with C# program design implementation: multiple array of lists or a better way?

    - by Bob
    I'm creating a 2D tile-based RPG in XNA and am in the initial design phase. I was thinking of how I want my tile engine to work and came up with a rough sketch. Basically I want a grid of tiles, but at each tile location I want to be able to add more than one tile and have an offset. I'd like this so that I could do something like add individual trees on the world map to give more flair, or set bottles on a bar in some town without having to draw a bunch of different bar tiles with varying bottles. But maybe my reach is greater than my grasp.

    I went to implement the idea and had something like this in my Map object:

        List<Tile>[,] Grid;

    But then I thought about it. Let's say I had a world map of 200x200, which would actually be pretty small as far as RPGs go. That would amount to 40,000 Lists. To my mind there has to be a better way. Now this IS premature optimization - I don't know if the way I happen to design my maps and game will even be able to handle this - but it seems needlessly inefficient and something that could creep up if my game gets more complex.

    One idea I have is to make the offset and the multiple tiles optional, so that I'm only paying for them when needed. But I'm not sure how I'd do this. A multiple array of objects?

        object[,] Grid;

    So here's my criteria:

    - A 2D grid of tile locations
    - Each tile location has a minimum of 1 tile, but can optionally have more
    - Each extra tile can optionally have an x and y offset for pinpoint placement

    Can anyone help with some ideas for implementing such a design (I don't need it done for me, just ideas) while keeping memory usage to a minimum?

    If you need more background, here's roughly what my Map and Tile objects amount to:

        public struct Map
        {
            public Texture2D Texture;
            public List<Rectangle> Sources; //Source Rectangles for where in Texture to get the sprite
            public List<Tile>[,] Grid;
        }

        public struct Tile
        {
            public int Index; //Where in Sources to find the source Rectangle
            public int X, Y; //Optional offsets
        }
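    One pattern that fits those criteria, sketched here as an assumption rather than a recommendation: keep the base layer as a plain dense array (one Tile per cell, no per-cell allocation) and keep the rare extras in a dictionary keyed by cell, so only cells that actually have more than one tile pay for a List. Assumes using System.Collections.Generic and XNA's Microsoft.Xna.Framework.Point:

        public struct Map
        {
            public Tile[,] Base;  // dense: exactly one tile per cell

            // sparse: only cells with extra tiles get an entry
            public Dictionary<Point, List<Tile>> Extras;

            public IEnumerable<Tile> TilesAt(int x, int y)
            {
                yield return Base[x, y];
                List<Tile> extra;
                if (Extras != null && Extras.TryGetValue(new Point(x, y), out extra))
                    foreach (Tile t in extra)
                        yield return t;
            }
        }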

    Read the article

  • What would you suggest as a high school first language?

    - by ldigas
    Edit by OA: After reading some answers I'll just update the question a little. At first I put it a little bluntly, but some of the replies gave me good arguments which have to be taken into consideration while making a stand on this one (these are mostly picked up from comments and answers below). A few things to take into account:

    - to many pupils this is a first programming language - at this stage most of them have trouble grasping the difference between data types, variable passing, and whatnot, let alone pointers and similar 'low level stuff' :)
    - they will all have to pass this to get into the next grade (well, the big majority of them anyway)
    - not all of them have computers at home, and not all of them are willing to learn this, let alone interested in it - so the concepts have to be taught in a finite number of school hours (as well as practiced on computers)
    - free literature is a bonus - the teacher will make some scripts and handouts, but still ... I wouldn't like to burden the parents with buying expensive literature (also, English is not a native language here ... and although they are all learning it, their ability to read it fluently is somewhat questionable)
    - somebody gave the argument "a language which does not get in the way of ideas" - a good one
    - accessibility on different platforms is not especially important at this point - although most of the suggested ones are available on Windows as well as Linux, there are not many Macs in this part of Europe (their prices are sky high for anything but specialised usage)
    - I will check what the licensing issues are on the MS Express editions regarding using them en masse in high schools for purposes like this - if someone has any info about this, please do not be shy with it :)

    A friend of mine, an informatics teacher - in the EU that comes to something like a junior CS teacher - at a local high school asked me what I thought the first language pupils should be taught is. It is a technical school (a little more oriented towards mathematics than a gymnasium, but not totally computer oriented). So I'm asking you: what do you think should be the first language pupils are exposed to in high school?

    They have been teaching Pascal so far, but she's not sure that's a good course. She thought about switching to C (which I resented, considering that not all pupils have an interest in programming to begin with, and they should be taught something higher level since they are just grasping the idea of a loop and such ... for a start). I suggested Python or Ruby (preferably Python, since it handles all paradigms). What is your opinion on this one?

    I looked, but didn't find a similar question on SO, so if there is one, please just point me towards it.

    Edit: The assumption is that none of the pupils have been exposed to any programming in junior school.

    See also:

    - What is the best way to teach young kids some basic programming concepts?
    - Best ways to teach a beginner to program
    - How and when do you teach a kid to code
    - What is the easiest language to start with?
    - High School Programming

    Read the article

  • Why does my Perl regular expression only find the last occurrence?

    - by scharan
    I have the following input to a Perl script, and I wish to get the first occurrence of the NAME="..." strings in each of the <TABLE>...</TABLE> structures. The entire file is read into a single string and the regex acts on that input. However, the regex always returns the last occurrence of the NAME="..." strings. Can anyone explain what is going on and how this can be fixed?

    Input file:

        ADSDF
        <TABLE>
        NAME="ORDERSAA"
        line1
        line2
        NAME="ORDERSA"
        line3
        NAME="ORDERSAB"
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="ORDERSB"
        line3
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="ORDERSC"
        line3
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="ORDERSD"
        line3
        line3
        line3
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="QUOTES2"
        line3
        NAME="QUOTES3"
        NAME="QUOTES4"
        line3
        NAME="QUOTES5"
        line3
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="QUOTES6"
        NAME="QUOTES7"
        NAME="QUOTES8"
        NAME="QUOTES9"
        line3
        line3
        </TABLE>
        <TABLE>
        NAME="MyName IsKhan"
        </TABLE>

    Perl code starts here:

        use warnings;
        use strict;

        my $nameRegExp = '(<table>((NAME="(.+)")|(.*|\n))*</table>)';

        sub extractNames($$){
            my ($ifh, $ofh) = @_;
            my $fullFile;
            read ($ifh, $fullFile, 1024); # Hardcoded to read just 1024 bytes.
            while( $fullFile =~ m#$nameRegExp#gi){
                print "found: ".$4."\n";
            }
        }

        sub main(){
            if( ($#ARGV + 1 ) != 1){
                die("Usage: extractNames infile\n");
            }
            my $infileName = $ARGV[0];
            my $outfileName = $ARGV[1];
            open my $inFile, "<$infileName" or die("Could not open log file $infileName");
            my $outFile;
            #open my $outFile, ">$outfileName" or die("Could not open log file $outfileName");
            extractNames( $inFile, $outFile );
            close( $inFile );
            #close( $outFile );
        }

        #call
        main();
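    For later readers, a hedged sketch of one way to get the intended behavior: because the (...)* in the pattern above re-captures $4 on every repetition, $4 ends up holding only the last NAME matched. Matching each table non-greedily and then looking for the first NAME inside it avoids that (for real markup, a proper parser would be the robust choice):

        # Assumes the whole file has already been slurped into $fullFile,
        # as in extractNames() above.
        while ( $fullFile =~ m{<table>(.*?)</table>}sgi ) {
            my $table = $1;                       # one table's contents; /s lets . span newlines
            if ( $table =~ m{NAME="([^"]+)"} ) {  # first NAME inside this table only
                print "found: $1\n";
            }
        }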

    Read the article

  • ReadFile doesn't work asynchronously on Win7 and Win2k8

    - by f0b0s
    According to MSDN, ReadFile can read data in 2 different ways: synchronously and asynchronously. I need the second one. The following code demonstrates usage with an OVERLAPPED struct:

        #include <windows.h>
        #include <stdio.h>
        #include <time.h>

        void Read()
        {
            HANDLE hFile = CreateFileA("c:\\1.avi", GENERIC_READ, 0, NULL,
                                       OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
            if ( hFile == INVALID_HANDLE_VALUE )
            {
                printf("Failed to open the file\n");
                return;
            }

            int dataSize = 256 * 1024 * 1024;
            char* data = (char*)malloc(dataSize);
            memset(data, 0xFF, dataSize);

            OVERLAPPED overlapped;
            memset(&overlapped, 0, sizeof(overlapped));

            printf("reading: %d\n", time(NULL));
            BOOL result = ReadFile(hFile, data, dataSize, NULL, &overlapped);
            printf("sent: %d\n", time(NULL));

            DWORD bytesRead;
            result = GetOverlappedResult(hFile, &overlapped, &bytesRead, TRUE); // wait until completion - returns immediately
            printf("done: %d\n", time(NULL));

            CloseHandle(hFile);
        }

        int main()
        {
            Read();
        }

    On Windows XP the output is:

        reading: 1296651896
        sent: 1296651896
        done: 1296651899

    It means that ReadFile didn't block and returned immediately at the same second, whereas the reading process continued for 3 seconds. That is normal async reading. But on Windows 7 and Windows 2008 I get the following results:

        reading: 1296661205
        sent: 1296661209
        done: 1296661209

    This is the behavior of a sync read. MSDN says that an async ReadFile can sometimes behave as sync (when the file is compressed or encrypted, for example), but the return value in this situation should be TRUE and GetLastError() == NO_ERROR. On Windows 7 I get FALSE and GetLastError() == ERROR_IO_PENDING. So the WinAPI tells me that it is an async call, but when I look at the test I see that it is not! I'm not the only one who found this "bug": read the comments on the ReadFile MSDN page.

    So what's the solution? Does anybody know? It has been 14 months since Denis found this strange behavior.
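    One long-standing answer - described in Microsoft's KB article "Asynchronous Disk I/O Appears as Synchronous" and offered here as a hedged sketch rather than a guaranteed fix - is that buffered reads may be satisfied through the cache manager on the issuing thread, and opening the file unbuffered is the usual way to force genuinely asynchronous behavior. Note that FILE_FLAG_NO_BUFFERING requires sector-aligned offsets, lengths, and buffer addresses, which a VirtualAlloc'd (page-aligned) buffer satisfies:

        // Open unbuffered + overlapped so the read is queued to the device.
        HANDLE hFile = CreateFileA("c:\\1.avi", GENERIC_READ, 0, NULL, OPEN_EXISTING,
                                   FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING, NULL);

        // Page-aligned buffer; page alignment covers sector alignment in practice.
        DWORD dataSize = 256 * 1024 * 1024;
        char* data = (char*)VirtualAlloc(NULL, dataSize, MEM_COMMIT | MEM_RESERVE,
                                         PAGE_READWRITE);

        OVERLAPPED overlapped = {0};
        BOOL result = ReadFile(hFile, data, dataSize, NULL, &overlapped);
        // Expect FALSE + ERROR_IO_PENDING here, with control returning at once.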

    Read the article

  • How to Convert arrays or SimpleXML-Objects into an XML-String

    - by streetparade
    I want to create an XML string from a given input. I have a function, but I didn't write it, and it seems a bit cryptic too. Could someone please review it and give me some ideas how it could be written more clearly for everybody?

        /**
         * Converts arrays or SimpleXML-Objects into an XML-String
         * @params mixed Accepts an array or xml string with data to Post
         * @params integer DO NOT PROVIDE. Internal Usage for recursion only
         */
        private function mixedDataToXML($data, $level = 1)
        {
            if (!$data) {
                return FALSE;
            }
            if (is_array($data)) {
                $xml = '';
                if ($level == 1) {
                    $xml .= '<?xml version="1.0" encoding="ISO-8859-1"?>'."\n";
                }
                foreach ($data as $key => $value) {
                    $key = strtolower($key);
                    if (is_array($value)) {
                        $multi_tags = false;
                        foreach ($value as $key2 => $value2) {
                            if (is_array($value2)) {
                                $xml .= str_repeat("\t", $level)."<$key>\n";
                                $xml .= $this->mixedDataToXML($value2, $level + 1);
                                $xml .= str_repeat("\t", $level)."</$key>\n";
                                $multi_tags = true;
                            } else {
                                if (trim($value2) != '') {
                                    if (htmlspecialchars($value2) != $value2) {
                                        $xml .= str_repeat("\t", $level).
                                            "<$key><![CDATA[$value2]]>".
                                            "</$key>\n";
                                    } else {
                                        $xml .= str_repeat("\t", $level).
                                            "<$key>$value2</$key>\n";
                                    }
                                }
                                $multi_tags = true;
                            }
                        }
                        if (!$multi_tags and count($value) > 0) {
                            $xml .= str_repeat("\t", $level)."<$key>\n";
                            $xml .= $this->mixedDataToXML($value, $level + 1);
                            $xml .= str_repeat("\t", $level)."</$key>\n";
                        }
                    } else {
                        if (trim($value) != '') {
                            if (htmlspecialchars($value) != $value) {
                                $xml .= str_repeat("\t", $level)."<$key>".
                                    "<![CDATA[$value]]></$key>\n";
                            } else {
                                $xml .= str_repeat("\t", $level).
                                    "<$key>$value</$key>\n";
                            }
                        }
                    }
                }
                return $xml;
            } else {
                return (string)$data;
            }
        }

    Read the article

  • Java - is this an idiom or pattern, behavior classes with no state

    - by Berlin Brown
    I am trying to incorporate more functional programming idioms into my Java development. One pattern that I like the most, and which avoids side effects, is building classes that have behavior but don't necessarily have any state. The behavior is locked into the methods, but they only act on the parameters passed in. The code below is code I am trying to avoid:

        public class BadObject {
            private Map<String, String> data = new HashMap<String, String>();

            public BadObject() {
                data.put("data", "data");
            }

            /**
             * Act on the data class. But this is bad because we can't
             * rely on the integrity of the object's state.
             */
            public void execute() {
                data.get("data").toString();
            }
        }

    The code below is nothing special, but I am acting on the parameters, and state is contained within that class. We may still run into issues with this class, but that is an issue with the method and the state of the data; we can address issues in the routine, as opposed to not trusting the entire object. Is this some form of idiom? Is this similar to any pattern that you use?

        public class SemiStatefulOOP {

            /**
             * Private class implies that I can access the members of the <code>Data</code> class
             * within the <code>SemiStatefulOOP</code> class and I can also access
             * the getData method from some other class.
             *
             * @see Test1
             */
            class Data {
                protected int counter = 0;
                public int getData() {
                    return counter;
                }
                public String toString() {
                    return Integer.toString(counter);
                }
            }

            /**
             * Act on the data class.
             */
            public void execute(final Data data) {
                data.counter++;
            }

            /**
             * Act on the data class.
             */
            public void updateStateWithCallToService(final Data data) {
                data.counter++;
            }

            /**
             * Similar to CLOS (Common Lisp Object System) make-instance.
             */
            public Data makeInstance() {
                return new Data();
            }
        } // End of Class

    Issues with the code above: I wanted to declare the Data class private, but then I can't really reference it outside of the class. I also can't override the SemiStateful class and access the private members.

    Usage:

        final SemiStatefulOOP someObject = new SemiStatefulOOP();
        final SemiStatefulOOP.Data data = someObject.makeInstance();
        someObject.execute(data);
        someObject.updateStateWithCallToService(data);

    Read the article

  • Feedback on Optimizing C# NET Code Block

    - by Brett Powell
    I just spent quite a few hours reading up on TCP servers and the protocol I was trying to implement, and finally got everything working great. I noticed the code looks like absolute bollocks (is that the correct usage? I'm not a Brit) and would like some feedback on optimizing it, mostly for reuse and readability. The packet formats are always int, int, int, string, string.

        try
        {
            BinaryReader reader = new BinaryReader(clientStream);
            int packetsize = reader.ReadInt32();
            int requestid = reader.ReadInt32();
            int serverdata = reader.ReadInt32();
            Console.WriteLine("Packet Size: {0} RequestID: {1} ServerData: {2}",
                              packetsize, requestid, serverdata);

            List<byte> str = new List<byte>();
            byte nextByte = reader.ReadByte();
            while (nextByte != 0)
            {
                str.Add(nextByte);
                nextByte = reader.ReadByte();
            }

            // Password Sent to be Authenticated
            string string1 = Encoding.UTF8.GetString(str.ToArray());

            str.Clear();
            nextByte = reader.ReadByte();
            while (nextByte != 0)
            {
                str.Add(nextByte);
                nextByte = reader.ReadByte();
            }

            // NULL string
            string string2 = Encoding.UTF8.GetString(str.ToArray());
            Console.WriteLine("String1: {0} String2: {1}", string1, string2);

            // Reply to Authentication Request
            MemoryStream stream = new MemoryStream();
            BinaryWriter writer = new BinaryWriter(stream);
            writer.Write((int)(1));         // Packet Size
            writer.Write((int)(requestid)); // Mirror RequestID if Authenticated, -1 if Failed

            byte[] buffer = stream.ToArray();
            clientStream.Write(buffer, 0, buffer.Length);
            clientStream.Flush();
        }

    I am going to be dealing with other packet types as well that are formatted the same (int/int/int/str/str) but with different values. I could probably create a packet class, but this is a bit outside my scope of knowledge for how to apply it to this scenario. If it makes any difference, this is the protocol I am implementing: http://developer.valvesoftware.com/wiki/Source_RCON_Protocol
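    Since every packet shares the int/int/int/str/str layout, the packet class mentioned above could be as small as this - a hedged sketch, not tied to the RCON spec beyond what the snippet already assumes (two null-terminated strings; framing and error handling omitted):

        // using System.Collections.Generic; using System.IO; using System.Text;
        // Hypothetical Packet type: one Read method serves every packet kind.
        class Packet
        {
            public int Size, RequestId, ServerData;
            public string Body1 = "", Body2 = "";

            public static Packet Read(BinaryReader reader)
            {
                Packet p = new Packet();
                p.Size = reader.ReadInt32();
                p.RequestId = reader.ReadInt32();
                p.ServerData = reader.ReadInt32();
                p.Body1 = ReadCString(reader);
                p.Body2 = ReadCString(reader);
                return p;
            }

            // Read bytes up to (and consuming) the null terminator.
            static string ReadCString(BinaryReader reader)
            {
                List<byte> bytes = new List<byte>();
                for (byte b = reader.ReadByte(); b != 0; b = reader.ReadByte())
                    bytes.Add(b);
                return Encoding.UTF8.GetString(bytes.ToArray());
            }
        }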

    Read the article

  • C++ DLL creation for C# project - No functions exported

    - by Yeti
    I am working on a project that requires some image processing. The front end of the program is C# (because the guys thought it is a lot simpler to make the UI in it). However, as the image processing part needs a lot of CPU juice, I am making that part in C++. The idea is to link it to the C# project and just call a function from a DLL to do the image processing, allowing the C# environment to process the data afterwards.

    Now the only problem is that it seems I am not able to make the DLL. Simply put, the compiler refuses to put any function into the DLL that I compile. Because the project requires some development-time testing, I have created two projects in a C++ solution: one for the DLL and another a console application. The console project holds all the files, and I just include the corresponding header into my DLL project file. I thought the compiler would take the functions that I marked for export and make the DLL from them. Nevertheless, this does not happen.

    Here is how I defined the functions in the header:

        extern "C" __declspec(dllexport) void _stdcall RobotData(BYTE* buf,
            int** pToNewBackgroundImage, int* pToBackgroundImage, bool InitFlag,
            ObjectInformation* robot1, ObjectInformation* robot2,
            ObjectInformation* robot3, ObjectInformation* robot4,
            ObjectInformation* puck);

        extern "C" __declspec(dllexport) CvPoint _stdcall RefPointFinder(
            IplImage* imgInput, CvRect &imgROI, CvScalar &refHSVColorLow,
            CvScalar &refHSVColorHi);

    Followed by the implementation in the cpp file:

        extern "C" __declspec(dllexport) CvPoint _stdcall RefPointFinder(
            IplImage* imgInput, CvRect &imgROI, CvScalar &refHSVColorLow,
            CvScalar &refHSVColorHi)
        {
            // ...
            return cvPoint((int)( M10/M00 ) + imgROI.x, (int)( M01/M00 ) + imgROI.y);
        }

        extern "C" __declspec(dllexport) void _stdcall RobotData(BYTE* buf,
            int** pToNewBackgroundImage, int* pToBackgroundImage, bool InitFlag,
            ObjectInformation* robot1, ObjectInformation* robot2,
            ObjectInformation* robot3, ObjectInformation* robot4,
            ObjectInformation* puck)
        {
            // ...
        }

    And my main file for the DLL project looks like:

        #ifdef _MANAGED
        #pragma managed(push, off)
        #endif

        /// <summary> Include files. </summary>
        #include "..\ImageProcessingDebug\ImageProcessingTest.h"
        #include "..\ImageProcessingDebug\ImageProcessing.h"

        BOOL APIENTRY DllMain( HMODULE hModule,
                               DWORD ul_reason_for_call,
                               LPVOID lpReserved)
        {
            return TRUE;
        }

        #ifdef _MANAGED
        #pragma managed(pop)
        #endif

    Needless to say, it doesn't work. A quick look with DLL Export Viewer 1.36 reveals that no function is inside the library. I don't get it. What am I doing wrong?

    As a side note, I am using C++ objects (and here is the C++ DLL part) such as vector. However, they are only for internal usage; they will not appear in the headers of either function, as you can observe from the previous code snippets.

    Any ideas?

    Thx,
    Bernat
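    A hedged observation rather than a confirmed diagnosis: including a header only declares the functions - unless the .cpp files containing the definitions are compiled as part of the DLL project itself (or linked in as a static library), the linker has nothing to export, which would match the empty export table seen above. The conventional header setup for sharing code between a DLL build and its consumers looks like this; the macro name is invented for the sketch:

        // ImageProcessing.h - shared between the DLL build and its users.
        // Define IMAGEPROC_EXPORTS in the DLL project's preprocessor settings only.
        #ifdef IMAGEPROC_EXPORTS
        #  define IMAGEPROC_API __declspec(dllexport)
        #else
        #  define IMAGEPROC_API __declspec(dllimport)
        #endif

        extern "C" IMAGEPROC_API CvPoint _stdcall RefPointFinder(
            IplImage* imgInput, CvRect &imgROI, CvScalar &refHSVColorLow,
            CvScalar &refHSVColorHi);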

    Read the article
