Search Results

Search found 1155 results on 47 pages for 'relational algebra'.

Page 6/47 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Vector transform equation explanation

    - by cyberdemon
    I'm trying to understand the maths of moving points in 3D space by making a game written in C#. I'm looking at this Wolfire blog series, which explains some basic 3D maths. I've read the first two parts but am stuck on the third. I know it's all really rudimentary stuff, but I find Googling for help with equations really hard. The one I'm struggling with is: 0*(0.66, 0.75) + 2*(-0.75, 0.66) = (-1.5, 1.3). How can anything multiplied by 0 not be 0? So my question is: how does this look in code, x*(a,b) + y*(c,d)? I know it's basic stuff but I just can't see it.
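
    (Written out component-wise, as a sketch of the arithmetic in the question: the first term really is zero, and it is the second term that contributes both components.)

        x\,(a, b) + y\,(c, d) = (x a + y c,\; x b + y d)

        0\,(0.66, 0.75) + 2\,(-0.75, 0.66) = (0 - 1.5,\; 0 + 1.32) = (-1.5, 1.32) \approx (-1.5, 1.3)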

    Read the article

  • Relational Clausal Logic question: what is a Herbrand interpretation

    - by anotherstat
    I'm having a hard time coming to grips with relational clausal logic, and I'm not sure if this is the place to ask, but it would help me so much with revision if anyone could provide guidance on the following question. Let P be the program: academic(X); student(X); other_staff(X) :- works_in(X, university). :- student(john). :- other_staff(john). works_in(john, university). Question: Which are the Herbrand interpretations of P?
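
    (A sketch using the standard definitions, since P has no function symbols: the Herbrand universe U_P is the set of constants in P, the Herbrand base B_P is the set of all ground atoms over P's predicates and U_P, and a Herbrand interpretation is any subset of B_P; a Herbrand model is an interpretation that satisfies every clause of P.)

        U_P = \{ \mathit{john}, \mathit{university} \}

        B_P = \{ p(t) \mid p \in \{\mathit{academic}, \mathit{student}, \mathit{other\_staff}\},\ t \in U_P \} \cup \{ \mathit{works\_in}(t_1, t_2) \mid t_1, t_2 \in U_P \}

        |B_P| = 3 \cdot 2 + 2^2 = 10, \qquad \text{so } P \text{ has } 2^{10} \text{ Herbrand interpretations } I \subseteq B_P.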

    Read the article

  • php doctrine relational form naming for saving data

    - by neziric
    Is it possible to have an HTML form organized/named in such a way that $foo->fromArray($_POST) would actually save relational data as well? Example: html_form_fields: user_name, country_name; db_table_users: id, user_name; db_table_countries: id, country_name. Update: forgot to say I'm trying to make this with Zend Framework forms.

    Read the article

  • Lightweight Relational database for BlackBerry OS 4.7

    - by Pavel
    Hey guys, I'm writing an app for BlackBerry OS 4.7 and would greatly benefit from having a lightweight relational database such as SQLite that my application can use to store data locally on the device. SQLite support is coming with OS 5.0, which is still in beta. Can anyone recommend any alternatives that permit commercial use? Additional information: concurrent access is not required, and transactions are not required. Thanks in advance :-)

    Read the article

  • Object Oriented vs Relational Databases

    - by Dan
    Object-oriented databases seem like a really cool idea to me: no need to worry about mapping your domain model to your database model, no messing around with SQL or ORM tools. The way I understand it, relational DBs offer some advantages when there are massive amounts of data and searching and indexing need to be done. To my mind, 99% of websites are not massive and never need to think about enterprise-scale issues, so why aren't OO DBs more widely used?

    Read the article

  • is there a formal algebra method to analyze programs?

    - by Gabriel
    Is there a formal/academic connection between an imperative program and algebra, and if so, where would I learn about it? The example I'm thinking of is: if(C1) { A1(); A2(); } if(C2) { A1(); A2(); } Represented as a sum of terms: (C1)(A1) + (C1)(A2) + (C2)(A1) + (C2)(A2) = (C1+C2)(A1+A2). The idea being that such manipulation could lead to programmatic refactoring - "factoring" being the common concept in this example.
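
    (As a sketch, the manipulation gestured at is ordinary distributivity, treating guards and actions as elements of some ring-like structure; which structure makes this precise is exactly what the question is asking.)

        (C_1 + C_2)(A_1 + A_2) = C_1 A_1 + C_1 A_2 + C_2 A_1 + C_2 A_2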

    Read the article

  • Creating an object relational schema from a Class diagram

    - by Caylem
    Hi ladies and gents. I'd like some help converting the following UML diagram: UML Diagram. The diagram shows 4 classes and relates to a loyalty card scheme for an imaginary supermarket. I'd like to create an object-relational database schema from it for use with Oracle 10g/11g. I'm not sure where to begin, so if somebody could give me a head start that would be great. I'm looking for help actually starting the schema: showing abstraction, constraints, types (subtypes, supertypes), methods and functions. Note: I'm not looking for anyone to make comments regarding the actual classes or whether changes should be made to the diagram, just the schema. Thanks
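
    (Since the diagram itself is not reproduced here, the following is only a generic sketch of the Oracle object-relational building blocks being asked about - object types, a subtype, a member function and an object table - with made-up names; the real supertype/subtype split would come from the four classes in the diagram.)

        CREATE TYPE person_t AS OBJECT (
          name   VARCHAR2(100),
          joined DATE,
          MEMBER FUNCTION years_member RETURN NUMBER  -- body would go in a CREATE TYPE BODY
        ) NOT FINAL;                                  -- NOT FINAL allows subtypes
        /
        CREATE TYPE cardholder_t UNDER person_t (     -- subtype: inherits person_t's attributes
          card_no NUMBER
        );
        /
        CREATE TABLE cardholders OF cardholder_t (    -- object table of the subtype
          card_no PRIMARY KEY
        );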

    Read the article

  • Import de-normalized relational data from Excel into SQL Server

    - by roryf
    I need to import data from an Excel spreadsheet into SQL Server, but the data isn't in a relational/normalized format, so the import wizard isn't going to cut it (as far as I know). The data is in this format:

        Category     SubCategory     Name        Description
        Category#1   SubCategory#1   Product#1   Description#1
        Category#1   SubCategory#1   Product#2   Description#2
        Category#1   SubCategory#2   Product#3   Description#3
        Category#1   SubCategory#2   Product#4   Description#4
        Category#2   SubCategory#3   Product#5   Description#5

    (Apologies, I'm lacking the inventiveness to come up with 'real' data at this time in the morning...) Each row contains a unique product, but the category structure is duplicated. I want to import this data into three tables: Category, SubCategory, Product (I know SubCategory should really be contained within Category; the DB was not my design). I need a way to import unique rows based on the Category and then SubCategory columns, and then, when importing the other columns into Product, obtain a reference to the SubCategory based on its name. Short of scripting this, is there any way to do it using the import wizard or some other tool?
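
    (One approach short of a full script: let the wizard load the sheet as-is into a single staging table, then populate the three targets with plain INSERT ... SELECT statements. This is only a sketch; the staging table and the CategoryID/SubCategoryID column names are assumptions.)

        -- 1. Import the spreadsheet into a staging table via the wizard.
        CREATE TABLE Staging (Category varchar(255), SubCategory varchar(255),
                              Name varchar(255), Description varchar(255));

        -- 2. Unique categories.
        INSERT INTO Category (Name)
        SELECT DISTINCT Category FROM Staging;

        -- 3. Unique subcategories, linked to their category by name.
        INSERT INTO SubCategory (Name, CategoryID)
        SELECT DISTINCT s.SubCategory, c.CategoryID
        FROM Staging s JOIN Category c ON c.Name = s.Category;

        -- 4. Products, looking up the SubCategory reference by name.
        INSERT INTO Product (Name, Description, SubCategoryID)
        SELECT s.Name, s.Description, sc.SubCategoryID
        FROM Staging s JOIN SubCategory sc ON sc.Name = s.SubCategory;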

    Read the article

  • Database system that is not relational.

    - by paan
    What other types of database systems are out there? I've recently come across CouchDB, which handles data in a non-relational way. It got me thinking about what other models other people are using. So, I want to know what other types of data model are out there. (I'm not looking for any specifics; I just want to look at how other people are handling data storage. My interest is purely academic.) The ones I already know are: RDBMS (MySQL, Postgres, etc.), document-based (CouchDB, Lotus Notes), key/value pair (BerkeleyDB).

    Read the article

  • Relational database data explorer / visualization?

    - by Ian Boyd
    Is there a tool that can let one browse relational data as a graph of connected nodes? For example, I'm faced with trying to cleanse some anomalous data. I can start with two offending rows. In this particular example, the TransactionID should, by business rules, be unique to the table, but I find a transaction that violates that rule:

        SELECT * FROM LCTTrans WHERE TransactionID = 1075048

        LCTID       TransactionID
        =========   =============
        4358        1075048
        4359        1075048

        2 row(s) affected

    But really what I want is to begin to hunt down all the related data, to try to see which is right. So this hypothetical software would start by showing me these two rows. Next, I want to see the transaction that is linked into this table. Now that transaction points to an MAL, so show me that. Now let's add the two LCTs that the transaction is "on"; a transaction can be on only one LCT, yet this one is pointing to two. Okay computer, both of those LCTs point to an MAL and to the transaction that created them, show me those. Those last two transactions also point at an MAL, and they themselves point to an LCT, show me those. Okay, now are there any entries in LCTTrans that point to LCTs 4358 or 4359?... And so on, and so on. I did all this manually, running single selects, copying and pasting uniqueidentifier keys and converting them into friendly id numbers so I could easily see the relationships. Is there software that can do this?

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not simply to get the most popular posts right now, but instead to find posts that are popular compared to other posts on the same blog. For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets waaay more readership than the other two blogs, so in raw numbers every post on the tech blog will always outnumber views on the other two. So let's say the average tech blog post gets 500 Facebook likes and the other two get an average of 50 likes per post. Then, when there is a sports blog post with 200 fb likes and a gossip blog post with 300 while the tech blog posts today have 500 likes, I want to highlight the sports and gossip blog posts (more likes than their blog's average, versus the tech blog with a higher number of likes but just average for that blog). The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. for post 1234:

        after 15 min:   10 fb likes,  4 tweets,  6 g+
        after 30 min:   15 fb likes, 15 tweets, 10 g+
        ...
        after 48 hours: 200 fb likes, 25 tweets, 15 g+

    By keeping a history like this for each blog post I can know the average number of likes/shares/tweets at any given time interval. So if, for example, the average number of fb likes for all blog posts 48 hrs after posting is 50 and a particular post has 200, I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time frame, i.e. fb likes after 30 min or tweets after 24 hrs, in order to compute the averages to compare against (or should averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question. My main question is: what should a database schema for storing this info look like? Assuming that the above approach is taken, I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases; in doing some basic reading I see that it is advisable to aim for a 3NF database. I have come up with the following possible schemas.

    Schema 1 (DB: Popular Posts)

        Table: Post
            post_id (primary key (pk))
            url
            title

        Table: Social Activity
            activity_id (pk)
            url (fk)
            type (i.e. facebook, twitter, g+)
            value
            timestamp

    This was my initial instinct (based on my very limited db knowledge). As far as I understand, this schema would be 3NF? I searched for designs of a similar database model and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario in that question is similar (recording weight/height of users over time). Taking the accepted answer for that question and applying it to my model results in something like:

    Schema 2 (same as above, but with the social activity broken down into 2 tables; DB: Popular Posts)

        Table: Post
            post_id (pk)
            url
            title

        Table: Social Measurement
            measurement_id (pk)
            post_id (fk)
            timestamp

        Table: Social Stat
            stat_id (pk)
            measurement_id (fk)
            type (i.e. facebook, twitter, g+)
            value

    The advantage I see in Schema 2 is that I will likely want to access all the values for a given time: when making a measurement at 30 min after a post is published, I will simultaneously check the number of fb likes, fb shares, fb comments, tweets, g+, LinkedIn. So with this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x. Another thought I had is that since it doesn't make sense to compare the number of fb likes with the number of tweets or g+ shares, maybe it makes sense to separate each social measurement into its own table?

    Schema 3 (DB: Popular Posts)

        Table: Post
            post_id (pk)
            url
            title

        Table: fb_likes
            fb_like_id (pk)
            post_id (fk)
            timestamp
            value

        Table: fb_shares
            fb_shares_id (pk)
            post_id (fk)
            timestamp
            value

        Table: tweets
            tweets_id (pk)
            post_id (fk)
            timestamp
            value

        Table: google_plus
            google_plus_id (pk)
            post_id (fk)
            timestamp
            value

    As you can see, I am generally lost/unsure of what approach to take. I'm sure this is a typical type of database problem (storing measurements over time, e.g. temperature statistics) that must have a common solution. Is there a design pattern/model for this, and does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
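
    (For reference, Schema 2 translated into DDL looks roughly like the sketch below; the data types and the measured_at/stat_type names are assumptions, since the question only lists columns.)

        CREATE TABLE post (
          post_id INT PRIMARY KEY,
          url     VARCHAR(2048),
          title   VARCHAR(255)
        );

        CREATE TABLE social_measurement (
          measurement_id INT PRIMARY KEY,
          post_id        INT NOT NULL REFERENCES post(post_id),
          measured_at    TIMESTAMP NOT NULL          -- 15 min, 30 min, ... after publication
        );

        CREATE TABLE social_stat (
          stat_id        INT PRIMARY KEY,
          measurement_id INT NOT NULL REFERENCES social_measurement(measurement_id),
          stat_type      VARCHAR(32) NOT NULL,       -- 'fb_like', 'tweet', 'g_plus', ...
          value          INT NOT NULL
        );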

    Read the article

  • How to design database for tests in online test application

    - by Kien Thanh
    I'm building an online test application. The purpose of the app is to allow teachers to create courses, topics of courses, and questions (every question has a mark), and to create tests for students so that students can take the tests online. To create tests for any course, the teacher first needs to create a test pattern for that course. A test pattern is actually a general test that includes the number of questions the teacher wants it to have; from that test pattern, the teacher will generate a number of tests corresponding to the number of students who will take tests for that course, and every student's test can have a different number of questions, although the max mark of every test is the same. For example, if the teacher generates tests for two students and the max mark of the test is 20: student A takes a test with 20 questions, while student B's test has only 10 questions, which means every question in student A's test may have a mark of 1 while the questions in student B's test have a mark of 2, so 20 = 10 x 2 (sorry for my bad English, but I don't know how to explain it better). I have designed tables for: User (includes student and teacher accounts), Course, Topic, Question, Answer. But I don't know how to define the associations between user and test pattern, test, and question. Currently I can only think of these: Test pattern table: name, description, dateStart, dateFinish, numberOfMinutes, maxMarkOfTest. Test table: test_pattern_id. And when a user (a student) takes tests, I think I will have one more table, Result: user_id, test_id, mark. But I can't set up the associations among test pattern, test, and question. How do I define the associations?
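
    (One possible way to wire up the missing associations, sketched with assumed id column names: a test belongs to a test pattern and to the student taking it, and a test_question linking table records which questions landed in which generated test and the mark each is worth there.)

        CREATE TABLE test (
          test_id         INT PRIMARY KEY,
          test_pattern_id INT NOT NULL REFERENCES test_pattern(test_pattern_id),
          user_id         INT NOT NULL REFERENCES "user"(user_id)   -- the student taking this test
        );

        CREATE TABLE test_question (
          test_id     INT NOT NULL REFERENCES test(test_id),
          question_id INT NOT NULL REFERENCES question(question_id),
          mark        INT NOT NULL,     -- 1 each in a 20-question test, 2 each in a 10-question test
          PRIMARY KEY (test_id, question_id)
        );

        CREATE TABLE result (
          user_id INT NOT NULL REFERENCES "user"(user_id),
          test_id INT NOT NULL REFERENCES test(test_id),
          mark    INT NOT NULL,
          PRIMARY KEY (user_id, test_id)
        );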

    Read the article

  • Predicting advantages of database denormalization

    - by Janus Troelsen
    I was always taught to strive for the highest normal form of database normalization, and we were taught Bernstein's synthesis algorithm to achieve 3NF. This is all very well, and it feels nice to normalize your database, knowing that fields can be modified while retaining consistency. However, performance may suffer. That's why I am wondering whether there is any way to predict the speedup/slowdown when denormalizing. That way, you can build your list of FDs featuring 3NF and then denormalize as little as possible. I imagine that denormalizing too much would waste space and time, because e.g. giant blobs are duplicated, or because it becomes harder to maintain consistency since you have to update multiple fields within a transaction. Summary: Given a 3NF FD set and a set of queries, how do I predict the speedup/slowdown of denormalization? Links to papers appreciated too.

    Read the article

  • How to manage primary key while updating [migrated]

    - by Subin Jacob
    In the following table, primaryKeyColumn is the primary key. To maintain the data history I always query the values with a WHERE condition (WHERE StatusColumn = 1), and I set StatusColumn to 0 when the data is edited (so that I can keep the previous data). But the problem is that once I update it to 0, I can't insert the same key into primaryKeyColumn again, since the column is validated as a primary key. How can I manage this kind of validation? What mistake did I make in this design?

        primaryKeyColumn   ValueColumn   StatusColumn
        ----------------   -----------   ------------
        2                  Name1         1
        3                  Name2         1
        4                  Name3         0
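
    (A common way out, sketched below with assumed names: stop using the business key as the physical primary key. Give every row/version a surrogate key and enforce uniqueness on the business key plus a version number, so the same business key can reappear across versions; engines that support filtered or partial unique indexes can additionally enforce "at most one row with StatusColumn = 1 per business key".)

        CREATE TABLE item_history (
          row_id       INT PRIMARY KEY,         -- surrogate key, unique per version
          business_key INT NOT NULL,            -- the old primaryKeyColumn value
          version      INT NOT NULL,            -- 1, 2, 3, ... incremented on each edit
          ValueColumn  VARCHAR(255),
          StatusColumn INT NOT NULL,            -- 1 = current, 0 = historical
          UNIQUE (business_key, version)
        );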

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs: Any record generated must be able to be connected to any other record in any other user table (excluding itself... the record, not the table). These "connections" are directional, and the list of connections a record has is user-ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended. A record's field can also include aggregate information from its connections (like obtaining an average, sum, etc.) that must be updated on change from another record it's connected to. To conserve memory, only relevant information must be loaded at any one time (I can't load the entire database into memory at load and go from there). I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote db. Neither the user tables, connections nor records are known at design time, as they are user-generated. I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt, I had one object managing all a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became... onerous (i.e. a huge spaghettified mess). Tracing dependencies using this method almost became impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing its own data and connections to other records. Doing this increases my memory footprint but also lets me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cycling recursive updates, etc. My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas, so I wanted to ask and see if anybody else has any ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far:

    1. Store all the connections in one big table. If I do this, either I load all connections at once (one big db call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down.

    2. Store all the outgoing connections for each user table in a separate table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB call to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it.

    3. Store all outgoing AND incoming connections for each user table in a separate table (using a flag to distinguish incoming from outgoing). This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal, but it does mean that when I load a user table, I only need to load one 'connection' table to have all the information I need. It also presents a separate problem, that of connection object creation: since each user table has a list of all its connections, there are two opportunities for a connection object to be made. However, connection objects (designed to facilitate communication between records) should only be created once, which means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection.

    Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
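
    (Not a recommendation, just option 1 made concrete as a sketch, with assumed integer key types: one connection table whose primary key covers the outgoing lookup and a second index covering the incoming lookup, which avoids the double storage of option 3.)

        CREATE TABLE connection (
          from_table_id     INT NOT NULL,
          from_record_id    INT NOT NULL,
          from_record_order INT NOT NULL,   -- user ordering of the outgoing list
          to_table_id       INT NOT NULL,
          to_record_id      INT NOT NULL,
          to_record_order   INT NOT NULL,   -- user ordering of the incoming list
          PRIMARY KEY (from_table_id, from_record_id, to_table_id, to_record_id)
        );

        -- The primary key already serves "all outgoing connections of record X";
        -- this index serves "all incoming connections of record Y".
        CREATE INDEX ix_connection_to ON connection (to_table_id, to_record_id);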

    Read the article

  • Should all foreign table references use foreign key constraints

    - by TecBrat
    Closely related to: Foreign key restrictions -> yes or no? I asked a question on SO and it led me to ask this here. If I'm faced with a choice between having a circular reference and just not enforcing the constraint, which is the better choice? In my particular case I have customers and addresses. I want an address to have a reference to a customer, and I want each customer to have a default billing address id and a default shipping address id. I might query for all addresses that have a certain customer ID, or I might query for the address with the ID that matches the default shipping or billing address ids. I'm not sure yet how the constraints (or lack thereof) will affect the system as my application and its data age.
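
    (For what it's worth, the circular shape described usually looks like the sketch below, with assumed names; making the default-address columns nullable lets rows be inserted in either order, and some engines also allow such constraints to be declared deferrable.)

        CREATE TABLE customer (
          customer_id                 INT PRIMARY KEY,
          default_billing_address_id  INT NULL,   -- nullable: a customer can exist before its addresses
          default_shipping_address_id INT NULL
        );

        CREATE TABLE address (
          address_id  INT PRIMARY KEY,
          customer_id INT NOT NULL REFERENCES customer(customer_id)
        );

        -- The circular half of the reference, added once both tables exist.
        ALTER TABLE customer ADD CONSTRAINT fk_default_billing
          FOREIGN KEY (default_billing_address_id) REFERENCES address(address_id);
        ALTER TABLE customer ADD CONSTRAINT fk_default_shipping
          FOREIGN KEY (default_shipping_address_id) REFERENCES address(address_id);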

    Read the article

  • Relationships in a Chen ERD

    - by Nibroc A Rehpotsirhc
    I am working on a Chen ERD to model our organization's merchandise. Our central entity is a Style. We have supplemental entities of Color and Season. I am defining our assortment as the relationship between these three entities; this relationship itself will have attributes and is defined by the three entities which participate in the mandatory relationship. The rules are: many Styles can be offered in a Season, and a Style can be offered in many Seasons. Within a Season, a Style can be offered in many Colors. I then have 2 other entities, one of which I believe is a weak entity, Climate, and the other may be weak, but I am not sure, this being Transaction Channel. I am thinking of these as relationships off of a relationship? Meaning, for a given Style/Color combination offered in a Season, it can be available through 1 or more Transaction Channels. Additionally, within a Season, a given Style/Color combination can be intended for 1 or more Climates. Is it valid to have relationships off of relationships? Or does this requirement dictate that I should think of this Style/Color/Season relationship as an entity itself, and define the relationships to Climate and Transaction Channel off of this entity?

    Read the article

  • Why do many designs ignore normalization in RDBMS?

    - by Yosi
    I've seen many designs where normalization wasn't the first consideration in the decision-making phase. In many cases those designs included more than 30 columns, and the main approach was "to put everything in the same place". From what I remember, normalization is one of the first and most important things taught, so why is it sometimes dropped so easily? Edit: Is it true that good architects and experts choose a denormalized design while inexperienced developers choose the opposite? What are the arguments against starting your design with normalization in mind?

    Read the article

  • SQL language drawbacks, The Third Manifesto

    - by David Portabella
    Some time ago I read about drawbacks of the SQL language (the basic language specification, not vendor-specific extensions), and one of the drawbacks was that the language does not let you create a set of tuples that don't come from a table. For instance, SELECT firstName, lastName from people; creates a set of tuples coming from the table people. Now, if I don't have this table people and I want to return a constant, I'd need something like this to return a set of two tuples (this would not require having a table): SELECT VALUES('james', 'dean'), ('tom', 'cruisse'); Why would I need that? For the same reasons that we can define constants (not only basic types, but objects and arrays also) in any advanced programming language. Workarounds: yes, I could create a temporary table, fill in the data, and SELECT from that table. This is a hack to work around the drawbacks of the poor SQL language. I think I read about this somewhere in "The Third Manifesto", but I can't find the paragraph/example talking about this concrete drawback anymore. Do you know a reference for it?
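
    (As a point of comparison rather than a rebuttal: several SQL implementations accept a table value constructor that comes close to the hypothetical syntax above; whether that counts as part of the "basic language specification" is exactly what the question is probing.)

        -- Returns two rows without reading any base table.
        SELECT firstName, lastName
        FROM (VALUES ('james', 'dean'),
                     ('tom',   'cruisse')) AS people(firstName, lastName);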

    Read the article

  • How can I implement a database TableView like thing in C++?

    - by Industrial-antidepressant
    How can I implement a TableView-like thing in C++? I want to emulate a tiny relational-database-like thing in C++. I have data tables, and I want to transform them somehow, so I need a TableView-like class. I want filtering, sorting, freely adding and removing items, and transforming the view (e.g. showing values as UPPERCASE and so on). The whole thing is inside a GUI application, so data tables and views are attached to a GUI (or HTML or something). So how can I identify an item in the view? How can I signal the view when the table is changed? Is there some design pattern for this? Here is a simple table, and a simple data item:

        #include <iostream>   // for std::cout in main()
        #include <string>
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/random_access_index.hpp>

        using boost::multi_index_container;
        using namespace boost::multi_index;

        struct Data {
            Data() {}
            int id;
            std::string name;
        };

        // index tags
        struct row {};
        struct id {};
        struct name {};

        typedef boost::multi_index_container<
            Data,
            indexed_by<
                random_access< tag<row> >,
                ordered_unique< tag<id>, member<Data, int, &Data::id> >,
                ordered_unique< tag<name>, member<Data, std::string, &Data::name> >
            >
        > TDataTable;

        class DataTable {
        public:
            typedef Data item_type;
            typedef TDataTable::value_type value_type;
            typedef TDataTable::const_reference const_reference;
            typedef TDataTable::index<row>::type TRowIndex;
            typedef TDataTable::index<id>::type TIdIndex;
            typedef TDataTable::index<name>::type TNameIndex;
            typedef TRowIndex::iterator iterator;

            DataTable() :
                row_index(rule_table.get<row>()),
                id_index(rule_table.get<id>()),
                name_index(rule_table.get<name>()),
                row_index_writeable(rule_table.get<row>())
            {
            }

            TDataTable::const_reference operator[](TDataTable::size_type n) const { return rule_table[n]; }

            std::pair<iterator, bool> push_back(const value_type& x) { return row_index_writeable.push_back(x); }

            iterator erase(iterator position) { return row_index_writeable.erase(position); }

            bool replace(iterator position, const value_type& x) { return row_index_writeable.replace(position, x); }

            template<typename InputIterator>
            void rearrange(InputIterator first) { return row_index_writeable.rearrange(first); }

            void print_table() const;

            unsigned size() const { return row_index.size(); }

            TDataTable rule_table;
            const TRowIndex& row_index;
            const TIdIndex& id_index;
            const TNameIndex& name_index;

        private:
            TRowIndex& row_index_writeable;
        };

        class DataTableView {
            DataTableView(const DataTable& source_table) {}
            // How can I implement this?
            // I want filtering, sorting, signaling the upper GUI layer, and so on...
        };

        int main() {
            Data data1;
            data1.id = 1;
            data1.name = "name1";

            Data data2;
            data2.id = 2;
            data2.name = "name2";

            DataTable table;
            table.push_back(data1);

            DataTable::iterator it1 = table.row_index.iterator_to(table[0]);
            table.erase(it1);
            table.push_back(data1);

            Data new_data(table[0]);
            new_data.name = "new_name";
            table.replace(table.row_index.iterator_to(table[0]), new_data);

            for (unsigned i = 0; i < table.size(); ++i)
                std::cout << table[i].name << std::endl;

        #if 0
            // usage scenarios:
            DataTableView table_view(table);
            table_view.fill_from_source();   // synchronization with source
            table_view.remove(data_item1);   // remove item from view
            table_view.add(data_item2);      // add item from source table
            table_view.filter(filterfunc);   // filtering
            table_view.sort(sortfunc);       // sorting

            // when modifying the source table, how to signal the table_view?
            // FYI: the table view is attached to a GUI item
            table.erase(data);
            table.replace(data);
        #endif

            return 0;
        }

    Read the article

  • when should a database table be broken into multiple tables with relations?

    - by GSto
    I have an application that needs to store client data, and part of that is some data about the client's employer as well. Assuming that a client can only have one employer, and that the chance of people having identical employer data is slim to none, which schema would make more sense to use?

    Schema 1

        Client Table:
            id int
            name varchar(255),
            email varchar(255),
            address varchar(255),
            city varchar(255),
            state char(2),
            zip varchar(16),
            employer_name varchar(255),
            employer_phone varchar(255),
            employer_address varchar(255),
            employer_city varchar(255),
            employer_state char(2),
            employer_zip varchar(16)

    Schema 2

        Client Table
            id int
            name varchar(255),
            email varchar(255),
            address varchar(255),
            city varchar(255),
            state char(2),
            zip varchar(16)

        Employer Table
            id int
            name varchar(255),
            phone varchar(255),
            address varchar(255),
            city varchar(255),
            state char(2),
            zip varchar(16),
            patient_id int

    Part of me thinks that since these are clearly two different 'objects' in the real world, separating them out into two different tables makes sense. However, since a client will always have an employer, I'm also not seeing any real benefit to separating them out, and it would make querying data about clients more complex. Is there any benefit / reason for creating two tables in a situation like this instead of one?
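
    (For what it's worth, the extra query complexity in the two-table version is usually a single join; a sketch against Schema 2, assuming lowercase table names and that the employer table's patient_id column points back at the client.)

        SELECT c.name, c.email, e.name AS employer_name, e.phone AS employer_phone
        FROM client c
        JOIN employer e ON e.patient_id = c.id;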

    Read the article

  • Storing translation data as JSON column

    - by j0ntech
    We're deciding how to store translations of some descriptions of database items. We could go the traditional way and keep a translations table (and a language table and an object_translation linking table), OR we thought it might be better to just have a Description column that contains JSON like the following: { "EN": "This is the translation in English", "EE": "See on kirjeldus eesti keeles" } Are there any serious downsides to this, i.e. reasons why we shouldn't use it? (I haven't seen it being used anywhere else.)
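
    (For comparison, the "traditional way" mentioned above would look roughly like this sketch, with assumed names; it is the shape that the JSON column collapses into a single field.)

        CREATE TABLE language (
          language_code CHAR(2) PRIMARY KEY          -- 'EN', 'EE', ...
        );

        CREATE TABLE translation (
          translation_id INT PRIMARY KEY,
          language_code  CHAR(2) NOT NULL REFERENCES language(language_code),
          description    TEXT NOT NULL
        );

        CREATE TABLE object_translation (            -- linking table
          object_id      INT NOT NULL,
          translation_id INT NOT NULL REFERENCES translation(translation_id),
          PRIMARY KEY (object_id, translation_id)
        );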

    Read the article

  • Type of AI to tackle this problem?

    - by user1154277
    I posted this on Stack Overflow, but want to get your recommendations as well, as a user there recommended I post it here. I'm going to say from the beginning that I am not a programmer; I have a cursory knowledge of different types of AI and am just a businessman building a web app. Anyways, the web app I am investing in to develop is for a hobby of mine. There are many part manufacturers, product manufacturers, upgrade and add-on manufacturers, etc. for hardware/products in this hobby's industry. Currently, I am in the process of building a crowd-sourced platform for people who are knowledgeable to go in and mark up compatibility between those parts, as it's not always clear-cut whether they are compatible. For example: Manufacturer A makes an "A" class product, and manufacturer B makes an upgrade/part that generally goes with class "A" products but is for one reason or another not compatible with Manufacturer A's particular "A" class product. However, a good chunk (60%-70%) of the products/parts in the database can have their compatibility inferred from their properties. For example: part 1 is type "A" with an "X" mm receiver and part 2 is also type "A" with an "X" mm interface, and thus the two parts are compatible; or part 1 is an 8mm gear, thus all 8mm bushings from any manufacturer are compatible with part 1. Furthermore, gears can only have compatibility relationships in the database with bushings and gearboxes, but there can be no meaningful compatibility between a gear and a rail or receiver, since those parts don't interface. Now what I want is an AI that can learn from the decisions of the crowdsourced platform community and infer compatibility for new parts/products based on their tagged attributes, what type of part they are, etc. What would be the best form of AI to tackle this? I was thinking of an expert system, but explicitly engineering all of the knowledge rules would be daunting because of the complex relations between literally tens of thousands of parts, hundreds of part types and many manufacturers. Would an ANN (neural network) be ideal to learn from the many inputs/decisions of the crowdsource platform users? Any help/input is much appreciated.

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >