Search Results

Search found 1148 results on 46 pages for 'backbone relational'.

Page 7 of 46

  • How do I insert data into an object-relational table with multiple REFs in the schema?

    - by Yiling
    I have a table whose schema is Table(number, ref, ref, varchar2, varchar2, ...). How would I insert a row of data into this table? When I do:

        insert into table values (1, select ref(p), ref(d), '239 F.3d 1343', '35 USC § 283', ... from plaintiff p, defendant d where p.name='name1' and d.name='name2');

    I get a "missing expression" error. If I do:

        insert into table 1, select ref(p), ref(d), ... from plaintiff p, defendant d where p.name=...;

    I get a "missing keyword VALUES" error.
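    A minimal sketch of the usual fix, with invented table and type names since the actual schema isn't shown: when the inserted row comes from a query, Oracle takes INSERT ... SELECT with no VALUES clause, and the literals move into the select list:

        -- hedged sketch; "cases" and its row type are hypothetical
        INSERT INTO cases
        SELECT 1, REF(p), REF(d), '239 F.3d 1343', '35 USC § 283'
        FROM   plaintiff p, defendant d
        WHERE  p.name = 'name1'
        AND    d.name = 'name2';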

    Read the article

  • What is the best practice for relational database tables in MySQL?

    - by George
    Hi, I know there is a lot of info on MySQL out there, but I was not really able to find an answer to this specific and actually simple question. Let's say I have two tables: USERS (with many fields, e.g. name, street, email, etc.) and GROUPS (also with many fields). The relation is (I guess?) 1:n, that is, ONE user can be a member of MANY groups. What I did is create another table, named USERS_GROUPS_REL. This table has only two fields: us_id (unique key of table USERS) and gr_id (unique key of table GROUPS). In PHP I do a query with a join. Is this "best practice" or is there a better way? Thankful for any hint!
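    That junction-table layout is the standard pattern for this kind of relationship; a hedged sketch (column types and names are illustrative):

        CREATE TABLE users_groups_rel (
            us_id INT NOT NULL,
            gr_id INT NOT NULL,
            PRIMARY KEY (us_id, gr_id),
            FOREIGN KEY (us_id) REFERENCES users(id),
            FOREIGN KEY (gr_id) REFERENCES groups(id)
        );

        -- all groups a given user belongs to
        SELECT g.*
        FROM   groups g
        JOIN   users_groups_rel ug ON ug.gr_id = g.id
        WHERE  ug.us_id = 42;

    The composite primary key keeps each pairing unique, and the same table answers the reverse question (all users in a group), which is what makes it the usual choice for many-to-many relations.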

    Read the article

  • Approach for monitoring internet backbone traffic volume

    - by Greg Harman
    I'm interested in getting a picture of relative volume across different internet backbones. In particular, I'd like to see how traffic volume over a given route differs over the course of a day or from one day to the next. InternetTrafficReport.com is the closest approximation to this that I've found online, and their approach is to test ping times to a number of key routers from several geographically-dispersed servers. This sounds like one straightforward way to measure, but I don't have several geographically-dispersed servers. Is there a different approach for sampling this type of information from a single server?

    Read the article

  • Recommendations for hierarchical data on non-relational databases?

    - by Luki
    I'm developing a web application that uses a non-relational database as a backend (django-nonrel + AppEngine). I need to store some hierarchical data (projects/subproject_1/subproject_N/tasks), and I'm wondering which pattern I should use. For now I have thought of:

      - Adjacency list (store the item's parent id)
      - Nested sets (store left and right values for the item)

    In my case, the depth of nesting for a normal user will not exceed 4-5 levels. Also, in the UI, I would like to paginate the items on the first level, to avoid loading too many items on the first page load. From what I understand so far, nested sets are great when the hierarchy is used mostly for display, while adjacency lists are great when the tree is edited often. In my case I guess I need display more than editing (with nested sets, even if display would work great, the above pagination could complicate editing). Do you have any thoughts and advice, based on your experience with non-relational databases?

    Read the article

  • If whether some standards apply is a case of "it depends", should I stick with custom approaches?

    - by Travis J
    If I have an unconventional approach that works better than the industry standard, should I stick with it even though in principle it violates those standards? What I am talking about is referential integrity for relational database management systems. The standard for enforcing referential integrity is to CASCADE deletes. In practice, this is just not going to work all the time; in my current case, it does not. The alternatives suggested are to change the reference to NULL or DEFAULT, or just to take NO ACTION - usually in the form of a "soft delete". I am all about enforcing referential integrity. Love it. However, sometimes it just does not fully apply to use all the standards in practice. My approach has been to slightly abandon a small part of one of those practices: the part about not leaving "hanging references" around. Oops. The trade-off is plentiful in this situation, I believe. Instead of having deprecated data in the production database, a splattering of "soft delete" logic all across my controllers (and sometimes views, depending on how far down the chain the soft delete occurred), and the prospect of queries taking longer and longer - instead of all that - I now have a recycle bin and centralized logic. The only trade-off is that I must explicitly manage the possibility of "hanging references", which can be done through generics with one class. Any thoughts?
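    For reference, the alternatives named above map onto standard foreign-key actions; a hedged sketch with hypothetical table names:

        -- nullify the reference instead of cascading the delete
        CREATE TABLE order_item (
            id       INT PRIMARY KEY,
            order_id INT REFERENCES orders(id) ON DELETE SET NULL
        );

        -- or the "soft delete" variant: never DELETE, just flag the row
        UPDATE orders SET is_deleted = 1 WHERE id = 42;
        -- active-row queries then filter:  ... WHERE is_deleted = 0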

    Read the article

  • Algorithm for rating books: Relative perception

    - by suneet
    So I am developing this application for rating books (think IMDB for books) using a relational database. Problem statement: let's say book "A" deserves 8.5 in an absolute sense. If A is the best book I have ever seen, I'll most probably rate it 9.5, whereas for someone else it might be just an average book, so he will rate it lower (say around 8). Let's assume 4 such guys rate it 8, and there are 10 guys like me (who haven't ever read great literature) who all rate it 9.5-10. This will effectively push its cumulative rating above 9: (9.5*10 + 8*4) / 14 ≈ 9.1, whereas we needed the result to be 8.5. How can I take care of (normalize) this bias due to the differing perceptions of individuals? My proposed solution: here's one way I think it could be solved. We can have a variable Lit_coefficient which tells us how much knowledge a user has about literature. If I rate "A" 9.5 and person "X" rates it 8, then he must have read books much better than "A", and thus his Lit_coefficient should be higher. We can then normalize the ratings according to the Lit_coefficient of each user. Could there be a better algorithm/solution for this?
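    One standard way to formalize this kind of normalization (offered as a sketch, not as the asker's method) is per-user z-scoring, as used in collaborative filtering: rescale each rating by that user's own mean and spread before averaging, so a habitual 9.5-giver and a habitual 8-giver contribute comparably:

        \hat{r}_{u,b} = \frac{r_{u,b} - \mu_u}{\sigma_u},
        \qquad
        \mathrm{score}(b) = \mu + \sigma \cdot \frac{1}{|U_b|} \sum_{u \in U_b} \hat{r}_{u,b}

    where \mu_u and \sigma_u are the mean and standard deviation of user u's ratings, U_b is the set of users who rated book b, and \mu, \sigma map the result back onto the global rating scale. The proposed Lit_coefficient would play a similar per-user calibration role.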

    Read the article

  • At what point does caching become necessary for a web application?

    - by Zaemz
    I'm considering the architecture for a web application. It's going to be a single-page application that updates itself whenever the user selects different information on several forms available on the page. I figured it wouldn't be wise to rely on the user's browser to correctly interpret the information and update the view, so I'll send the user's choices to the server, get the data, send it back to the browser, and update the view. There's a table with 10,000 or so rows in a MySQL database that's going to be accessed pretty often - about once every 5-30 seconds for each user - and I'm expecting 200-300 concurrent users at a time. I've read that a well-designed relational database with simple queries is nothing for an RDBMS to handle, really, but I would still like to keep things quick for the client. Should this even be a concern for me at the moment? At what point would it be helpful to start using a separate caching service like Memcached or Redis, or would it even be necessary? I know that MySQL caches popular queries and their results - would this suffice?

    Read the article

  • Is it hacky to manually construct JSON and manually handle GET, POST instead of using a proper RESTful API for AJAX functionality?

    - by kliao
    I started building a Django app, but this probably applies to other frameworks as well. In the Backbone.js methods that call the server (fetch(), create(), destroy(), etc.), should you be using a proper RESTful API such as one provided by Tastypie or Django-Piston? I've found it easier and more flexible to just construct the JSON in my Django views, which are mapped to some URLs that Backbone.js can use. Then again, I'm probably not leveraging Tastypie/Django-Piston functionality to the fullest. I'm not ready to make a full-fledged RESTful API for my app yet; I simply would like to use some of the AJAX functionality that Backbone.js supports. What are the pros/cons of doing this?

    Read the article

  • Embedded non-relational (nosql) data store

    - by Igor Brejc
    I'm thinking about using/implementing some kind of embedded key-value (or document) store for my Windows desktop application. I want to be able to store various types of data (GPS tracks would be one example) and of course be able to query this data. The amount of data would be such that it couldn't all be loaded into memory at the same time. I'm thinking about using sqlite as a storage engine for a key-value store, something like y-serial, but written in .NET. I've also read about FriendFeed's usage of MySQL to store schema-less data, which is a good pointer on how to use an RDBMS for non-relational data. sqlite seems to be a good option because of its simplicity, portability and library size. My question is whether there are any other options for an embedded non-relational store? It doesn't need to be distributable and it doesn't have to support transactions, but it does have to be accessible from .NET and it should have a small download size. UPDATE: I've found an article titled "SQLite as a Key-Value Database" which compares sqlite with Berkeley DB, another embedded key-value store library.
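    For concreteness, the sqlite-as-key-value idea reduces to very little schema; a hedged sketch (names are illustrative, and the application layer does the serialization):

        CREATE TABLE kv_store (
            key   TEXT PRIMARY KEY,
            value BLOB NOT NULL
        );

        -- put / get / delete become:
        INSERT OR REPLACE INTO kv_store (key, value) VALUES ('track:42', @blob);
        SELECT value FROM kv_store WHERE key = 'track:42';
        DELETE FROM kv_store WHERE key = 'track:42';

    where @blob is a bound parameter holding the serialized document.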

    Read the article

  • Relational clausal logic question: what is a Herbrand interpretation?

    - by anotherstat
    I'm having a hard time coming to grips with relational clausal logic, and I'm not sure if this is the place to ask, but it would help me so much with revision if anyone could provide guidance on the following question. Let P be the program:

        academic(X); student(X); other_staff(X) :- works_in(X, university).
        :- student(john).
        :- other_staff(john).
        works_in(john, university).

    Question: which are the Herbrand interpretations of P?
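    For orientation, a standard definition (not specific to this course's notation): the Herbrand universe U_P is the set of ground terms of P, the Herbrand base B_P is the set of ground atoms, and a Herbrand interpretation is any subset of B_P. For this P, in LaTeX:

        U_P = \{\, john,\ university \,\}
        B_P = \{\, academic(t),\ student(t),\ other\_staff(t) \mid t \in U_P \,\}
              \cup \{\, works\_in(s, t) \mid s, t \in U_P \,\}

    so B_P contains 3 \cdot 2 + 2 \cdot 2 = 10 ground atoms and P has 2^{10} Herbrand interpretations; the Herbrand models are those interpretations that additionally satisfy every clause of P.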

    Read the article

  • PHP Doctrine relational form naming for saving data

    - by neziric
    Is it possible to have an HTML form organized/named in such a way that $foo->fromArray($_POST) would actually save relational data as well? Example:

        html form fields:     user_name, country_name
        db_table_users:       id, user_name
        db_table_countries:   id, country_name

    Update: forgot to say I'm trying to do this with Zend Framework forms.

    Read the article

  • Lightweight Relational database for BlackBerry OS 4.7

    - by Pavel
    Hey guys, I'm writing an app for BlackBerry OS 4.7 and would greatly benefit from having a lightweight relational database such as SQLite that my application can use to store data locally on the device. SQLite support is coming in OS 5.0, which is still in beta. Can anyone recommend any other alternatives that permit commercial use? Additional information:

      - Concurrent access not required
      - Transactions not required

    Thanks in advance :-)

    Read the article

  • Object Oriented vs Relational Databases

    - by Dan
    Object-oriented databases seem like a really cool idea to me: no need to worry about mapping your domain model to your database model, no messing around with SQL or ORM tools. The way I understand it, relational DBs offer some advantages when there are massive amounts of data and searching and indexing need to be done. To my mind 99% of websites are not massive, and enterprise issues never need to be thought about, so why aren't OO DBs more widely used?

    Read the article

  • Why is Backbone.js a bad option in the ThoughtWorks Technology Radar 2012?

    - by Cfontes
    In the latest Technology Radar 2012 they state that Backbone.js has pushed its MVC abstraction too far, and say that Knockout.js or Angular.js should be used instead. I cannot see why they think that Backbone.js's model is bad; for me it's just a way to establish a standard so people have some kind of roadmap for developing frontend JS without spaghetti code. Also, to me, Angular and Knockout solve a different problem. I like both of them, but having to code all the MVC classes myself feels like rework. Backbone is simple, easily extendable, and fast to learn; it comes with a lot of goodies and is easy to combine with other frameworks (see Knockback.js). Can anybody tell me what made it so bad in their eyes?

    Read the article

  • Creating an object-relational schema from a class diagram

    - by Caylem
    Hi ladies and gents. I'd like some help converting the following UML diagram (linked image not reproduced here). The diagram shows 4 classes and relates to a loyalty-card scheme for an imaginary supermarket. I'd like to create an object-relational database schema from it for use with Oracle 10g/11g. I'm not sure where to begin; if somebody could give me a head start that would be great. I'm looking for help actually starting the schema: showing abstraction, constraints, types (subtypes, supertypes), methods and functions. Note: I'm not looking for anyone to comment on the actual classes or whether changes should be made to the diagram - just the schema. Thanks
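    Since the diagram itself isn't reproduced above, here is only a hedged sketch of the Oracle 10g/11g building blocks such a schema would combine, with invented names standing in for the real classes:

        -- supertype/subtype abstraction via object types
        CREATE TYPE person_t AS OBJECT (
            name VARCHAR2(50),
            MEMBER FUNCTION label RETURN VARCHAR2  -- body supplied in CREATE TYPE BODY
        ) NOT FINAL;
        /
        CREATE TYPE cardholder_t UNDER person_t (
            points NUMBER
        );
        /
        -- an object table with an attribute constraint
        CREATE TABLE cardholders OF cardholder_t (
            CONSTRAINT points_nonneg CHECK (points >= 0)
        );

    Associations between the classes would then become REF columns (with SCOPE IS constraints) or nested tables, depending on the multiplicities shown in the diagram.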

    Read the article

  • Import de-normalized relational data from Excel into SQL Server

    - by roryf
    I need to import data from an Excel spreadsheet into SQL Server, but the data isn't in a relational/normalized format, so the import wizard isn't going to cut it (as far as I know). The data is in this format:

        Category      SubCategory      Name        Description
        Category#1    SubCategory#1    Product#1   Description#1
        Category#1    SubCategory#1    Product#2   Description#2
        Category#1    SubCategory#2    Product#3   Description#3
        Category#1    SubCategory#2    Product#4   Description#4
        Category#2    SubCategory#3    Product#5   Description#5

    (apologies, I'm lacking the inventiveness to come up with 'real' data at this time in the morning...) Each row contains a unique product, but the category structure is duplicated. I want to import this data into three tables: Category, SubCategory, and Product (I know SubCategory should really be contained within Category; the DB was not my design). I need a way to import unique rows based on the Category and then SubCategory columns, and then, when importing the other columns into Product, obtain a reference to the SubCategory based on name. Short of scripting this, is there any way to do it using the import wizard or some other tool?
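    If light scripting is acceptable, the common pattern is a staging table: bulk-load the sheet as-is, then fill the three tables with INSERT ... SELECT DISTINCT. A hedged sketch, assuming (as the sample data suggests) that subcategory names are unique across categories; the target column names are guesses, since that schema isn't shown:

        -- 1. bulk-load the spreadsheet into a flat staging table
        CREATE TABLE staging (
            category    VARCHAR(100),
            subcategory VARCHAR(100),
            name        VARCHAR(100),
            description VARCHAR(400)
        );

        -- 2. unique categories, then subcategories keyed to them
        INSERT INTO Category (Name)
        SELECT DISTINCT category FROM staging;

        INSERT INTO SubCategory (CategoryId, Name)
        SELECT DISTINCT c.Id, s.subcategory
        FROM staging s JOIN Category c ON c.Name = s.category;

        -- 3. products, resolving the SubCategory reference by name
        INSERT INTO Product (SubCategoryId, Name, Description)
        SELECT sc.Id, s.name, s.description
        FROM staging s JOIN SubCategory sc ON sc.Name = s.subcategory;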

    Read the article

  • Database systems that are not relational

    - by paan
    What other types of database systems are out there? I recently came across CouchDB, which handles data in a non-relational way, and it got me thinking about what other models people are using. So, I want to know what other types of data model are out there. (I'm not looking for any specifics; I just want to see how other people are handling data storage. My interest is purely academic.) The ones I already know are:

      - RDBMS (MySQL, Postgres, etc.)
      - Document-based (CouchDB, Lotus Notes)
      - Key/value pair (BerkeleyDB)

    Read the article

  • Relational database data explorer / visualization?

    - by Ian Boyd
    Is there a tool that can let one browse relational data as a graph of connected nodes? For example, I'm faced with trying to cleanse some anomalous data, and I can start with two offending rows. In this particular example, the TransactionID should, by business rules, be unique to the table, but I find a transaction that violates that rule:

        SELECT * FROM LCTTrans WHERE TransactionID = 1075048

        LCTID      TransactionID
        =========  =============
        4358       1075048
        4359       1075048
        (2 row(s) affected)

    But really what I want is to begin hunting down all the related data, to try to see which row is right. So this hypothetical software would start by showing me these two rows. Next, I want to see the transaction that is linked into this table. Now that transaction points to an MAL, so show me that. Now let's add the two LCTs that the transaction is "on" - a transaction can be on only one LCT, yet this one is pointing to two. Okay computer, both of those LCTs point to an MAL and to the transaction that created them; show me those. Those last two transactions also point at an MAL, and they themselves point to an LCT; show me those. Okay, now are there any entries in LCTTrans that point to LCTs 4358 or 4359?... And so on, and so on. I did all this manually, running single selects, copying and pasting uniqueidentifier keys and converting them into friendly id numbers so I could easily see the relationships. Is there software that can do this?

    Read the article

  • How to Implement Backbone Java Logic Code into Android

    - by lord_sneed
    I wrote a program to work from the console in Eclipse and the terminal window in Linux. I am now transforming it into an Android app, and I have the basic functionality of the Android UI done up to the point where it needs to use the logic from the Java file of the program I wrote. All of the inputs in that Java file currently come from the keyboard (via Scanners). My question is: how do I transform this to work with the user interaction of the app? The only input would be from the built-in NumberPicker. Should I copy and paste the code from the Java program into the activity's onCreate method and change all of the input methods (Scanners) to work with the Android input? Or should I create variables in the activity file and pass them to the Java program (in the separate class)? (If so, how would I do that? The Java file starts from the main method: public static void main(String[] args) {) Also, will the print statements I have, System.out.println(...);, translate directly into the Android UI and print on the screen, or do I have to modify those?

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not simply to find the most popular posts overall, but posts that are popular compared to other posts on the same blog. For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets far more readership than the other two, so in raw numbers every post on the tech blog will always outnumber the other two. Say the average tech blog post gets 500 Facebook likes and the other two average 50 likes per post. Then when a sports blog post has 200 FB likes and a gossip blog post has 300, while today's tech blog posts have their usual 500, I want to highlight the sports and gossip posts (more likes than their blogs' averages, versus the tech blog's higher raw count that is merely average for that blog).

    The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15) I will check how many likes/shares/comments an entry has received on each social network (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. for post 1234:

        after 15 min:   10 fb likes, 4 tweets, 6 g+
        after 30 min:   15 fb likes, 15 tweets, 10 g+
        ...
        after 48 hours: 200 fb likes, 25 tweets, 15 g+

    By keeping a history like this for each blog post, I can know the average number of likes/shares/tweets at any given time interval. So, for example, if the average number of FB likes for all blog posts 48 hrs after posting is 50 and a particular post has 200, I can mark it as popular and feature/highlight it. A design consideration is being able to easily query the values (likes/shares) for a specific time frame, i.e. FB likes after 30 min or tweets after 24 hrs, in order to compute the averages to compare against (or should averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question.

    My main question is: what should a database schema for storing this info look like? I am brand new to databases; from some basic reading I see it is advisable to aim for 3NF. I have come up with the following possible schemas.

    Schema 1 (DB: Popular Posts):

        Table: Post
            post_id (pk), url, title
        Table: SocialActivity
            activity_id (pk), url (fk), type (i.e. facebook, twitter, g+), value, timestamp

    This was my initial instinct (based on my very limited DB knowledge). As far as I understand, this schema would be 3NF? I searched for similar database models and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario there is similar (recording users' weight/height over time). Applying its accepted answer to my model gives:

    Schema 2 (same as above, but with social activity broken into two tables):

        Table: Post
            post_id (pk), url, title
        Table: SocialMeasurement
            measurement_id (pk), post_id (fk), timestamp
        Table: SocialStat
            stat_id (pk), measurement_id (fk), type (i.e. facebook, twitter, g+), value

    The advantage I see in Schema 2 is that I will likely want to access all the values for a given time: when making a measurement 30 min after a post is published, I will simultaneously check FB likes, FB shares, FB comments, tweets, g+, and LinkedIn. With this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x.

    Another thought: since it doesn't make sense to compare FB likes with tweets or g+ shares, maybe each social measurement belongs in its own table?

    Schema 3:

        Table: Post
            post_id (pk), url, title
        Table: fb_likes
            fb_like_id (pk), post_id (fk), timestamp, value
        Table: fb_shares
            fb_shares_id (pk), post_id (fk), timestamp, value
        Table: tweets
            tweets_id (pk), post_id (fk), timestamp, value
        Table: google_plus
            google_plus_id (pk), post_id (fk), timestamp, value

    As you can see, I am generally unsure of which approach to take. I'm sure this is a typical database problem (storing measurements over time, like temperature statistics) that must have a common solution. Is there a design pattern/model for this, and does it have a name? I tried searching for "database periodic data collection" and "database measurements over time" but didn't find anything specific. What would be an appropriate model for this problem?
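    To make Schema 2 concrete, a hedged sketch in SQL (types and names illustrative, not prescriptive):

        CREATE TABLE post (
            post_id INT PRIMARY KEY,
            url     VARCHAR(400),
            title   VARCHAR(200)
        );

        CREATE TABLE social_measurement (
            measurement_id INT PRIMARY KEY,
            post_id        INT REFERENCES post(post_id),
            measured_at    TIMESTAMP
        );

        CREATE TABLE social_stat (
            stat_id        INT PRIMARY KEY,
            measurement_id INT REFERENCES social_measurement(measurement_id),
            stat_type      VARCHAR(20),   -- 'fb_like', 'tweet', 'g_plus', ...
            value          INT
        );

        -- all social stats for post 1234 at one measurement
        SELECT st.stat_type, st.value
        FROM   social_stat st
        JOIN   social_measurement m ON m.measurement_id = st.measurement_id
        WHERE  m.post_id = 1234
        AND    m.measured_at = '2012-07-01 12:30:00';

    One row per (measurement, stat type) keeps the design normalized while still answering the "everything at time x" query with a single join, which is the advantage the question identifies in Schema 2.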

    Read the article

  • How to design a database for tests in an online test application

    - by Kien Thanh
    I'm building an online test application. The purpose of the app is to let teachers create courses, topics within courses, and questions (every question has a mark), and to create tests that students can take online. To create the tests of a course, the teacher first creates a test pattern for that course; the test pattern is a general test specifying the number of questions the teacher wants. From that pattern, the teacher generates one test per student taking the course. Every student's test can have a different number of questions, although the max mark of every test is the same. For example, if the teacher generates tests for two students with a max mark of 20: student A's test has 20 questions worth 1 mark each, while student B's test has only 10 questions worth 2 marks each (10 x 2 = 20). I have designed tables for:

      - User (includes both student and teacher accounts)
      - Course
      - Topic
      - Question
      - Answer

    But I don't know how to define the associations between User and test pattern, test, and question. Currently I can only think of these:

        Test pattern table: name, description, dateStart, dateFinish, numberOfMinutes, maxMarkOfTest
        Test table: test_pattern_id

    And when a user (a student) takes a test, I think I will have one more table:

        Result: user_id, test_id, mark

    But I can't set up the associations among test pattern, test, and question. How do I define these associations?
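    One hedged way to wire up the missing associations (table and column names invented): a test pattern belongs to a course and its creating teacher; a generated test references its pattern and its student; and questions attach to a test through a join table that carries the per-test mark, so the same question can be worth 1 mark in one test and 2 in another:

        CREATE TABLE test_pattern (
            id         INT PRIMARY KEY,
            course_id  INT REFERENCES course(id),
            teacher_id INT REFERENCES users(id),
            max_mark   INT
        );

        CREATE TABLE test (
            id              INT PRIMARY KEY,
            test_pattern_id INT REFERENCES test_pattern(id),
            student_id      INT REFERENCES users(id)
        );

        CREATE TABLE test_question (
            test_id     INT REFERENCES test(id),
            question_id INT REFERENCES question(id),
            mark        INT,
            PRIMARY KEY (test_id, question_id)
        );

    An application-level rule (or a check on generation) would then enforce that the marks in each test sum to the pattern's max_mark.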

    Read the article

  • Predicting advantages of database denormalization

    - by Janus Troelsen
    I was always taught to strive for the highest normal form in database normalization, and we were taught Bernstein's synthesis algorithm to achieve 3NF. This is all very well, and it feels nice to normalize your database, knowing that fields can be modified while retaining consistency. However, performance may suffer. That's why I am wondering whether there is any way to predict the speedup/slowdown of denormalization. That way, you can build your list of FDs featuring 3NF and then denormalize as little as possible. I imagine that denormalizing too much would waste space and time, because e.g. giant blobs are duplicated, or it becomes harder to maintain consistency because you have to update multiple fields in a transaction. Summary: given a 3NF FD set and a set of queries, how do I predict the speedup/slowdown of denormalization? Links to papers appreciated too.

    Read the article

  • How to manage primary key while updating [migrated]

    - by Subin Jacob
    In the following table, primaryKeyColumn is the primary key. To maintain data history I always read the values with a WHERE condition (WHERE StatusColumn = 1), and I set StatusColumn to 0 if the data is edited (so that I can keep the previous data). But the problem is, once I update it to 0, I can't insert the same key into primaryKeyColumn again, since the column is constrained as a primary key. How can I manage this kind of validation? What mistake did I make in this design?

        primaryKeyColumn   ValueColumn   StatusColumn
        ----------------   -----------   ------------
        2                  Name1         1
        3                  Name2         1
        4                  Name3         0
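    A common fix for this history-keeping pattern, sketched with hypothetical names: stop making the business key the primary key on its own, and use a composite key of business key plus version, so an edit retires the current row and inserts the next version under the same business key:

        CREATE TABLE item_history (
            business_key INT NOT NULL,
            version      INT NOT NULL,
            value_column VARCHAR(50),
            status       INT NOT NULL DEFAULT 1,  -- 1 = current, 0 = historical
            PRIMARY KEY (business_key, version)
        );

        -- editing business key 2: retire the current row, add version 2
        UPDATE item_history SET status = 0 WHERE business_key = 2 AND status = 1;
        INSERT INTO item_history (business_key, version, value_column, status)
        VALUES (2, 2, 'Name1-edited', 1);

    A surrogate auto-increment id with a separate uniqueness rule on the current row (e.g. a filtered unique index on business_key WHERE status = 1, where the DBMS supports it) is another route.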

    Read the article
