Search Results

Search found 34093 results on 1364 pages for 'database architecture'.

  • Are there any free online movie information APIs?

    - by Gary Willoughby
    For music there is the Gracenote CDDB SDK, etc., but does an online service exist for getting information about movies? The only solution I can see at the minute is querying IMDB and scraping the page. The problem I have is that I have a list of film titles and I want to retrieve stuff like the plot, director, cast, release date, DVD cover art, etc.
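
    One free option worth checking is the OMDb API (http://www.omdbapi.com/), which serves exactly this kind of data as JSON. A minimal Python sketch, assuming the third-party requests library and an OMDb API key (the key requirement is OMDb's current policy, an assumption here):

        import requests  # third-party HTTP client: pip install requests

        titles = ["Blade Runner", "Alien"]  # hypothetical title list
        for title in titles:
            # OMDb answers with JSON fields such as Title, Director, Plot, Poster
            info = requests.get("http://www.omdbapi.com/",
                                params={"t": title, "apikey": "YOUR_KEY"}).json()
            print(info.get("Title"), "|", info.get("Director"), "|", info.get("Plot"))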

    Read the article

  • How can I create this complicated query?

    - by mTuran
    Hi, I have 3 tables: projects, skills and project_skills. The projects table holds a project's general data, the skills table holds a skill id and skill name, and the project_skills table holds the projects' skill relationships. Here is the schema of the tables:

        CREATE TABLE IF NOT EXISTS `project_skills` (
          `project_id` int(11) NOT NULL,
          `skill_id` int(11) NOT NULL,
          KEY `project_id` (`project_id`),
          KEY `skill_id` (`skill_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci;

        CREATE TABLE IF NOT EXISTS `projects` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `employer_id` int(11) NOT NULL,
          `project_title` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `project_description` text COLLATE utf8_turkish_ci NOT NULL,
          `project_budget` int(11) NOT NULL,
          `project_allowedtime` int(11) NOT NULL,
          `project_deadline` datetime NOT NULL,
          `total_bids` int(11) NOT NULL,
          `average_bid` int(11) NOT NULL,
          `created` datetime NOT NULL,
          `active` tinyint(1) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `created` (`created`),
          KEY `employer_id` (`employer_id`),
          KEY `active` (`active`),
          FULLTEXT KEY `project_title` (`project_title`,`project_description`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=3;

        CREATE TABLE IF NOT EXISTS `skills` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `category` int(11) NOT NULL,
          `name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `seo_name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `total_projects` int(11) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `seo_name` (`seo_name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=224;

    I want to select projects with their related skill names. I think I have to use a JOIN but I don't know how. Thanks
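
    A sketch of one possible answer: join through the link table and collapse each project's skill names into one column (GROUP_CONCAT is MySQL-specific, which matches the DDL above):

        SELECT p.*, GROUP_CONCAT(s.name SEPARATOR ', ') AS skill_names
        FROM projects p
        JOIN project_skills ps ON ps.project_id = p.id
        JOIN skills s          ON s.id = ps.skill_id
        GROUP BY p.id;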

    Read the article

  • SQL Server: concatenate a string column value to 5 characters long

    - by mrp
    Scenario: I have a table1 (col1 char(5)). A value in table1 may be '001', '01' or '1'. Requirement: whatever the value in col1, I need to retrieve it at 5 characters long, padded with leading '0's to make it 5 characters. Technique I applied: select right(('00000' + col1), 5) from table1; I didn't see any reason why it wouldn't work, but it didn't. Can anyone help me achieve the desired result?
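
    A likely explanation (an inference from the char(5) declaration, not confirmed in the question): char pads with trailing spaces, so '001' is stored as '001  ', making '00000' + col1 equal to '00000001  ' and RIGHT(..., 5) return '001  ' instead of '00001'. Trimming first gives the expected result; a sketch:

        -- RTRIM strips the trailing spaces the char(5) column adds
        SELECT RIGHT('00000' + RTRIM(col1), 5) FROM table1;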

    Read the article

  • In a star schema, are foreign key constraints between facts and dimensions necessary?

    - by Garett
    I'm getting my first exposure to data warehousing, and I'm wondering whether it is necessary to have foreign key constraints between facts and dimensions. Are there any major downsides to not having them? I'm currently working with a relational star schema. In traditional applications I'm used to having them, but I started to wonder whether they are needed in this case. I'm currently working in a SQL Server 2005 environment.

    Read the article

  • Creating Two Cascading Foreign Keys Against the Same Target Table/Column

    - by alram
    I have the following tables: user (userid int [pk], name varchar(50)) and action (actionid int [pk], description nvarchar(50)), referenced by another table that captures the relationship <user1> <action>'s <user2>. I did this with the following table: userAction (userActionId int [pk], actionid int [fk: action.actionid], userId1 int [fk referencing user.userid; on del/update cascade], userId2 int [fk referencing user.userid; on del/update cascade]). However, when I try to save the userAction table I get an error, because I have two cascading FKs against user.userid. Is there any way to remedy this, or must I use a trigger?
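
    SQL Server rejects two cascade paths into the same table (error 1785, "may cause cycles or multiple cascade paths"), so a trigger is the usual remedy. A sketch with both FKs left as NO ACTION and an INSTEAD OF DELETE trigger doing the cleanup (table names from the question; [user] is bracketed because it is a reserved word):

        CREATE TABLE userAction (
            userActionId INT IDENTITY PRIMARY KEY,
            actionid     INT NOT NULL REFERENCES action(actionid),
            userId1      INT NOT NULL REFERENCES [user](userid),  -- NO ACTION (default)
            userId2      INT NOT NULL REFERENCES [user](userid)   -- NO ACTION (default)
        );
        GO
        CREATE TRIGGER trg_user_delete ON [user] INSTEAD OF DELETE AS
        BEGIN
            -- remove dependent rows first, then the user rows themselves
            DELETE FROM userAction
             WHERE userId1 IN (SELECT userid FROM deleted)
                OR userId2 IN (SELECT userid FROM deleted);
            DELETE FROM [user] WHERE userid IN (SELECT userid FROM deleted);
        END;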

    Read the article

  • Querying a large text file containing JSON objects

    - by Maciek Sawicki
    Hi, I have a text file of a few gigabytes in the format: {"user_ip":"x.x.x.x", "action_type":"xxx", "action_data":{"some_key":"some_value"...},...} - each entry is one line. First, I would like to easily find entries for a given IP. This part is easy because I can use grep, for example; however, even here I would like a better solution, because I want the response as fast as possible. The next part is more complicated, because I would like to find entries from a selected IP, of a selected type, and with a particular value of some_key in action_data. I would probably have to convert this file to a SQL DB (probably SQLite, because it will be a desktop app), but I would ask whether better solutions exist.
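
    A minimal sketch of the SQLite route using only the standard library; file names and batch size are assumptions, and some_key is pulled into its own column so that all three lookups can be served by one index:

        import json
        import sqlite3

        conn = sqlite3.connect("events.db")  # hypothetical output file
        conn.execute("""CREATE TABLE IF NOT EXISTS events
                        (user_ip TEXT, action_type TEXT, some_key TEXT, raw TEXT)""")
        conn.execute("""CREATE INDEX IF NOT EXISTS idx_lookup
                        ON events (user_ip, action_type, some_key)""")

        with open("actions.log") as f:  # hypothetical input file
            batch = []
            for line in f:
                e = json.loads(line)
                batch.append((e.get("user_ip"), e.get("action_type"),
                              e.get("action_data", {}).get("some_key"), line))
                if len(batch) >= 10000:  # insert in batches to keep memory flat
                    conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", batch)
                    batch = []
            if batch:
                conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", batch)
        conn.commit()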

    Read the article

  • MySQL: filtering results using LEFT OUTER JOIN

    - by user288178
    My query:

        SELECT content.*, activity_log.content_id
        FROM content
        LEFT JOIN activity_log
               ON content.id = activity_log.content_id
              AND sess_id = '$sess_id'
        WHERE activity_log.content_id IS NULL
          AND visibility = $visibility
          AND content.reported < ".REPORTED_LIMIT."
          AND content.file_ready = 1
        LIMIT 1

    The purpose of this query is to get one row from the content table that has not been viewed by the user (identified by session id), but it still returns content that has been viewed. What is wrong? (I have checked the table, making sure the content_ids are there.) Note: I think this is more efficient than using subqueries; thoughts?
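
    One diagnostic worth trying (a guess, not a confirmed cause): if the content table also has a sess_id column, the unqualified sess_id in the ON clause binds to the wrong table. The same anti-join written as a qualified NOT EXISTS makes the intent unambiguous; a sketch:

        SELECT c.*
        FROM content c
        WHERE NOT EXISTS (SELECT 1
                            FROM activity_log a
                           WHERE a.content_id = c.id
                             AND a.sess_id = '$sess_id')
          AND c.visibility = $visibility
          AND c.reported < REPORTED_LIMIT
          AND c.file_ready = 1
        LIMIT 1;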

    Read the article

  • Rails - Scalable calculation model

    - by H O
    I currently have a calculation structure in my Rails app with models metric, operand and operation_type. Presently, the metric model has many operands and can perform calculations based on the operation_type (e.g. sum, multiply, etc.), and each operand is defined as being right or left (so that if the operation is division, the numerator and denominator can be identified). Presently, an operand is always an attribute of some model, e.g. @customer.sales.selling_price.sum. In order to make this scalable, I need to allow an operand to be either an attribute of some kind or the result of a previous operation, i.e. an operand can be a metric. I have included a diagram of how my models currently look. Can anyone assist me with the most elegant way of allowing an operand to be an actual operand, or another metric? Thanks! EDIT: It seems based on the only answer so far that perhaps polymorphic associations are the way to go on this, but the answer is so brief I have no idea how they could be used in this way - can anyone elaborate? EDIT 2: OK, I think I'm getting somewhere - essentially I presently have a metric, which has_many operands, and an operand has_many metrics. I need a polymorphic self join, where a metric can also have many metrics - do I need to call this something else, perhaps calculated_metrics, so that the metric model can use itself? That would leave me with a situation where a metric has_many operands, and a metric has many calculated_metrics.
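
    A sketch of the polymorphic route (model and method names are invented for illustration): the operand no longer points at an attribute directly, but at a polymorphic source that is either an attribute reference or another Metric:

        class Operand < ActiveRecord::Base
          belongs_to :metric                        # the calculation this operand feeds
          belongs_to :source, :polymorphic => true  # an AttributeReference or a Metric

          def value
            source.value  # both source types are assumed to respond to #value
          end
        end

        class Metric < ActiveRecord::Base
          has_many :operands
          has_many :dependent_operands, :as => :source, :class_name => 'Operand'

          def value
            operands.map(&:value).inject(operation_symbol)  # e.g. :+ or :*; hypothetical helper
          end
        end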

    Read the article

  • How to efficiently store and update binary data in MongoDB?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for $append_bytes and $replace_bytes types of modifiers, but it appears that the best I can do is $push for arrays. It seems this would be doable by performing seek-write operations if I somehow had access to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (probably for good reason). If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly would that be? Each binary array will be on the order of 1-2MB, and updates occur once every 5 minutes across 1000s of documents. Worse yet, there is no easy way to spread these out in time; they will usually happen close to one another on the 5-minute intervals. Does anyone have a good feel for how disastrous this will be? It seems like it would be problematic. An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files, and reference the filename from my MongoDB document. (I'm using Python and pymongo, so I was looking at pytables.) I'd prefer to avoid this if possible. Is there any other alternative that I am overlooking here? Thanks in advance.
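
    One alternative sketch (the $append_bytes/$replace_bytes modifiers indeed don't exist): split each blob into fixed-size chunk documents, so a 5-minute update rewrites one small chunk instead of a whole 1-2MB field - the same idea GridFS uses. The collection layout, chunk size and pymongo 3+ API here are assumptions:

        from pymongo import MongoClient
        from bson.binary import Binary

        CHUNK = 64 * 1024  # assumed chunk size

        client = MongoClient()
        chunks = client.mydb.binary_chunks  # hypothetical collection: {doc_id, n, data}

        def replace_bytes(doc_id, chunk_no, data):
            """Rewrite a single chunk rather than the whole array."""
            chunks.update_one({"doc_id": doc_id, "n": chunk_no},
                              {"$set": {"data": Binary(data)}},
                              upsert=True)

        def append_bytes(doc_id, data):
            """Append by writing to the next chunk index (simplified sketch)."""
            last = chunks.find_one({"doc_id": doc_id}, sort=[("n", -1)])
            replace_bytes(doc_id, (last["n"] + 1) if last else 0, data)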

    Read the article

  • Parsing Data in XML and Storing to DB in Python

    - by Rakesh
    Hi guys, I have a problem parsing an XML file and entering the data into SQLite. The format is as below, and I need to enter the characters inside the TOKEN elements, like 111, AAA, BBB, etc.

        <DOCUMENT>
          <PAGE width="544.252" height="634.961" number="1" id="p1">
            <MEDIABOX x1="0" y1="0" x2="544.252" y2="634.961"/>
            <BLOCK id="p1_b1">
              <TEXT width="37.7" height="74.124" id="p1_t1" x="51.1" y="20.8652">
                <TOKEN sid="p1_s11" id="p1_w1" font-name="Verdanae" bold="yes" italic="no">111</TOKEN>
              </TEXT>
            </BLOCK>
            <BLOCK id="p1_b3">
              <TEXT width="151.267" height="10.725" id="p1_t6" x="24.099" y="572.096">
                <TOKEN sid="p1_s35" id="p1_w22" font-name="Verdanae" bold="yes" italic="yes">AAA</TOKEN>
                <TOKEN sid="p1_s36" id="p1_w23" font-name="verdanae" bold="yes" italic="no">BBB</TOKEN>
                <TOKEN sid="p1_s37" id="p1_w24" font-name="verdanae" bold="yes" italic="no">CCC</TOKEN>
              </TEXT>
            </BLOCK>
            <BLOCK id="p1_b4">
              <TEXT width="82.72" height="26" id="p1_t7" x="55.426" y="138.026">
                <TOKEN sid="p1_s42" id="p1_w29" font-name="verdanae" bold="yes" italic="no">DDD</TOKEN>
                <TOKEN sid="p1_s43" id="p1_w30" font-name="verdanae" bold="yes" italic="no">EEE</TOKEN>
              </TEXT>
              <TEXT width="101.74" height="26" id="p1_t8" x="55.406" y="162.026">
                <TOKEN sid="p1_s45" id="p1_w31" font-name="verdanae" bold="yes" italic="no">FFF</TOKEN>
              </TEXT>
              <TEXT width="152.96" height="26" id="p1_t9" x="55.406" y="186.026">
                <TOKEN sid="p1_s47" id="p1_w32" font-name="verdanae" bold="yes" italic="no">GGG</TOKEN>
                <TOKEN sid="p1_s48" id="p1_w33" font-name="verdanae" bold="yes" italic="no">HHH</TOKEN>
              </TEXT>
            </BLOCK>
          </PAGE>
        </DOCUMENT>

    In .NET it is done with 3 foreach loops: one for "DOCUMENT/PAGE/BLOCK", one for "TEXT" and one for "TOKEN", and then it is entered into the DB. I don't get how to do it in Python; I am trying it with the lxml module.
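
    A sketch of the same nested loops in Python with lxml, inserting each token into SQLite (the table name and columns are assumptions):

        import sqlite3
        from lxml import etree

        conn = sqlite3.connect("tokens.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS tokens
                        (page_id TEXT, block_id TEXT, text_id TEXT, token TEXT)""")

        root = etree.parse("document.xml").getroot()   # root is <DOCUMENT>
        for page in root.findall("PAGE"):
            for block in page.findall("BLOCK"):
                for text in block.findall("TEXT"):
                    for token in text.findall("TOKEN"):
                        conn.execute("INSERT INTO tokens VALUES (?, ?, ?, ?)",
                                     (page.get("id"), block.get("id"),
                                      text.get("id"), token.text))
        conn.commit()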

    Read the article

  • What is the most efficient procedure for implementing a sortable ajax list on the backend?

    - by HenryL
    The most common method is to assign a sequential order field for each item in the list and do an update that maintains the sequence with every ajax sort operation. Unfortunately, this requires an update to each item of the list every time someone sorts. This is fine for small lists, but what's the best way to implement sorting for larger lists that are constantly updated? I am looking for something that minimizes DB IO.
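
    One approach that keeps writes to a single row per drag (a sketch of the technique, not tied to any particular stack; table and column names are invented): assign order keys with gaps, place a moved item midway between its new neighbours, and renumber only when a gap closes.

        -- items are keyed 1024, 2048, 3072, ... instead of 1, 2, 3
        -- a normal move then touches exactly one row:
        UPDATE items
           SET sort_key = (:prev_key + :next_key) / 2
         WHERE id = :moved_id;
        -- only when :next_key - :prev_key <= 1 is the gap exhausted;
        -- then renumber the list back to multiples of 1024 and retry the move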

    Read the article

  • Hierarchical Hibernate, how many queries are executed?

    - by ghost1
    So I've been dealing with a home-brew DB framework that has some serious flaws, the justification for its use being that not using an ORM saves on the number of queries executed. If I'm selecting all possible records from the top level of a joinable object hierarchy, how many separate calls to the DB will be made when using an ORM (such as Hibernate)? I feel like calling bullshit on this, as joinable entities should be brought down in one query, right? Am I missing something here? Note: lazy initialization doesn't matter in this scenario, as all records will be used.
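
    For reference, a sketch of how Hibernate collapses such a hierarchy into a single SQL statement with join fetch - the entity names (Project/Task) and the open Session are hypothetical, and this assumes the Hibernate 5+ typed query API:

        // one round trip: Hibernate issues a single SQL join for the whole graph
        List<Project> projects = session.createQuery(
                "select distinct p from Project p "
              + "join fetch p.tasks t "
              + "join fetch t.subtasks",
                Project.class)
            .getResultList();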

    Read the article

  • VB6. DataEnvironment: How to Configure/Set Trusted Connection?

    - by Mblua
    Hi, I have a DataEnvironment component in my VB6 application. I can't configure the DataEnvironment connection to use a trusted connection without it asking me how to connect. I could set the prompt to never appear, but then it doesn't connect, because it doesn't use a trusted connection. At this link you can see screenshots of the DataEnvironment connection options and the prompt: Google Presentation Link. I need this program to be executed from a DOS console without prompting for anything. Thanks a lot!
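
    A sketch of one workaround: leave the design-time prompt disabled and assign a trusted (SSPI) connection string to the DataEnvironment's ADO connection at runtime, before anything opens it. Server and database names below are placeholders:

        ' Connection1 is the ADO Connection object inside the DataEnvironment
        DataEnvironment1.Connection1.ConnectionString = _
            "Provider=SQLOLEDB;Data Source=MYSERVER;" & _
            "Initial Catalog=MyDatabase;Integrated Security=SSPI;"
        DataEnvironment1.Connection1.Open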

    Read the article

  • Data schema question

    - by Matt
    I am designing a data model for a local city page - or rather, the requirements for it. So, 4 tables: country, state, city, neighbourhood. The real-life relationship is: a country owns multiple states, which own multiple cities, which own multiple neighbourhoods. In the data model, do we link these with FKs the same way, or link each with each - i.e. should each table carry a CountryID, StateID, CityID and NeighbourhoodID so each is connected to each? Otherwise, to reach a neighbourhood from a country we need to join the 2 tables in between. There are more tables I need to maintain for IP addresses of cities, latitude/longitude, etc.
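
    A sketch of the conventional normalized chain: each level references only its immediate parent, and the occasional country-of-a-neighbourhood question is answered by joining up the chain rather than by duplicating CountryID into every table:

        CREATE TABLE country       (id INT PRIMARY KEY, name VARCHAR(100));
        CREATE TABLE state         (id INT PRIMARY KEY, name VARCHAR(100),
                                    country_id INT REFERENCES country(id));
        CREATE TABLE city          (id INT PRIMARY KEY, name VARCHAR(100),
                                    state_id INT REFERENCES state(id));
        CREATE TABLE neighbourhood (id INT PRIMARY KEY, name VARCHAR(100),
                                    city_id INT REFERENCES city(id));

        -- country of a neighbourhood: three joins up the chain
        SELECT co.name
          FROM neighbourhood n
          JOIN city c     ON c.id = n.city_id
          JOIN state s    ON s.id = c.state_id
          JOIN country co ON co.id = s.country_id
         WHERE n.id = 42;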

    Read the article

  • Optimizing an embedded SELECT query in MySQL

    - by Crazy Serb
    OK, here's a query that I am running right now on a table that has 45,000 records and is 65MB in size... and is just about to get bigger and bigger (so I have to think about future performance as well here):

        SELECT count(payment_id) as signup_count, sum(amount) as signup_amount
        FROM payments p
        WHERE tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND completed > 0
          AND tm_completed IS NOT NULL
          AND member_id NOT IN (SELECT p2.member_id
                                FROM payments p2
                                WHERE p2.completed = 1
                                  AND p2.tm_completed < '2009-05-01'
                                  AND p2.tm_completed IS NOT NULL
                                GROUP BY p2.member_id)

    And as you might or might not imagine - it chokes the MySQL server to a standstill... What it does is simply pull the number of new users who signed up: members with at least one "completed" payment, whose tm_completed is not empty (it is only populated for completed payments), and - via the embedded SELECT - who have never had a "completed" payment before, meaning they are new members (the system does rebills and whatnot, and this is the only way to differentiate between an existing member who just got rebilled and a new member billed for the first time). Now, is there any possible way to optimize this query to use fewer resources, and to stop bringing my MySQL server to its knees...? Am I missing any info to clarify this any further? Let me know... EDIT: Here are the indexes already on that table:

        PRIMARY       PRIMARY  cardinality 46757  (payment_id)
        member_id     INDEX    cardinality 23378  (member_id)
        payer_id      INDEX    cardinality 11689  (payer_id)
        coupon_id     INDEX    cardinality 1      (coupon_id)
        tm_added      INDEX    cardinality 46757  (tm_added, product_id)
        tm_completed  INDEX    cardinality 46757  (tm_completed, product_id)
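
    A sketch of one standard rewrite worth benchmarking: MySQL of that era tends to execute NOT IN (subquery) as a dependent subquery once per outer row, while the same anti-join written as LEFT JOIN ... IS NULL can drive off the member_id and tm_completed indexes:

        SELECT COUNT(p.payment_id) AS signup_count,
               SUM(p.amount)       AS signup_amount
          FROM payments p
          LEFT JOIN payments prior
            ON prior.member_id    = p.member_id
           AND prior.completed    = 1
           AND prior.tm_completed < '2009-05-01'
         WHERE p.tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
           AND p.completed > 0
           AND prior.member_id IS NULL;  -- keeps only members with no earlier completed payment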

    Read the article

  • When using a HiLo ID generation strategy, what types should be used to hold Ids?

    - by UpTheCreek
    I'm asking this from a C#/NHibernate perspective, but it's generally applicable. The concern is that the HiLo strategy goes through IDs pretty quickly, and, for example, a low-record-count table (such as Users) shares the same set of IDs as a high-record-count table (such as Comments). So you can potentially get to high numbers quicker than with other strategies. So what do people recommend? Code side: int/uint/long/ulong? DB side: int/bigint? My feeling is to go with longs and bigints, but I would like a sanity check :)
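
    For what it's worth, a minimal sketch of that pairing under NHibernate (the entity is hypothetical; the hbm.xml fragment is shown as comments): Int64 on the C# side against bigint in the database:

        // C# side: long (Int64) leaves ample headroom even with a greedy hi allocator
        public class Comment
        {
            public virtual long Id { get; set; }
        }

        // hbm.xml side (type="Int64" maps to a bigint column on SQL Server):
        // <id name="Id" type="Int64">
        //   <generator class="hilo">
        //     <param name="max_lo">100</param>  // smaller lo range wastes fewer ids
        //   </generator>
        // </id>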

    Read the article

  • Join Where Rows Don't Exist or Where Criteria Matches...?

    - by Greg
    I'm trying to write a query to tell me which orders have valid promo codes. Promo codes are only valid between certain dates and, optionally, for certain packages. I'm having trouble even explaining how this works (see the pseudo-ish code below), but basically: if there are packages associated with a promo code, then the order has to have one of those packages and be within a valid date range; otherwise it just has to be in a valid date range. The whole "if PromoPackage rows exist" thing is really throwing me off, and I feel like I should be able to do this without a whole bunch of UNIONs. (I'm not even sure that would make it easier at this point...) Anybody have any ideas for the query?

        if OrderPromoCode = PromoCode then
          if OrderTimestamp is between PromoStartTimestamp and PromoEndTimestamp then
            if PromoCode has packages associated with it
              // yes: if PackageID is one of the specified packages
              //   yes: code is valid
              //   no: invalid
              // no: code is valid

        Order:
        OrderID* | OrderTimestamp | PackageID | OrderPromoCode
        1        | 1/2/11         | 1         | ABC
        2        | 1/3/11         | 2         | ABC
        3        | 3/2/11         | 2         | DEF
        4        | 4/2/11         | 3         | GHI

        Promo:
        PromoCode* | PromoStartTimestamp* | PromoEndTimestamp*
        ABC        | 1/1/11               | 2/1/11
        ABC        | 3/1/11               | 4/1/11
        DEF        | 1/1/11               | 1/11/13
        GHI        | 1/1/11               | 1/11/13

        PromoPackage:
        PromoCode* | PromoStartTimestamp* | PromoEndTimestamp* | PackageID*
        ABC        | 1/1/11               | 2/1/11             | 1
        ABC        | 1/1/11               | 2/1/11             | 3
        GHI        | 1/1/11               | 1/11/13            | 1

        Desired result:
        OrderID | IsPromoCodeValid
        1       | 1
        2       | 0
        3       | 1
        4       | 0
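
    A sketch of one way to encode the "packages, if any" rule with nested EXISTS (ANSI-ish SQL; "Order" is quoted because it is a reserved word). Checked by hand against the sample rows above, it yields 1, 0, 1, 0:

        SELECT o.OrderID,
               CASE WHEN EXISTS (
                    SELECT 1
                      FROM Promo p
                     WHERE p.PromoCode = o.OrderPromoCode
                       AND o.OrderTimestamp BETWEEN p.PromoStartTimestamp
                                                AND p.PromoEndTimestamp
                       AND (NOT EXISTS (SELECT 1 FROM PromoPackage pp        -- no package rows:
                                         WHERE pp.PromoCode = p.PromoCode    -- date match suffices
                                           AND pp.PromoStartTimestamp = p.PromoStartTimestamp
                                           AND pp.PromoEndTimestamp   = p.PromoEndTimestamp)
                            OR EXISTS (SELECT 1 FROM PromoPackage pp         -- package rows exist:
                                        WHERE pp.PromoCode = p.PromoCode     -- one must match
                                          AND pp.PromoStartTimestamp = p.PromoStartTimestamp
                                          AND pp.PromoEndTimestamp   = p.PromoEndTimestamp
                                          AND pp.PackageID = o.PackageID))
               ) THEN 1 ELSE 0 END AS IsPromoCodeValid
          FROM "Order" o;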

    Read the article

  • Exceptions in ASP.NET MVC

    - by George
    Hello guys, I'm here again with another question about MVC. Here is the deal: I have a simple table/class with an Id and a Name. Names are supposed to be unique, and are modeled like that in the DB. I created my controller and everything works just fine. But if I try to insert a name that already exists, an exception should be thrown. I'm just not finding what the correct kind of exception is, and its namespace. The error must be coming from the DB, so... Any ideas? Thanks
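
    Assuming SQL Server underneath, a duplicate key surfaces as a System.Data.SqlClient.SqlException whose Number is 2627 (unique constraint) or 2601 (unique index) - possibly wrapped by the data-access layer, in which case check InnerException. A sketch of handling it in a controller action (the repository and view names are hypothetical):

        [HttpPost]
        public ActionResult Create(Item item)
        {
            try
            {
                repository.Add(item);
                repository.Save();
                return RedirectToAction("Index");
            }
            catch (System.Data.SqlClient.SqlException ex)
            {
                if (ex.Number == 2627 || ex.Number == 2601)  // duplicate key
                {
                    ModelState.AddModelError("Name", "That name is already taken.");
                    return View(item);
                }
                throw;
            }
        }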

    Read the article

  • PostgreSQL 9.1: ERROR: type "citext" does not exist

    - by gotuskar
    I am trying to execute the following query through the PgAdmin utility:

        CREATE TABLE svcr."EventLogs" (
            "eventId" BIGINT NOT NULL,
            "eventTime" TIMESTAMP WITH TIME ZONE NOT NULL,
            "userid" CITEXT,
            "realmid" CITEXT NOT NULL,
            "onUserid" CITEXT,
            "description" TEXT,
            CONSTRAINT eventlogs_pkey PRIMARY KEY ("eventId")
        );

    And I get the following error:

        ERROR: type "citext" does not exist
        SQL state: 42704
        Character: 120

    However, the following query runs fine:

        CREATE TABLE svcr."CategoryMap" ("category" INT NOT NULL, "userData" INT NOT NULL);

    What is wrong with the first query?
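
    The citext type ships with PostgreSQL 9.1 as a contrib extension and does not exist in a database until it is installed there; a sketch of the fix (requires superuser rights and the server's contrib package):

        -- run once per database, before any CITEXT columns are created
        CREATE EXTENSION IF NOT EXISTS citext;

    After that, the original CREATE TABLE with CITEXT columns succeeds.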

    Read the article
