Search Results

Search found 56342 results on 2254 pages for 'versant object database'.

  • Maximum Row in DBMS

    - by Am1rr3zA
    Is there any limit on the maximum number of rows in a table in a DBMS (specifically MySQL)? I want to create a table for saving a logfile, and its row count is increasing very fast. I want to know what I should do to prevent any problems.
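
    For context, InnoDB imposes no hard row-count cap (the practical limits are storage and index size), so the real problem is keeping a fast-growing log table manageable. A minimal sketch of one common approach, assuming MySQL 5.1+ (all table and column names here are hypothetical, not the poster's): range partitioning by date, so old log partitions can be dropped cheaply instead of deleted row by row.

        -- sketch: a date-partitioned InnoDB log table (hypothetical names)
        CREATE TABLE app_log (
            log_id    BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
            logged_at DATETIME NOT NULL,
            message   TEXT,
            PRIMARY KEY (log_id, logged_at)  -- the partition column must be in every unique key
        ) ENGINE=InnoDB
        PARTITION BY RANGE (TO_DAYS(logged_at)) (
            PARTITION p2010_06 VALUES LESS THAN (TO_DAYS('2010-07-01')),
            PARTITION p2010_07 VALUES LESS THAN (TO_DAYS('2010-08-01')),
            PARTITION pmax     VALUES LESS THAN MAXVALUE
        );

        -- retire a month of logs without a slow DELETE:
        ALTER TABLE app_log DROP PARTITION p2010_06;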

  • How to structure the tables of a very simple blog in MySQL?

    - by Programmer
    I want to add a very simple blog feature to one of my existing LAMP sites. It would be tied to a user's existing profile, and they would be able to simply input a title and a body for each post in their blog; the date would be automatically set upon submission. They would be allowed to edit and delete any blog post and title at any time. The blog would be displayed from most recent to oldest, perhaps 20 posts to a page, with proper pagination above that. Other users would be able to leave comments on each post, which the blog owner would be allowed to delete, but not pre-moderate. That's basically it. Like I said, very simple. How should I structure the MySQL tables for this? I'm assuming that since there will be blog posts and comments, I need a separate table for each. Is that correct? But then what columns would I need in each table, what data types should I use, and how should I link the two tables together (e.g. any foreign keys)? I could not find any tutorials for something like this, and what I'm looking to do is really offer my users the simplest version of a blog possible. No tags, no moderation, no images, no fancy formatting, etc. Just a simple diary-type, pure-text blog with commenting by other users.
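
    A minimal sketch of one possible layout, not a definitive design (every name and type below is an assumption): one InnoDB table for posts, one for comments, with each comment keyed to its post so deleting a post removes its comments.

        CREATE TABLE blog_posts (
            post_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            user_id    INT UNSIGNED NOT NULL,           -- FK to the existing users table
            title      VARCHAR(255) NOT NULL,
            body       TEXT NOT NULL,
            created_at DATETIME NOT NULL,
            INDEX idx_user_created (user_id, created_at)  -- supports the paginated listing
        ) ENGINE=InnoDB;

        CREATE TABLE blog_comments (
            comment_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            post_id    INT UNSIGNED NOT NULL,
            user_id    INT UNSIGNED NOT NULL,           -- the commenter
            body       TEXT NOT NULL,
            created_at DATETIME NOT NULL,
            FOREIGN KEY (post_id) REFERENCES blog_posts (post_id) ON DELETE CASCADE,
            INDEX idx_post_created (post_id, created_at)
        ) ENGINE=InnoDB;

    Listing a blog is then a single indexed query per page, e.g. SELECT ... FROM blog_posts WHERE user_id = ? ORDER BY created_at DESC LIMIT 20 OFFSET 0.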

  • Dummies' guide to locking in InnoDB

    - by ming yeow
    The typical documentation on locking in InnoDB is way too confusing. I think it would be of great value to have a "dummies' guide to InnoDB locking". I will start, and I will gather all responses into a wiki: A column needs to be indexed before row-level locking applies. EXAMPLE: DELETE FROM t WHERE column1 = 10; will lock up the whole table unless column1 is indexed.
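
    A quick way to see the rule above in action (a sketch; the table t, its columns, and the data are hypothetical): run the DELETE in one session without committing, then touch an unrelated row from a second session.

        -- session 1: without an index on column1, InnoDB scans and locks every row
        START TRANSACTION;
        DELETE FROM t WHERE column1 = 10;    -- full scan: all rows end up locked

        -- session 2: blocks until session 1 commits or rolls back
        UPDATE t SET column2 = 5 WHERE column1 = 99;

        -- after CREATE INDEX idx_col1 ON t (column1), only the matching index
        -- records are locked, and session 2 proceeds immediately.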

  • Join Where Rows Don't Exist or Where Criteria Matches...?

    - by Greg
    I'm trying to write a query to tell me which orders have valid promo codes. Promo codes are only valid between certain dates and, optionally, for certain packages. I'm having trouble even explaining how this works (see the pseudo-ish code below), but basically, if there are packages associated with a promo code, then the order has to have one of those packages and be within a valid date range; otherwise it just has to be in a valid date range. The whole "if PromoPackage rows exist" thing is really throwing me off, and I feel like I should be able to do this without a whole bunch of UNIONs. (I'm not even sure that would make it easier at this point...) Anybody have any ideas for the query?

        if OrderPromoCode = PromoCode
            if OrderTimestamp is between PromoStartTimestamp and PromoEndTimestamp
                if PromoCode has packages associated with it
                    if PackageID is one of the specified packages
                        code is valid
                    else
                        code is invalid
                else
                    code is valid

    Order:

        OrderID* | OrderTimestamp | PackageID | OrderPromoCode
        1        | 1/2/11         | 1         | ABC
        2        | 1/3/11         | 2         | ABC
        3        | 3/2/11         | 2         | DEF
        4        | 4/2/11         | 3         | GHI

    Promo:

        PromoCode* | PromoStartTimestamp* | PromoEndTimestamp*
        ABC        | 1/1/11               | 2/1/11
        ABC        | 3/1/11               | 4/1/11
        DEF        | 1/1/11               | 1/11/13
        GHI        | 1/1/11               | 1/11/13

    PromoPackage:

        PromoCode* | PromoStartTimestamp* | PromoEndTimestamp* | PackageID*
        ABC        | 1/1/11               | 2/1/11             | 1
        ABC        | 1/1/11               | 2/1/11             | 3
        GHI        | 1/1/11               | 1/11/13            | 1

    Desired Result:

        OrderID | IsPromoCodeValid
        1       | 1
        2       | 0
        3       | 1
        4       | 0
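
    A sketch of one way to fold the "packages only if any exist" branch into a single query (using the poster's table and column names; untested, so treat it as a starting point): an EXISTS over Promo, where each promo window either has no PromoPackage rows at all, or has one matching the order's PackageID.

        SELECT o.OrderID,
               CASE WHEN EXISTS (
                   SELECT 1
                   FROM Promo p
                   WHERE p.PromoCode = o.OrderPromoCode
                     AND o.OrderTimestamp BETWEEN p.PromoStartTimestamp
                                              AND p.PromoEndTimestamp
                     AND (NOT EXISTS (SELECT 1 FROM PromoPackage pp
                                      WHERE pp.PromoCode = p.PromoCode
                                        AND pp.PromoStartTimestamp = p.PromoStartTimestamp
                                        AND pp.PromoEndTimestamp = p.PromoEndTimestamp)
                          OR EXISTS (SELECT 1 FROM PromoPackage pp
                                     WHERE pp.PromoCode = p.PromoCode
                                       AND pp.PromoStartTimestamp = p.PromoStartTimestamp
                                       AND pp.PromoEndTimestamp = p.PromoEndTimestamp
                                       AND pp.PackageID = o.PackageID))
               ) THEN 1 ELSE 0 END AS IsPromoCodeValid
        FROM Orders o;   -- 'Order' is a reserved word; quote or rename to suit your schema

    Against the sample data above this yields 1, 0, 1, 0 for orders 1 through 4.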

  • Higher speed options for executing very large (20 GB) .sql file in MySQL

    - by Jonogan
    My firm was delivered a 20+ GB .sql file in response to a request for data from the gov't. I don't have many options for getting the data in a different format, so I need options for how to import it in a reasonable amount of time. I'm running it on a high-end server (Win 2008 64-bit, MySQL 5.1) using Navicat's batch execution tool. It's been running for 14 hours and shows no signs of being near completion. Does anyone know of any higher-speed options for such a transaction? Or is this what I should expect given the large file size? Thanks
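
    One option worth trying (a sketch; the dump file path is hypothetical): skip the GUI batch tool and feed the file straight to the mysql command-line client, with integrity checks relaxed for the duration of the load, which commonly cuts bulk-import time dramatically.

        -- from within the mysql command-line client:
        SET autocommit = 0;
        SET unique_checks = 0;
        SET foreign_key_checks = 0;
        SOURCE C:/dumps/delivery.sql;
        COMMIT;
        SET unique_checks = 1;
        SET foreign_key_checks = 1;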

  • Sqlite and Python -- return a dictionary using fetchone()?

    - by AndrewO
    I'm using sqlite3 in Python 2.5. I've created a table that looks like this:

        create table votes (
            bill text,
            senator_id text,
            vote text)

    I'm accessing it with something like this:

        v_cur.execute("select * from votes")
        row = v_cur.fetchone()
        bill = row[0]
        senator_id = row[1]
        vote = row[2]

    What I'd like to be able to do is have fetchone (or some other method) return a dictionary, rather than a list, so that I can refer to each field by name rather than position. For example:

        bill = row['bill']
        senator_id = row['senator_id']
        vote = row['vote']

    I know you can do this with MySQL, but does anyone know how to do it with SQLite? Thanks!!!
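
    The standard library's sqlite3 module has this built in: set the connection's row_factory to sqlite3.Row and fetched rows become indexable by column name. A minimal sketch (the database file name is hypothetical):

        import sqlite3

        con = sqlite3.connect('senate.db')
        con.row_factory = sqlite3.Row      # rows now support access by column name
        v_cur = con.cursor()
        v_cur.execute("select * from votes")
        row = v_cur.fetchone()
        bill = row['bill']
        senator_id = row['senator_id']
        vote = row['vote']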

  • querying a large text file containing JSON objects

    - by Maciek Sawicki
    Hi, I have a text file of a few gigabytes in the format: {"user_ip":"x.x.x.x", "action_type":"xxx", "action_data":{"some_key":"some_value"...},...} with one entry per line. First, I would like to easily find entries for a given IP. This part is easy because I can use grep, for example; however, even for this I would like a better solution, because I want the response as fast as possible. The next part is more complicated: I would like to find entries from a selected IP, of a selected type, and with a particular value of some_key in action_data. Probably I will have to convert this file to a SQL DB (probably SQLite, because it will be a desktop app), but I would ask whether there exist better solutions?
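
    A sketch of the SQLite conversion the poster is considering (file names are hypothetical; the field names follow the format shown above): load each line with the json module, store the queried fields in indexed columns, and the ip/type/key lookups become single indexed SELECTs.

        import json, sqlite3   # json is standard from Python 2.6; simplejson on 2.5

        conn = sqlite3.connect('actions.db')
        conn.execute("""CREATE TABLE IF NOT EXISTS actions
                        (user_ip TEXT, action_type TEXT, some_key TEXT)""")
        conn.execute("CREATE INDEX IF NOT EXISTS idx_ip ON actions (user_ip)")

        with open('actions.log') as f:
            for line in f:
                entry = json.loads(line)
                conn.execute("INSERT INTO actions VALUES (?, ?, ?)",
                             (entry['user_ip'], entry['action_type'],
                              entry['action_data'].get('some_key')))
        conn.commit()

        # indexed lookup: entries for one ip, one type, one some_key value
        rows = conn.execute("""SELECT * FROM actions WHERE user_ip = ?
                               AND action_type = ? AND some_key = ?""",
                            ('1.2.3.4', 'click', 'some_value')).fetchall()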

  • What is the best design for these database tables?

    - by Mohammed Jamal
    I need to find the best solution to keep the DB normalized with the large amount of data expected. My site has a tags table (containing keyword, id) and also 4 types of data related to this tags table (articles, resources, jobs, ...). The big question is: for the relation with tags, what is the best solution for optimization and query speed? Make a table for each relation, like: table articlesToTags(ArticleID, TagID), table jobsToTags(jobid, tagid), etc. Or put it all in one table, like: table tagsrelation(tagid, itemid, itemtype). I need your help. Please provide me with articles to help me with this design, and consider that in the future the site may contain new sections related to tags. Thanks
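
    For reference, a minimal sketch of the single-table option using the poster's own names (the column types and the itemtype encoding are assumptions). A new section then needs only a new itemtype value, with no schema change:

        CREATE TABLE tagsrelation (
            tagid    INT UNSIGNED NOT NULL,
            itemid   INT UNSIGNED NOT NULL,
            itemtype TINYINT UNSIGNED NOT NULL,      -- e.g. 1 = article, 2 = resource, 3 = job
            PRIMARY KEY (tagid, itemtype, itemid),   -- "items for this tag" lookups
            INDEX idx_item (itemtype, itemid)        -- "tags for this item" lookups
        ) ENGINE=InnoDB;

    The trade-off is that itemid cannot carry a real foreign key to four different parent tables, which is the usual argument for the table-per-relation design.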

  • In a star schema, are foreign key constraints between facts and dimensions necessary?

    - by Garett
    I'm getting my first exposure to data warehousing, and I'm wondering whether it is necessary to have foreign key constraints between facts and dimensions. Are there any major downsides to not having them? I'm currently working with a relational star schema. In traditional applications I'm used to having them, but I started to wonder if they are needed in this case. I'm currently working in a SQL Server 2005 environment.
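
    One middle-ground option in SQL Server (a sketch; the fact and dimension names are hypothetical): keep the constraints for self-documentation and data quality, but create or toggle them with NOCHECK around bulk loads so they don't slow the ETL.

        -- declare the relationship without validating existing rows
        ALTER TABLE dbo.FactSales WITH NOCHECK
            ADD CONSTRAINT FK_FactSales_DimDate
            FOREIGN KEY (DateKey) REFERENCES dbo.DimDate (DateKey);

        -- disable before a bulk load, revalidate afterwards
        ALTER TABLE dbo.FactSales NOCHECK CONSTRAINT FK_FactSales_DimDate;
        -- ... load the fact table ...
        ALTER TABLE dbo.FactSales WITH CHECK CHECK CONSTRAINT FK_FactSales_DimDate;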

  • how to model a follower stream in appengine?

    - by molicule
    I am trying to design tables to build out a follower relationship. Say I have a stream of 140-char records that have a user, a hashtag, and other text. Users follow other users, and can also follow hashtags. I am outlining the way I've designed this below, but there are two limitations in my design. I was wondering if others have smarter ways to accomplish the same goal. The issues with this are:

        1. The list of followers is copied into each record.
        2. If a new follower is added or one removed, 'all' the records have to be updated.

    The code:

        class HashtagFollowers(db.Model):
            """This table contains the followers for each hashtag"""
            hashtag = db.StringProperty()
            followers = db.StringListProperty()

        class UserFollowers(db.Model):
            """This table contains the followers for each user"""
            username = db.StringProperty()
            followers = db.StringListProperty()

        class stream(db.Model):
            """This table contains the data stream"""
            username = db.StringProperty()
            hashtag = db.StringProperty()
            text = db.TextProperty()

            def save(self):
                """On each save, all the followers for each hashtag and user
                are copied into another table with this record as the parent"""
                super(stream, self).save()
                hfs = HashtagFollowers.all().filter("hashtag =", self.hashtag).fetch(10)
                for hf in hfs:
                    sh = streamHashtags(parent=self, followers=hf.followers)
                    sh.save()
                ufs = UserFollowers.all().filter("username =", self.username).fetch(10)
                for uf in ufs:
                    uh = streamUsers(parent=self, followers=uf.followers)
                    uh.save()

        class streamHashtags(db.Model):
            """The stream record is the parent of this record"""
            followers = db.StringListProperty()

        class streamUsers(db.Model):
            """The stream record is the parent of this record"""
            followers = db.StringListProperty()

    Now, to get the stream of followed hashtags:

        indexes = db.GqlQuery("""SELECT __key__ from streamHashtags
                                 where followers = 'myusername'""")
        keys = [k.parent() for k in indexes[offset:numresults]]
        return db.get(keys)

    Is there a smarter way to do this?

  • Select where and where not

    - by Simon
    I have a table containing lessons that I called "cours" (French), with several courses inside, and I have linked them to students through a join table to record whether they go to the lessons or not. I would like to return both the rows the student IS linked to and the rows he is NOT. So, if a student follows 3 courses out of 5, I would like to return the 3 courses that he follows and the 2 courses that he doesn't follow. Is there a way to do it?
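
    A sketch of one way to do this with assumed names (cours, plus a link table inscription holding student_id and cours_id; adapt to the real schema): a LEFT JOIN keeps every course and flags whether the given student follows it.

        SELECT c.cours_id,
               c.title,
               CASE WHEN i.student_id IS NULL THEN 0 ELSE 1 END AS follows
        FROM cours c
        LEFT JOIN inscription i
               ON i.cours_id = c.cours_id
              AND i.student_id = 42          -- the student in question
        ORDER BY follows DESC;

    Filtering on follows = 1 or follows = 0 then gives the followed and not-followed lists separately.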

  • SQL Server: concatenate string column value to 5 chars long

    - by mrp
    Scenario: I have a table1 (col1 char(5)). A value in table1 may be '001', '01', or '1'.

    Requirement: whatever the value in col1, I need to retrieve it at 5-char length, concatenated with leading '0's to make it 5 chars long.

    Technique I applied:

        select right(('00000' + col1), 5) from table1;

    I didn't see any reason why it wouldn't work, but it didn't. Can anyone help me achieve the desired result?
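
    The likely culprit is char(5) padding: '001' is stored as '001  ' with trailing spaces, so RIGHT of the concatenation just returns the padded original. A sketch of the usual fix, trimming before padding:

        -- '001  ' -> '00001', '01   ' -> '00001', '1    ' -> '00001'
        SELECT RIGHT('00000' + RTRIM(col1), 5) FROM table1;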

  • How do you determine an acceptable response time for App Engine DB requests?

    - by qiq
    According to this discussion of Google App Engine on Hacker News:

        "A DB (read) request takes over 100ms on the datastore. That's insane
        and unusable for about 90% of applications."

    How do you determine what is an acceptable response time for a DB read request? I have been using App Engine without noticing any issues with DB responsiveness. But, on the other hand, I'm not sure I would even know what to look for in that regard :)

  • about null values!

    - by user329820
    Hi, I have a question: if we declare a variable and then do not explicitly set it, will it be NULL automatically? I mean, will the code below evaluate to true or false? Thanks

        DECLARE @val CHAR(4)
        IF @val = NULL
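
    For what it's worth (standard T-SQL semantics, not specific to this snippet): the variable is indeed NULL, but under the default ANSI_NULLS setting a `= NULL` comparison evaluates to UNKNOWN, so the IF branch is never taken; IS NULL is the reliable test. A minimal sketch:

        DECLARE @val CHAR(4);
        IF @val = NULL  PRINT 'equals NULL';  -- never prints: comparison yields UNKNOWN
        IF @val IS NULL PRINT 'IS NULL';      -- prints: the variable defaults to NULL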

  • How to secure phpMyAdmin

    - by Andrei
    Hi, I have noticed that there are strange requests to my website trying to find phpMyAdmin, like /phpmyadmin/, /pma/, etc. Now I have installed PMA on Ubuntu via apt and would like to access it via a web address other than /phpmyadmin/. What can I do to change it? Thanks
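
    With the Ubuntu package, the /phpmyadmin URL comes from an Apache alias, so renaming it is a one-line config change (a sketch; the replacement path is an arbitrary example, and the file location matches the stock phpmyadmin package):

        # in /etc/phpmyadmin/apache.conf, change:
        #   Alias /phpmyadmin /usr/share/phpmyadmin
        # to something hard to guess:
        Alias /secret-dbadmin /usr/share/phpmyadmin

        # then reload Apache:
        #   sudo /etc/init.d/apache2 reload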

  • MySQL: automatic rollback on transaction failure

    - by praksant
    Is there any way to set MySQL to roll back any transaction automatically on the first error/warning? Right now, if everything goes well it commits, but on failure it leaves the transaction open, and on the next start of a transaction it commits the incomplete changes from the failed transaction. (I'm executing queries from PHP, but I don't want to check for failure in PHP, as it would mean more calls between the MySQL server and the webserver.) Thank you

  • Two radically different queries against 4 mil records execute in the same time - one uses brute force.

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute force to be slow and the indexed one to be fast. Not so. And I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you.

        Brute-force query: (execution plan image)
        Index-based exception query: (execution plan image)

    My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.

  • Postgresql 9.1: ERROR: type "citext" does not exist

    - by gotuskar
    I am trying to execute the following query through the PgAdmin utility:

        CREATE TABLE svcr."EventLogs" (
            "eventId"     BIGINT NOT NULL,
            "eventTime"   TIMESTAMP WITH TIME ZONE NOT NULL,
            "userid"      CITEXT,
            "realmid"     CITEXT NOT NULL,
            "onUserid"    CITEXT,
            "description" TEXT,
            CONSTRAINT eventlogs_pkey PRIMARY KEY ("eventId"));

    And I get the following error:

        ERROR: type "citext" does not exist
        SQL state: 42704
        Character: 120

    However, the following query runs fine:

        CREATE TABLE svcr."CategoryMap" (
            "category" INT NOT NULL,
            "userData" INT NOT NULL);

    What is wrong with the first query?
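
    In PostgreSQL 9.1, citext is a contrib type that must be installed into each database before it can be used; if the extension is missing, the first statement fails exactly like this, while the second works because INT and TEXT are built in. A sketch:

        -- run once per database, as a superuser (requires the contrib package)
        CREATE EXTENSION IF NOT EXISTS citext;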

  • Can I do transactions and locks in CouchDB?

    - by damian
    I need to do transactions (begin, commit or rollback) and locks (select for update). How can I do that in a document-model DB? Edit: The case is this: I want to run an auctions site, and I'm thinking about how to handle direct purchase as well. In a direct purchase, I have to decrement the quantity field in the item record, but only if the quantity is greater than zero. That is why I need locks and transactions. I don't know how to address that without locks and/or transactions. Can I solve this with CouchDB?
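
    CouchDB has no multi-document transactions or locks, but each single-document update is atomic and guarded by the document's _rev, which is enough for a decrement-if-positive purchase. A sketch of the optimistic retry pattern over the HTTP API (Python 2; the item URL, e.g. http://localhost:5984/shop/item123, and the quantity field are hypothetical):

        import json, urllib2

        def purchase(item_url):
            """Atomically decrement quantity if positive, retrying on conflict."""
            while True:
                doc = json.load(urllib2.urlopen(item_url))   # fetch doc, incl. its _rev
                if doc['quantity'] <= 0:
                    return False                             # sold out
                doc['quantity'] -= 1
                req = urllib2.Request(item_url, json.dumps(doc),
                                      {'Content-Type': 'application/json'})
                req.get_method = lambda: 'PUT'
                try:
                    urllib2.urlopen(req)   # accepted only if our _rev is still current
                    return True
                except urllib2.HTTPError, e:
                    if e.code != 409:      # 409 Conflict: lost the race, refetch and retry
                        raise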

  • Optimizing an embedded SELECT query in MySQL

    - by Crazy Serb
    Ok, here's a query that I am running right now on a table that has 45,000 records and is 65MB in size... and it is just about to get bigger and bigger (so I've got to think of the future performance as well here):

        SELECT count(payment_id) as signup_count, sum(amount) as signup_amount
        FROM payments p
        WHERE tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND completed > 0
          AND tm_completed IS NOT NULL
          AND member_id NOT IN
              (SELECT p2.member_id FROM payments p2
               WHERE p2.completed = 1
                 AND p2.tm_completed < '2009-05-01'
                 AND p2.tm_completed IS NOT NULL
               GROUP BY p2.member_id)

    And as you might or might not imagine, it chokes the MySQL server to a standstill... What it does is simply pull the number of new users who signed up, have at least one "completed" payment, and whose tm_completed is not empty (as it is only populated for completed payments), while the embedded SELECT requires that the member has never had a "completed" payment before, meaning he's a new member (the system does rebills and whatnot, and this is the only way to differentiate between an existing member who just got rebilled and a new member who got billed for the first time). Now, is there any possible way to optimize this query to use fewer resources, and to stop it bringing my MySQL server down to its knees...? Am I missing any info to clarify this further? Let me know...

    EDIT: Here are the indexes already on that table:

        PRIMARY       PRIMARY  46757  payment_id
        member_id     INDEX    23378  member_id
        payer_id      INDEX    11689  payer_id
        coupon_id     INDEX        1  coupon_id
        tm_added      INDEX    46757  tm_added, product_id
        tm_completed  INDEX    46757  tm_completed, product_id
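
    One rewrite worth benchmarking (a sketch against the poster's own tables, not a guaranteed win): MySQL of this era often executes NOT IN (subquery) as a dependent subquery re-run per row, while the equivalent LEFT JOIN ... IS NULL anti-join can resolve each probe from an index, ideally a composite one on (member_id, completed, tm_completed).

        SELECT COUNT(p.payment_id) AS signup_count,
               SUM(p.amount)       AS signup_amount
        FROM payments p
        LEFT JOIN payments prior
               ON prior.member_id = p.member_id
              AND prior.completed = 1
              AND prior.tm_completed < '2009-05-01'
        WHERE p.tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND p.completed > 0
          AND prior.payment_id IS NULL;  -- no earlier completed payment => new member

    The IS NULL filter keeps only payments whose member has no prior completed payment, so the counts match the original query's intent.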
