Search Results

Search found 26283 results on 1052 pages for 'temporary table'.

Page 314/1052 | < Previous Page | 310 311 312 313 314 315 316 317 318 319 320 321  | Next Page >

  • Oracle T4CPreparedStatement memory leaks?

    - by Jay
    A little background on the application I'm going to talk about in the next few lines: XYZ is a data masking workbench Eclipse RCP application. You give it a source table column and a target table column, and it applies a transformation (encryption/shuffling/etc.) and copies the row data from the source table to the target table. Now, when I mask n tables at a time, n threads are launched by this app.

    Here is the issue: I have run into a production issue on the first rollout of the above app. Unfortunately, I don't have any logs to get to the root cause. However, I tried to run this app in a test region and do a stress test. When I collected .hprof files and ran them through an analyzer (YourKit), I noticed that objects of oracle.jdbc.driver.T4CPreparedStatement were retaining heap. The analysis also tells me that one of my classes is holding a reference to this PreparedStatement object and thereby, n threads have n such objects. T4CPreparedStatement seemed to have character arrays lastBoundChars and bindChars, each of size char[300000]. So, I researched a bit (Google!), obtained ojdbc6.jar and tried decompiling T4CPreparedStatement. I see that T4CPreparedStatement extends OraclePreparedStatement, which dynamically manages the array sizes of lastBoundChars and bindChars.

    So, my questions here are: Have you ever run into an issue like this? Do you know the significance of lastBoundChars / bindChars? I am new to profiling, so do you think I am not doing it correctly? (I also ran the hprof files through MAT - and this was the main identified issue - so I don't really think I could be wrong.) I have found something similar on the web here: http://forums.oracle.com/forums/thread.jspa?messageID=2860681 Appreciate your suggestions / advice.
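
    Since each worker thread keeps its statement alive for its whole run, one common mitigation (a minimal sketch, not the poster's code - the table, column, and helper names are placeholders) is to scope each PreparedStatement to the batch it serves and close it promptly in a finally block, so the driver can release its bind buffers:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.List;

        // Hypothetical helper: writes already-masked values into a placeholder
        // target column. Closing the statement per batch lets the driver free its
        // lastBoundChars/bindChars buffers instead of retaining them for the
        // worker thread's whole lifetime.
        final class MaskedColumnWriter {
            void writeBatch(Connection connection, List<String> maskedValues) throws SQLException {
                String sql = "INSERT INTO target_table (target_col) VALUES (?)";
                PreparedStatement ps = connection.prepareStatement(sql);
                try {
                    for (String value : maskedValues) {
                        ps.setString(1, value);
                        ps.addBatch();
                    }
                    ps.executeBatch();
                } finally {
                    ps.close(); // statement closed here; its buffers become collectable
                }
            }
        }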

    Read the article

  • H2 (embedded mode) database files problem

    - by aeter
    There is an H2 database file in my src directory (Java, Eclipse): h2test.db. The problem: starting h2.jar from the command line (and thus the H2 browser interface on port 8082), I have created 2 tables, 'test1' and 'test2', in h2test.db and I have put some data in them; when trying to access them from Java code (JDBC), it throws a "table not found" exception. A "show tables" from the Java code shows a result set with 0 rows. Also, when creating a new table ('newtest') from the Java code (CREATE TABLE ... etc), I cannot see it when starting the h2.jar browser interface afterwards; just the other two tables ('test1' and 'test2') are shown (but the newly created table 'newtest' is still accessible from the Java code). I'm inexperienced with embedded databases; I believe I'm doing something fundamentally wrong here. My assumption is that I'm accessing the same file - once from the Java app, and once from the H2 console/browser interface. I can't seem to figure it out - what am I doing wrong here?
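
    A likely explanation (an assumption, not confirmed by the poster): in embedded mode H2 resolves a relative database name against the working directory of whichever process opens it, so the console and the Java code can silently create and use two different database files; embedded mode also allows only one process at a time unless AUTO_SERVER is enabled. A minimal sketch using an absolute, placeholder path:

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class H2ConnectTest {
            public static void main(String[] args) throws Exception {
                // Absolute path (placeholder) so every process opens the same file;
                // AUTO_SERVER=TRUE lets the app and the H2 console share it.
                String url = "jdbc:h2:/path/to/workspace/project/src/h2test;AUTO_SERVER=TRUE";
                Connection conn = DriverManager.getConnection(url, "sa", "");
                System.out.println("Connected to " + url);
                conn.close();
            }
        }

    The same URL (including the absolute path) would need to be entered in the H2 console's connection dialog.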

    Read the article

  • SQL Server - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that CONTAINS and FREETEXT search for words (and at least in the case of CONTAINS, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants are capable of searching substrings. I have used LIKE rather extensively (select * from A where A.B like '%substr%').

    Sample table A:

        ID | Col1     | Col2     | Col3     |
        -------------------------------------
        1  | oklahoma | colorado | Utah     |
        2  | arkansas | colorado | oklahoma |
        3  | florida  | michigan | florida  |

    The following code will give us row 1 and row 2:

        select * from A
        where Col1 like '%klah%'
           or Col2 like '%klah%'
           or Col3 like '%klah%'

    This is rather ugly, probably slow, and I just don't like it very much - probably because the implementations that I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but as far as performance, we're still in the same ballpark:

        select * from A
        where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'

    I have thought about simply adding insert, update, and delete triggers that add the concatenated version of the above columns into a separate table that shadows this table.

    Sample Shadow_Table:

        ID | searchtext                 |
        ---------------------------------
        1  | oklahoma colorado Utah     |
        2  | arkansas colorado oklahoma |
        3  | florida michigan florida   |

    This would allow us to perform the following query to search for '%klah%':

        select * from Shadow_Table where searchtext like '%klah%'

    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when I am performing multi-column substring matching, but it probably yields pretty quick reads at the expense of writes and storage space. My gut feeling tells me that there is an existing solution built into SQL Server 2008. However, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.
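
    One alternative to a trigger-maintained shadow table (a sketch, assuming the columns are character types and that ISNULL handling of NULLs is acceptable) is a persisted computed column on the same table; it stays in sync automatically and can be queried directly, although a leading-wildcard LIKE still cannot use an index, so the gain is maintainability rather than raw speed:

        -- Keep the concatenation on table A itself instead of a shadow table.
        ALTER TABLE A
            ADD searchtext AS (ISNULL(Col1, '') + ' ' + ISNULL(Col2, '') + ' ' + ISNULL(Col3, '')) PERSISTED;

        SELECT * FROM A WHERE searchtext LIKE '%klah%';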

    Read the article

  • NSOutlineView - strange behavior after reloading data

    - by matei
    I have an NSOutlineView which loads data from a data source. The table is displayed in a panel. The items shown in the table are items which belong to an object (the relation is one-to-many between the object and the items). I have a list of objects in a combo box (in fact an NSPopUpButton), and when I select another object in the combo box, I want its items to be shown in the table (the NSOutlineView). I managed to do all this, however when I select another object from the combo box, not all of its items are displayed. I have put some logging messages in the data source and it seems that there are some items that are being returned from the data source, but are not shown in the table (and are not queried for children). Now the strange part is that when I click the main window (as I said, all that I described here is in an NSPanel loaded on top of the window), all the data is displayed correctly. It seems as if clicking the main window triggers something in the NSOutlineView that makes it display the missing items, but I can't tell what it is.
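
    One thing worth checking (speculative - the snippet below is a hypothetical sketch, not the poster's code): if the backing items are swapped out off the main thread, or reloadData is not called after the popup selection changes, the outline view can keep showing stale rows until something else forces a redraw, which would match the "fixes itself when I click the main window" symptom. Forcing the reload on the main thread right after the model changes looks roughly like this:

        // Hypothetical action for the popup button; currentItems and outlineView
        // are assumed properties backing the outline view's data source.
        - (IBAction)objectSelectionChanged:(id)sender {
            self.currentItems = [[sender selectedItem] representedObject]; // assumed model lookup
            dispatch_async(dispatch_get_main_queue(), ^{
                [self.outlineView reloadData];
            });
        }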

    Read the article

  • ASP.NET Membership: Extending Role membership?

    - by mark smith
    Hi there, I have been taking a look at ASP.NET membership and it seems to provide everything that I need, but I need some kind of custom role functionality. Currently I can add a user to a role, great. But I also need to be able to add permissions to roles, i.e. Role: Editor; Permissions: Can View Editor Menu, Can Write to Editors Table, Can Delete Entries in Editors Table. Currently it doesn't support this. The idea behind this is to create an admin option in my program to create a role and then assign permissions to that role, to say "allow the user to view a certain part of the application", "allow the user to open a menu item". Any ideas how I would implement something like this? I presume a custom role provider, but I was wondering if some kind of framework extension existed already without rolling my own? Or does anybody know a good tutorial on how to tackle this issue? I am quite happy with what the ASP.NET SQL provider has created in terms of tables etc., but I think I need to extend this by adding another table called RolesPermissions and then, I presume :-), adding some kind of enumeration into the table for each valid permission? Thanks in advance
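
    The closing idea could be sketched as two extra tables alongside the membership provider's aspnet_Roles table - a Permissions lookup plus a RolesPermissions join table. The names and types below are assumptions for illustration, not part of the standard ASP.NET membership schema:

        -- Sketch of the extension tables; aspnet_Roles.RoleId is the existing
        -- membership role key. Permission names would be checked in code, e.g.
        -- "CanViewEditorMenu", "CanWriteEditorsTable".
        CREATE TABLE Permissions (
            PermissionId INT IDENTITY(1,1) PRIMARY KEY,
            Name NVARCHAR(100) NOT NULL UNIQUE
        );

        CREATE TABLE RolesPermissions (
            RoleId UNIQUEIDENTIFIER NOT NULL REFERENCES aspnet_Roles(RoleId),
            PermissionId INT NOT NULL REFERENCES Permissions(PermissionId),
            PRIMARY KEY (RoleId, PermissionId)
        );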

    Read the article

  • How do you concat multiple rows into one column in SQL Server?

    - by Jason
    I've searched high and low for the answer to this, but I can't figure it out. I'm relatively new to SQL Server and don't quite have the syntax down yet. I have this data structure (simplified):

        Table "Users":
        UserID  UserName
        1       Bob
        2       Bill
        3       Jane
        4       Sam

        Table "Tags":
        TagID  UserID  PhotoID
        1      1       1
        2      2       1
        3      3       1
        4      2       2

        Table "Photos":
        PhotoID  UserID  AlbumID
        1        1       1
        2        1       1
        3        1       1
        4        3       2
        5        3       2

        Table "Albums":
        AlbumID  UserID
        1        1
        2        3
        3        2

    I'm looking for a way to get all the photo info (easy) plus all the tags for that photo concatenated like CONCAT(username, ', ') AS Tags, of course with the last comma removed. I'm having a bear of a time trying to do this. I've tried the method in this article but I get an error when I try to run the query saying that I can't use DECLARE statements... do you guys have any idea how this can be done? I'm using VS08 and whatever DB is installed with it (I normally use MySQL so I don't know what flavor of DB this really is... it's an .mdf file?)
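
    The database behind an .mdf file attached through Visual Studio is SQL Server (Express), and the usual way there to build a comma-separated list per row without DECLARE statements is a correlated FOR XML PATH subquery. A sketch against the tables above:

        -- For each photo, concatenate the usernames of everyone tagged in it.
        -- STUFF(..., 1, 2, '') strips the leading ", " from the built-up string.
        SELECT p.PhotoID,
               p.UserID,
               p.AlbumID,
               STUFF((SELECT ', ' + u.UserName
                      FROM Tags t
                      JOIN Users u ON u.UserID = t.UserID
                      WHERE t.PhotoID = p.PhotoID
                      FOR XML PATH('')), 1, 2, '') AS Tags
        FROM Photos p;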

    Read the article

  • Several Small, Specific, MySQL Query Cache Questions

    - by Robbie
    I've looked all over the web and in the questions asked here about MySQL caching, and most of them seem very non-specific about a couple of questions that I have about performance and MySQL query caching. Specifically I want answers to these questions; assume for all of them that I have the query cache enabled and it is of type 2, or "DEMAND":

    1. Is the query cache per table, per database, or per server? Meaning, if I have the cache size set to X and have T tables and D databases, will I be caching TX, DX, or X amount of data?

    2. If I have table T1 which I regularly use the SQL_CACHE hint on for SELECT queries, and table T2 which I never do, when I query T2 with a SELECT query will it check through the cache first before performing the query? (Note: I don't want to use SQL_NO_CACHE for all T2 queries.)

    3. Assume the same situation as in question 2. If I alter (INSERT, DELETE) table T2, will any processing be done on the cache?

    4. For answers to 2 and 3, is this processing time negligible if T2 is constantly being altered and is the target of a majority of my SELECT queries?
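
    For concreteness, the DEMAND setup the questions assume looks like this (table and column names are placeholders):

        -- Server-side: cache only on demand.
        SET GLOBAL query_cache_type = 2;  -- DEMAND

        -- T1 queries opt in to the cache explicitly:
        SELECT SQL_CACHE id, name FROM T1 WHERE id = 42;

        -- T2 queries carry no hint; question 2 asks whether a plain SELECT
        -- like this still pays a cache lookup before executing:
        SELECT id, name FROM T2 WHERE id = 42;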

    Read the article

  • How to print a Perl 2-dimensional array?

    - by Matt Pascoe
    I am trying to write a simple Perl script that reads a *.csv, places the rows of the *.csv file in a two-dimensional array, and then prints one item out of the array and then prints a row of the array.

        #!/usr/bin/perl
        use strict;
        use warnings;

        open(CSV, $ARGV[0]) || die("Cannot open the $ARGV[0] file: $!");

        my @row;
        my @table;

        while(<CSV>) {
            @row = split(/\s*,\s*/, $_);
            push(@table, @row);
        }

        close CSV || die $!;

        foreach my $element ( @{ $table[0] } ) {
            print $element, "\n";
        }
        print "$table[0][1]\n";

    When I run this script I receive the following error and nothing prints:

        Can't use string ("1") as an ARRAY ref while "strict refs" in use at ./scripts.pl line 16.

    I have looked in a number of other forums and am still not sure how to fix this issue. Can anyone help me fix this script?
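
    The usual cause of that error (the fix below is a sketch, not tested against the poster's data): push(@table, @row) flattens each row into one long list of scalars, so $table[0] holds a plain string rather than an array reference, and dereferencing it with @{ $table[0] } fails under strict refs. Pushing a reference to a copy of the row builds the intended two-dimensional structure:

        while (<CSV>) {
            my @row = split(/\s*,\s*/, $_);
            push(@table, [@row]);   # store a reference to a fresh copy of the row
        }

        # Now @{ $table[0] } is the first row and $table[0][1] is its second field.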

    Read the article

  • Which HTTP redirect status code is best for this REST API scenario?

    - by Aseem Kishore
    I'm working on a REST API. The key objects ("nouns") are "items", and each item has a unique ID. E.g. to get info on the item with ID foo:

        GET http://api.example.com/v1/item/foo

    New items can be created, but the client doesn't get to pick the ID. Instead, the client sends some info that represents that item. So to create a new item:

        POST http://api.example.com/v1/item/
        hello=world&hokey=pokey

    With that command, the server checks if we already have an item for the info hello=world&hokey=pokey. So there are two cases here.

    Case 1: the item doesn't exist; it's created. This case is easy.

        201 Created
        Location: http://api.example.com/v1/item/bar

    Case 2: the item already exists. Here's where I'm struggling... not sure what's the best redirect code to use. 301 Moved Permanently? 302 Found? 303 See Other? 307 Temporary Redirect?

        Location: http://api.example.com/v1/item/foo

    I've studied the Wikipedia descriptions and RFC 2616, and none of these seem to be perfect. Here are the specific characteristics I'm looking for in this case:

    - The redirect is permanent, as the ID will never change. So for efficiency, the client can and should make all future requests to the ID endpoint directly. This suggests 301, as the other three are meant to be temporary.
    - The redirect should use GET, even though this request is POST. This suggests 303, as all others are technically supposed to re-use the POST method. In practice, browsers will use GET for 301 and 302, but this is a REST API, not a website meant to be used by regular users in browsers.
    - It should be broadly usable and easy to play with. Specifically, 303 is HTTP/1.1 whereas 301 and 302 are HTTP/1.0. I'm not sure how much of an issue this is.

    At this point, I'm leaning towards 303 just to be semantically correct (use GET, don't re-POST) and just suck it up on the "temporary" part. But I'm not sure if 302 would be better since in practice it's been the same behavior as 303, but without requiring HTTP/1.1. But if I go down that line, I wonder if 301 is even better for the same reason plus the "permanent" part. Thoughts appreciated!
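
    For concreteness, the case-2 exchange under the 303 option would look like this (same example host and IDs as above):

        POST /v1/item/ HTTP/1.1
        Host: api.example.com
        Content-Type: application/x-www-form-urlencoded

        hello=world&hokey=pokey

        HTTP/1.1 303 See Other
        Location: http://api.example.com/v1/item/foo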

    Read the article

  • How do you deal with denormalization / secondary indexes in database sharding?

    - by Continuation
    Say I have a "message" table with 2 secondary indexes: "recipient_id" "sender_id" I want to shard the "message" table by "recipient_id". That way to retrieve all messages sent to a certain recipient I only need to query one shard. But at the same time, I want to be able to make a query that ask for all messages sent by a certain sender. Now I don't want to send that query to every single shard of the "message" table. One way to do this is to duplicate the data and have a "message_by_sender" table sharded by "sender_id". The problem with that approach is that every time a message has been sent, I need to insert the message into both "message" and "message_by_sender" tables. But what if after inserting into "message" the insertion into "message_by_sender" fail? In that case the message exists in "message" but not in "message_by_sender". How do I make sure that if a message exists in "message" then it also exists in "message_by_sender" without resorting to 2 phase commit? This must be a very common issue for anyone who shards their databases. How do you deal woth it?

    Read the article

  • SQL Server Full Text Search with CONTAINSTABLE is very slow when used in a JOIN!

    - by Bob
    Hello, I am using SQL Server 2008 full text search and I am having serious issues with performance depending on how I use CONTAINS or CONTAINSTABLE. Here are samples (table1 has about 5000 records and there is a covering index on table1 which has all the fields in the where clause; I tried to simplify the statements, so forgive me if there are syntax issues).

    Scenario 1:

        select * from table1 as t1
        where t1.field1=90 and t1.field2='something'
        and Exists(select top 1 * from containstable(table1,*, 'something') as t2
                   where t2.[key]=t1.id)

    Results: 10 seconds (very slow)

    Scenario 2:

        select * from table1 as t1
        join containstable(table1,*, 'something') as t2 on t2.[key] = t1.id
        where t1.field1=90 and t1.field2='something'

    Results: 10 seconds (very slow)

    Scenario 3:

        Declare @tbl Table(id uniqueidentifier primary key)
        insert into @tbl select [key] from containstable(table1,*, 'something')

        select * from table1 as t1
        where t1.field1=90 and t1.field2='something'
        and Exists(select id from @tbl as tbl where id=t1.id)

    Results: fraction of a second (super fast)

    Bottom line, it seems that if I use CONTAINSTABLE in any kind of join or where-clause condition of a select statement that also has other conditions, the performance is really bad. In addition, if you look at the profiler, the number of reads from the database goes through the roof. But if I first do the full text search and put the results in a table variable and use that variable, everything goes super fast. The number of reads is also much lower. It seems in the "bad" scenarios it somehow gets stuck in a loop which causes it to read many times from the database, but of course I don't understand why. Now the question is, first of all, why is that happening? And question two is, how scalable are table variables? What if it results in tens of thousands of records? Is it still going to be fast? Any ideas? Thanks

    Read the article

  • Symlinking (ln) faster than moving (mv)?

    - by Chad Johnson
    When we build web software releases, we prepare the release in a temporary directory and then replace the release directory with the temporary one just prepared:

        # Move and replace existing release directory.
        mv /path/to/httpdocs /path/to/httpdocs.before
        mv /path/to/$newReleaseName /path/to/httpdocs

    Under this scheme, it happens that with about 1 in every 15 releases, a user was using a file in the original release directory exactly at the time the commands above are run, and a fatal error occurs for that user. I am wondering if using symlinking like follows would be significantly faster, in terms of processing time, thereby helping to lessen the likelihood of this problem:

        # Remove and replace existing release symlink.
        ln -sf /path/to/$newReleaseName path/to/httpdocs
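
    Worth noting (a sketch, assuming GNU coreutils and that /path/to/httpdocs is already a symlink rather than a real directory): ln -sf on an existing link is itself an unlink followed by a create, so there is still a brief window with no httpdocs at all. Creating the new link under a temporary name and renaming it over the old one makes the swap a single atomic rename:

        # Build the new symlink beside the old one, then atomically replace it.
        ln -s "/path/to/$newReleaseName" /path/to/httpdocs.new
        mv -T /path/to/httpdocs.new /path/to/httpdocs   # rename(2) is atomic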

    Read the article

  • phpMyAdmin shows numbers or BLOB for MySQL's utf8_bin collation columns?

    - by marc40000
    Hi! I have a table with a varchar column. Its collation is set to utf8_bin. My software using this table and column works perfectly. But when I look at the content in phpMyAdmin, I only see some hex values or [Blob xB]. Can I make phpMyAdmin show the content correctly? Besides, when I set the collation to utf8_general_ci or utf8_unicode_ci, phpMyAdmin shows the content correctly. Thx Marc [edit] Hah, I found out there is a small "+Options" link above every table in phpMyAdmin. It opens several options, including "Show BLOB contents" - which turns the [Blob] into readable text when enabled - and "Show binary contents as HEX", which shows the hex codes as text when disabled. No idea why there are two options though, and why sometimes there is a [Blob] and sometimes hex values. Well. Now I'm still wondering: these option settings get lost when I go to another table. I have to set them every time I go there. Is there a way to save those options? [/edit]

    Read the article

  • Users in database server or database tables

    - by Batcat
    Hi all, I came across an interesting issue about client-server application design. We have this browser-based management application where many users use the system, so obviously within that application we have a user management module. I have always thought having a user table in the database to keep all the login details was good enough. However, a senior developer said user management should be done in the database server layer, and that otherwise the design is poor. What he meant was: if a user wants to use the application, then a user should be created in the user table AND as a database server user account as well. So if I have 50 users using my application, then I should have 50 database server logins. I personally think having just one user account in the database server for this database is enough - just grant this user the privileges needed for all the operations the application performs. The users that interact with the application should have their accounts created and managed within the database table, as they are more related to the application layer. I don't see or agree that there is a need to create a database server user account for every user created for the application in the user table. A single database server user should be enough to handle all the queries sent by the application. I really hope to hear some suggestions / opinions on whether I'm missing something - performance or security issues? Thank you very much.

    Read the article

  • Python SQLite FTS3 alternatives?

    - by Mike Cialowicz
    Are there any good alternatives to SQLite + FTS3 for Python? I'm iterating over a series of text documents, and would like to categorize them according to some text queries. For example, I might want to know if a document mentions the words "rating" or "upgraded" within three words of "buy." The FTS3 syntax for this query is the following:

        (rating OR upgraded) NEAR/3 buy

    That's all well and good, but if I use FTS3, this operation seems rather expensive. The process goes something like this:

        # create an SQLite3 db in memory
        conn = sqlite3.connect(':memory:')
        c = conn.cursor()
        c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)')
        conn.commit()

    Then, for each document, do something like this:

        # insert the document text into the fts table, so I can run a query
        c.execute('insert into fts(content) values (?)', content)
        conn.commit()

        # execute my FTS query here, look at the results, etc

        # remove the document text from the fts table before working on the next document
        c.execute('delete from fts')
        conn.commit()

    This seems rather expensive to me. The other problem I have with SQLite FTS is that it doesn't appear to work with Python 2.5.4. The 'CREATE VIRTUAL TABLE' syntax is unrecognized. This means that I'd have to upgrade to Python 2.6, which means re-testing numerous existing scripts and programs to make sure they work under 2.6. Is there a better way? Perhaps a different library? Something faster? Thank you.
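
    One way to avoid the per-document insert/commit/delete churn (a sketch, assuming the whole corpus fits in the in-memory database, and using the same query string the question gives) is to load every document once under its own rowid and run each query a single time over the full table, reading back the matching rowids:

        import sqlite3

        conn = sqlite3.connect(':memory:')
        c = conn.cursor()
        c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)')

        # documents is assumed to be a list of (doc_id, text) pairs.
        documents = [(1, 'analyst upgraded the stock to buy'),
                     (2, 'nothing relevant here')]
        c.executemany('INSERT INTO fts(rowid, content) VALUES (?, ?)', documents)
        conn.commit()

        # One query over the whole corpus instead of one query per document.
        c.execute("SELECT rowid FROM fts WHERE content MATCH '(rating OR upgraded) NEAR/3 buy'")
        matching_ids = [row[0] for row in c.fetchall()]
        print(matching_ids)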

    Read the article

  • How do I set YUI2 paginator to select a page other than the first page?

    - by Jeremy Weathers
    I have a YUI DataTable (YUI 2.8.0r4) with AJAX pagination. Each row in the table links to a details/editing page and I want to link from that details page back to the list page that includes the record from the details page. So I have to a) offset the AJAX data correctly and b) tell YAHOO.widget.Paginator which page to select. According to my reading of the YUI API docs, I have to pass in the initialPage configuration option. I've attempted this, but it doesn't take (the data from AJAX is correctly offset, but the paginator thinks I'm on page 1, so clicking "next" takes me from e.g. page 6 to page 2). What am I not doing (or doing wrong)? Here's my DataTable building code:

        (function() {
          var columns = [
            {key: "retailer", label: "Retailer", sortable: false, width: 80},
            {key: "publisher", label: "Publisher", sortable: false, width: 300},
            {key: "description", label: "Description", sortable: false, width: 300}
          ];

          var source = new YAHOO.util.DataSource("/sales_data.json?");
          source.responseType = YAHOO.util.DataSource.TYPE_JSON;
          source.responseSchema = {
            resultsList: "records",
            fields: [
              {key: "url"},
              {key: "retailer"},
              {key: "publisher"},
              {key: "description"}
            ],
            metaFields: {
              totalRecords: "totalRecords"
            }
          };

          var LoadingDT = function(div, cols, src, opts) {
            LoadingDT.superclass.constructor.call(this, div, cols, src, opts);
            // hide the message tbody
            this._elMsgTbody.style.display = "none";
          };

          YAHOO.extend(LoadingDT, YAHOO.widget.DataTable, {
            showTableMessage: function(msg) {
              $('sales_table_overlay').clonePosition($('sales_table').down('table')).show();
            },
            hideTableMessage: function() {
              $('sales_table_overlay').hide();
            }
          });

          var table = new LoadingDT("sales_table", columns, source, {
            initialRequest: "startIndex=125&results=25",
            dynamicData: true,
            paginator: new YAHOO.widget.Paginator({rowsPerPage: 25, initialPage: 6})
          });

          table.handleDataReturnPayload = function(oRequest, oResponse, oPayload) {
            oPayload.totalRecords = oResponse.meta.totalRecords;
            return oPayload;
          };
        })();

    Read the article

  • Regular Expression doesn't match

    - by dododedodonl
    Hi All, I've got a string with very unclean HTML. Before I parse it, I want to convert this:

        <TABLE><TR><TD width="33%" nowrap=1><font size="1" face="Arial"> NE </font> </TD> <TD width="33%" nowrap=1><font size="1" face="Arial"> DEK </font> </TD> <TD width="33%" nowrap=1><font size="1" face="Arial"> 143 </font> </TD> </TR></TABLE>

    into

        NE DEK 143

    so it is a bit easier to parse. I've got this regular expression (RegexKitLite):

        NSString *str = [dataString stringByReplacingOccurrencesOfRegex:@"<TABLE><TR><TD width=\"33%\" nowrap=1><font size=\"1\" face=\"Arial\">(.+?)<\\/font> <\\/TD>(.+?)<TD width=\"33%\" nowrap=1><font size=\"1\" face=\"Arial\">(.+?)<\\/font> <\\/TD>(.+?)<TD width=\"33%\" nowrap=1><font size=\"1\" face=\"Arial\">(.+?)<\\/font> <\\/TD>(.+?)<\\/TR><\\/TABLE>"
                                                           withString:@"$1 $3 $5"];

    I'm not an expert in regex. Can someone help me out here? Regards, dodo
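
    If the exact table structure doesn't matter, one simpler route (a sketch using the same RegexKitLite category methods the question already uses; dataString is assumed to hold the HTML) is to strip every tag and then collapse the leftover whitespace:

        // Remove all tags, then squeeze runs of whitespace down to single spaces.
        NSString *noTags = [dataString stringByReplacingOccurrencesOfRegex:@"<[^>]+>"
                                                                withString:@" "];
        NSString *str = [[noTags stringByReplacingOccurrencesOfRegex:@"\\s+"
                                                          withString:@" "]
                         stringByTrimmingCharactersInSet:
                             [NSCharacterSet whitespaceAndNewlineCharacterSet]];
        // str is now "NE DEK 143" for the sample input.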

    Read the article

  • Processing potentially large STDIN data, more than once

    - by d11wtq
    I'd like to provide an accessor on a class that provides an NSInputStream for STDIN, which may be several hundred megabytes (or gigabytes, though unlikely, perhaps) of data. When a caller gets this NSInputStream it should be able to read from it without worrying about exhausting the data it contains. In other words, another block of code may request the NSInputStream and will expect to be able to read from it. Without first copying all of the data into an NSData object, which (I assume) would cause memory exhaustion, what are my options for handling this? The returned NSInputStream does not have to be the same instance; it simply needs to provide the same data. The best I can come up with right now is to copy STDIN to a temporary file and then return NSInputStream instances using that file. Is this pretty much the only way to handle it? Is there anything I should be cautious of if I go the temporary file route?
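
    The temporary-file route could look roughly like this (a sketch - error handling and cleanup of the temp file are omitted, and stdin is copied in fixed-size chunks so the full data never sits in memory at once):

        // Copy STDIN to a temp file once.
        NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"stdin-copy.dat"];
        [[NSFileManager defaultManager] createFileAtPath:tmpPath contents:nil attributes:nil];

        NSFileHandle *input = [NSFileHandle fileHandleWithStandardInput];
        NSFileHandle *output = [NSFileHandle fileHandleForWritingAtPath:tmpPath];
        NSData *chunk;
        while ((chunk = [input readDataOfLength:65536]).length > 0) {
            [output writeData:chunk];
        }
        [output closeFile];

        // Each caller then gets its own independent stream over the same data.
        NSInputStream *stream = [NSInputStream inputStreamWithFileAtPath:tmpPath];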

    Read the article

  • NHibernate update using composite key

    - by Mahesh
    Hi, I have a table definition as given below:

        License
            ClientId
            Type
            Total
            Used

    ClientId and Type together uniquely identify a row. I have a mapping file as given below:

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" auto-import="true">
          <class name="Acumen.AAM.Domain.Model.License, Acumen.AAM.Domain" lazy="false" table="License">
            <id name="ClientId" access="field" column="ClientID" />
            <property name="Total" access="field" column="Total"/>
            <property name="Used" access="field" column="Used"/>
            <property name="Type" access="field" column="Type"/>
          </class>
        </hibernate-mapping>

    If a client used a license to create a user, I need to update the Used column in the table. As I set ClientId as the id column for this table, I am getting TooManyRowsAffectedException. Could you please let me know how to set a composite key at the mapping level so that NHibernate can update based on ClientId and Type. Something like:

        Update License SET Used=Used-1 WHERE ClientId='xxx' AND Type=1

    Please help. Thanks, Mahesh
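
    A composite key is declared in an NHibernate mapping with a <composite-id> element in place of <id>. A sketch of what the class mapping might look like (note that with a composite id the entity class also needs a correct Equals/GetHashCode implementation so NHibernate can compare keys):

        <class name="Acumen.AAM.Domain.Model.License, Acumen.AAM.Domain" lazy="false" table="License">
          <composite-id>
            <key-property name="ClientId" access="field" column="ClientID" />
            <key-property name="Type"     access="field" column="Type" />
          </composite-id>
          <property name="Total" access="field" column="Total" />
          <property name="Used"  access="field" column="Used" />
        </class>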

    Read the article

  • Optimize GROUP BY & ORDER BY query

    - by Jan Hancic
    I have a web page where users upload & watch videos. Last week I asked what is the best way to track video views so that I could display the most viewed videos this week (videos from all dates). Now I need some help optimizing a query with which I get the videos from the database. The relevant tables are these:

        video (~239371 rows)
            VID(int), UID(int), title(varchar), status(enum), type(varchar),
            is_duplicate(enum), is_adult(enum), channel_id(tinyint)

        signup (~115440 rows)
            UID(int), username(varchar)

        videos_views (~359202 rows after 6 days of collecting data, so this table will grow rapidly)
            videos_id(int), views_date(date), num_of_views(int)

    The table video holds the videos, signup holds users, and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick, but takes ~10s to execute, and I imagine this will only get worse over time as the videos_views table grows in size.

        SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber,
               v.com_num, v.rate, v.THB, s.username,
               SUM(vvt.num_of_views) AS tmp_num
        FROM video v
        LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id
        LEFT JOIN signup s ON v.UID = s.UID
        WHERE v.status = 'Converted'
          AND v.type = 'public'
          AND v.is_duplicate = '0'
          AND v.is_adult = '0'
          AND v.channel_id <> 10
          AND vvt.views_date >= '2001-05-11'
        GROUP BY vvt.videos_id
        ORDER BY tmp_num DESC
        LIMIT 8

    And here is a screenshot of the EXPLAIN result: So, how can I optimize this?
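
    One first step worth trying (a sketch, not a measured fix): a composite index on videos_views that covers the join column, the date filter, and the summed column, so the per-video aggregation can be served from the index of the fastest-growing table:

        ALTER TABLE videos_views
            ADD INDEX idx_video_date_views (videos_id, views_date, num_of_views);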

    Read the article

  • PHP, sockets and connecting to a POP3 server

    - by Ockonal
    Hi guys, I'm trying to connect to a POP3 server with PHP. Here is my code:

        $pop3Server = 'mail.roller.ru';
        $pop_conn = fsockopen($pop3Server, 110, $errno, $errstr, 10);
        print fgets($pop_conn, 1024);

    And here is what I get:

        Warning: fsockopen() [function.fsockopen]: php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /home/ockonal/public_html/mail_parser.php on line 45

        Warning: fsockopen() [function.fsockopen]: unable to connect to mail.roller.ru:110 (php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution) in /home/ockonal/public_html/mail_parser.php on line 45

        Warning: fgets() expects parameter 1 to be resource, boolean given in /home/ockonal/public_html/mail_parser.php on line 46

    What's wrong? I'm sure the POP3 server address is correct. It doesn't print anything.
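
    The warnings point at name resolution rather than the POP3 code itself: getaddrinfo is failing on the web host, so fsockopen() returns false and the later fgets() call receives a boolean. A quick check (a sketch using plain PHP functions):

        <?php
        // If this prints the hostname back unchanged, DNS lookups are failing on
        // this server and fsockopen() can never succeed, regardless of the POP3 code.
        $host = 'mail.roller.ru';
        $ip = gethostbyname($host);
        echo ($ip === $host) ? "DNS lookup failed for $host\n" : "$host resolves to $ip\n";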

    Read the article

  • Integer Surrogate Key?

    - by CitadelCSAlum
    I need something real simple that for some reason I have been unable to accomplish to this point. I have also been unable to Google the answer, surprisingly. I need to number the entries in different tables uniquely. I am aware of AUTO_INCREMENT in MySQL and that it will be involved. My situation would be as follows. If I had a table like

        Table EXAMPLE
            ID - INTEGER
            FIRST_NAME - VARCHAR(45)
            LAST_NAME - VARCHAR(45)
            PHONE - VARCHAR(45)
            CITY - VARCHAR(45)
            STATE - VARCHAR(45)
            ZIP - VARCHAR(45)

    this would be the setup for the table where ID is an integer that is auto-incremented every time an entry is inserted into the table. The thing I need is that I do not want to have to account for this field when inserting data into the database. From my understanding this would be a surrogate key that I can tell the database to automatically increment, and I would not have to include it in the INSERT statement. So instead of

        INSERT INTO EXAMPLE VALUES (2,'JOHN','SMITH',333-333-3333,'NORTH POLE'....

    I could leave out the first ID column and just write something like

        INSERT INTO EXAMPLE VALUES ('JOHN','SMITH'.....etc)

    Notice I wouldn't have to define the ID column... I know this is a very common task to do, but for some reason I can't get to the bottom of it. I am using MySQL, just to clarify. Thanks a lot
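
    A minimal sketch of how this usually looks in MySQL (the sample values below are placeholders): declare the column AUTO_INCREMENT PRIMARY KEY, then name the remaining columns in the INSERT so the ID can be left out - listing the columns is what lets you skip it, since a bare VALUES list must still supply every column:

        CREATE TABLE EXAMPLE (
            ID INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            FIRST_NAME VARCHAR(45),
            LAST_NAME  VARCHAR(45),
            PHONE      VARCHAR(45),
            CITY       VARCHAR(45),
            STATE      VARCHAR(45),
            ZIP        VARCHAR(45)
        );

        -- ID is generated automatically because it is not listed here.
        INSERT INTO EXAMPLE (FIRST_NAME, LAST_NAME, PHONE, CITY, STATE, ZIP)
        VALUES ('JOHN', 'SMITH', '333-333-3333', 'NORTH POLE', 'AK', '99705');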

    Read the article

  • Need help to create database schema for wholesale online tee store

    - by techiepark
    Hi, I'm currently working on a wholesale online t-shirt shop. I have done this for fixed quantity and price, and it's working fine. Now I need to do this for variable quantity and price. Here is the reference link, like what I have to do. Basic tables I have created are:

        CREATE TABLE attribute (
          attribute_id int(11) NOT NULL auto_increment,
          name varchar(100) NOT NULL,
          PRIMARY KEY (attribute_id)
        );

        CREATE TABLE attribute_value (
          attribute_value_id int(11) NOT NULL auto_increment,
          attribute_id int(11) NOT NULL,
          value varchar(100) NOT NULL,
          PRIMARY KEY (attribute_value_id),
          KEY idx_attribute_value_attribute_id (attribute_id)
        );

        CREATE TABLE product (
          product_id int(11) NOT NULL auto_increment,
          name varchar(100) NOT NULL,
          description varchar(1000) NOT NULL,
          price decimal(10,2) NOT NULL,
          image varchar(150) default NULL,
          thumbnail varchar(150) default NULL,
          PRIMARY KEY (product_id),
          FULLTEXT KEY idx_ft_product_name_description (name,description)
        );

        CREATE TABLE product_attribute (
          product_id int(11) NOT NULL,
          attribute_value_id int(11) NOT NULL,
          PRIMARY KEY (product_id,attribute_value_id)
        );

    I'm not getting how to store the price based on variable quantity. Please help me create the product table and its related tables. My requirement is the same as the above reference link.
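
    Quantity-based pricing is usually modelled as a separate price-tier table keyed by product and minimum quantity, rather than as a single price column on product. A sketch (table and column names are suggestions, not taken from the reference link):

        -- One row per quantity break; a tier's price applies from its
        -- min_quantity up to the next tier's min_quantity.
        CREATE TABLE product_price_tier (
          product_id   int(11) NOT NULL,
          min_quantity int(11) NOT NULL,
          unit_price   decimal(10,2) NOT NULL,
          PRIMARY KEY (product_id, min_quantity)
        );

        -- Price lookup for a given product and ordered quantity:
        SELECT unit_price
        FROM product_price_tier
        WHERE product_id = 1 AND min_quantity <= 50
        ORDER BY min_quantity DESC
        LIMIT 1;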

    Read the article

  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL db. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example). When you set a value, I'd like it to insert if the key is not there, otherwise update. Sadly Postgres does not have an insert-or-update statement, so I have to emulate it myself. I've been working with the idea of basically SELECTing whether the key exists, and then running the appropriate INSERT or UPDATE. Now clearly this needs to be in a transaction or all manner of bad things could happen. However, this is not working exactly how I'd like it to - I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation:

        a, b: => set transaction isolation level serializable;
        a:    => select count(1) from table where id=1;   --> 0
        b:    => select count(1) from table where id=1;   --> 0
        a:    => insert into table values(1);             --> 1
        b:    => insert into table values(1);             --> ERROR: duplicate key value violates unique constraint "serial_test_pkey"

    Now I would expect it to throw the usual "couldn't commit due to concurrent update" but I'm guessing since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
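
    For reference, the workaround the PostgreSQL documentation suggests for this race (adapted here as a sketch to a hypothetical kv(id, value) table; newer PostgreSQL versions can use INSERT ... ON CONFLICT instead) is to try the UPDATE first and fall back to INSERT, looping if a concurrent insert wins:

        CREATE FUNCTION upsert_kv(k integer, v text) RETURNS void AS $$
        BEGIN
            LOOP
                -- Try the update first.
                UPDATE kv SET value = v WHERE id = k;
                IF found THEN
                    RETURN;
                END IF;
                -- Not there: try to insert; another session may beat us to it,
                -- in which case we loop around and update instead.
                BEGIN
                    INSERT INTO kv(id, value) VALUES (k, v);
                    RETURN;
                EXCEPTION WHEN unique_violation THEN
                    -- Do nothing, and loop to try the UPDATE again.
                END;
            END LOOP;
        END;
        $$ LANGUAGE plpgsql;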

    Read the article

  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason when I sort this query by DESC it's super fast, but if sorted by ASC it's extremely slow. This takes about 150 milliseconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published DESC
        LIMIT 0, 50;

    This takes about 32 seconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published ASC
        LIMIT 0, 50;

    The EXPLAIN is the same for both queries:

        id  select_type  table  type   possible_keys  key        key_len  ref   rows  Extra
        1   SIMPLE       posts  index  NULL           published  5        NULL  50    Using where

    I've tracked it down to "USE INDEX (published)". If I take that out it's the same performance both ways. But the EXPLAIN shows the query is less efficient overall:

        id  select_type  table  type   possible_keys  key      key_len  ref  rows  Extra
        1   SIMPLE       posts  range  feed_id        feed_id  4        \N   759   Using where; Using filesort

    And here's the table:

        CREATE TABLE `posts` (
          `id` int(20) NOT NULL AUTO_INCREMENT,
          `feed_id` int(11) NOT NULL,
          `post_url` varchar(255) NOT NULL,
          `title` varchar(255) NOT NULL,
          `content` blob,
          `author` varchar(255) DEFAULT NULL,
          `published` int(12) DEFAULT NULL,
          `updated` datetime NOT NULL,
          `created` datetime NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `post_url` (`post_url`,`feed_id`),
          KEY `feed_id` (`feed_id`),
          KEY `published` (`published`)
        ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1;

    Is there a fix for this? Thanks!
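
    One thing worth trying (a sketch, not a verified fix for this table): a composite index on (feed_id, published) narrows the scan to the listed feeds first, so any remaining sort covers hundreds of rows instead of walking the whole single-column published index in one direction:

        ALTER TABLE posts ADD INDEX idx_feed_published (feed_id, published);

        -- With the composite index in place the USE INDEX hint is no longer needed:
        SELECT posts.id FROM posts
        WHERE posts.feed_id IN (4953, 622, 1, 1852, 4952, 76, 623, 624, 10)
        ORDER BY posts.published ASC
        LIMIT 0, 50;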

    Read the article
