Search Results

Search found 15376 results on 616 pages for 'mysql triggers'.


  • Get list of duplicate rows in MySql

    - by user347033
    I have a table like this:

        ID | nachname | vorname
        ---+----------+--------
         1 | john     | doe
         2 | john     | doe
         3 | jim      | doe
         4 | Michael  | Knight

    I need a query that will return all the fields (SELECT *) from the records that have the same nachname and vorname (in this case, records 1 and 2). Can anyone help me with this? Thanks.
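    A minimal sketch of one way to do this (the table name is not given in the question, so persons is a placeholder): group on the duplicated columns, then join back to pull the complete rows.

        SELECT p.*
        FROM persons p
        JOIN (
            SELECT nachname, vorname
            FROM persons
            GROUP BY nachname, vorname
            HAVING COUNT(*) > 1
        ) dup ON dup.nachname = p.nachname
             AND dup.vorname  = p.vorname;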

    Read the article

  • What is an index in MySQL?

    - by Eric
    http://i.imgur.com/JdsUK.jpg I created a table like the picture above. What are the "Indexes"? Primary key? Unique? It works fine without setting any indexes, so what do they do and why do I need them? Also, I set all string fields to TEXT because I didn't know how many characters I would need. Is this a good idea? I don't see any difference. Thanks!
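    As a rough illustration (users and email below are hypothetical names, not from the question): an index does not change the result of a query, only how fast MySQL can find the rows, and EXPLAIN shows whether one is used. Note that TEXT columns can only be indexed with a prefix length, which is one practical reason to prefer VARCHAR when the length is bounded.

        -- without an index this scans every row:
        EXPLAIN SELECT * FROM users WHERE email = 'a@example.com';

        -- add an index (a UNIQUE index additionally forbids duplicate values):
        CREATE INDEX idx_users_email ON users (email);

        -- EXPLAIN now reports idx_users_email instead of a full table scan.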

    Read the article

  • mysql union query

    - by Sergio
    The table that contains information about members has a structure like:

        id | fname | pic   | status
        ---+-------+-------+-------
         1 | john  | a.jpg | 1
         2 | mike  | b.jpg | 1
         3 | any   | c.jpg | 1
         4 | jacky | d.jpg | 1

    The table for the list of friends looks like:

        myid | date       | user
        -----+------------+-----
           1 | 01-01-2011 |    4
           2 | 04-01-2011 |    3

    I want to make a query that prints the rows from the friends table together with the photos and names from the members table for both sides: myid (those who add) and user (those who are added). For this example the result would look like:

        myid | myidname | myidpic | user | username | userpic | status
        -----+----------+---------+------+----------+---------+-------
           1 | john     | a.jpg   |    4 | jacky    | d.jpg   | 1
           2 | mike     | b.jpg   |    3 | any      | c.jpg   | 1
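    Despite the title, this does not need a UNION; joining the members table twice (once per side) is enough. A sketch, assuming the tables are named members and friendlist as in the question:

        SELECT f.myid,
               m1.fname AS myidname, m1.pic AS myidpic,
               f.user,
               m2.fname AS username, m2.pic AS userpic,
               m2.status
        FROM friendlist f
        JOIN members m1 ON m1.id = f.myid
        JOIN members m2 ON m2.id = f.user;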

    Read the article

  • De-normalization alternative to specific MYSQL problem?

    - by Booker
    I am facing quite a specific optimization problem. I currently have 4 normalized tables of data. Every second, possibly thousands of users will pull down up-to-date info from these tables using AJAX. The thing is that I can predict relatively easily which subset of data they need: the most recent 100 or so entries in those 4 normalized tables. I have been researching de-normalization, but I feel that perhaps there is an easier solution. I was thinking that I could run one SQL query every second to condense the needed info, store it in a temporary cached table, and then have all of the user queries just draw from that. This would allow the complex join of 4 tables to run only once; from there, users just need to do a simple lookup against the cached table. I really don't know if this is feasible. Comments on this, or any other suggestions, would be much appreciated. Thanks!
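    A minimal sketch of the cached-summary-table idea. Every table and column name below is a placeholder (the question gives no schema); the point is that the expensive 4-way join runs once per refresh while readers only touch the small cache.

        -- one-time setup: materialize the expensive join into a cache table
        CREATE TABLE latest_cache AS
        SELECT a.id, a.created_at, b.title, c.user_name, d.score
        FROM a
        JOIN b ON b.a_id = a.id
        JOIN c ON c.id   = a.user_id
        JOIN d ON d.a_id = a.id
        ORDER BY a.created_at DESC
        LIMIT 100;

        -- every second (cron, a worker process, or a MySQL event) empty the
        -- cache and re-run the same INSERT ... SELECT; user requests then run:
        SELECT * FROM latest_cache;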

    Read the article

  • Mysql Database Question about Large Columns

    - by murat
    I have a table with 100,000 rows, and it will soon double. The database is currently 5 GB, and most of that is in one particular column: a text column holding the text of PDF files. We expect a 20-30 GB (maybe 50 GB) database within a couple of months, and the system will be used heavily. I have a couple of questions about this setup.

    1) We use InnoDB on every table, including the users table. Is it better to use MyISAM on the table where we store the text versions of the PDF files (from a memory usage / performance perspective)?

    2) We use Sphinx for searching, but the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, yet we still need to retrieve 10 rows in order to send them to Sphinx again. Those 10 rows may take 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, which would turn the 100,000 rows into around 3-4 million rows; a couple of months later, instead of 300,000-350,000 rows, we would have about 10 million rows storing the text of these PDF files. However, we would retrieve fewer pages: instead of retrieving 400 pages to send to Sphinx for highlighting, we could retrieve 5, which should have a big impact on performance. Currently, when we search for a term and retrieve PDF files with more than 100 pages, the execution time is 0.3-0.35 seconds; if we retrieve PDF files with fewer than 5 pages, the execution time drops to 0.06 seconds and uses less memory. Do you think this is a good trade-off? We would have millions of rows instead of 100k-200k, but it would save memory and improve performance. Is this a good approach, and do you have any other ideas for solving the problem? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks.
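    A sketch of what the chunked storage could look like (all names below are placeholders): one row per 5-page slice, keyed back to its parent document, so highlighting only has to fetch the slices that matched.

        CREATE TABLE pdf_chunks (
            id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            document_id INT UNSIGNED NOT NULL,
            first_page  SMALLINT UNSIGNED NOT NULL,
            last_page   SMALLINT UNSIGNED NOT NULL,
            body        MEDIUMTEXT NOT NULL,
            KEY idx_doc_page (document_id, first_page)
        ) ENGINE=InnoDB;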

    Read the article

  • MySql left join on several regs

    - by egidiocs
    I have this table1:

        idproduct (PK) | date_to_go
        ---------------+-----------
                     1 | 2010-01-18
                     2 | 2010-02-01
                     3 | 2010-02-21
                     4 | 2010-02-03

    and this other table2 that tracks date_to_go updates:

        id | idproduct (FK) | prev_date_to_go | date_to_go | update_date
        ---+----------------+-----------------+------------+------------
         1 |              1 | 2010-01-01      | 2010-01-05 | 2009-12-01
         2 |              1 | 2010-01-05      | 2010-01-10 | 2009-12-20
         3 |              1 | 2010-01-10      | 2010-01-18 | 2009-12-20
         4 |              3 | 2010-01-20      | 2010-02-03 | 2010-01-05

    So, in this example, for table1.idproduct 1, 2010-01-18 is the current date_to_go and 2010-01-01 (table2.prev_date_to_go in the first related row) is the original date_to_go. Using this query:

        SELECT v.idproduct, v.date_to_go, p.prev_date_to_go AS original_date_to_go
        FROM table1 v
        LEFT JOIN table2 p ON p.idproduct = v.idproduct
        GROUP BY v.idproduct
        ORDER BY v.idproduct

    can I assume that original_date_to_go will come from the first related row of table2?

        idproduct | date_to_go | original_date_to_go
        ----------+------------+--------------------
                1 | 2010-01-18 | 2010-01-01
                2 | 2010-02-01 | NULL
                3 | 2010-02-21 | 2010-01-20
                4 | 2010-02-03 | NULL
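    No: with that GROUP BY, MySQL is free to return any of the joined rows' values for a non-aggregated column, so the result is not guaranteed to be the earliest one. A sketch of a deterministic alternative, pulling the oldest history row per product explicitly:

        SELECT v.idproduct,
               v.date_to_go,
               (SELECT p.prev_date_to_go
                FROM table2 p
                WHERE p.idproduct = v.idproduct
                ORDER BY p.update_date, p.id
                LIMIT 1) AS original_date_to_go
        FROM table1 v
        ORDER BY v.idproduct;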

    Read the article

  • PHP mySQL - select unique values that are not being used in a different table

    - by apis17
    Updates: please see below. I have this table, data:

        State  | d_country | d_postcode
        -------+-----------+-----------
        State1 | Country1  | 1111
        State2 | Country2  | 2222
        State3 | Country3  | 3333
        State4 | Country4  | 4444

    And another table, user:

        Name  | u_country | u_postcode
        ------+-----------+-----------
        Name1 | Country3  | 3333
        Name2 | Country5  | 5555
        Name3 |           | 6666
        Name4 | Country6  | 6666
        Name5 | Country6  | 6666

    What SQL should I use to:

    1. Determine the number (count) of countries that are not listed in table data. For example, the u_postcode values not listed in d_postcode are 5555 and 6666, so it should return 2.
    2. List the name and country of every user whose country is not yet available in table data.

    Update: I want to use grouping to filter by postcode but still keep Name3 and Name4 as separate rows. For example:

        Name  | u_country | u_postcode
        ------+-----------+-----------
        Name2 | Country5  | 5555
        Name3 |           | 6666
        Name4 | Country6  | 6666

    Any possible ideas?
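    A sketch using an anti-join on the postcode (this assumes postcode is the matching key, as in the examples above):

        -- 1. how many distinct postcodes in user are missing from data
        SELECT COUNT(DISTINCT u.u_postcode) AS missing_postcodes
        FROM user u
        LEFT JOIN data d ON d.d_postcode = u.u_postcode
        WHERE d.d_postcode IS NULL;

        -- 2. the users behind those postcodes, one row per name
        SELECT u.Name, u.u_country, u.u_postcode
        FROM user u
        LEFT JOIN data d ON d.d_postcode = u.u_postcode
        WHERE d.d_postcode IS NULL;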

    Read the article

  • Tricky MySQL Query for messaging system in Rails - Please Help

    - by ole_berlin
    I'm writing a Facebook-style messaging system for a Rails app and I'm having trouble selecting the messages for the inbox (with will_paginate). The messages are organized in threads; in the inbox, the most recent message of a thread should appear with a link to its thread. The thread is organized via a parent_id 1-n relationship with itself. So far I'm using something like this:

        class Message < ActiveRecord::Base
          belongs_to :sender,    :class_name => 'User',    :foreign_key => "sender_id"
          belongs_to :recipient, :class_name => 'User',    :foreign_key => "recipient_id"
          has_many   :children,  :class_name => "Message", :foreign_key => "parent_id"
          belongs_to :thread,    :class_name => "Message", :foreign_key => "parent_id"
        end

        class MessagesController < ApplicationController
          def inbox
            @messages = current_user.received_messages.paginate :page => params[:page],
                                                                 :per_page => 10,
                                                                 :order => "created_at DESC"
          end
        end

    That gives me all the messages, but for one thread both the thread itself and the most recent message will appear (not only the most recent message). I also can't use a GROUP BY clause directly, because for the thread itself (the parent, so to speak) parent_id is NULL, of course. Does anyone have an idea of how to solve this in an elegant way? I already thought about adding the parent's own id as its parent_id and then grouping by parent_id, but I'm not sure if that works. Thanks.
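    A sketch of the underlying SQL (assuming a conventional Rails messages table with id, parent_id, recipient_id and created_at; the user id 42 is a placeholder): treat COALESCE(parent_id, id) as the thread key, find each thread's newest timestamp, and join back to get that one row per thread.

        SELECT m.*
        FROM messages m
        JOIN (
            SELECT COALESCE(parent_id, id) AS thread_id,
                   MAX(created_at)         AS last_at
            FROM messages
            WHERE recipient_id = 42
            GROUP BY COALESCE(parent_id, id)
        ) t ON COALESCE(m.parent_id, m.id) = t.thread_id
           AND m.created_at = t.last_at
        WHERE m.recipient_id = 42
        ORDER BY m.created_at DESC;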

    Read the article

  • mysql dynamic cursor

    - by machaa
    Here is the procedure I wrote. There are two cursors, c1 and c2, with c2 inside c1. I tried declaring c2 below c1 (outside the c1 loop), but then it does NOT pick up the updated value of I. Any suggestions to make it work would be helpful. Thanks.

        create table t1(i int);
        create table t2(i int, j int);
        insert into t1(i) values (1), (2), (3), (4), (5);
        insert into t2(i, j) values (1, 6), (2, 7), (3, 8), (4, 9), (5, 10);

        delimiter $
        CREATE PROCEDURE p1()
        BEGIN
            DECLARE I INT;
            DECLARE J INT;
            DECLARE done INT DEFAULT 0;
            DECLARE c1 CURSOR FOR SELECT i FROM t1;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
            OPEN c1;
            REPEAT
                FETCH c1 INTO I;
                IF NOT done THEN
                    SELECT I;
                    DECLARE c2 CURSOR FOR SELECT j FROM t2 WHERE i = I;
                    OPEN c2;
                    REPEAT
                        FETCH c2 INTO J;
                        IF NOT done THEN
                            SELECT J;
                        END IF;
                    UNTIL done END REPEAT;
                    CLOSE c2;
                    SET done = 0;
                END IF;
            UNTIL done END REPEAT;
            CLOSE c1;
        END$
        delimiter ;
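    DECLARE is only legal at the start of a BEGIN ... END block, which is why the inner cursor cannot be created where it is. A sketch of the usual workaround: wrap the inner cursor in its own nested block, with its own done flag and handler, and rename the loop variable so it doesn't collide with the column named i.

        DELIMITER $
        CREATE PROCEDURE p1()
        BEGIN
            DECLARE v_i INT;
            DECLARE done1 INT DEFAULT 0;
            DECLARE c1 CURSOR FOR SELECT i FROM t1;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done1 = 1;
            OPEN c1;
            outer_loop: LOOP
                FETCH c1 INTO v_i;
                IF done1 THEN LEAVE outer_loop; END IF;
                SELECT v_i;
                BEGIN
                    DECLARE v_j INT;
                    DECLARE done2 INT DEFAULT 0;
                    DECLARE c2 CURSOR FOR SELECT j FROM t2 WHERE t2.i = v_i;
                    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done2 = 1;
                    OPEN c2;    -- the WHERE clause sees the current v_i
                    inner_loop: LOOP
                        FETCH c2 INTO v_j;
                        IF done2 THEN LEAVE inner_loop; END IF;
                        SELECT v_j;
                    END LOOP;
                    CLOSE c2;
                END;
            END LOOP;
            CLOSE c1;
        END$
        DELIMITER ;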

    Read the article

  • MySql too many connections

    - by MichaelMcCabe
    I hate to bring up a question which is widely asked on the web, but I can't seem to solve it. I started a project a while back, and after a month of testing I hit a "Too many connections" error. I looked into it and "solved" it by increasing max_connections. That worked for a while. Since then more and more people have started to use the site, and it hit the limit again. When I am the only user on the site and I type SHOW PROCESSLIST, it comes up with about 50 connections which are still open (saying "Sleep" in the command column). I don't know enough to speculate about why these are open, but in my code I triple-checked that every connection I open, I close. For example:

        public int getSiteIdFromName(String name, String company)
                throws DataAccessException, java.sql.SQLException {
            Connection conn = this.getSession().connection();
            Statement smt = conn.createStatement();
            ResultSet rs = null;
            String query = "SELECT id FROM site WHERE name='" + name
                    + "' and company_id='" + company + "'";
            rs = smt.executeQuery(query);
            rs.next();
            int id = rs.getInt("id");
            rs.close();
            smt.close();
            conn.close();
            return id;
        }

    Every time I do something else on the site, another load of connections is opened and not closed. Is there something wrong with my code? And if not, what could be the problem?
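    A few server-side checks that help narrow this down (a sketch; it diagnoses the symptom, it doesn't fix a leak, which in code like the above usually comes from connections not being closed on the exception path, i.e. outside a finally block, or from the pool/session layer holding them open):

        SHOW PROCESSLIST;                             -- which user/host owns the Sleep connections
        SHOW VARIABLES LIKE 'max_connections';
        SHOW GLOBAL STATUS LIKE 'Threads_connected';
        SHOW VARIABLES LIKE 'wait_timeout';           -- idle connections are dropped after this many seconds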

    Read the article

  • mysql - joining three tables with HAVING

    - by Qiao
    I have a table:

        id | name | type

    where type is 1 or 2. I need to join this table with two other tables: rows with type = 1 should be joined with the first table, and rows with type = 2 with the second. Something like:

        SELECT *
        FROM tbl
        INNER JOIN tbl_1 ON tbl.name = tbl_1.name HAVING tbl.type = 1
        INNER JOIN tbl_2 ON tbl.name = tbl_2.name HAVING tbl.type = 2

    But it does not work. How can it be implemented?
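    HAVING cannot be used to split a join like that. A sketch of one way to express it: LEFT JOIN both tables and put the type test into each ON clause, so every row matches at most one of them.

        SELECT t.id, t.name, t.type,
               t1.*,    -- columns from tbl_1 (NULL when type <> 1)
               t2.*     -- columns from tbl_2 (NULL when type <> 2)
        FROM tbl t
        LEFT JOIN tbl_1 t1 ON t.type = 1 AND t1.name = t.name
        LEFT JOIN tbl_2 t2 ON t.type = 2 AND t2.name = t.name;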

    Read the article

  • PHP Serialize Function - Adding serialized data to mysql and then fetch and display

    - by Abhilash Shukla
    I want to know whether the PHP serialize function is 100% secure, and whether storing serialized data in a database and working with it after fetching it is a good approach. For example: I have a website with different user privileges, and I want to store the permission settings for a particular privilege in my database (serialized with PHP's serialize function); when a user logs in, I fetch this data and set the privileges for that user. I am able to do this, but I want to know whether it is the best way or whether something more efficient can be done. Also, I was going through the PHP manual and found this code; can anybody explain a bit of what's happening in it (especially why base64_encode is used)?

        <?php
        function mySerialize( $obj ) {
            // serialize, compress, then base64-encode so the binary gzip
            // output can be stored safely in a plain text column
            return base64_encode(gzcompress(serialize($obj)));
        }

        function myUnserialize( $txt ) {
            // reverse the same steps in the opposite order
            return unserialize(gzuncompress(base64_decode($txt)));
        }
        ?>

    If somebody can also show me their own code for doing this in the most efficient manner, that would help. Thanks.

    Read the article

  • Problems getting foreign keys working in MySQL

    - by thehuby
    I've been trying to get a delete to cascade and it just doesn't seem to work. I'm sure I am missing something obvious; can anyone help me find it? I would expect a delete on the 'articles' table to trigger a delete on the corresponding rows in the 'article_section_lt' table.

        CREATE TABLE articles (
            id INTEGER UNSIGNED PRIMARY KEY AUTO_INCREMENT,
            url_stub VARCHAR(255) NOT NULL UNIQUE,
            h1 VARCHAR(60) NOT NULL UNIQUE,
            title VARCHAR(60) NOT NULL,
            description VARCHAR(150) NOT NULL,
            summary VARCHAR(150) NOT NULL DEFAULT "",
            html_content TEXT,
            date DATE NOT NULL,
            updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
        ) ENGINE=INNODB;

        CREATE TABLE article_sections ( /* blog, news etc */
            id INTEGER UNSIGNED PRIMARY KEY AUTO_INCREMENT,
            url_stub VARCHAR(255) NOT NULL UNIQUE,
            h1 VARCHAR(60) NOT NULL,
            title VARCHAR(60) NOT NULL,
            description VARCHAR(150) NOT NULL,
            summary VARCHAR(150) NOT NULL DEFAULT "",
            html_content TEXT NOT NULL DEFAULT ""
        ) ENGINE=INNODB;

        CREATE TABLE article_section_lt (
            fk_article_id INTEGER UNSIGNED NOT NULL REFERENCES articles(id) ON DELETE CASCADE,
            fk_article_section_id INTEGER UNSIGNED NOT NULL
        ) ENGINE=INNODB;
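    MySQL parses column-level REFERENCES clauses but InnoDB silently ignores them, so no foreign key is ever created. A sketch of the linking table with the constraints declared explicitly (giving the second column a matching constraint is an assumption about the intent):

        CREATE TABLE article_section_lt (
            fk_article_id INTEGER UNSIGNED NOT NULL,
            fk_article_section_id INTEGER UNSIGNED NOT NULL,
            FOREIGN KEY (fk_article_id)
                REFERENCES articles(id)
                ON DELETE CASCADE,
            FOREIGN KEY (fk_article_section_id)
                REFERENCES article_sections(id)
                ON DELETE CASCADE
        ) ENGINE=INNODB;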

    Read the article

  • MySQL - getting SUM of MAX results from 2 tables

    - by SODA
    Here's my problem: I have 2 identical tables (past month data, current month data), data_2010_03 and data_2010_04, with the columns:

        content_type (VARCHAR), content_id (INT), month_count (INT), pubDate (DATETIME)

    Data in month_count is updated hourly, so for each combination of content_type and content_id we insert a new row where the value of month_count is incrementally updated. Now I try something like this:

        SELECT MAX(t1.month_count) AS max_1,
               MAX(t2.month_count) AS max_2,
               SUM(max_1 + max_2) AS result,
               t1.content_type, t1.content_id
        FROM data_2010_03 AS t1
        JOIN data_2010_04 AS t2
          ON t1.content_type = t2.content_type
         AND t1.content_id = t2.content_id
        WHERE t2.pubDate < '2010-04-08'
          AND t1.content_type = 'video'
        GROUP BY t1.content_id
        ORDER BY result DESC, max_1 DESC, max_2 DESC
        LIMIT 0,10

    I get the error "Unknown column 'max_1' in 'field list'". Please help.
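    Column aliases defined in a SELECT list cannot be referenced again inside that same SELECT list (they are fine in ORDER BY), so SUM(max_1 + max_2) fails; repeating the aggregates is the straightforward fix. A sketch:

        SELECT MAX(t1.month_count) AS max_1,
               MAX(t2.month_count) AS max_2,
               MAX(t1.month_count) + MAX(t2.month_count) AS result,
               t1.content_type,
               t1.content_id
        FROM data_2010_03 AS t1
        JOIN data_2010_04 AS t2
          ON t1.content_type = t2.content_type
         AND t1.content_id   = t2.content_id
        WHERE t2.pubDate < '2010-04-08'
          AND t1.content_type = 'video'
        GROUP BY t1.content_id
        ORDER BY result DESC, max_1 DESC, max_2 DESC
        LIMIT 0, 10;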

    Read the article

  • Encoding issue with form and HTML Purifier / MySQL

    - by Andrew Heath
    This is driving me nuts... The page with the form is encoded as Unicode (UTF-8) via:

        <meta http-equiv="content-type" content="text/html; charset=utf-8">

    The entry column in the database is TEXT with utf8_unicode_ci collation. Copying text from a Word document with quotation marks in it, like “1922.”, insta-fails and ends up in the database as â??1922.â?? (typing new data into the form, including " characters, works fine; it is cutting and pasting from Word that breaks). The PHP steps behind the scenes are:

        1. grab the value from POST
        2. run it through HTML Purifier with default settings
        3. run it through mysql_real_escape_string
        4. insert the query into the database

    Help?
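    One likely cause (a guess, since the connection setup isn't shown): the MySQL client connection itself is not set to UTF-8, so the multi-byte curly quotes from Word get reinterpreted in a single-byte charset on the way in. Setting the connection charset right after connecting usually fixes this kind of mangling:

        SET NAMES utf8;
        -- in PHP, mysql_set_charset('utf8') issues the equivalent and also
        -- keeps mysql_real_escape_string aware of the connection charset.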

    Read the article

  • MySQL query killing my server

    - by Webnet
    Looking at this query, there has to be something bogging it down that I'm not noticing. I ran it for 7 minutes and it only updated 2 rows.

        //set product count for makes
        $tru->query->run(array(
            'name' => 'get-make-list',
            'sql' => 'SELECT id, name FROM vehicle_make',
            'connection' => 'core'
        ));
        while($tempMake = $tru->query->getArray('get-make-list')) {
            $tru->query->run(array(
                'name' => 'update-product-count',
                'sql' => 'UPDATE vehicle_make SET product_count = (
                        SELECT COUNT(product_id)
                        FROM taxonomy_master
                        WHERE v_id IN (
                            SELECT id FROM vehicle_catalog WHERE make_id = '.$tempMake['id'].'
                        )
                    ) WHERE id = '.$tempMake['id'],
                'connection' => 'core'
            ));
        }

    I'm sure this query can be optimized to perform better, but I can't think of how to do it.

        vehicle_make    =     45 rows
        taxonomy_master = 11,223 rows
        vehicle_catalog =  5,108 rows

    All tables have appropriate indexes.
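    A sketch of collapsing the PHP loop into a single statement, so MySQL updates all 45 makes in one pass (table and column names are taken from the question):

        UPDATE vehicle_make vm
        SET vm.product_count = (
            SELECT COUNT(tm.product_id)
            FROM taxonomy_master tm
            JOIN vehicle_catalog vc ON vc.id = tm.v_id
            WHERE vc.make_id = vm.id
        );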

    Read the article

  • adding DATE_SUB to query to return range of values in mysql

    - by ian
    Here is my original query:

        $query = mysql_query("SELECT s.*, UNIX_TIMESTAMP(`date`) AS `date`, f.userid as favoritehash
                              FROM songs s
                              LEFT JOIN favorites f ON f.favorite = s.id AND f.userid = '$userhash'
                              ORDER BY s.date DESC");

    This returns all the songs in my DB and then joins data from my favorites table so I can display which items a returning visitor has clicked as favorites. Visitors are recognized by a unique hash stored in a cookie and in the favorites table. I need to alter this query so that I get just the last month's worth of songs. Below is my attempt at adding DATE_SUB to the query:

        $query = mysql_query("SELECT s.*, UNIX_TIMESTAMP(`date`) AS `date`, f.userid as favoritehash
                              FROM songs s
                              WHERE `date` >= DATE_SUB( NOW( ) , INTERVAL 1 MONTH )
                              LEFT JOIN favorites f ON f.favorite = s.id AND f.userid = '$userhash'
                              ORDER BY s.date DESC");

    Suggestions?
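    The WHERE clause has to come after all of the JOINs, which is most likely why the second query fails. A sketch of the corrected SQL (qualifying date with the songs alias to avoid ambiguity):

        SELECT s.*, UNIX_TIMESTAMP(s.`date`) AS `date`, f.userid AS favoritehash
        FROM songs s
        LEFT JOIN favorites f
               ON f.favorite = s.id
              AND f.userid = '$userhash'
        WHERE s.`date` >= DATE_SUB(NOW(), INTERVAL 1 MONTH)
        ORDER BY s.`date` DESC;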

    Read the article

  • Caching Mysql database for better performance

    - by kobey
    I'm using the Amazon cloud and I have a performance issue, since the disk is not local to my machine. My database is small (~500 MB) and I can afford to keep all of it in RAM. I do not want to keep only queries in RAM; I need all the tables there. How can I do it? Thanks, Koby. P.S. I'm using Ubuntu Server.
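    For InnoDB the usual approach is simply to make the buffer pool larger than the data set, so after warm-up every page is served from memory. A sketch (the 1G value is illustrative; set it in my.cnf under [mysqld] and restart mysqld):

        -- my.cnf:  innodb_buffer_pool_size = 1G
        -- then confirm the running value and watch disk reads stop growing once warm:
        SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
        SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';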

    Read the article

  • mysql 2 primary keys on one table

    - by Bharanikumar
    Consider this statement:

        CREATE TABLE Orders (
            ID SMALLINT UNSIGNED NOT NULL,
            ModelID SMALLINT UNSIGNED NOT NULL,
            Descrip VARCHAR(40),
            PRIMARY KEY (ID, ModelID)
        );

    Basically, may I know: can we create two primary keys on one table? Is that correct? Because as I understand the SQL rules, we can create any number of unique keys on one table but only one primary key. So how is my system allowing me to create multiple primary keys? Please advise what the general rule is.

    Read the article

  • MySQL - Select all as one string

    - by poru
    How could I select all rows as one string separated with a comma? Example table:

        Table
        --------------
        Stringtest
        Examplestring2
        Anotherstring
        Otherstring

    The selected result should be Stringtest,Examplestring2,Anotherstring,Otherstring.
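    GROUP_CONCAT does exactly this; a sketch with placeholder table and column names, since the question doesn't give them:

        SELECT GROUP_CONCAT(val SEPARATOR ',') AS all_values
        FROM stringtable;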

    Read the article

  • What does this MySQL statement do?

    - by user198729
    What does this statement do?

        INSERT IGNORE INTO `PREFIX_tab_lang` (`id_tab`, `id_lang`, `name`)
        (SELECT `id_tab`, id_lang,
                (SELECT tl.`name`
                 FROM `PREFIX_tab_lang` tl
                 WHERE tl.`id_lang` = (SELECT c.`value`
                                       FROM `PREFIX_configuration` c
                                       WHERE c.`name` = 'PS_LANG_DEFAULT'
                                       LIMIT 1)
                   AND tl.`id_tab` = `PREFIX_tab`.`id_tab`)
         FROM `PREFIX_lang` CROSS JOIN `PREFIX_tab`);

    It's from an open-source project, and no documentation is available. In particular, what does CROSS JOIN mean? I've only used JOIN / LEFT JOIN.
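    Roughly: a CROSS JOIN is a plain cartesian product, pairing every row of PREFIX_lang with every row of PREFIX_tab, so the statement inserts one PREFIX_tab_lang row per (language, tab) pair, copying each tab's name from the default-language row, while INSERT IGNORE skips pairs that already exist. A tiny illustration of the cross join on its own:

        SELECT l.id_lang, t.id_tab
        FROM `PREFIX_lang` l
        CROSS JOIN `PREFIX_tab` t;
        -- e.g. 3 languages x 5 tabs -> 15 rows, one per combination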

    Read the article

  • Problem with WHERE columnName = Data in MySQL query in C#

    - by Ryan Sullivan
    I have a C# web service on a Windows server that I am interfacing with via PHP on a Linux server. The PHP grabs information from the database, and the page offers a "more information" button which then calls the web service and passes in the name field of the record as a parameter. I use a WHERE clause in my query so I only pull the extra fields for that record. I am getting the error:

        System.Data.SqlClient.SqlException: Invalid column name '42'

    where 42 is the value of the name field for the record. My query is:

        string selectStr = "SELECT name, castNotes, triviaNotes FROM tableName WHERE name =\"" + show + "\"";

    I don't know whether the problem is with my query or with the database, but here is the rest of my code for reference. NOTE: this all works perfectly when I grab all of the records; it only fails when I ask the web service for a single one.

        public class ktvService : System.Web.Services.WebService {

            [WebMethod]
            public string moreInfo(string show) {
                string connectionStr = "MyConnectionString";
                string selectStr = "SELECT name, castNotes, triviaNotes FROM tableName WHERE name =\"" + show + "\"";
                SqlConnection conn = new SqlConnection(connectionStr);
                SqlDataAdapter da = new SqlDataAdapter(selectStr, conn);
                DataSet ds = new DataSet();
                da.Fill(ds, "tableName");
                DataTable dt = ds.Tables["tableName"];
                DataRow theShow = dt.Rows[0];
                string response = "Name: " + theShow["name"].ToString()
                                + "Cast: " + theShow["castNotes"].ToString()
                                + " Trivia: " + theShow["triviaNotes"].ToString();
                return response;
            }
        }
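    Most likely the double quotes are the culprit: SqlClient talks to SQL Server, where double-quoted values are treated as identifiers, so WHERE name = "42" looks for a column called 42. String literals take single quotes, and binding the value as a parameter is safer still. A sketch of the SQL side:

        SELECT name, castNotes, triviaNotes
        FROM tableName
        WHERE name = '42';    -- or WHERE name = @show with a bound parameter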

    Read the article

  • auto_increment in MySQL - can I omit it?

    - by kees-kist
    I've noticed that phpMyAdmin creates the following SQL for table creation:

        CREATE TABLE something ( ... ) auto_increment=1;

    When I write a database creation script I don't use the auto_increment bit. From reading related questions here, I understand that it determines the starting value for auto-increment columns. But is it good practice to reset it to 1, or should I just leave it out of the SQL so that the default is used?

    Read the article

  • Getting mysql row that doesn't conflict with another row

    - by user939951
    I have two tables that link together through an id: submit_moderate and submit_post. The submit_moderate table looks like this:

        id | moderated_by | post
        ---+--------------+-----
         1 | James        |   60
         2 | Alice        |   32
         3 | Tim          |   18
         4 | Michael      |   60

    I'm using a simple query to get data from the submit_post table according to the submit_moderate table:

        $get_posts = mysql_query("SELECT * FROM submit_moderate WHERE moderated_by!='$user'");

    where $user is the person who is signed in. My problem is that when I run this query as the user Michael it retrieves:

        1 | James | 60
        2 | Alice | 32
        3 | Tim   | 18

    Technically this is correct; however, I don't want to retrieve the first row, because post 60 is associated with Michael as well as James. Basically I don't want to retrieve that value 60 at all. I know why this is happening, but I can't figure out how to avoid it. I appreciate any hints or advice.
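    A sketch of excluding every post the signed-in user has already moderated, not just the rows they moderated themselves:

        SELECT *
        FROM submit_moderate
        WHERE moderated_by != '$user'
          AND post NOT IN (
              SELECT post
              FROM submit_moderate
              WHERE moderated_by = '$user'
          );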

    Read the article
