Search Results

Search found 130 results on 6 pages for 'collate'.

Page 2 of 6

  • Alter charset and collation in all columns in all tables in MySQL

    - by The Disintegrator
    I need to execute these statements for all columns in all tables. alter table table_name charset=utf8; alter table table_name alter column column_name charset=utf8; Is it possible to automate this in any way inside MySQL? I would prefer to avoid mysqldump. Update: Richard Bronosky showed me the way :-) The query I needed to execute for every table: alter table DBname.TableName CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci; Crazy query to generate all the other queries: SELECT distinct CONCAT( 'alter table ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;' ) FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'DBname'; I only wanted to execute it in one database, and it was taking too long to run in one pass. It turned out that it was generating one query per field per table, when only one query per table was necessary (distinct to the rescue). Writing the output to a file is how I realized that. To write the generated queries to a file: mysql -B -N --user=user --password=secret -e "SELECT distinct CONCAT( 'alter table ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;' ) FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'DBname';" > alter.sql And finally, to execute all the queries: mysql --user=user --password=secret < alter.sql Thanks Richard. You're the man!
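
    A side note for anyone reusing this: the intermediate file can be skipped by piping the generator straight back into a second client session. A minimal sketch, assuming the same credentials and database name as above:

    mysql -B -N --user=user --password=secret -e "SELECT DISTINCT CONCAT('alter table ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;') FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'DBname';" | mysql --user=user --password=secret

    Keeping the alter.sql file does have the advantage of leaving an audit trail of exactly what was run.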

    Read the article

  • Problem displaying all data on one page

    - by user318068
    Hi, I have a problem with the following code. It should display all of a member's pending invites on one page, so if he has five invites, all five should appear together. Instead, only one invite is displayed, and the next one only shows up after the first has been accepted or rejected. That is not what I want; I want to show them all on one page. I think the problem is in the loop structure of the code. My code: <?php session_start(); if (!isset($_SESSION['user_id'])) { header("Location: login.php"); } $id=$_SESSION['user_id']; ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> </head> <body> <center> <?php include("connect.php"); $sql =mysql_query("select * from ninvite where recieverMemberID ='$id' and viwed= '0'"); $num =mysql_num_rows($sql); echo $num ; if ($num>0) { while($row=mysql_fetch_array($sql)) { $sender=$row['SenderMemberID']; $room=$row['RoomID']; $sql =mysql_query("select MemberName from members where MemberID ='$sender' "); $sql1 =mysql_query("select RoomName from rooms where RoomID ='$room' "); while($row=mysql_fetch_array($sql)) {$mem =$row['MemberName']; } while($rows=mysql_fetch_array($sql1)) { $Ro =$rows['RoomName']; ?> <form action="join.php" method="post"> <label> </label> <br/> <label> <?php echo " you have invite from $mem to join $Ro"; ?> </label> <br/><br/> <label>accept</label> <input name="radio1" type="radio" value="accpet" /> <label>reject</label> <input name="radio1" type="radio" value="Reject" /><br/> <input type="submit" name="submit" value="done" /> </form> <?php } } } ?> </center> </body> </html> Thanks a lot. My SQL dump: -- phpMyAdmin SQL Dump -- version 3.2.4 -- http://www.phpmyadmin.net -- Host: localhost -- Generation Time: May 07, 2010 at 12:50
    -- Server version: 5.1.41 -- PHP Version: 5.3.1 SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO"; /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */; /*!40101 SET NAMES utf8 */; -- -- Database: tr -- -- Table structure for table joinroom CREATE TABLE IF NOT EXISTS joinroom ( MemberID int(10) NOT NULL, RoomID int(10) NOT NULL, PRIMARY KEY (MemberID,RoomID) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; -- -- Dumping data for table joinroom INSERT INTO joinroom (MemberID, RoomID) VALUES (28, 1); -- -- Table structure for table members CREATE TABLE IF NOT EXISTS members ( MemberID int(10) unsigned NOT NULL AUTO_INCREMENT, MemberName varchar(20) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, MemberPass varchar(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, MemberEmail varchar(30) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, MemberLocation text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, MemberImg text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, PRIMARY KEY (MemberID) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=34 ; -- -- Dumping data for table members INSERT INTO members (MemberID, MemberName, MemberPass, MemberEmail, MemberLocation, MemberImg) VALUES (28, 'marwa', '1234', '[email protected]', 'mmmmmm', 'dddddddddd'), (29, 'nora', '1234', '[email protected]', 'fffffffffffgg', 'gggggggggggggg'), (30, 'soso', '1234', '[email protected]', 'ffffffff', 'kkkkkkkkkkkkkkkkkk'), (31, 'gege', '1234', '[email protected]', 'kkkkkkkkkkkkkkkk', 'uuuuuuuuuuuuuuuuu'), (32, 'nono', '1234', '[email protected]', 'ggggggggggggaaaaa', 'aaaaaaaaaaaaaaa'), (33, 'nda', '1234', '[email protected]', 'kkkkkkkkkkkkkkkk', 'ooooooooooooooo'); -- -- Table structure for table ninvite CREATE TABLE IF NOT EXISTS ninvite ( SenderMemberID int(11) NOT NULL AUTO_INCREMENT, recieverMemberID varchar(30) NOT NULL, RoomID int(11) NOT NULL, viwed int(11) NOT NULL, PRIMARY KEY (SenderMemberID,recieverMemberID,RoomID) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=33 ; -- -- Dumping data for table ninvite INSERT INTO ninvite (SenderMemberID, recieverMemberID, RoomID, viwed) VALUES (28, '33', 1, 0), (28, '32', 1, 0), (28, '31', 1, 0); /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */; /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */; /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
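
    For what it's worth, a likely culprit in the code above: the inner lookups assign their results to $sql, the same variable the outer while loop is still reading from, so the loop ends after the first invite. One hedged way around it, sketched against the dump above, is to fetch everything in a single JOIN so there is only one result set to iterate (the rooms table is assumed to have RoomID and RoomName columns, as the code implies; it is not part of the dump):

    SELECT n.SenderMemberID, n.RoomID, m.MemberName, r.RoomName
    FROM ninvite n
    JOIN members m ON m.MemberID = n.SenderMemberID
    JOIN rooms r ON r.RoomID = n.RoomID
    WHERE n.recieverMemberID = '$id' AND n.viwed = 0;

    A single while loop over that result can then render one form per invite.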

    Read the article

  • MySQL German accent-insensitive search in full-text searches

    - by lukaszsadowski
    Let's take an example hotels table: CREATE TABLE `hotels` ( `HotelNo` varchar(4) character set latin1 NOT NULL default '0000', `Hotel` varchar(80) character set latin1 NOT NULL default '', `City` varchar(100) character set latin1 default NULL, `CityFR` varchar(100) character set latin1 default NULL, `Region` varchar(50) character set latin1 default NULL, `RegionFR` varchar(100) character set latin1 default NULL, `Country` varchar(50) character set latin1 default NULL, `CountryFR` varchar(50) character set latin1 default NULL, `HotelText` text character set latin1, `HotelTextFR` text character set latin1, `tagsforsearch` text character set latin1, `tagsforsearchFR` text character set latin1, PRIMARY KEY (`HotelNo`), FULLTEXT KEY `fulltextHotelSearch` (`HotelNo`,`Hotel`,`City`,`CityFR`,`Region`,`RegionFR`,`Country`,`CountryFR`,`HotelText`,`HotelTextFR`,`tagsforsearch`,`tagsforsearchFR`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_german1_ci; In this table we have, for example, only one hotel with Region name = "Graubünden" (note the umlaut ü character). Now I want both phrases, 'graubunden' and 'graubünden', to produce the same search match. This is simple with MySQL's built-in collations in regular searches, as follows: SELECT * FROM `hotels` WHERE `Region` LIKE CONVERT(_utf8 '%graubunden%' USING latin1) COLLATE latin1_german1_ci This works fine for both 'graubunden' and 'graubünden' and I receive the proper result, but the problem arises with MySQL full-text search. What's wrong with this SQL statement?: SELECT * FROM hotels WHERE MATCH (`HotelNo`,`Hotel`,`Address`,`City`,`CityFR`,`Region`,`RegionFR`,`Country`,`CountryFR`, `HotelText`, `HotelTextFR`, `tagsforsearch`, `tagsforsearchFR`) AGAINST( CONVERT('+graubunden' USING latin1) COLLATE latin1_german1_ci IN BOOLEAN MODE) ORDER BY Country ASC, Region ASC, City ASC This doesn't return any results. Any ideas where the dog is buried?
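
    One thing worth checking before digging into collations: a full-text MATCH() can use the fulltextHotelSearch index only when its column list matches the index definition exactly, and the failing query adds an Address column that is not part of that index (or even of the table as defined above). A hedged first step is to retry with exactly the indexed columns:

    SELECT * FROM hotels
    WHERE MATCH (HotelNo, Hotel, City, CityFR, Region, RegionFR, Country, CountryFR, HotelText, HotelTextFR, tagsforsearch, tagsforsearchFR)
    AGAINST ('+graubunden' IN BOOLEAN MODE)
    ORDER BY Country ASC, Region ASC, City ASC;

    Since the index is built with the table's latin1_german1_ci collation, under which u and ü compare equal, the CONVERT/COLLATE wrapping should not be needed for the indexed search.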

    Read the article

  • Two n x m relationships with the same table in mysql

    - by Christian
    I want to create a database in which there's an n x m relationship between the table drug and the table article and an n x m relationship between the table target and the table article. I get the error: Cannot delete or update a parent row: a foreign key constraint fails What do I have to change in my code? DROP TABLE IF EXISTS `textmine`.`article`; CREATE TABLE `textmine`.`article` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Pubmed ID', `abstract` blob NOT NULL, `authors` blob NOT NULL, `journal` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; DROP TABLE IF EXISTS `textmine`.`drugs`; CREATE TABLE `textmine`.`drugs` ( `id` int(10) unsigned NOT NULL COMMENT 'This ID is taken from the biosemantics dictionary', `primaryName` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; DROP TABLE IF EXISTS `textmine`.`targets`; CREATE TABLE `textmine`.`targets` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `primaryName` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; DROP TABLE IF EXISTS `textmine`.`containstarget`; CREATE TABLE `textmine`.`containstarget` ( `targetid` int(10) unsigned NOT NULL, `articleid` int(10) unsigned NOT NULL, KEY `target` (`targetid`), KEY `article` (`articleid`), CONSTRAINT `article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`), CONSTRAINT `target` FOREIGN KEY (`targetid`) REFERENCES `targets` (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; DROP TABLE IF EXISTS `textmine`.`contiansdrug`; CREATE TABLE `textmine`.`contiansdrug` ( `drugid` int(10) unsigned NOT NULL, `articleid` int(10) unsigned NOT NULL, KEY `drug` (`drugid`), KEY `article` (`articleid`), CONSTRAINT `article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`), CONSTRAINT `drug` FOREIGN KEY (`drugid`) REFERENCES `drugs` (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
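
    Two observations that may explain the error, offered as a sketch rather than a verified fix: MySQL requires foreign key constraint names to be unique across the whole database, and the schema above declares a constraint named article in both link tables; also, the DROP TABLE statements run against parents (article, drugs, targets) while link tables may still reference them. A version that drops the children first and gives each constraint a unique name:

    DROP TABLE IF EXISTS `textmine`.`containstarget`;
    DROP TABLE IF EXISTS `textmine`.`contiansdrug`;
    -- recreate article, drugs and targets exactly as above, then:
    CREATE TABLE `textmine`.`containstarget` (
      `targetid` int(10) unsigned NOT NULL,
      `articleid` int(10) unsigned NOT NULL,
      KEY `target` (`targetid`),
      KEY `article` (`articleid`),
      CONSTRAINT `containstarget_article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`),
      CONSTRAINT `containstarget_target` FOREIGN KEY (`targetid`) REFERENCES `targets` (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    CREATE TABLE `textmine`.`contiansdrug` (
      `drugid` int(10) unsigned NOT NULL,
      `articleid` int(10) unsigned NOT NULL,
      KEY `drug` (`drugid`),
      KEY `article` (`articleid`),
      CONSTRAINT `contiansdrug_article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`),
      CONSTRAINT `contiansdrug_drug` FOREIGN KEY (`drugid`) REFERENCES `drugs` (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;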

    Read the article

  • Masspay and MySql

    - by Mike
    Hi, I am testing PayPal's MassPay using their 'MassPay NVP example' and I am having difficulty amending the code so it takes its input data from my MySQL database. Basically I have a users table in MySQL which contains an email address, a payment status (paid/unpaid) and a balance. CREATE TABLE `users` ( `user_id` int(10) unsigned NOT NULL auto_increment, `email` varchar(100) collate latin1_general_ci NOT NULL, `status` enum('unpaid','paid') collate latin1_general_ci NOT NULL default 'unpaid', `balance` int(10) NOT NULL default '0', PRIMARY KEY (`user_id`) ) ENGINE=MyISAM AUTO_INCREMENT=6 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci Data:
    1 [email protected] paid 100
    2 [email protected] unpaid 11
    3 [email protected] unpaid 20
    4 [email protected] unpaid 1
    5 [email protected] unpaid 20
    6 [email protected] unpaid 15
    I then created a query which selects users with an unpaid balance of $10 or above: $conn = db_connect(); $query=$conn->query("SELECT * from users WHERE balance >='10' AND status = ('unpaid')"); What I would like is for each record returned by the query to populate the code below. The code I believe I need to amend is as follows: for($i = 0; $i < 3; $i++) { $receiverData = array( 'receiverEmail' => "[email protected]", 'amount' => "example_amount",); $receiversArray[$i] = $receiverData; } However I just can't get it to work. I have tried using mysqli_fetch_array and then replacing "[email protected]" with $row['email'] and "example_amount" with $row['balance'] in various ways, but it doesn't work. Also, it needs to loop over however many rows the query returns, rather than the fixed < 3 in the for loop above. So the end result I am looking for is for the $nvpStr string to be passed with something like this: $nvpStr = "&EMAILSUBJECT=test&RECEIVERTYPE=EmailAddress&CURRENCYCODE=USD&[email protected]&L_Amt=11&[email protected]&L_Amt=11&[email protected]&L_Amt=20&[email protected]&L_Amt=20&[email protected]&L_Amt=15"; Thanks
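
    Since the surrounding PayPal code is standard, one hedged way to sidestep the PHP loop entirely is to let MySQL assemble the &L_EMAILn/&L_Amt=n pairs with GROUP_CONCAT, using a counter variable for the index n. Treat this strictly as a sketch: user-variable evaluation order inside a SELECT is not guaranteed, and group_concat_max_len may need raising for long lists:

    SET @i := -1;
    SELECT GROUP_CONCAT(
             CONCAT('&L_EMAIL', @i := @i + 1, '=', email, '&L_Amt=', balance)
             SEPARATOR '') AS receiver_nvp
    FROM users
    WHERE balance >= 10 AND status = 'unpaid';

    The resulting string can then be appended to the fixed &EMAILSUBJECT=test&RECEIVERTYPE=EmailAddress&CURRENCYCODE=USD prefix.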

    Read the article

  • SQL SERVER – Effect of Collation on Resultset – SQL in Sixty Seconds #026 – Video

    - by pinaldave
    Collation is a very important concept, but it is often ignored. I have often seen developers either not understand it or simply ignore it, which is plain wrong. In simple words, collation determines how SQL Server interprets, compares and sorts character data. Well, in today’s SQL in Sixty Seconds we are going to observe how collation affects the resultset. Today’s blog post is inspired from my earlier blog post SQL SERVER – Effect of Case Sensitive Collation on Resultset. I strongly encourage you to read this earlier blog post for sample code as well as additional explanation related to the concept shared in today’s SQL in Sixty Seconds. Here is the code used in the video. USE TempDB GO -- Sample Data Building CREATE TABLE ColTable (Col1 VARCHAR(15) COLLATE Latin1_General_CI_AS, Col2 VARCHAR(14) COLLATE Latin1_General_CS_AS) ; INSERT ColTable(Col1, Col2) VALUES ('Apple','Apple'), ('apple','apple'), ('pineapple','pineapple'), ('Pineapple','Pineapple'); GO -- Retrieve Data SELECT * FROM ColTable GO -- Retrieve Data SELECT * FROM ColTable ORDER BY Col1 GO -- Retrieve Data SELECT * FROM ColTable ORDER BY Col2 GO -- Clean up DROP TABLE ColTable GO Related Tips in SQL in Sixty Seconds: SQL SERVER – Effect of Case Sensitive Collation on Resultset Example of Width Sensitive and Width Insensitive Collation Collation and Collation Sensitivity – Quiz – Puzzle – 6 of 31 Change Collation of Database Column – T-SQL Script Find Collation of Database and Table Column Using T-SQL Default Collation of SQL Server 2008 Cannot resolve collation conflict for equal to operation If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
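
    A small companion sketch to the script above: collation can also be overridden per expression with the COLLATE clause, so a single query can compare or sort a case-insensitive column case-sensitively without altering the table:

    USE TempDB
    GO
    SELECT * FROM ColTable WHERE Col1 = 'apple' COLLATE Latin1_General_CS_AS
    GO
    SELECT * FROM ColTable ORDER BY Col1 COLLATE Latin1_General_CS_AS
    GO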

    Read the article

  • --log-slave-updates is OFF but some updates are still logged to the slave binary log?

    - by quanta
    MySQL version 5.5.14. According to the documentation, by default a slave does not log to its binary log any updates that are received from a master server. Here is my config on the slave: # egrep 'bin|slave' /etc/my.cnf relay-log=mysqld-relay-bin log-bin = /var/log/mysql/mysql-bin binlog-format=MIXED sync_binlog = 1 log-bin-trust-function-creators = 1 mysql> show global variables like 'log_slave%'; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | log_slave_updates | OFF | +-------------------+-------+ 1 row in set (0.01 sec) mysql> select @@log_slave_updates; +---------------------+ | @@log_slave_updates | +---------------------+ | 0 | +---------------------+ 1 row in set (0.00 sec) But the slave still logs some changes to its binary logs; look at the file sizes: -rw-rw---- 1 mysql mysql 37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256 -rw-rw---- 1 mysql mysql 25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257 -rw-rw---- 1 mysql mysql 46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258 -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259 -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260 And here is a sample query seen when reading these binary files with the mysqlbinlog utility: #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0 SET TIMESTAMP=1333541337/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci')) /*!*/; # at 110324763 Did I miss something? Reply to @RolandoMySQLDBA: If replication brought this over, then the same query has to be in the relay logs. Please go find the relay log that has the INSERT query with the same TIMESTAMP (1333541337). There is no such query with the same TIMESTAMP in the relay logs. If you cannot find it in the relay logs, then look and see if Infobright is posting the INSERT query. In that instance, the INSERT should be recorded in the binary logs of the Slave. Looking more deeply into the binary logs, I see that almost all of the queries are CREATE/INSERT/UPDATE/DROP "temporary" tables, something like this: # at 123873315 #120405 0:42:04 server id 3 end_log_pos 123873618 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; SET @@session.pseudo_thread_id=395373/*!*/; CREATE TEMPORARY TABLE `norep_tmpcampaign` ( `campaignid` INTEGER(11) NOT NULL DEFAULT '0' , `status` INTEGER(11) NOT NULL DEFAULT '0' , `updated` DATETIME, KEY `campaignid` (`campaignid`) )ENGINE=MEMORY /*!*/; # at 123873618 #120405 0:42:04 server id 3 end_log_pos 123873755 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; DROP TABLE IF EXISTS `norep_tmpcampaign1` /* generated by server */ "Temporary" here means that they are dropped after the calculation is done.
    I also tell the slave not to replicate any statement matching the norep_ wildcard pattern: replicate-wild-ignore-table=%.norep_% But there is one table that is an exception in the binary logs: # at 123828094 #120405 0:37:21 server id 3 end_log_pos 123828495 Query thread_id=395209 exec_time=0 error_code=0 SET TIMESTAMP=1333561041/*!*/; INSERT INTO sessions (SessionId, ApplicationName, Created, Expires, LockDate, LockId, Timeout, Locked, SessionItems, Fla gs) Values('pgv2exo4y4vo4ccz44vwznu0', '/', '2012-04-05 00:37:21', '2012-04-05 00:57:21', '2012-04-05 00:37:21', 0, 20, 0, 'AwAAAP////8IdXNlcm5hbWUGdXNlcmlkCHBlcm1pdGlkAgAAAAQAAAAGAAAAAQABAAEA', 0) /*!*/; Description: mysql> desc reportingdb.sessions; +-----------------+------------------+------+-----+---------------------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------------+------------------+------+-----+---------------------+-------+ | SessionId | varchar(64) | NO | PRI | | | | ApplicationName | varchar(255) | NO | | | | | Created | timestamp | NO | | 0000-00-00 00:00:00 | | | Expires | timestamp | NO | | 0000-00-00 00:00:00 | | | LockDate | timestamp | NO | | 0000-00-00 00:00:00 | | | LockId | int(11) unsigned | NO | | NULL | | | Timeout | int(11) unsigned | NO | | NULL | | | Locked | bit(1) | NO | | NULL | | | SessionItems | varchar(255) | YES | | NULL | | | Flags | int(11) | NO | | NULL | | +-----------------+------------------+------+-----+---------------------+-------+ I'm sure all these queries are posted by MySQL, not Infobright: $ mysql-ib -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 48971 Server version: 5.1.40 build number (revision)=IB_4.0.5_r15240_15370(ice) (static) Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> select * from information_schema.tables where table_name='sessions'; Empty set (0.02 sec) I've been trying some INSERT/UPDATE queries with test tables on the master; they are copied to the relay logs, not the binary logs, on the slave: # at 311664029 #120405 0:15:23 server id 1 end_log_pos 311664006 Query thread_id=10458250 exec_time=0 error_code=0 use testuser/*!*/; SET TIMESTAMP=1333559723/*!*/; update users set email2='[email protected]' where id=22 /*!*/; Pay attention to the server id: in the relay logs the server id is the master's (1), and in the binary logs it is the slave's (3 in this case). Reply to @RolandoMySQLDBA: Thu Apr 5 10:06:00 ICT 2012 Run CREATE DATABASE quantatest; on the Master now, please. Tell me if CREATE DATABASE quantatest; showed up in the Slave's Binary Logs. As I said above, I've been trying some INSERT/UPDATE queries with test tables on the master, and they are copied to the relay logs, not the binary logs; as you can guess, the I/O thread copied them to the relay logs, not the binary logs. #120405 10:07:25 server id 1 end_log_pos 347573819 Query thread_id=10480775 exec_time=0 error_code=0 SET TIMESTAMP=1333595245/*!*/; /*!\C latin1 *//*!*/; SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/; create database quantatest /*!*/; The question should probably change to: why are some update queries still logged to the slave's binary logs although --log-slave-updates is disabled? Where do they come from?
    Here are a few of the most recent entries: /*!*/; # at 27492197 #120405 10:12:45 server id 3 end_log_pos 27492370 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; CREATE TEMPORARY TABLE norep_SplitValues ( value VARCHAR(1000) NOT NULL ) ENGINE=MEMORY /*!*/; # at 27492370 #120405 10:12:45 server id 3 end_log_pos 27492445 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; BEGIN /*!*/; # at 27492445 #120405 10:12:45 server id 3 end_log_pos 27492619 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'119577' COLLATE 'utf8_general_ci')) /*!*/; # at 27492619 #120405 10:12:45 server id 3 end_log_pos 27492695 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; COMMIT /*!*/; # at 27492918 #120405 10:12:46 server id 3 end_log_pos 27493115 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; SELECT `reportingdb`.`selfserving_get_locationad`(_utf8'3' COLLATE 'utf8_general_ci',_utf8'' COLLATE 'utf8_general_ci') /*!*/; # at 27493199 #120405 10:12:46 server id 3 end_log_pos 27493353 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; /*!\C utf8 *//*!*/; SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/; DROP TEMPORARY TABLE IF EXISTS `norep_SplitValues` /* generated by server */ /*!*/;
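
    A hedged suggestion for tracking down the source of those locally logged statements: events written with the slave's own server id must come from direct client connections (or routines and events they invoke) rather than from replication, so watching the processlist while the pattern recurs should name the culprit if a statement is caught in flight:

    SELECT id, user, host, db, info
    FROM information_schema.PROCESSLIST
    WHERE info LIKE '%norep_SplitValues%';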

    Read the article

  • Table character encoding - exception in application

    - by zgnilec
    I have this code: CREATE TABLE IF NOT EXISTS Person ( name varchar(24) ... ) CHARACTER SET utf8 COLLATE utf8_polish_ci; This works OK in my application, but I read that if someone puts into the name field a string containing a character whose code is greater than 127, the database will use 2 bytes (or more) to store that character. So I thought I would change the character set to utf16: CHARACTER SET utf16 COLLATE utf16_polish_ci; But now when I run my application, an exception appears: KeyNotFoundException. It appears exactly at these instructions: MySqlCommand komenda = baza.Polaczenie.CreateCommand (); komenda.CommandText = zapytanie; MySqlDataReader dr = komenda.ExecuteReader (); // HERE, at the ExecuteReader method if (dr.Read ()) ... 1) Has anyone had a similar problem? 2) Any idea how to always use 2 bytes per character in a database field?
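
    A possibly relevant detail: MySQL does not allow utf16 (nor ucs2/utf32) as a connection character set, and a server announcing an unexpected character set is a plausible way for Connector/NET to end up throwing KeyNotFoundException. A sketch that keeps the connection in utf8 but stores the column in utf16, noting that utf16 only guarantees 2 bytes per character for BMP characters, not for all of Unicode:

    CREATE TABLE IF NOT EXISTS Person (
      name varchar(24) CHARACTER SET utf16 COLLATE utf16_polish_ci
    ) CHARACTER SET utf8 COLLATE utf8_polish_ci;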

    Read the article

  • Problem with joining to an empty table

    - by Imran Omar Bukhsh
    I use the following query: select * from A LEFT JOIN B on ( A.t_id != B.t_id) to get all the records in A that are not in B. The results are fine except when table B is completely empty; then I do not get any records, even from table A. Update: it still won't work! CREATE TABLE IF NOT EXISTS T1 ( id int(11) unsigned NOT NULL AUTO_INCREMENT, title varchar(50) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL, t_id int(11) NOT NULL, PRIMARY KEY (id) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ; -- -- Dumping data for table T1 INSERT INTO T1 (id, title, t_id) VALUES (1, 'apple', 1), (2, 'orange', 2); -- -- Table structure for table T2 CREATE TABLE IF NOT EXISTS T2 ( id int(11) NOT NULL AUTO_INCREMENT, title varchar(50) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL, t_id int(11) NOT NULL, PRIMARY KEY (id) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ; -- -- Dumping data for table T2 INSERT INTO T2 (id, title, t_id) VALUES (1, 'dad', 2); Now I want to get all records in T1 that do not have a corresponding record in T2. I tried SELECT * FROM T1 LEFT OUTER JOIN T2 ON T1.t_id != T2.t_id and it won't work
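
    For the record, a != condition in an outer join does not express "not in B": it matches each row of the left table with every right-side row that has a different t_id, so almost everything joins. The usual anti-join shape is an equality LEFT JOIN filtered on the unmatched side being NULL, which also behaves correctly when the right table is empty; a sketch against the tables above:

    SELECT T1.*
    FROM T1
    LEFT OUTER JOIN T2 ON T1.t_id = T2.t_id
    WHERE T2.t_id IS NULL;

    With the sample data this returns only the 'apple' row (t_id 1), since t_id 2 exists in T2.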

    Read the article

  • Fun with SQL Server Profiler trace files and PowerShell

    Running Profiler traces against multiple servers becomes a painful process when it’s time to collate and filter all that data. It would be time-consuming, frustrating and messy if Laerte hadn’t written this handy PowerShell script (complete with examples) to help you out.

    Read the article

  • More useful SQL Server Service Broker Queries

    - by ChrisD
    SELECT 'Checking Broker Service Status...'
    IF (select Top 1 is_broker_enabled from sys.databases where name = 'NWMESSAGE')=1
        SELECT ' Broker Service IS Enabled'  -- Should return a 1.
    ELSE
        SELECT '** Broker Service IS DISABLED ***'
    /* If Is_Broker_enabled returns 0, uncomment and run this code
    ALTER DATABASE NWMESSAGE SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    GO
    Alter Database NWMESSAGE Set enable_broker
    GO
    ALTER DATABASE NWDataChannel SET MULTI_USER
    GO
    */
    SELECT 'Checking For Disabled Queues....'
    -- ensure the queues are enabled; 0 indicates the queue is disabled.
    Select '** Receive Queue Disabled: '+name from sys.service_queues where is_receive_enabled = 0
    --select [name], is_receive_enabled from sys.service_queues;
    /* If the queue is disabled, enable it with:
    alter queue QUEUENAME with status=on; -- replace QUEUENAME with the name of your queue
    */
    -- Get general information about the queues
    --select * from sys.service_queues
    -- Get the message counts in each queue
    SELECT 'Checking Message Count for each Queue...'
    select q.name, p.rows
    from sys.objects as o
    join sys.partitions as p on p.object_id = o.object_id
    join sys.objects as q on o.parent_object_id = q.object_id
    join sys.service_queues sq on sq.name = q.name
    where p.index_id = 1
    -- Ensure all the queue activation sprocs are present
    SELECT 'Checking for Activation Stored Procedures....'
    SELECT '** Missing Procedure:  '+q.name
    From sys.service_queues q
    Where NOT Exists(Select * from sysobjects where xtype='p' and name='activation_'+q.name)
    and q.activation_procedure is not null
    DECLARE @sprocs Table (Name Varchar(2000))
    Insert into @sprocs Values ('Echo')
    Insert into @sprocs Values ('HTTP_POST')
    Insert into @sprocs Values ('InitializeRecipients')
    Insert into @sprocs Values ('sp_EnableRecipient')
    Insert into @sprocs Values ('sp_ProcessReceivedMessage')
    Insert into @sprocs Values ('sp_SendXmlMessage')
    SELECT 'Checking for required stored procedures...'
    SELECT '** Missing Procedure:  '+s.name
    From @sprocs s
    Where NOT Exists(Select * from sysobjects where xtype='p' and name=s.name)
    GO
    -- Check the services
    Select 'Checking Recipient Message Services...'
    Select '** Missing Message Service:' + r.RecipientName +'MessageService'
    From Recipient r
    Where not exists (Select * from sys.services s where s.name COLLATE SQL_Latin1_General_CP1_CI_AS = r.RecipientName+'MessageService')
    DECLARE @svcs Table (Name Varchar(2000))
    Insert into @svcs Values ('XmlMessageSendingService')
    SELECT '** Missing Service:  '+s.name
    From @svcs s
    Where NOT Exists(Select * from sys.services where name=s.name COLLATE SQL_Latin1_General_CP1_CI_AS)
    GO
    /*** To test a message send, run:
    sp_SendXmlMessage 'TSQLTEST', 'CommerceEngine','<Root><Text>Test</Text></Root>'
    */
    Select CAST(message_body as XML) as xml, * From XmlMessageSendingQueue
    /*** clean out all queues
    declare @handle uniqueidentifier
    declare conv cursor for
      select conversation_handle from sys.conversation_endpoints
    open conv
    fetch next from conv into @handle
    while @@FETCH_STATUS = 0
    Begin
       END Conversation @handle with cleanup
       fetch next from conv into @handle
    End
    close conv
    deallocate conv
    ***********************/
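
    One more check in the same spirit, added here as a sketch rather than part of the original set: a quick summary of dialog states in sys.conversation_endpoints can reveal stuck or orphaned conversations before they pile up.

    SELECT state_desc, COUNT(*) AS conversation_count
    FROM sys.conversation_endpoints
    GROUP BY state_desc
    ORDER BY conversation_count DESC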

    Read the article

  • mysql.proc has gone corrupt. How can I fix it?

    - by Metalcoder
    I have a server running Debian 5.0 and MySQL. Suddenly MySQL stopped working, and after many attempts to fix it, I decided to reinstall it. I installed MySQL 5.1.63, and when started it goes into safe mode. After some typing, when I executed mysql_upgrade as root, it complained: ... Running 'mysql_fix_privilege_tables'... ERROR 1548 (HY000) at line 1111: Cannot load from mysql.proc. The table is probably corrupted ERROR 1064 (42000) at line 1112: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'sqlstate 'HY000' set message_text='Unexpected content found in the performance_s' at line 1 ERROR 1548 (HY000) at line 1125: Cannot load from mysql.proc. The table is probably corrupted FATAL ERROR: Upgrade failed I checked the mysql.proc table, and its comment column was slightly different from my backup. -- My backup says: `comment` char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '', -- But it was: `comment` text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, So I restored my mysql database backup, and now they all match, but mysql_upgrade still triggers the same errors. I also tried to check and repair the mysql.proc table, but with no success.
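
    A hedged observation: the text definition of proc.comment is the MySQL 5.5 layout, while 5.1 expects char(64), so the failing table looks as if it was touched by a newer server than the 5.1.63 binary now reading it. If restoring the backup alone does not help, forcing the column back to the 5.1 shape before re-running mysql_upgrade might; back up the mysql schema first, as this is only a sketch:

    ALTER TABLE mysql.proc
      MODIFY comment char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '';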

    Read the article

  • Error code 1005 (errno: 121) upon create table while restoring MySQL database from a dump

    - by Jonathan
    I have a linux prod machine and a Win7 64bit dev machine. My workflow includes dumping the production MySQL database on the linux machine and restoring it in my local MySQL database on the windows machine (using SQLyog). This worked fine for a long time. Following some trouble, I formatted and reinstalled my windows dev machine. Since then I'm unable to restore the db on it. I keep receiving the following error: Query: CREATE TABLE `auth_group` ( `id` int(11) NOT NULL auto_increment, `name` varchar(80) collate utf8_unicode_ci NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `name` (`name`) ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci Error occured at:2010-06-26 17:16:14 Line no.:30 Error Code: 1005 - Can't create table 'ap_site.auth_group' (errno: 121) Notice that this is the first create table statement in the sql dump file. This error occurs both on MySQL Community Server 5.1.41 and 5.1.48 and with SQLyog Community 8.0.4 and 8.5.1. I really don't know what's different in my configuration compared to before the reinstall, or why it has this effect. Restoring from sql dump is something I need to keep on doing, so I need a permanent fix and not a tailored workaround.
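
    For anyone else hitting this: errno 121 on CREATE TABLE is InnoDB's duplicate-key-name error, and after a restore it most often means a constraint name in the dump collides with one already registered in the InnoDB data dictionary (for example, left over in an old ibdata file). A hedged first diagnostic, run right after the failing statement:

    SHOW ENGINE INNODB STATUS;
    -- the LATEST FOREIGN KEY ERROR section usually names the exact constraint involved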

    Read the article


  • Download binary file from SQL Server 2000

    - by kareemsaad
    I inserted binary files (images, PDFs, videos...) and I want to retrieve these files so they can be downloaded. I used a generic handler page like this: public void ProcessRequest (HttpContext context) { using (System.Data.SqlClient.SqlConnection con = Connection.GetConnection()) { String Sql = "Select BinaryData From ProductsDownload Where Product_Id = @Product_Id"; SqlCommand com = new SqlCommand(Sql, con); com.CommandType = System.Data.CommandType.Text; com.Parameters.Add(Parameter.NewInt("@Product_Id", context.Request.QueryString["Product_Id"].ToString())); SqlDataReader dr = com.ExecuteReader(); if (dr.Read() && dr != null) { Byte[] bytes; bytes = Encoding.UTF8.GetBytes(String.Empty); bytes = (Byte[])dr["BinaryData"]; context.Response.BinaryWrite(bytes); dr.Close(); } } } And this is my table: CREATE TABLE [ProductsDownload] ( [ID] [bigint] IDENTITY (1, 1) NOT NULL , [Product_Id] [int] NULL , [Type_Id] [int] NULL , [Name] [nvarchar] (200) COLLATE Arabic_CI_AS NULL , [MIME] [varchar] (50) COLLATE Arabic_CI_AS NULL , [BinaryData] [varbinary] (4000) NULL , [Description] [nvarchar] (500) COLLATE Arabic_CI_AS NULL , [Add_Date] [datetime] NULL , CONSTRAINT [PK_ProductsDownload] PRIMARY KEY CLUSTERED ( [ID] ) ON [PRIMARY] , CONSTRAINT [FK_ProductsDownload_DownloadTypes] FOREIGN KEY ( [Type_Id] ) REFERENCES [DownloadTypes] ( [ID] ) ON DELETE CASCADE ON UPDATE CASCADE , CONSTRAINT [FK_ProductsDownload_Product] FOREIGN KEY ( [Product_Id] ) REFERENCES [Product] ( [Product_Id] ) ON DELETE CASCADE ON UPDATE CASCADE ) ON [PRIMARY] GO And I use a DataList with a label for the file name and a button to download the file: <asp:DataList ID="DataList5" runat="server" DataSource='<%#GetData(Convert.ToString(Eval("Product_Id")))%>' RepeatColumns="1" RepeatLayout="Flow"> <ItemTemplate> <table width="100%" border="0" cellspacing="0" cellpadding="0"> <tr> <td class="spc_tab_hed_bg spc_hed_txt lm5 tm2 bm3"> <asp:Label ID="LblType" runat="server" Text='<%# Eval("TypeName", "{0}") %>'></asp:Label> </td> <td width="380" class="spc_tab_hed_bg"> &nbsp; </td> </tr> <tr> <td align="left" class="lm5 tm2 bm3"> <asp:Label ID="LblData" runat="server" Text='<%# Eval("Name", "{0}") %>'></asp:Label> </td> <td align="center" class=" tm2 bm3"> <a href='<%# "DownloadFile.aspx?Product_Id=" + DataBinder.Eval(Container.DataItem,"Product_Id") %>' > <img src="images/downloads_ht.jpg" width="11" height="11" border="0" /> </a> <%--<asp:ImageButton ID="ImageButton1" ImageUrl="images/downloads_ht.jpg" runat="server" OnClick="ImageButton1_Click1" />--%> </td> </tr> </table> </ItemTemplate> </asp:DataList> I have tried hard to solve this problem but cannot. If anyone has a solution, please send it. Thank you. Kareem Saad, programmer (MCTS, MCPD), Toshiba Company, Egypt
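
    One hedged observation on the schema rather than the handler code: BinaryData is declared varbinary(4000), so any upload beyond 4000 bytes either fails or arrives truncated, which would corrupt most PDFs and videos. On SQL Server 2000 the large-object binary type is image (varbinary(max) only arrived in 2005), so the column would need to be declared along these lines when the table is (re)created:

    CREATE TABLE [ProductsDownload] (
        -- ... other columns exactly as above ...
        [BinaryData] [image] NULL
        -- ... constraints as above ...
    )

    It would also be worth setting context.Response.ContentType from the stored MIME column before calling BinaryWrite, so browsers know how to handle the download.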

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this: The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  Its presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 3)

    - by Sadequl Hussain
    Continuing from Part 2 of the Database Migration Checklist series: Step 10: Full-text catalogs and full-text indexing This is one area of SQL Server where people do not seem to take notice unless something goes wrong. Full-text functionality is a specialised area in database application development and is not usually implemented in your everyday OLTP systems. Nevertheless, if you are migrating a database that uses full-text indexing on one or more tables, you need to be aware of a few points. First of all, SQL Server 2005 now allows full-text catalog files to be restored or attached along with the rest of the database. However, after migration, if you are unable to look at the properties of any full-text catalogs, you are probably better off dropping and recreating it. You may also get the following error messages along the way: Msg 9954, Level 16, State 2, Line 1 The Full-Text Service (msftesql) is disabled. The system administrator must enable this service. This basically means the full-text service is not running (disabled or stopped) in the destination instance. You will need to start it from the Configuration Manager. Similarly, if you get the following message, you will also need to drop and recreate the catalog and populate it. Msg 7624, Level 16, State 1, Line 1 Full-text catalog ‘catalog_name‘ is in an unusable state. Drop and re-create this full-text catalog. A full population of full-text indexes can be a time and resource intensive operation. Obviously you will want to schedule it for low usage hours if the database is restored in an existing production server. Also, bear in mind that any scheduled job that existed in the source server for populating the full text catalog (e.g. nightly process for incremental update) will need to be re-created in the destination. Step 11: Database collation considerations Another sticky area to consider during a migration is the collation setting. Ideally you would want to restore or attach the database in a SQL Server instance with the same collation. Although not used commonly, SQL Server allows you to change a database’s collation by using the ALTER DATABASE command: ALTER DATABASE database_name COLLATE collation_name You should not use this command without good reason, as it can get really dangerous.  When you change the database collation, it does not change the collation of the existing user table columns.  However the columns of every new table, every new UDT and subsequently created variables or parameters in code will use the new setting. The collation of every char, nchar, varchar, nvarchar, text or ntext field of the system tables will also be changed. Stored procedure and function parameters will be changed to the new collation and finally, every character-based system data type and user defined data types will also be affected. And the change may not be successful either if there are dependent objects involved. You may get one or multiple messages like the following: Cannot ALTER ‘object_name‘ because it is being referenced by object ‘dependent_object_name‘. That is why it is important to test and check for collation-related issues. Collation also affects queries that use comparisons of character-based data.  If errors arise due to two sides of a comparison being in different collation orders, the COLLATE keyword can be used to cast one side to the same collation as the other, as sketched below. Continues…
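
    A brief sketch of that COLLATE cast, with hypothetical database, table and column names standing in for a real migration scenario:

    SELECT s.CustomerName
    FROM StagingDb.dbo.Customers AS s
    JOIN ProdDb.dbo.Customers AS p
        ON s.CustomerName = p.CustomerName COLLATE SQL_Latin1_General_CP1_CI_AS

    Casting with COLLATE DATABASE_DEFAULT instead of a named collation is a common variant when the current database's collation is the one that matters.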

    Read the article

  • MySql query optimization help

    - by rohitgu
    I have a few queries and am not able to figure out how to optimize them. QUERY 1 select * from t_twitter_tracking where classified is null and tweetType='ENGLISH' order by id limit 500; QUERY 2 Select count(*) as cnt, DATE_FORMAT(CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30'),'%Y-%m-%d') as dat from t_twitter_tracking wrdTrk where wrdTrk.word like ('dell') and CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30') between '2010-12-12 00:00:00' and '2010-12-26 00:00:00' group by dat; Both these queries run on the same table, CREATE TABLE `t_twitter_tracking` ( `id` BIGINT(20) NOT NULL AUTO_INCREMENT, `word` VARCHAR(200) NOT NULL, `tweetId` BIGINT(100) NOT NULL, `twtText` VARCHAR(800) NULL DEFAULT NULL, `language` TEXT NULL, `links` TEXT NULL, `tweetType` VARCHAR(20) NULL DEFAULT NULL, `source` TEXT NULL, `sourceStripped` TEXT NULL, `isTruncated` VARCHAR(40) NULL DEFAULT NULL, `inReplyToStatusId` BIGINT(30) NULL DEFAULT NULL, `inReplyToUserId` INT(11) NULL DEFAULT NULL, `rtUsrProfilePicUrl` TEXT NULL, `isFavorited` VARCHAR(40) NULL DEFAULT NULL, `inReplyToScreenName` VARCHAR(40) NULL DEFAULT NULL, `latitude` BIGINT(100) NOT NULL, `longitude` BIGINT(100) NOT NULL, `retweetedStatus` VARCHAR(40) NULL DEFAULT NULL, `statusInReplyToStatusId` BIGINT(100) NOT NULL, `statusInReplyToUserId` BIGINT(100) NOT NULL, `statusFavorited` VARCHAR(40) NULL DEFAULT NULL, `statusInReplyToScreenName` TEXT NULL, `screenName` TEXT NULL, `profilePicUrl` TEXT NULL, `twitterId` BIGINT(100) NOT NULL, `name` TEXT NULL, `location` VARCHAR(100) NULL DEFAULT NULL, `bio` TEXT NULL, `url` TEXT NULL COLLATE 'latin1_swedish_ci', `utcOffset` INT(11) NULL DEFAULT NULL, `timeZone` VARCHAR(100) NULL DEFAULT NULL, `frenCnt` BIGINT(20) NULL DEFAULT '0', `createdAt` DATETIME NULL DEFAULT NULL, `createdOnGMT` VARCHAR(40) NULL DEFAULT NULL, `createdOnServerTime` DATETIME NULL DEFAULT NULL, `follCnt` BIGINT(20) NULL DEFAULT '0', `favCnt` BIGINT(20) NULL DEFAULT '0', `totStatusCnt` BIGINT(20) NULL DEFAULT NULL, `usrCrtDate` VARCHAR(200) NULL DEFAULT NULL, `humanSentiment` VARCHAR(30) NULL DEFAULT NULL, `replied` BIT(1) NULL DEFAULT NULL, `replyMsg` TEXT NULL, `classified` INT(32) NULL DEFAULT NULL, `createdOnGMTDate` DATETIME NULL DEFAULT NULL, `locationDetail` TEXT NULL, `geonameid` INT(11) NULL DEFAULT NULL, `country` VARCHAR(255) NULL DEFAULT NULL, `continent` CHAR(2) NULL DEFAULT NULL, `placeLongitude` FLOAT NULL DEFAULT NULL, `placeLatitude` FLOAT NULL DEFAULT NULL, PRIMARY KEY (`id`), INDEX `id` (`id`, `word`), INDEX `createdOnGMT_index` (`createdOnGMT`) USING BTREE, INDEX `word_index` (`word`) USING BTREE, INDEX `location_index` (`location`) USING BTREE, INDEX `classified_index` (`classified`) USING BTREE, INDEX `tweetType_index` (`tweetType`) USING BTREE, INDEX `getunclassified_index` (`classified`, `tweetType`) USING BTREE, INDEX `timeline_index` (`word`, `createdOnGMTDate`, `classified`) USING BTREE, INDEX `createdOnGMTDate_index` (`createdOnGMTDate`) USING BTREE, INDEX `locdetail_index` (`country`, `id`) USING BTREE, FULLTEXT INDEX `twtText_index` (`twtText`) ) COLLATE='utf8_general_ci' ENGINE=MyISAM ROW_FORMAT=DEFAULT AUTO_INCREMENT=12608048; The table has more than 10 million records. How can I optimize it?
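
    One hedged idea for query 2: wrapping createdOnGMTDate in CONVERT_TZ hides the indexed column inside a function, which prevents MySQL from using timeline_index or createdOnGMTDate_index. Shifting the constant boundaries instead leaves the column bare and sargable (and word like ('dell'), having no wildcard, is equivalent to an equality test):

    SELECT COUNT(*) AS cnt,
           DATE_FORMAT(CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30'),'%Y-%m-%d') AS dat
    FROM t_twitter_tracking wrdTrk
    WHERE wrdTrk.word = 'dell'
      AND wrdTrk.createdOnGMTDate BETWEEN CONVERT_TZ('2010-12-12 00:00:00','+05:30','+00:00')
                                      AND CONVERT_TZ('2010-12-26 00:00:00','+05:30','+00:00')
    GROUP BY dat;

    For query 1, the existing getunclassified_index (classified, tweetType) already fits the WHERE clause; EXPLAIN would confirm whether it is actually being chosen.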

    Read the article

  • How to write Sql or LinqToSql for this scenario?

    - by Mike108
    How to write SQL or LINQ to SQL for this scenario? A table has the following data:
    Id  UserName  Price  Date               Status
    1   Mike      2      2010-4-25 0:00:00  Success
    2   Mike      3      2010-4-25 0:00:00  Fail
    3   Mike      2      2010-4-25 0:00:00  Success
    4   Lily      5      2010-4-25 0:00:00  Success
    5   Mike      1      2010-4-25 0:00:00  Fail
    6   Lily      5      2010-4-25 0:00:00  Success
    7   Mike      2      2010-4-26 0:00:00  Success
    8   Lily      5      2010-4-26 0:00:00  Fail
    9   Lily      2      2010-4-26 0:00:00  Success
    10  Lily      1      2010-4-26 0:00:00  Fail
    I want to get the summary result from the data; the result should be:
    UserName  Date        TotalPrice  TotalRecord  SuccessRecord  FailRecord
    Mike      2010-04-25  8           4            2              2
    Lily      2010-04-25  10          2            2              0
    Mike      2010-04-26  2           1            1              0
    Lily      2010-04-26  8           3            1              2
    TotalPrice is sum(Price) grouped by UserName and Date. TotalRecord is count(*) grouped by UserName and Date. SuccessRecord is count(*) grouped by UserName and Date where Status='Success'. FailRecord is count(*) grouped by UserName and Date where Status='Fail'. TotalRecord = SuccessRecord + FailRecord. The SQL Server 2005 database script is: /****** Object: Table [dbo].[Pay] Script Date: 04/28/2010 22:23:42 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Pay]') AND type in (N'U')) BEGIN CREATE TABLE [dbo].[Pay]( [Id] [int] IDENTITY(1,1) NOT NULL, [UserName] [nvarchar](50) COLLATE Chinese_PRC_CI_AS NULL, [Price] [int] NULL, [Date] [datetime] NULL, [Status] [nvarchar](50) COLLATE Chinese_PRC_CI_AS NULL, CONSTRAINT [PK_Pay] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ) END GO SET IDENTITY_INSERT [dbo].[Pay] ON INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (1, N'Mike', 2, CAST(0x00009D6300000000 AS DateTime), N'Success') INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (2, N'Mike', 3, CAST(0x00009D6300000000 AS DateTime), N'Fail') INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (3, N'Mike', 2, CAST(0x00009D6300000000 AS DateTime), N'Success') INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (4, N'Lily', 5, CAST(0x00009D6300000000 AS DateTime), N'Success') INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (5, N'Mike', 1, CAST(0x00009D6300000000 AS DateTime), N'Fail') INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (6, N'Lily', 5, CAST(0x00009D6300000000 AS DateTime), N'Success') INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (7, N'Mike', 2, CAST(0x00009D6400000000 AS DateTime), N'Success') INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (8, N'Lily', 5, CAST(0x00009D6400000000 AS DateTime), N'Fail') INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (9, N'Lily', 2, CAST(0x00009D6400000000 AS DateTime), N'Success') INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (10, N'Lily', 1, CAST(0x00009D6400000000 AS DateTime), N'Fail') SET IDENTITY_INSERT [dbo].[Pay] OFF
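
    A sketch of the aggregate described above, using conditional SUMs over the Pay table from the script (SQL Server 2005 syntax); style 120 in CONVERT yields the yyyy-mm-dd date part shown in the expected output:

    SELECT UserName,
           CONVERT(char(10), [Date], 120) AS [Date],
           SUM(Price) AS TotalPrice,
           COUNT(*) AS TotalRecord,
           SUM(CASE WHEN Status = 'Success' THEN 1 ELSE 0 END) AS SuccessRecord,
           SUM(CASE WHEN Status = 'Fail' THEN 1 ELSE 0 END) AS FailRecord
    FROM dbo.Pay
    GROUP BY UserName, CONVERT(char(10), [Date], 120)

    The LINQ to SQL form would group by the same two keys and count with a condition inside each aggregate.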

    Read the article

  • How to query MySQL for exact length and exact UTF-8 characters

    - by oskarae
    I have a table with a dictionary of words in my language (Latvian): CREATE TABLE words ( value varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; And let's say it has 3 words inside: INSERT INTO words (value) VALUES ('teja'); INSERT INTO words (value) VALUES ('vejš'); INSERT INTO words (value) VALUES ('feja'); I want to find all words that are exactly 4 characters long, where the second character is 'e' and the third character is 'j'. To me it feels that the correct query would be: SELECT * FROM words WHERE value LIKE '_ej_'; But the problem with this query is that it returns not 2 entries ('teja','vejš') but all three. As I understand it, this is because internally MySQL converts strings to some ASCII representation? Then there is the BINARY modifier for LIKE: SELECT * FROM words WHERE value LIKE BINARY '_ej_'; But this also does not return 2 entries ('teja','vejš'), only one ('teja'). I believe this has something to do with UTF-8 using 2 bytes for non-ASCII chars? So the question: what MySQL query would return exactly my two words ('teja','vejš')? Thank you in advance
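    An editorial note, not from the post: on a utf8 column the wildcard _ in plain LIKE already matches one character, while LIKE BINARY matches one byte, which is why the two-byte 'š' makes 'vejš' drop out under BINARY. CHAR_LENGTH counts characters and LENGTH counts bytes, so a character-exact length test would be sketched as:

        -- CHAR_LENGTH('vejš') = 4 while LENGTH('vejš') = 5 under utf8
        SELECT * FROM words
        WHERE CHAR_LENGTH(value) = 4
          AND value LIKE '_ej_';

    Note, though, that by the stated criteria (4 characters, 'e' second, 'j' third) the sample word 'feja' also qualifies, so any query implementing exactly those rules returns all three sample rows.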

    Read the article

  • How to reference using Entity Framework and Asp.Net Mvc 2

    - by Picflight
    Tables:

        CREATE TABLE [dbo].[Users]( [UserId] [int] IDENTITY(1,1) NOT NULL, [UserName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [Email] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [BirthDate] [smalldatetime] NULL, [CountryId] [int] NULL, CONSTRAINT [PK_Users] PRIMARY KEY CLUSTERED ([UserId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY]

        CREATE TABLE [dbo].[TeamMember]( [UserId] [int] NOT NULL, [TeamMemberUserId] [int] NOT NULL, [CreateDate] [smalldatetime] NOT NULL CONSTRAINT [DF_TeamMember_CreateDate] DEFAULT (getdate()), CONSTRAINT [PK_TeamMember] PRIMARY KEY CLUSTERED ([UserId] ASC, [TeamMemberUserId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY]

    dbo.TeamMember has both UserId and TeamMemberUserId as its composite key. My goal is to show a list of Users on my View. In the list I want to flag, or highlight, the Users that are Team Members of the logged-in user. My ViewModel:

        public class UserViewModel { public int UserId { get; private set; } public string UserName { get; private set; } public bool HighLight { get; private set; } public UserViewModel(Users users, bool highlight) { this.UserId = users.UserId; this.UserName = users.UserName; this.HighLight = highlight; } }

    View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MvcPaging.IPagedList<MyProject.Mvc.Models.UserViewModel>>" %> <% foreach (var item in Model) { %> <%= item.UserId %> <%= item.UserName %> <%if (item.HighLight) { %> Team Member <% } else { %> Not Team Member <% } %>

    How do I toggle Team Member or not? If I add dbo.TeamMember to the EDM there are no relationships on this table, so how will I wire it to the Users object? I am comparing the logged-in UserId with this list: (SELECT TeamMemberUserId FROM TeamMember WHERE UserId = @LoggedInUserId)
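    Not from the original post: a hedged sketch of the set-based version of that comparison (assuming @LoggedInUserId holds the logged-in user's id), which yields the HighLight flag in one query:

        -- LEFT JOIN against the logged-in user's team rows; a NULL means "not a team member"
        SELECT u.UserId,
               u.UserName,
               CASE WHEN tm.TeamMemberUserId IS NULL THEN 0 ELSE 1 END AS HighLight
        FROM dbo.Users u
        LEFT JOIN dbo.TeamMember tm
               ON tm.TeamMemberUserId = u.UserId
              AND tm.UserId = @LoggedInUserId;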

    Read the article

  • Can't store UTF-8 in RDS despite setting up new Parameter Group using Rails on Heroku

    - by Lail
    I'm setting up a new instance of a Rails (2.3.5) app on Heroku using Amazon RDS as the database. I'd like to use UTF-8 for everything. Since RDS isn't UTF-8 by default, I set up a new Parameter Group and switched the database to use that one, basically per this. It seems to have worked:

        SHOW VARIABLES LIKE '%character%';
        character_set_client      utf8
        character_set_connection  utf8
        character_set_database    utf8
        character_set_filesystem  binary
        character_set_results     utf8
        character_set_server      utf8
        character_set_system      utf8
        character_sets_dir        /rdsdbbin/mysql-5.1.50.R3/share/mysql/charsets/

    Furthermore, I've successfully set up Heroku to use the RDS database. After rake db:migrate, everything looks good:

        CREATE TABLE `comments` ( `id` int(11) NOT NULL AUTO_INCREMENT, `commentable_id` int(11) DEFAULT NULL, `parent_id` int(11) DEFAULT NULL, `content` text COLLATE utf8_unicode_ci, `child_count` int(11) DEFAULT '0', `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`), KEY `commentable_id` (`commentable_id`), KEY `index_comments_on_community_id` (`community_id`), KEY `parent_id` (`parent_id`) ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    In the markup, I've included:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    I've also set the following in database.yml, though I'm not very confident that anything honors those settings in this case, as Heroku seems to do its own config when connecting to RDS:

        production:
          encoding: utf8
          collation: utf8_general_ci

    Now, I enter a comment through the form in the app: "Úbe® ƒåiL", but in the database I've got "Úbe® Æ’Ã¥iL". It looks fine when Rails loads it back out of the database and renders it to the page, so whatever it is doing one way, it's undoing the other way. If I look at the RDS database in Sequel Pro, it looks fine if I set the encoding to "UTF-8 Unicode via Latin 1". So it seems Latin-1 is sneaking in there somewhere. Somebody must have done this before, right? What am I missing?
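    Not from the original post: a hedged diagnostic. Data that looks right through Rails but wrong everywhere else is the classic sign of a latin1 client connection writing UTF-8 bytes, so the per-connection variables are worth checking from the app's own connection rather than from a separate console:

        -- Run from the app's connection; client/connection/results should all say utf8
        SHOW VARIABLES LIKE 'character_set_%';

        -- Forcing it for a session (roughly what Rails' encoding: utf8 should issue on connect)
        SET NAMES utf8;

    If Heroku's generated connection config overrides database.yml, running SET NAMES utf8 when each connection is checked out is the usual workaround.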

    Read the article

  • MySQL query problem

    - by Luke
    Ok, I have the following problem. I have two InnoDB tables, 'places' and 'events'. One place can have many events, but an event can be created without entering a place; in that case the event's foreign key is NULL. The simplified structure of the tables looks as follows:

        CREATE TABLE IF NOT EXISTS `events` ( `id` int(11) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(255) COLLATE utf8_bin NOT NULL, `places_id` int(9) unsigned DEFAULT NULL, PRIMARY KEY (`id`), KEY `fk_events_places` (`places_id`) ) ENGINE=InnoDB;

        ALTER TABLE `events` ADD CONSTRAINT `events_ibfk_1` FOREIGN KEY (`places_id`) REFERENCES `places` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        CREATE TABLE IF NOT EXISTS `places` ( `id` int(9) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(255) COLLATE utf8_bin NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB;

    The question is: how do I construct a query that returns the name of the event and the name of the corresponding place (or no value, in case there is no place assigned)? I am able to do it with two queries, but then I am visibly separating events that have a place assigned from the ones without one. Help really appreciated.
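    Not from the original thread: a hedged sketch of the standard single-query answer. A LEFT JOIN keeps every event and returns NULL for the place name when places_id is NULL:

        SELECT e.name AS event_name,
               p.name AS place_name   -- NULL when the event has no place
        FROM events e
        LEFT JOIN places p ON p.id = e.places_id;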

    Read the article

  • Social Networking & Network Affiliations

    - by Code Sherpa
    Hi. I am in the process of planning a database for a social networking project and stumbled upon this URL, which is a (crude) reverse-engineered guess at Facebook's schema: http://www.flickr.com/photos/ikhnaton2/533233247/ What is of interest to me is the notion of "Affiliations", and I am trying to fully understand how they work, technically speaking. Where I am somewhat confused is the NetworkID column in the "FacebookGroups", "FacebookEvent", and "Affiliations" tables (NID in Affiliations). How are these network affiliations interconnected? In my own project, I have a simple profile table:

        CREATE TABLE [dbo].[Profiles]( [profileid] [int] IDENTITY(1,1) NOT NULL, [userid] [uniqueidentifier] NOT NULL, [username] [varchar](255) COLLATE Latin1_General_CI_AI NOT NULL, [applicationname] [varchar](255) COLLATE Latin1_General_CI_AI NOT NULL, [isanonymous] [bit] NULL, [lastactivity] [datetime] NULL, [lastupdated] [datetime] NULL, CONSTRAINT [PK__Profiles__1DB06A4F] PRIMARY KEY CLUSTERED ( [profileid] ASC )WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY], CONSTRAINT [PKProfiles] UNIQUE NONCLUSTERED ( [username] ASC, [applicationname] ASC )WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY] ) ON [PRIMARY]

    One profile can have many affiliations, and one affiliation can have many profiles. I would like to design it in such a way that relationships between affiliations tell me something about the associated profiles. In fact, based on the affiliations that users select, I would like to infer as many things as possible about that person. My question is: how should I design my network affiliation tables, and how do they operate per my above requirements? A rough SQL schema would be appreciated in your response. Thanks in advance...
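    Not part of the original post: a hedged sketch of one possible rough schema, in the same T-SQL style; all names below are illustrative, not from the post. A join table gives the many-to-many between profiles and affiliations, and an optional self-reference lets affiliations relate to one another:

        CREATE TABLE [dbo].[Affiliations](
            [affiliationid] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
            [name] [varchar](255) NOT NULL,
            [parentaffiliationid] [int] NULL
                REFERENCES [dbo].[Affiliations]([affiliationid])  -- optional nesting of networks
        )

        CREATE TABLE [dbo].[ProfileAffiliations](
            [profileid] [int] NOT NULL REFERENCES [dbo].[Profiles]([profileid]),
            [affiliationid] [int] NOT NULL REFERENCES [dbo].[Affiliations]([affiliationid]),
            CONSTRAINT [PK_ProfileAffiliations] PRIMARY KEY ([profileid], [affiliationid])
        )

    Querying the join table from either side then answers both "which networks does this profile belong to" and "which profiles share this network".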

    Read the article

  • Changing the character encoding of a MySQL database

    - by Julien Genestoux
    Our whole application is now able to handle UTF-8, and it will be our encoding of choice across our architecture. The last step is to change the encoding of our MySQL databases. Of course, ALTER TABLE db_table CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci; should be able to convert each of the tables to the right UTF-8 encoding, yet is there anything else I should do? I believe the my.cnf configuration file needs to be changed as well.
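    Not from the original post: a hedged sketch of the usual remaining pieces, assuming stock MySQL 5.x. The server-side and client-side defaults live in my.cnf:

        [client]
        default-character-set = utf8

        [mysqld]
        character-set-server = utf8
        collation-server     = utf8_general_ci

    And at the SQL level, ALTER DATABASE db_name CHARACTER SET utf8 COLLATE utf8_general_ci; changes the database default, so tables created later inherit utf8 as well; the per-table ALTER already converted the existing data.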

    Read the article

< Previous Page | 1 2 3 4 5 6  | Next Page >