Search Results

Search found 14016 results on 561 pages for 'mysql'.

  • How to change the root password for MySQL and phpMyAdmin

    - by Jon
    I've set up MySQL and phpMyAdmin and chose not to set a password when installing, hoping that once set up I could log in as root with no password, but I get the following error from phpMyAdmin: "Login without a password is forbidden by configuration (see AllowNoPassword)". I have previously moved the phpMyAdmin folder to /var/www/. I have tried changing the line $cfg['Servers'][$i]['AllowNoPassword'] = false; to $cfg['Servers'][$i]['AllowNoPassword'] = true; but still had no success, so I am wondering: is there a way I can change the root password for both so I can access phpMyAdmin and create databases? Thanks
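
    A minimal sketch of setting the root password from the mysql console (assuming a MySQL 5.x server; 'newpassword' is only a placeholder). phpMyAdmin then logs in with the same credentials:

        -- 'newpassword' is a placeholder; run this while connected as root
        SET PASSWORD FOR 'root'@'localhost' = PASSWORD('newpassword');
        FLUSH PRIVILEGES;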

  • Next lowest value in a MySQL database

    - by Justin Edwards
    SELECT * FROM `experience` WHERE `reqexp` <> '4793' ORDER BY 'lvl' DESC LIMIT 1
    Here is what I want to do. I am making an online game for a client and need to be able to use a MySQL query with a random value and find the level associated with that amount of experience. In this case, I need to find the next value lower than 4793 that already exists in the database so I can determine the player's appropriate level. Any ideas?
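
    One hedged sketch, using the column names from the posted query (4793 stands in for the randomly chosen experience value): order by the requirement itself and keep only the closest row at or below that amount.

        -- 4793 stands in for the random experience value
        SELECT `lvl`
        FROM `experience`
        WHERE `reqexp` <= 4793
        ORDER BY `reqexp` DESC
        LIMIT 1;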

  • How to install MySQL 5.6?

    - by Ross Smith II
    I just installed Ubuntu 12.10 (amd64) and want to install a recent version of MySQL 5.6. If possible, I would like to install (not upgrade) it the "Debian way" (i.e., using apt-get or dpkg). The only binaries I could find are here. Unfortunately, they are incomplete, as they only install files in /usr/share. If binaries aren't available, how could I install it from source, using the standard Debian method of installing from source? Thanks for any assistance.

  • Upgrade MySQL to 5.5 on Lucid, upgrade server to Precise or switch to Percona?

    - by xref
    Looking into upgrading MySQL to 5.5 on our development server, which is running 10.04 and so is stuck at MySQL 5.1, as it appears there is no apt-get support for upgrading to 5.5 except through certain third-party PPAs. So I'm looking for which route to take and what other people have done:
    a) Follow a couple-year-old guide to manually install MySQL 5.5, then invest ongoing time into manually downloading and installing security updates every month or two?
    b) Upgrade 10.04 to 12.04 and, judging from the experience of other people I work with, spend several days working out the kinks of that large upgrade, after which I'll have access to MySQL 5.5 and easy apt-get installation of future security updates?
    c) Switch from MySQL to Percona Server 5.5 and get all the benefits of that version of MySQL, plus easy apt-get updates with their PPA?
    d) Something else?

  • iptables: change destination address based on result from MySQL

    - by user1812225
    I need to change the destination address of TCP/IP packets based on the result of executing a MySQL query, along these lines:
    SELECT `score` FROM `reputation` WHERE `ip` = packet.source_ip
    if (score < a) then packet.destination_ip = ... else packet.destination_ip = ...
    What ways of solving this problem do you see? Thanks. P.S. It is important that the destination host knows the REAL IP address the packet came from, not the IP address of the firewall.

  • Where to apply these Akonadi settings for the MySQL server?

    - by piedro
    I am using an external MySQL server to work with Akonadi (using KDE 4). I could solve all problems with it, but I still have to apply some settings to the "mysql-global.conf" file of the server. For example, this is what is suggested:
    # wait 365d before dropping the DB connection (default:8h)
    wait_timeout=31536000
    So I tried to change this setting via the MySQL console, but it's not reflected anywhere in the /etc/mysql/my.cnf file. (etc/akondai/mysql-global.conf seems to have no effect on the mysql-server either!) My question: where do I put these (or similar) settings to apply them in a way that Akonadi won't drop the connection to an external server (globally, I guess)?
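
    For reference, the value can be changed at runtime from the mysql console, but SET GLOBAL only affects connections opened afterwards and is lost when the server restarts, which is why it also needs to go under [mysqld] in the server's own my.cnf. A quick sketch:

        -- takes effect for new connections only, and is lost on server restart
        SET GLOBAL wait_timeout = 31536000;
        SHOW GLOBAL VARIABLES LIKE 'wait_timeout';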

  • Can't connect to MySQL using a self-signed SSL certificate

    - by carpii
    After creating a self-signed SSL certificate, I have configured my remote mysqld to use it (and SSL is enabled). I ssh into my remote server and try connecting to its own mysqld using SSL (the MySQL server is 5.5.25):
    ~> mysql -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key --ssl-ca=ca.cert
    Enter password:
    ERROR 2026 (HY000): SSL connection error: error:00000001:lib(0):func(0):reason(1)
    OK, I remember reading there's some problem with connecting to the same server via SSL. So I download the client keys to my local box and test from there:
    ~> mysql -h <server> -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key --ssl-ca=ca.cert
    Enter password:
    ERROR 2026 (HY000): SSL connection error
    It's unclear what this "SSL connection error" refers to, but if I omit --ssl-ca, then I am able to connect using SSL:
    ~> mysql -h <server> -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key
    Enter password:
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 37
    Server version: 5.5.25 MySQL Community Server (GPL)
    However, I believe this is only encrypting the connection and not actually verifying the validity of the cert (meaning I would be potentially vulnerable to a man-in-the-middle attack). The SSL certs are valid (albeit self-signed) and do not have a passphrase on them. So my question is, what am I doing wrong? How can I connect via SSL using a self-signed certificate? The MySQL server version is 5.5.25 and the server and clients are CentOS 5. Thanks for any advice.
    Edit: Note that in all cases, the command is being issued from the same directory where the SSL keys reside (hence no absolute path).
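
    One hedged way to narrow this down from the server side is to confirm the server actually has SSL enabled and whether an established session is encrypted; these are standard MySQL statements and make no assumptions about the certificates themselves.

        SHOW VARIABLES LIKE '%ssl%';     -- have_ssl should be YES and the ca/cert/key paths should be set
        SHOW STATUS LIKE 'Ssl_cipher';   -- non-empty on an encrypted connection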

  • Connecting to MySQL Server from PHP Command Line (MAMP)

    - by Austin White
    First of all, I'm using Mac OS X 10.6, MAMP 1.9, PHP 5.3.4, and MySQL 5.1.44. I'm in the process of setting up a video encoding service for a site using Chris Boulton's PHP-Resque and Redis. Once the worker process is fired and the videos have been encoded, I need to save their locations to a MySQL database. The PHP script is being run from the shell, so that is where the issue begins. I import the MySQL settings, and when it attempts to connect I get the following errors:
    Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24
    Warning: mysqli::mysqli(): [2002] php_network_getaddresses: getaddrinfo failed: nodename nor servn (trying to connect via tcp://MYSQL_SERVER:3306) in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24
    Warning: mysqli::mysqli(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24
    Warning: mysqli::set_charset(): Couldn't fetch MySQLi_Extended in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 32
    I realize that the error is occurring because it's trying to connect to tcp://MYSQL_SERVER:3306, when MySQL is on port 8889. I've been reading about Mac OS X and MAMP errors regarding mysql.sock, and I've gone through multiple forums and tried various fixes, but none have worked. I've tried
    PATH=/Applications/MAMP/Library/bin/:/Applications/MAMP/bin/php5.3/bin/:/opt/local/bin:/opt/local/sbin:$PATH
    and
    sudo ln -s /Applications/MAMP/tmp/mysql/mysql.sock /tmp/mysql.sock
    but neither has worked. I even ran a search on my machine for "3306" to find where it's being set, but because that's the normal default, I'm guessing it's not being set explicitly. Any clues on how to fix this rather challenging error?
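
    Since MAMP listens on a non-default port and socket, a quick sanity check from the MAMP mysql client shows the values the command-line script has to pass to mysqli (either 127.0.0.1 plus the port, or the socket path). A sketch:

        SHOW VARIABLES LIKE 'port';    -- 8889 under MAMP, per the post above
        SHOW VARIABLES LIKE 'socket';  -- e.g. /Applications/MAMP/tmp/mysql/mysql.sock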

  • Recover MySQL database - mysqldump gives "table <tablename> doesn't exist (1146)"

    - by Matthew
    Backstory: Ubuntu died (wouldn't boot) and I couldn't fix it. I booted a live CD to recover the important stuff and saved it to my NAS. One of the things I backed up was /var/lib/mysql. I reinstalled with Linux Mint; since I was on Ubuntu 10.04, this was a good opportunity to try a new distro (and I don't like Unity). Now I want to recover my old MediaWiki, so I shut down the mysql daemon, cp -R /media/NAS/Backup/mysql/mediawiki@002d1_19_1 /var/lib/mysql/, set file ownership and permissions correctly, and start mysql back up.
    Problem: Now I'm trying to export the database so I can restore it, but when I execute mysqldump I get an error:
    $ mysqldump -u mediawikiuser -p mediawiki-1_19_1 -c | gzip -9 > wiki.2012-11-15.sql.gz
    Enter password:
    mysqldump: Got error: 1146: Table 'mediawiki-1_19_1.archive' doesn't exist when using LOCK TABLES
    Things I've tried: I tried using --skip-lock-tables but I get this:
    Error: Couldn't read status information for table archive ()
    mysqldump: Couldn't execute 'show create table `archive`': Table 'mediawiki-1_19_1.archive' doesn't exist (1146)
    I tried logging in to mysql and I can list the tables that should be there, but trying to describe or select from them errors out the same way as the dump:
    mysql> show tables;
    +----------------------------+
    | Tables_in_mediawiki-1_19_1 |
    +----------------------------+
    | archive                    |
    | category                   |
    | categorylinks              |
    ...
    | user_properties            |
    | valid_tag                  |
    | watchlist                  |
    +----------------------------+
    49 rows in set (0.00 sec)
    mysql> describe archive;
    ERROR 1146 (42S02): Table 'mediawiki-1_19_1.archive' doesn't exist
    I believe MediaWiki was installed using InnoDB and binary data. Am I screwed, or is there a way to recover this?
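
    One thing worth checking, offered as a guess rather than a diagnosis: for InnoDB tables the .frm files in the database directory are not enough on their own; the shared tablespace (ibdata1) and the ib_logfile* files from the old /var/lib/mysql have to come along too, otherwise the server sees the table definitions but not the data, which matches this symptom. From the mysql prompt:

        -- if Engine and row counts come back NULL with a "doesn't exist" warning,
        -- the InnoDB tablespace the table lives in is probably missing
        SHOW TABLE STATUS FROM `mediawiki-1_19_1` LIKE 'archive';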

  • The MySQL service is in the status "starting" on Windows

    - by andres descalzo
    I have been working with MySQL 5.1.47 on Windows 2003 for several months. When I restarted, the "MySQL" service stayed in the "starting" state. The only way to bring the service up was to run the program directly: C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld
    This is the MySQL error log:
    100906 16:07:29 [Note] Event Scheduler: Purging the queue. 0 events
    100906 16:07:32 InnoDB: Starting shutdown...
    100906 16:07:37 [Note] Plugin 'FEDERATED' is disabled.
    100906 16:07:38 InnoDB: Shutdown completed; log sequence number 0 44233
    100906 16:07:38 [Note] mysqld: Shutdown complete
    100906 16:07:39 InnoDB: Started; log sequence number 0 44233
    100906 16:17:21 [Note] Plugin 'FEDERATED' is disabled.
    100906 16:17:22 InnoDB: Started; log sequence number 0 44233
    100906 16:22:01 [Note] Plugin 'FEDERATED' is disabled.
    100906 16:22:02 InnoDB: Started; log sequence number 0 44233
    100906 16:22:02 [Note] Event Scheduler: Loaded 0 events
    100906 16:22:02 [Note] C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld.exe: ready for connections.
    Version: '5.1.47-community' socket: '' port: 3306 MySQL Community Server (GPL)
    The last lines are from after loading the program from the shell. Thanks.

  • MySQL unstable behavior with external storage

    - by onurozcelik
    In our project we are using a MySQL 5.0.90 (InnoDB engine) server with an external storage unit; we store the MySQL data files on the external storage. When the external storage goes down for some reason we see unstable behaviour, so we ran some tests.
    On Windows Server 2008:
    - We powered off the external storage unit. The MySQL service stopped and we could not reach the server. Then we powered the storage unit back on, and we could not start the service until we reconfigured MySQL.
    - We took the storage unit offline from the operating system. After 3-4 minutes and some insert attempts (some of which succeeded), the MySQL service stopped and we could not reach the server. Then we brought the storage unit back online and we could start the service (not automatically).
    On Windows Server 2003:
    - We took the storage unit offline. After 3-4 minutes and some insert attempts (some of which succeeded), the MySQL service stopped and we could not reach the server. Then we brought the storage unit back online, and we could not start the service until we reinstalled MySQL. Before reinstalling we tried to reconfigure, but it did not work.
    - We powered off the external storage unit. The MySQL service stopped and we could not reach the server. After that we powered the storage unit back on and we could start the service (not automatically).
    We expect the service to autostart after the storage unit is online/open again, but these tests show unstable behaviour. Is there any solution to this?

  • Upgrading MySQL Connector/Net

    - by Todd Grover
    I am trying to publish a website with our hosting provider. I am getting errors due to the fact that they only allow medium trust, and the MySQL Connector/Net that I am using requires reflection to work. Unfortunately, reflection is not allowed in medium trust. After some research I found out that the newest version of MySQL Connector/Net may solve this problem: Connector/Net 6.6 includes enhancements to partial trust support to allow hosting services to deploy applications without installing the Connector/Net library in the GAC. I am thinking that will solve my problem. So I uninstalled MySQL Connector/Net 6.4.4 and installed MySQL Connector/Net 6.6.4. When I run the application in Visual Studio 2010 I get the error:
    ProviderIncompatibleException was unhandled by user code
    The message is "An error occurred while getting provider information from the database. This can be caused by Entity Framework using an incorrect connection string. Check the inner exceptions for details and ensure that the connection string is correct." The InnerException is "The provider did not return a ProviderManifestToken string."
    Everything works fine when I have Connector/Net 6.4.4 installed. I can access the database and perform read/write/delete actions against it. I have references to the following in the project: MySql.Data, MySql.Data.Entity, MySql.Web. My connection string in Web.config:
    <connectionStrings>
      <add name="AESSmartEntities" connectionString="server=ec2-xxx-xx-xxx-xx.compute-1.amazonaws.com; user=root; database=nunya; port=3306; password=xxxxxxx;" providerName="MySql.Data.MySqlClient" />
    </connectionStrings>
    What might I be doing wrong? Do I need any additional setting(s) to work with version 6.6.4 that weren't required in the older version 6.4.4?

  • Changing Corosync/Heartbeat pair's active node based on MySQL/Galera cluster state

    - by Hace
    Background: I'm planning on building a high-availability "cluster" for our Zabbix instance by placing two physical servers in one server room and two in another server room. In each server room one of the physical servers will run Zabbix on RHEL and the other will run Zabbix's MySQL database, also on RHEL. I'd prefer synchronous replication for the MySQL nodes, so I'm planning on using Galera in a master-slave configuration. The Zabbix instances on the two Zabbix servers would be controlled by Heartbeat/Corosync (although Red Hat Cluster Suite is also an option). If the Zabbix server in server room A goes down, the one in server room B becomes active (and vice versa). Ditto for the MySQL servers/instances. If either of those cases happens, however, the connection between the Zabbix server and the MySQL server becomes significantly slower as it has to travel over the WAN.
    Question: Is it possible to configure the Heartbeat/Corosync pair to instruct the MySQL/Galera cluster to switch the master node to (if available) the one in the same server room as the active Heartbeat/Corosync node, and (more challengingly) is it possible to do the same in the other direction, i.e. have the Galera cluster change the active Heartbeat/Corosync server to be in the same room as the active MySQL master server in case of a failover, in order to avoid unnecessary WAN transfers between the application and its DB?
    Theories: Most likely I can get Corosync to run something that would log in to one of the DB nodes to change the MySQL/Galera master, but I don't know if it's really possible to do anything similar in the other direction in Galera. Is it possible to define a "service" in Corosync/Heartbeat so that both the service and its MySQL service would migrate as one if possible? Using the DB server that's behind the WAN should still be a better option than DB downtime. Am I just using too many tools to solve a problem that'd be far simpler with something else?
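
    On the Galera side, whatever script Corosync runs could at least read each node's replication state before deciding where to point traffic, since Galera exposes this through ordinary status variables. A sketch of the checks, not a full resource agent:

        SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';       -- Primary when the node has quorum
        SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';  -- Synced when the node is usable
        SHOW GLOBAL STATUS LIKE 'wsrep_ready';                -- ON when the node accepts queries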

  • Correct MySQL username/password, but getting Access Denied error when run from script

    - by Nick
    I'm currently trying to run the following command from within a shell script:
    /usr/bin/mysql -u username -ppassword -h localhost database
    It works perfectly fine when executed manually, and not from within a script. When I try to execute a script that contains that command, I get the following error:
    ERROR 1045 (28000) at line 3: Access denied for user 'username'@'localhost' (using password: YES)
    I literally copied and pasted the working command into the script. Why the error? As a sidenote: the ultimate intent is to run the script with cron.
    EDIT: Here is a stripped down version of my script that I'm trying to run. You can ignore most of it up until the point where it connects to MySQL around line 19.
    #!/bin/sh
    #Run download script to download product data
    cd /home/dir/Scripts/Linux
    /bin/sh script1.sh
    #Run import script to import product data to MySQL
    cd /home/dir/Mysql
    /bin/sh script2.sh
    #Download inventory stats spreadsheet and rename it
    cd /home/dir
    /usr/bin/wget http://www.url.com/file1.txt
    mv file1.txt sheet1.csv
    #Remove existing export spreadsheet
    rm /tmp/sheet2.csv
    #Run MySQL queries in "here document" format
    /usr/bin/mysql -u username -ppassword -h localhost database << EOF
    --Drop old inventory stats table
    truncate table table_name1;
    --Load new inventory stats into table
    Load data local infile '/home/dir/sheet1.csv' into table table_name1 fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
    --MySQL queries to combine product data and inventory stats here
    --Export combined data in spreadsheet format
    group by p.value into outfile '/tmp/sheet2.csv' fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
    EOF
    EDIT 2: After some more testing, the issue is with the << EOF that is at the end of the command. This is there for the "here document". When removed, the command works fine. The problem is that I need << EOF there so that the MySQL queries will run.

  • Random “Lost connection to MySQL server at 'reading initial communication packet', system error: 0”

    - by user1606545
    Sometimes I get this error from the MySQL server:
    Lost connection to MySQL server at 'reading initial communication packet', system error: 0
    I cannot find the cause, since most of the time it works, but every week for some hours I get this error. I googled, but there seem to be only users who have this error permanently; in my case it only occurs sometimes. I checked hosts.allow and hosts.deny, but the host is allowed and not denied. Also, sometimes I get the error:
    File './database/table.MYD' not found (Errcode: 24)
    It occurs very rarely, but when it does it lasts for some hours, once a week, sometimes on multiple days, and then suddenly the problem disappears again. I have checked the open files limit. It's 2048 and should be absolutely enough. I also tried to increase the number of open files anyway, but with no effect. I thought perhaps the process does not close some tables, but that seems impossible, because after a while everything is OK again and the process opens at most 100 tables at once. I also checked the MySQL runtime environment, and there were 930 opened files. I cannot explain that; after a while it's 129. I am running a MySQL server on a SUSE Linux machine. I connect to the MySQL server from another host with the command line tool "mysql" and with the MySQL C connector. The MySQL server is version 5.0.67.
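
    Since Errcode 24 is the OS "too many open files" error, a hedged check from the MySQL side is to compare what mysqld was actually started with against how many files and tables it currently has open; the limit seen in an interactive shell is not necessarily the one the daemon inherited.

        SHOW GLOBAL VARIABLES LIKE 'open_files_limit';  -- what mysqld was actually granted
        SHOW GLOBAL STATUS LIKE 'Open_files';           -- file descriptors currently in use
        SHOW GLOBAL STATUS LIKE 'Opened_tables';        -- grows quickly if the table cache is too small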

  • MySQL Memory Limit Windows Server 2003

    - by Matt
    I am running MySQL 5.0.51a on Windows Server 2003 Standard Edition on an HP DL580 G4 with 3GB installed. One of my database tables has grown to 5.3 GB with an index file of 2.5 GB, which I believe is causing MySQL to be slow due to having to constantly load and unload the index file when updates are made to the table. The server itself seems to be performing OK, because MySQL is only using about 500MB of memory (there are other apps running on the system, but MySQL uses the most memory). The table is fairly active, with new records getting added all day but no deletes, ever. The MySQL server allows up to 600 connections, but only a small number (10 or 20) would actually be writing to this table. I increased the memory limits in MySQL, but since the max connections is so high I don't think I can give each connection 1GB without risking a problem. Is there some tuning that would let just certain connections get a lot of memory? So I have started to look for alternatives to avert the crisis I know is coming soon. Some of the options I have:
    - Upgrade to Server 2003 Enterprise to install 64GB of memory. Question: would 32-bit MySQL be able to access more than 2GB? Would that be 2GB per thread? That would still be smaller than the index file size, so it might not solve the problem completely, but it would be better than now.
    - Upgrade to Server 200x 64-bit and MySQL 64-bit.
    - Switch to a *nix 64-bit server.
    If anybody has suggestions for things to do in the meantime, opinions on which way to go, or other things that I have overlooked, I would appreciate the help. Thanks
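
    If the big table is MyISAM (a separate 2.5 GB index file suggests it is), the relevant cache is the key buffer, which is shared by all connections rather than allocated per connection, so it can be raised without multiplying by max_connections. A hedged sketch; the 1 GB figure is only an example and must stay within what a 32-bit process can address:

        SHOW VARIABLES LIKE 'key_buffer_size';
        -- example value only; keep it well under the 32-bit process limit
        SET GLOBAL key_buffer_size = 1073741824;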

  • MySQL Cluster problem in Ubuntu

    - by Firman
    I have a problem while installing and configuring MySQL Cluster running on Ubuntu 10.10. This is the configuration for the cluster management node:
    [NDBD DEFAULT]
    NoOfReplicas=2
    DataMemory=10MB
    IndexMemory=25MB
    MaxNoOfTables=256
    MaxNoOfOrderedIndexes=256
    MaxNoOfUniqueHashIndexes=128
    [MYSQLD DEFAULT]
    [NDB_MGMD DEFAULT]
    [TCP DEFAULT]
    [NDB_MGMD]
    Id=1 # the NDB Management Node (this one)
    HostName=192.168.10.101
    [NDBD]
    Id=2 # the first NDB Data Node
    HostName=192.168.10.11
    DataDir= /var/lib/mysql-cluster
    [NDBD]
    Id=3 # the second NDB Data Node
    HostName=192.168.10.12
    DataDir=/var/lib/mysql-cluster
    [MYSQLD]
    [MYSQLD]
    And this is the configuration for both data nodes:
    [mysqld]
    ndbcluster
    ndb-connectstring=192.168.10.101 # the IP of the MANAGMENT (THIRD) SERVER
    [mysql_cluster]
    ndb-connectstring=192.168.10.101 # the IP of the MANAGMENT (THIRD) SERVER
    After starting all nodes and the management node, I use ndb_mgm, type the 'show' command, and something like this appears:
    ndb_mgm> show
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)] 2 node(s)
    id=2 @192.168.10.11 (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0, Master)
    id=3 @192.168.10.12 (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0)
    [ndb_mgmd(MGM)] 1 node(s)
    id=1 @192.168.10.101 (mysql-5.1.39 ndb-7.0.9)
    [mysqld(API)] 1 node(s)
    id=4 (not connected, accepting connect from 192.168.10.101)
    Look at the last two lines: they are not what http://dev.mysql.com/tech-resources/articles/mysql-cluster-for-two-servers.html shows (see point 4). Has anyone ever had this problem?

  • How to optimize this MySQL query

    - by James Simpson
    This query was working fine when the database was small, but now that there are millions of rows in the database, I am realizing I should have looked at optimizing it earlier. It is looking at over 600,000 rows and is "Using where; Using temporary; Using filesort" (which leads to an execution time of 5-10 seconds). It is using an index on the field battle_type.
    SELECT username, SUM( outcome ) AS wins, COUNT( * ) - SUM( outcome ) AS losses
    FROM tblBattleHistory
    WHERE battle_type = '0' && outcome < '2'
    GROUP BY username
    ORDER BY wins DESC , losses ASC , username ASC
    LIMIT 0 , 50
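
    One hedged first step, a sketch rather than a guaranteed fix: a covering composite index that starts with the constant battle_type filter, continues with username so the GROUP BY can walk the index in order, and includes outcome so the sums are answered from the index alone (the final ORDER BY on the aggregates still needs a sort, but only over the grouped rows).

        ALTER TABLE tblBattleHistory
            ADD INDEX idx_type_user_outcome (battle_type, username, outcome);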

  • MySQL query from subquery not working

    - by James Goodwin
    I am trying to return a number based on the count of results from a table, and to avoid having to count the results twice in the IF statement I am using a subquery. However, I get a syntax error when trying to run the query; the subquery, tested by itself, runs fine. Any ideas what is wrong with the query? The syntax looks correct to me.
    SELECT IF(daily_count>8000,0,IF(daily_count>6000,1,2))
    FROM (
      SELECT count(*) as daily_count
      FROM orders201003
      WHERE DATE_FORMAT(date_sub(curdate(), INTERVAL 1 DAY),"%d-%m-%y") = DATE_FORMAT(reqDate,"%d-%m-%y")
    ) q
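
    As a sketch of a workaround rather than a diagnosis: the derived table is not strictly needed here, since the conditional can be applied directly to the aggregate (assuming reqDate is a DATE or DATETIME column; the alias daily_band is only illustrative).

        SELECT IF(COUNT(*) > 8000, 0, IF(COUNT(*) > 6000, 1, 2)) AS daily_band
        FROM orders201003
        WHERE DATE(reqDate) = DATE_SUB(CURDATE(), INTERVAL 1 DAY);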

  • Problem installing MySQL gem on Fedora

    - by Shreyas Satish
    When I try rake db:migrate, I get the following error:
    !!! The bundled mysql.rb driver has been removed from Rails 2.2. Please install the mysql gem and try again: gem install mysql.
    rake aborted!
    no such file to load -- mysql
    And when I try "gem install mysql":
    Building native extensions. This could take a while...
    ERROR: Error installing mysql:
    ERROR: Failed to build gem native extension.
    /usr/bin/ruby extconf.rb
    Can't find header files for ruby.
    Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/mysql-2.8.1 for inspection.
    $ sudo gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config has also been tried, but it gives the same error. I'm on Fedora 10. Help will be much appreciated. Cheers!

  • MySQL date range problem

    - by StealthRT
    Hey all, I am in need of a little help figuring out how to get a range of days for my select query. Here is the code I am trying out:
    select id, idNumber, theDateStart, theDateEnd from clients WHERE idNumber = '010203' AND theDateStart >= '2010-04-09' AND theDateEnd <= '2010-04-09';
    This is what the data in the table looks like:
    TheDateStart = 2010-04-09
    TheDateEnd = 2010-04-11
    When testing the code above, it does not populate anything. If I take out the theDateEnd condition, it populates, but with some other data from the table as well, which it should not do (it should only get one record). I know the problem is with the two dates. I'm not sure how to go about getting a date range for theDateStart and theDateEnd, since if someone tries it, say, on 2010-04-10, it still needs to know it's within range of 2010-04-09 - 2010-04-11. But right now, it does not... Any help would be great! :o) David
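
    A sketch of the usual containment test: a day D falls inside a stored range when the range starts on or before D and ends on or after D, so both comparisons point outward from the stored dates (2010-04-10 is used as the example date here).

        SELECT id, idNumber, theDateStart, theDateEnd
        FROM clients
        WHERE idNumber = '010203'
          AND theDateStart <= '2010-04-10'
          AND theDateEnd   >= '2010-04-10';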

  • Optimize this MySQL query?

    - by HipHop-opatamus
    The following query takes FOREVER to execute (30+ hrs on a MacBook w/ 4 GB RAM) - I'm looking for ways to make it run more efficiently. Any thoughts are appreciated!
    CREATE TABLE fc AS
    SELECT threadid, title, body, date, userlogin
    FROM f
    WHERE pid NOT IN (SELECT pid FROM ft)
    ORDER BY date;
    (Table "f" is ~1 GB / 1,843,000 rows; table "ft" is 168 MB, 216,000 rows.)
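
    One commonly tried rewrite, sketched here with no claim about this particular data set: express the anti-join as a LEFT JOIN and keep only the unmatched rows, which tends to behave better than NOT IN (subquery) on older MySQL versions, provided ft.pid is indexed.

        CREATE TABLE fc AS
        SELECT f.threadid, f.title, f.body, f.`date`, f.userlogin
        FROM f
        LEFT JOIN ft ON ft.pid = f.pid
        WHERE ft.pid IS NULL
        ORDER BY f.`date`;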

  • MySQL query for the latest date

    - by user295189
    I am running this query:
    SELECT sh.*, u.initials AS initials
    FROM database1.table1 AS sh
    JOIN database2.user AS u ON u.userID = sh.userid
    WHERE id = 123456
    AND dts = ( SELECT MAX(dts) from database1.table1 )
    ORDER BY sort_by, category
    In table1 I have records like this:
    dts                  status           category        sort_by
    2010-04-29 12:20:27  Civil Engineers  Occupation      1
    2010-04-28 12:20:27  Civil Engineers  Occupation      1
    2010-04-28 12:20:54  Married          Marital Status  2
    2010-04-28 12:21:15  Smoker           Tobbaco         3
    2010-04-27 12:20:27  Civil Engineers  Occupation      1
    2010-04-27 12:20:54  Married          Marital Status  2
    2010-04-27 12:21:15  Smoker           Tobbaco         3
    2010-04-26 12:20:27  Civil Engineers  Occupation      1
    2010-04-26 12:20:54  Married          Marital Status  2
    2010-04-26 12:21:15  Smoker           Tobbaco         3
    So, looking at my data, I am choosing the latest entry by category and sort_by; however, in some cases, such as the 29th (2010-04-29 12:20:27), there is only one record. In that case I want to show the latest occupation and then the latest of the rest of the categories. But currently it displays only one row. Thanks
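
    If the intent is "latest row per category for this person", a hedged sketch is to correlate the MAX(dts) with the outer row's category instead of taking a single global maximum (assuming id identifies the person on table1):

        SELECT sh.*, u.initials AS initials
        FROM database1.table1 AS sh
        JOIN database2.user AS u ON u.userID = sh.userid
        WHERE sh.id = 123456
          AND sh.dts = (SELECT MAX(t2.dts)
                        FROM database1.table1 AS t2
                        WHERE t2.id = sh.id
                          AND t2.category = sh.category)
        ORDER BY sh.sort_by, sh.category;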

  • Simple Select Statement on MySQL Database Hanging

    - by AlishahNovin
    I have a very simple SQL select statement on a very large table that is non-normalized. (Not my design at all; I'm just trying to optimize while simultaneously trying to convince the owners of a redesign.) Basically, the statement is like this:
    SELECT FirstName, LastName, FullName, State
    FROM Activity
    WHERE (FirstName=@name OR LastName=@name OR FullName=@name) AND State=@state;
    Now, FirstName, LastName, FullName and State are all indexed as B-trees, but without a prefix - the whole column is indexed. The State column is a 2-letter state code. What I'm finding is this:
    - When @name = 'John Smith' and @state = '%', the search is really fast and yields results immediately.
    - When @name = 'John Smith' and @state = 'FL', the search takes 5 minutes (and usually this means the web service times out...).
    - When I remove the FirstName and LastName comparisons and only use FullName and State, both cases above work very quickly.
    - When I replace the FirstName, LastName, FullName, and State searches with LIKE for each, it works fast for @name='John Smith%' and @state='%', but slow for @name='John Smith%' and @state='FL'.
    - When I search against 'John Sm%' and @state='FL', the search finds results immediately.
    - When I search against 'John Smi%' and @state='FL', the search takes 5 minutes.
    Now, just to reiterate - the table is not normalized. John Smith appears many, many times, as do many other users, because there is no reference to some form of users/people table. I'm not sure how many times a single user may appear, but the table itself has 90 million records. Again, not my design... What I'm wondering is - though there are many problems with this design, what is causing this specific problem? My guess is that the index trees (FirstName, LastName, FullName) are just so large that it takes a very long time to traverse them. Anyway, I appreciate anyone's help with this. Like I said, I'm working on convincing them of a redesign, but in the meantime, if someone could help me figure out what the exact problem is, that'd be fantastic.
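
    One pattern worth sketching for the OR case, with no promises for a 90-million-row table: split the three name comparisons into a UNION so each branch can use its own name index together with the State filter, instead of asking one index to cover an OR (@name and @state are the same placeholders used above; UNION also removes duplicates, matching the OR semantics).

        SELECT FirstName, LastName, FullName, State FROM Activity WHERE FirstName = @name AND State = @state
        UNION
        SELECT FirstName, LastName, FullName, State FROM Activity WHERE LastName = @name AND State = @state
        UNION
        SELECT FirstName, LastName, FullName, State FROM Activity WHERE FullName = @name AND State = @state;

    Composite indexes such as (FullName, State) would serve each branch even better than the single-column indexes described in the post.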

  • Two n x m relationships with the same table in MySQL

    - by Christian
    I want to create a database in which there's an n x m relationship between the table drug and the table article, and an n x m relationship between the table target and the table article. I get the error:
    Cannot delete or update a parent row: a foreign key constraint fails
    What do I have to change in my code?
    DROP TABLE IF EXISTS `textmine`.`article`;
    CREATE TABLE `textmine`.`article` (
      `id` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Pubmed ID',
      `abstract` blob NOT NULL,
      `authors` blob NOT NULL,
      `journal` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    DROP TABLE IF EXISTS `textmine`.`drugs`;
    CREATE TABLE `textmine`.`drugs` (
      `id` int(10) unsigned NOT NULL COMMENT 'This ID is taken from the biosemantics dictionary',
      `primaryName` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    DROP TABLE IF EXISTS `textmine`.`targets`;
    CREATE TABLE `textmine`.`targets` (
      `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
      `primaryName` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    DROP TABLE IF EXISTS `textmine`.`containstarget`;
    CREATE TABLE `textmine`.`containstarget` (
      `targetid` int(10) unsigned NOT NULL,
      `articleid` int(10) unsigned NOT NULL,
      KEY `target` (`targetid`),
      KEY `article` (`articleid`),
      CONSTRAINT `article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`),
      CONSTRAINT `target` FOREIGN KEY (`targetid`) REFERENCES `targets` (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    DROP TABLE IF EXISTS `textmine`.`contiansdrug`;
    CREATE TABLE `textmine`.`contiansdrug` (
      `drugid` int(10) unsigned NOT NULL,
      `articleid` int(10) unsigned NOT NULL,
      KEY `drug` (`drugid`),
      KEY `article` (`articleid`),
      CONSTRAINT `article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`),
      CONSTRAINT `drug` FOREIGN KEY (`drugid`) REFERENCES `drugs` (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
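
    Two things stand out, offered as a hedged sketch rather than a verified fix: InnoDB foreign key constraint names must be unique across the whole database, so reusing CONSTRAINT `article` in two tables will fail, and the DROP TABLE statements trip the parent-row error because `article` is dropped while child tables still reference it. A sketch of one junction table with database-unique constraint names, wrapped so drop order stops mattering (the fk_* names are only examples):

        SET FOREIGN_KEY_CHECKS = 0;
        DROP TABLE IF EXISTS `textmine`.`containstarget`;
        CREATE TABLE `textmine`.`containstarget` (
          `targetid` int(10) unsigned NOT NULL,
          `articleid` int(10) unsigned NOT NULL,
          KEY `target` (`targetid`),
          KEY `article` (`articleid`),
          CONSTRAINT `fk_containstarget_article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`),
          CONSTRAINT `fk_containstarget_target` FOREIGN KEY (`targetid`) REFERENCES `targets` (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
        SET FOREIGN_KEY_CHECKS = 1;

    The other junction table would need its own distinct constraint names in the same way.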
