Search Results

Search found 20838 results on 834 pages for 'mysql num rows'.


  • Mysql partitioning: Partitions outside of date range are included

    - by Sturlum
    Hi, I have just tried to configure partitions based on date, but it seems that MySQL still includes a partition with no relevant data. It will use the relevant partition but also include the oldest for some reason. Am I doing it wrong? The version is 5.1.44 (MyISAM).

    I first added a few partitions based on "day", which is of type "date":

        ALTER TABLE ptest PARTITION BY RANGE(TO_DAYS(day)) (
            PARTITION p1 VALUES LESS THAN (TO_DAYS('2009-08-01')),
            PARTITION p2 VALUES LESS THAN (TO_DAYS('2009-11-01')),
            PARTITION p3 VALUES LESS THAN (TO_DAYS('2010-02-01')),
            PARTITION p4 VALUES LESS THAN (TO_DAYS('2010-05-01'))
        );

    After a query, I find that it uses the "old" partition, which should not contain any relevant data:

        mysql> explain partitions select * from ptest where day between '2010-03-11' and '2010-03-12';
        +----+-------------+-------+------------+-------+---------------+-----+---------+------+------+-------------+
        | id | select_type | table | partitions | type  | possible_keys | key | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------------+-------+---------------+-----+---------+------+------+-------------+
        |  1 | SIMPLE      | ptest | p1,p4      | range | day           | day | 3       | NULL |   79 | Using where |
        +----+-------------+-------+------------+-------+---------------+-----+---------+------+------+-------------+

    When I select a single day, it works:

        mysql> explain partitions select * from ptest where day = '2010-03-11';
        +----+-------------+-------+------------+------+---------------+-----+---------+-------+------+-------+
        | id | select_type | table | partitions | type | possible_keys | key | key_len | ref   | rows | Extra |
        +----+-------------+-------+------------+------+---------------+-----+---------+-------+------+-------+
        |  1 | SIMPLE      | ptest | p4         | ref  | day           | day | 3       | const |   39 |       |
        +----+-------------+-------+------------+------+---------------+-----+---------+-------+------+-------+
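
    For what it's worth, this looks like MySQL's NULL handling for RANGE partitions rather than a misconfiguration: rows whose partitioning expression evaluates to NULL are stored in the lowest partition, so the optimizer conservatively keeps that first partition in range scans. A common workaround, sketched here under that assumption, is an empty catch-all partition at the bottom of the range:

        -- hypothetical re-partitioning; p_null stays empty and absorbs NULL days,
        -- so range scans that drag in the lowest partition touch no real data
        ALTER TABLE ptest PARTITION BY RANGE(TO_DAYS(day)) (
            PARTITION p_null VALUES LESS THAN (0),
            PARTITION p1 VALUES LESS THAN (TO_DAYS('2009-08-01')),
            PARTITION p2 VALUES LESS THAN (TO_DAYS('2009-11-01')),
            PARTITION p3 VALUES LESS THAN (TO_DAYS('2010-02-01')),
            PARTITION p4 VALUES LESS THAN (TO_DAYS('2010-05-01'))
        );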


  • MySQL "identify storage engine statement"

    - by sammysmall
    This IS NOT a homework question! While building my current student database project I realized that I may want to identify comprehensive information about a database design in the future. More so: if I am fortunate enough to get a job in this field and were handed a database project, how could I break down certain elements for identification?

    In all of my previous designs I have been using MySQL Community Server (GPL) 5.1.42. Based on most of my textbook instruction and "MySQL 5.0 Reference Manual :: 13 Storage Engines :: 13.1 The MyISAM Storage Engine", I thought (duh) that I was using MyISAM; using "SHOW ENGINES" at the console, I determined that this was in fact incorrect for this version. No problem: I figured out why they have "versions", the need to pay attention to what version is being used, and the need for a means to determine what I am about to mess up "if" I do not pay attention to detail.

    Q1. Specifically, what statement will identify the version used by someone else's initial database creation? (Since I created my own databases, I know what version I used.)

    Q2. Specifically, what statement will identify the storage engine that the developer used when creating the database? (I specified a particular database in my collection, then tried SHOW ENGINE, which did not work. Then I tried to just get the metadata from one table in that database:

        mysql> SELECT duck_cust, table_type, engine
            -> FROM INFORMATION_SCHEMA.tables
            -> WHERE table_schema = 'tp'
            -> ORDER BY table_type ASC, table_name DESC;

    As this was not really what I wanted (and did not work), I am looking for some direction from the pros.)

    Q3. (If you really have the inclination to continue helping:) If I were to access a database from an earlier/later "version", are there backward/forward compatibility issues for maintaining/updating data between versions?

    Please and thank you in advance for your time and efforts! sammysmall
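
    For reference, statements along these lines usually answer Q1 and Q2; note that MySQL does not record which server version originally created a database, so VERSION() only reports the server you are connected to now:

        SELECT VERSION();  -- version of the running server

        -- per-table engine metadata; 'tp' is the schema from the question
        SELECT TABLE_NAME, ENGINE
          FROM INFORMATION_SCHEMA.TABLES
         WHERE TABLE_SCHEMA = 'tp';

        SHOW TABLE STATUS FROM tp;  -- same information, console-friendly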


  • authorise user from mysql database

    - by Jacksta
    I suck at PHP and can't find the error here. The script gets two variables, "username" and "password", from an HTML form and then checks them against a MySQL database. When I run this I get the following error: "Query was empty".

        <?
        if ((!$_POST[username]) || (!$_POST[password])) {
            header("Location: show_login.html");
            exit;
        }
        $db_name = "testDB";
        $table_name = "auth_users";
        $connection = @mysql_connect("localhost", "admin", "pass") or die(mysql_error());
        $db = @mysql_select_db($db_name, $connection) or die(mysql_error());
        $slq = "SELECT * FROM $table_name WHERE username ='$_POST[username]' AND password = password('$_POST[password]')";
        $result = @mysql_query($sql, $connection) or die(mysql_error());
        $num = mysql_num_rows($result);
        if ($num != 0) {
            $msg = "<p>Congratulations, you're authorised!</p>";
        } else {
            header("Location: show_login.html");
            exit;
        }
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Secret Area</title>
        </head>
        <body>
        <? echo "$msg"; ?>
        </body>
        </html>
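
    For what it's worth, the "Query was empty" message is consistent with a one-letter typo: the query string is assigned to $slq, while mysql_query() receives the never-set $sql. A minimal correction would be:

        // note the variable name: $sql, matching the mysql_query() call
        $sql = "SELECT * FROM $table_name
                WHERE username = '$_POST[username]'
                  AND password = PASSWORD('$_POST[password]')";
        $result = @mysql_query($sql, $connection) or die(mysql_error());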


  • Getting a UIImage from MySQL using PHP and JSON

    - by Daniel
    I'm developing a little news reader that retrieves the info from a website by doing a POST request to a URL. The response is a JSON object with the unread news. E.g. the latest news item in the App has a timestamp of "2013-03-01". When the user refreshes the table, it POSTs "domain.com/api/api.php?newer-than=2013-03-01". The api.php script goes to the MySQL database, fetches all the news posted after 2013-03-01 and prints them json_encoded:

        // do something to get the data in an array
        echo $array_of_fetched_data;

    For example, the response would be:

        [{"title": "new app is coming to the market", "text": "lorem ipsum dolor sit amet...", image: XXX}]

    The App then gets the response and parses it, obtaining an NSDictionary, and adds it to a Core Data db:

        NSDictionary* obtainedNews = [NSJSONSerialization JSONObjectWithData:responseData options:kNilOptions error:&error];

    My question is: how can I add an image to the MySQL database, store it, pass it using JSON through a POST HTTP request and then interpret it as a UIImage? It's clear that to store a UIImage in Core Data, it must be transformed into/from NSData. How can I pass the NSData back and forth to a MySQL db using PHP and JSON? How should I upload the image to the db? (Serialized, as a BLOB, etc.)
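
    One common pattern, offered as a sketch rather than the only option: keep the image in a BLOB column and base64-encode it for the JSON leg, then decode it on the device into NSData and build the image with UIImage's imageWithData:. The table and column names below are assumptions:

        // api.php side: fetch the BLOB and ship it as base64 text inside the JSON
        $row = mysql_fetch_assoc(mysql_query("SELECT title, text, image_blob FROM news"));
        echo json_encode(array(
            'title' => $row['title'],
            'text'  => $row['text'],
            'image' => base64_encode($row['image_blob']),
        ));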


  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query:

        SELECT `posts`.* FROM `posts`
        INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id
        WHERE (((`posts_tags`.tag_id = 1)))
        ORDER BY posts.created_at DESC;

    The tables hold 38k and 31k rows respectively, and MySQL uses "filesort", so it gets pretty slow. I tried to use different indexes, no luck.

        CREATE TABLE `posts` (
          `id` int(11) NOT NULL auto_increment,
          `created_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_on_created_at` (`created_at`),
          KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`)
        ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        CREATE TABLE `posts_tags` (
          `id` int(11) NOT NULL auto_increment,
          `post_id` int(11) default NULL,
          `tag_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8

        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
        | id | select_type | table      | type   | possible_keys            | key                      | key_len | ref                 | rows  | Extra                                                     |
        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
        |  1 | SIMPLE      | posts_tags | index  | index_post_id_and_tag_id | index_post_id_and_tag_id | 10      | NULL                | 24159 | Using where; Using index; Using temporary; Using filesort |
        |  1 | SIMPLE      | posts      | eq_ref | PRIMARY                  | PRIMARY                  | 4       | .posts_tags.post_id | 1     |                                                           |
        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
        2 rows in set (0.00 sec)

    What kind of index do I need to define to avoid MySQL using filesort? Is it possible when the order field is not in the WHERE clause?
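
    A sketch of the usual direction for this query shape (treat the column choices as assumptions): let the join-table index lead with tag_id so the WHERE prunes first, and, if denormalizing is acceptable, mirror the post's created_at into posts_tags so the ORDER BY can walk an index instead of filesorting:

        ALTER TABLE posts_tags ADD KEY idx_tag_post (tag_id, post_id);
        -- optional, only if posts_tags.created_at mirrors posts.created_at
        -- and the query is rewritten to ORDER BY posts_tags.created_at:
        ALTER TABLE posts_tags ADD KEY idx_tag_created (tag_id, created_at);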


  • PHP and MySQL echoing out a Table

    - by user1631702
    Okay, so I've done this before, and it worked. I am trying to echo out specific rows of my database in a table. Here is my code:

        <?php
        $connect = mysql_connect("localhost", "xxx", "xxx") or die ("Hey loser, check your server connection.");
        mysql_select_db("xxx");
        $quey1 = "select * from `Ad Requests`";
        $result = mysql_query($quey1) or die(mysql_error());
        ?>
        <table border=1 style="background-color:#F0F8FF;">
        <caption><EM>Student Record</EM></caption>
        <tr>
        <th>Student ID</th>
        <th>Student Name</th>
        <th>Class</th>
        </tr>
        <?php
        while ($row = mysql_fetch_array($result)) {
            echo "</td><td>";
            echo $row['id'];
            echo "</td><td>";
            echo $row['twitter'];
            echo "</td><td>";
            echo $row['why'];
            echo "</td></tr>";
        }
        echo "</table>";
        ?>

    It gives me no errors, but it just shows a blank table with none of these rows. My question: why won't this show any rows in the table? What am I doing wrong?
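
    Two things worth checking here, sketched below: the loop never opens a row, so every record begins with a stray closing tag, and if the query simply matched nothing the table will render empty; mysql_num_rows() makes that case visible:

        echo "Rows found: " . mysql_num_rows($result);  // 0 here explains a blank table
        while ($row = mysql_fetch_array($result)) {
            echo "<tr><td>" . $row['id'] . "</td><td>"
               . $row['twitter'] . "</td><td>"
               . $row['why'] . "</td></tr>";
        }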


  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    Am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it hits a huge lag that lasts a couple of seconds, then everything goes back to normal until the next lag. The time period varies, but it looks like it happens periodically. Am using InnoDB. It is like hiccups in MySQL. What could be creating this sort of periodic problem? I do not have any cron jobs or processes running every time the X period happens. The X period could be anywhere between 30 minutes and 2 hours; for example, it could happen every 30 minutes for the next 12 hours, or every 2 hours for the next 8 hours.

        key_buffer_size = 256M
        max_allowed_packet = 1M
        table_cache = 1024
        table_open_cache = 1024
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 32M
        thread_cache_size = 128
        query_cache_size = 128M
        log-slow-queries = slow.log
        long_query_time = 5
        log-queries-not-using-indexes
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 4
        max_connections = 512
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /usr/local/mysql/data
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 1G
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 64M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 0
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 64M
        sort_buffer_size = 64M
        read_buffer = 2M
        write_buffer = 2M

    There are about 200+ tables divided across 3 databases. The most written-to one is InnoDB; the others are mostly read. Several of the tables in the InnoDB database have more than 2 million records. The other databases top out at about 400 thousand records and do not change as often. The PC is a Core 2 Duo 8400 with 4GB RAM, 32-bit Ubuntu.
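
    A pattern like this is often a checkpoint stall: with innodb_log_file_size left commented out (so at its small default), InnoDB must periodically block writes while it flushes dirty pages to recycle the redo log. A sketch of settings to experiment with; the values are guesses, and on 5.5 resizing the log files requires a clean shutdown plus removing the old ib_logfile* before restarting:

        innodb_log_file_size = 256M          # larger redo log, gentler checkpoints
        innodb_log_buffer_size = 8M
        innodb_flush_log_at_trx_commit = 2   # relaxes durability; weigh the trade-off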


  • php - create columns from mysql list

    - by user271619
    I have a long list generated from a simple MySQL select query. Currently (shown in the code below) I am simply creating a list of table rows with each record. So, nothing complicated. However, I want to divide it into more than one column, depending on the number of returned results. I've been wrapping my brain around how to count this in the PHP, and I'm not getting the results I need.

        <table>
        <? $query = mysql_query("SELECT * FROM `sometable`");
        while($rows = mysql_fetch_array($query)){ ?>
        <tr>
        <td><?php echo $rows['someRecord']; ?></td>
        </tr>
        <? } ?>
        </table>

    Obviously there's one column generated. So if the records returned reach 10, then I want to create a new column. In other words, if 12 results are returned, I have 2 columns. If I have 22 results, I'll have 3 columns, and so on.
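
    One way to sketch it, assuming the goal is side-by-side columns of at most 10 entries each: buffer the records first and let array_chunk() do the counting:

        <?php
        $query = mysql_query("SELECT * FROM `sometable`");
        $items = array();
        while ($rows = mysql_fetch_array($query)) {
            $items[] = $rows['someRecord'];
        }
        echo "<table><tr>";
        foreach (array_chunk($items, 10) as $column) {  // 10 records per column
            echo "<td>" . implode("<br/>", $column) . "</td>";
        }
        echo "</tr></table>";
        ?>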


  • error in arabic script in mysql

    - by fusion
    I inserted data into a MySQL database which includes Arabic script. While the output displays the Arabic correctly, the data in MySQL looks like garbage, something like this:

        '&#1589;&#1614;&#1608;&#1605;&#1615; &#1579;&#1614;&#1604;&#1575;&#1579;&#1614;&#1577;&#1616; &#1571;&#1610;&#1617;&#1575;&#1605;&#1613; &#1605;&#1616;&#1606; &#1603;&#1615;&#1604;&#1617;&#1616; &#1588;&#1614;&#1607;&#1585;&#1613; &#1600; &#1571;&#1585;&#1576;&#1614;&#1593;&#1575;&#1569;&#1615; &#1576;&#1614;&#1610;&#1606;&#1614; &#1582;&#1614;

    Should I be worried about this? If yes, how do I make it appear in proper Arabic script in MySQL? Thanks.
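
    Worth noting: those &#...; sequences are HTML numeric character references, which suggests the text was entity-encoded (for example by htmlentities()) before the INSERT rather than mangled by the connection; that is also why browsers render it correctly. If the goal is to store the raw script instead, the usual setup is a utf8 column plus a utf8 connection, sketched here with an assumed table name:

        SET NAMES utf8;  -- run on each connection before INSERT/SELECT
        ALTER TABLE verses CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;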


  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput to 100 times more than normal. For example, we are going to be featured on a television show, and I expect that in the hour after the show I'll get more than 100 times more traffic than normal. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places:

        - RAM buffers
        - commit log
        - binary log
        - actual tables
        - all of the above places on my DB slave

    This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (file systems are network attached). Plus the drives are just slow. The data is not high value, and I'd rather take a small chance of a few minutes of data loss than a high probability of an outage when the crowd arrives.

    During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to just keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that?

    If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users I am creating accounts for all of them, so that's a write-heavy workload.
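
    MySQL has no true write-back mode for table data, but the knobs below move in that spirit by cutting the number of forced syncs; a sketch, with values as starting points rather than recommendations:

        innodb_flush_log_at_trx_commit = 0   # sync the redo log ~once a second instead of per commit
        sync_binlog = 0                      # let the OS decide when the binlog hits disk
        innodb_flush_method = O_DIRECT       # avoid double-buffering data pages through the OS cache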


  • formatting mysql data for output into a table

    - by bsandrabr
    Following on from a question earlier today, this answer was given: read the data into an array and separate it to print the vehicle type and then some data for each vehicle.

        <?php
        $sql = "SELECT * FROM apparatus ORDER BY vehicleType";
        $getSQL = mysql_query($sql);
        // transform the result set:
        $data = array();
        while ($row = mysql_fetch_assoc($getSQL)) {
            $data[$row['vehicleType']][] = $row;
        }
        ?>
        <?php foreach ($data as $type => $rows): ?>
          <h2><?php echo $type?></h2>
          <ul>
          <?php foreach ($rows as $vehicleData):?>
            <li><?php echo $vehicleData['name'];?></li>
          <?php endforeach ?>
          </ul>
        <?php endforeach ?>

    This is almost perfect for what I want to do, but I need to print out two columns from the database, i.e. ford and mondeo, before going into the second foreach loop. I've tried print $rows['model'] and all the other combinations I can think of, but that doesn't work. Any help much appreciated.
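
    Since $rows at that point is a list of row arrays rather than a single row, the group's shared columns live in its first element; a sketch, assuming the columns are actually named make and model:

        <?php foreach ($data as $type => $rows): ?>
          <h2><?php echo $type ?></h2>
          <!-- the group's shared columns, read from its first row -->
          <p><?php echo $rows[0]['make'], ' ', $rows[0]['model']; ?></p>
          <ul>
          <?php foreach ($rows as $vehicleData): ?>
            <li><?php echo $vehicleData['name']; ?></li>
          <?php endforeach ?>
          </ul>
        <?php endforeach ?>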


  • Error when feeding a MySQL db with python-parsed data

    - by Barnabe
    I use this bit of code to feed some data I have parsed from a web page into a MySQL database:

        c = db.cursor()
        c.executemany(
            """INSERT INTO data (SID, Time, Value1, Level1, Value2, Level2, Value3, Level3,
                                 Value4, Level4, Value5, Level5, ObsDate)
               VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""",
            clean_data)

    The parsed data looks like this (there are several hundred such lines):

        clean_data = [(161,00:00:00,8.19,1,4.46,4,7.87,4,6.54,null,4.45,6,2010-04-12),(162,00:00:00,7.55,1,9.52,1,1.90,1,4.76,null,0.14,1,2010-04-12),(164,00:00:00,8.01,1,8.09,1,0,null,8.49,null,0.20,2,2010-04-12),(166,00:00:00,8.30,1,4.77,4,10.99,5,9.11,null,0.36,2,2010-04-12)]

    If I hard-code the data as above, MySQL accepts my request (except for some quibbles about formatting), but if the variable clean_data is instead defined as the result of the parsing code, like this:

        cleaner = [(""" $!!'""", ')]'),(' $!!', ') etc etc]

        def processThis(str, lst):
            for find, replace in lst:
                str = str.replace(find, replace)
            return str

        clean_data = processThis(data, cleaner)

    then I get the dreaded "TypeError: not enough arguments for format string". After playing with formatting options for a few hours (I am very new to this) I am confused: what is the difference between the hard-coded data and the result of the processThis function as far as MySQL is concerned? Any idea greatly appreciated...
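
    The difference is the type: processThis() returns one big string, while executemany() expects a sequence of parameter tuples, so the driver ends up pairing thirteen %s placeholders with individual characters of a string. A sketch of the reshaping step; how the cleaned text splits into fields is an assumption about its format, and insert_sql stands for the INSERT statement above:

        rows = []
        for line in processThis(data, cleaner).splitlines():
            fields = line.split(',')
            if len(fields) == 13:        # keep only lines that cleaned up into 13 fields
                rows.append(tuple(fields))
        c.executemany(insert_sql, rows)  # now a list of tuples, as the driver expects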


  • Run multiple MySQL queries based on a series of ifs

    - by OldWest
    I am just getting started on this complex query I need to write, and I was hoping for any suggestions or feedback regarding table structure and the actual query itself. I've already created my tables and populated test data, and now I'm just trying to sort out how and what is possible within MySQL. Here is an outline of the problem.

    End result: a listing of rates based on specific queried criteria (see below):

        Age: [ 27 ]
        Spouse Age: [ 25 ]
        Num of Children: [ 3 ]
        Zip Code: [ 97128 ]

    The problem I am running into is that each company that provides rates has a unique way of dealing with the rate, and I am looking for the best approach for multiple queries based on the company (one query with results for each company, more or less all combined into one result set). Here are some facts:

        - Each company deals with zip code ranges, which assist in the query result.
        - Each company has a different method of calculating the rate based on the applicant, spouse, and number of children. For example:
          a) Company A determines the rate by Applicant + Spouse + Child(ren) = rate (age is pertinent to the applicant within a range).
          b) Company B determines the rate by the total number of applicants: 1, 2, 3, 4, 5, 6+ = rate (and age is ignored).

    First off, what would I call this type of query? A multiple nested query? And should I intertwine PHP within it to determine the if()s? I apologize if this thread lacks sufficient data, so please tell me anything you would like to see.
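
    For "one result set across differently-shaped per-company rules", UNION ALL is the usual name for the pattern: one SELECT per company, each applying that company's own calculation. Everything below is invented for illustration (table and column names included), purely to show the shape:

        SELECT 'Company A' AS company, r.rate
          FROM rates_a r
         WHERE 27 BETWEEN r.age_min AND r.age_max
           AND 97128 BETWEEN r.zip_low AND r.zip_high
        UNION ALL
        SELECT 'Company B' AS company, r.rate
          FROM rates_b r
         WHERE r.num_applicants = 5              -- applicant + spouse + 3 children
           AND 97128 BETWEEN r.zip_low AND r.zip_high;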


  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from a MySQL database). However, this randomization needs to persist on a per-session basis, so that other people that visit the website also receive a random, paginate-able list of records.

    Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist it in some other way (MySQL, file, etc.). Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page); however, since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no/minimal storage, but the downsides are that it is likely pretty complex to implement and probably CPU intensive.

    My feeling is I should create an intersection table, something like:

        random_sorts( sort_id, created_at, user_id NULL if guest )
        random_sort_items( sort_id, item_id, position )

    And then simply store the 'sort_id' in the session. Then, I can paginate the random_sorts WHERE sort_id = n ORDER BY position LIMIT... as usual. Of course, I'd have to put some sort of a reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects being removed, although deletion is very rare). And, as load increases, the size/performance of this table (even properly indexed) drops.

    It seems like this ought to be a solved problem, but I can't find any rails plugins that do this... Any ideas? Thanks!!
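
    One alternative that avoids the intersection table entirely: MySQL's RAND() takes an optional seed, and a seeded shuffle is repeatable, so storing a single integer seed in the session gives a stable, paginate-able order. A sketch (table name invented; the caveat is that every page load re-sorts the whole set, which may or may not be acceptable at tens of thousands of rows):

        -- seed comes from the visitor's session; same seed, same order
        SELECT id FROM items ORDER BY RAND(20100427) LIMIT 30 OFFSET 90;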


  • XAMPP, MAMP, MySQL, PDO - A deadly combination?

    - by Rich
    Hey folks, previously I've worked on a Symfony project (MySQL PDO based) with XAMPP, with no problems. Since then, I've moved to MAMP (I prefer it), but have hit a snag with my database connection. I've created a test.php like this:

        <?php
        try {
            $dbh = new PDO('mysql:host=localhost;dbname=xxx;port=8889', 'xxx', 'xxx');
            foreach ($dbh->query('SELECT * from FOO') as $row) {
                print_r($row);
            }
            $dbh = null;
        } catch (PDOException $e) {
            print "Error!: " . $e->getMessage() . "<br/>";
            die();
        }
        ?>

    Obviously the *xxx*s are real db connection details. When served by MAMP this seems to work fine. From terminal, however, I keep getting the following error when running the file:

        Error!: SQLSTATE[28000] [1045] Access denied for user 'xxx'@'localhost' (using password: YES)

    Not sure if the terminal is aiming at a different MySQL socket or something along those lines; I've tried pointing it to the MAMP socket with a local php.ini file. Any help would be greatly appreciated.
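
    With host=localhost the MySQL driver typically prefers a Unix socket and ignores the port, and CLI PHP reads a different php.ini than MAMP's Apache, so the terminal can easily end up knocking on the wrong socket. Two sketches that force the issue; the socket path is MAMP's usual default, but verify it on your install:

        // force TCP by using an IP address instead of 'localhost'
        $dbh = new PDO('mysql:host=127.0.0.1;port=8889;dbname=xxx', 'xxx', 'xxx');

        // or name MAMP's socket explicitly
        $dbh = new PDO('mysql:unix_socket=/Applications/MAMP/tmp/mysql/mysql.sock;dbname=xxx', 'xxx', 'xxx');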


  • trying to backup mysql database using php

    - by user225269
    I got this code from this site: http://www.php-mysql-tutorial.com/wikis/mysql-tutorials/using-php-to-backup-mysql-databases.aspx. But I'm just a beginner, so I don't know what config.php and opendb.php are supposed to mean. Do I have to create those two files in order for this code to work? If yes, then how do I create them? How to create them isn't included on the site.

        <?php
        include 'config.php';
        include 'opendb.php';

        $tableName = 'mypet';
        $backupFile = 'backup/mypet.sql';
        $query = "SELECT * INTO OUTFILE '$backupFile' FROM $tableName";
        $result = mysql_query($query);

        include 'closedb.php';
        ?>

    Can I just include these lines at the top of the code so that I will not be putting in the include 'opendb.php' anymore?

        $con = mysql_connect("localhost","root","");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("Hospital", $con);
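
    Yes: in tutorials of that kind, config.php and opendb.php are just the connection settings and the connect/select calls split into reusable files, and the lines quoted above do the same job. A sketch of what they would plausibly contain (reconstructed as an assumption, not the tutorial's actual files):

        <?php
        // config.php
        $host = 'localhost'; $user = 'root'; $pass = ''; $dbname = 'Hospital';

        // opendb.php
        $conn = mysql_connect($host, $user, $pass) or die('Could not connect: ' . mysql_error());
        mysql_select_db($dbname, $conn);

        // closedb.php
        mysql_close($conn);
        ?>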


  • How to delete in MySQL

    - by Ian Moss
    I want to delete an element in MySQL. The problem is that my connection does not open successfully: I get an "unable to connect" error, even though the same connection string works elsewhere in the current project. When my other code opens the connection, it works fine; it is only one small function, which tries to delete a row in MySQL, that fails. I am confused about what goes wrong, because:

        - the same connection string works elsewhere in the project;
        - only this one function gets the "unable to connect" error;
        - the error appears exactly when my code tries to delete the row in MySQL.

    I also used SQLyog to open the connection, and it works fine there; there is no problem when I run the command in SQLyog. Conclusion: why would the connection fail to open if it works elsewhere in the project and also in SQLyog? What reason could there be for "unable to connect"? Because the connection cannot open, the command of course never runs. Any suggestions, thoughts or tricks to solve this issue would be appreciated. Thanks.


  • Print table data mysql php

    - by Marcelo
    Hi people, I'm having a problem trying to print some data from a table. I'm new at this PHP/MySQL stuff, but I think my code is right. Here it is:

        <html>
        <body>
        <h1>Lista de usuários</h1>
        <?php
        $host="localhost"; // Host name
        $username="root"; // Mysql username
        $password=""; // Mysql password
        $db_name="sabs"; // Database name
        $tbl_name="doador"; // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        mysql_select_db("$db_name") or die("cannot select DB");

        $sql = "SELECT * FROM $tbl_name";
        $result = mysql_query($sql);

        while($rows = mysql_fetch_array($result)){
            echo $row['id'] . " " . $row['nome'] . " " . $row['sobrenome'] . " " . $row['email'] . " " .
                 $row['login'] . " " . $row['senha'] . " " . $row['idade'] . " " . $row['peso'] . " " .
                 $row['fuma'] . " " . $row['sexo'] . " " . $row['doencas'];
            echo "<BR/>";
        }
        mysql_close();
        ?>
        </body>
        </html>

    All the columns in the echo command exist in my table in the database. Don't get why it's not printing those values. Thanks for the attention.
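
    A likely culprit, for what it's worth: the loop assigns each row to $rows, while every echo reads the never-set $row. Keeping the two names in sync should be enough:

        while ($row = mysql_fetch_array($result)) {  // $row, matching the echoes
            echo $row['id'] . " " . $row['nome'] . " " . $row['sobrenome'];
            echo "<BR/>";
        }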


  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result set:

        mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
            -> JOIN slip s ON s.cust_id = c.cust_id
            -> JOIN line l ON l.slip_id = s.slip_id
            -> JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah'
            -> GROUP BY c.cust_name
            -> HAVING SUM(l.line_subtotal) > 49999
            -> ORDER BY c.cust_name;
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        | id | select_type | table | type   | possible_keys                   | key           | key_len | ref                  | rows | Extra                                        |
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        |  1 | SIMPLE      | v     | ref    | PRIMARY,idx_vend_name           | idx_vend_name | 12      | const                |    1 | Using where; Using temporary; Using filesort |
        |  1 | SIMPLE      | l     | ref    | idx_vend_id                     | idx_vend_id   | 4       | csv_import.v.vend_id |  446 |                                              |
        |  1 | SIMPLE      | s     | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY       | 4       | csv_import.l.slip_id |    1 |                                              |
        |  1 | SIMPLE      | c     | eq_ref | PRIMARY,cIndex                  | PRIMARY       | 4       | csv_import.s.cust_id |    1 |                                              |
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        4 rows in set (0.04 sec)

    I'm a bit baffled as to why the query referenced by this EXPLAIN statement still takes about a minute to execute. Isn't it true that this query only has to search through 449 rows? Anyone have any idea as to what could be slowing it down so much?
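
    A note on reading that plan: the rows column is per lookup into each table, so the joins multiply rather than add (here roughly 1 x 446 x 1 x 1), and the estimates themselves can be far off. If the minute is being spent on cold random I/O fetching line rows, one hedge is a covering index on line so the join never has to touch the data rows at all (column order is a guess):

        ALTER TABLE line ADD KEY idx_vend_slip_subtotal (vend_id, slip_id, line_subtotal);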


  • UIDs for data objects in MySQL

    - by Callash
    Hi there, I am using C++ and MySQL. I have data objects I want to persist to the database. They need to have a unique ID for identification purposes. The question is: how to get this unique ID? Here is what I came up with:

        1) Use the auto_increment feature of MySQL. But how to get the ID then? I am aware that MySQL offers this "SELECT LAST_INSERT_ID()" feature, but that would be a race condition, as two objects could be inserted quite fast after each other. Also, there is nothing else that makes the objects discernable: two objects could be created pretty much at the same time with exactly the same data.

        2) Generate the UID on the C++ side. No dice, either. There are multiple programs that will write to and read from the database, which do not know of each other.

        3) Insert with MAX(uid)+1 as the uid value. But then, I basically have the same problem as in 1), because we still have the race condition.

    Now I am stumped. I am assuming that this problem must be something other people ran into as well, but so far, I did not find any answers. Any ideas?
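
    One detail that may dissolve the objection to option 1: the MySQL manual documents LAST_INSERT_ID() as maintained per connection, so another client inserting at the same moment cannot disturb the value your own connection sees. As long as each program uses its own connection, the pattern is race-free:

        -- 'objects' is a placeholder table name
        INSERT INTO objects (payload) VALUES ('...');
        SELECT LAST_INSERT_ID();  -- per-connection, unaffected by concurrent inserts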


  • MYSQL variables - SET @var

    - by Lizard
    I am attempting to create a MySQL snippet that will analyse a table and remove duplicate entries (duplicates are based on two fields, not the entire record). I have the following code that works when I hard-code the variables into the queries, but when I take them out and use variables instead I get MySQL errors. Below is the script:

        SET @tblname = 'mytable';
        SET @fieldname = 'myfield';
        SET @concat1 = 'checkfield1';
        SET @concat2 = 'checkfield2';

        ALTER TABLE @tblname ADD `tmpcheck` VARCHAR( 255 ) NOT NULL;

        UPDATE @tblname SET `tmpcheck` = CONCAT(@concat1,'-',@concat2);

        CREATE TEMPORARY TABLE `tmp_table` (
          `tmpfield` VARCHAR( 100 ) NOT NULL
        ) ENGINE = MYISAM ;

        INSERT INTO `tmp_table` (`tmpfield`)
        SELECT @fieldname FROM @tblname
        GROUP BY `tmpcheck` HAVING ( COUNT(`tmpcheck`) > 1 );

        DELETE FROM @tblname WHERE @fieldname IN (SELECT `tmpfield` FROM `tmp_table`);

        ALTER TABLE @tblname DROP `tmpcheck`;

    I am getting the following error:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '@tblname ADD `tmpcheck` VARCHAR( 255 ) NOT NULL' at line 1

    Is this because I can't use a variable for a table name? What else could be wrong, or how would I get around this issue? Thanks in advance
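
    Right: user variables can supply values, but never identifiers such as table or column names. The usual workaround is to assemble each statement as a string and run it as a prepared statement, along these lines:

        SET @s = CONCAT('ALTER TABLE `', @tblname, '` ADD `tmpcheck` VARCHAR(255) NOT NULL');
        PREPARE stmt FROM @s;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;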


  • How can I get around MySQL Errcode 13 with SELECT INTO OUTFILE?

    - by Ryan Olson
    I am trying to dump the contents of a table to a csv file using a MySQL SELECT INTO OUTFILE statement. If I do:

        SELECT column1, column2
        INTO OUTFILE 'outfile.csv'
        FIELDS TERMINATED BY ','
        FROM table_name;

    outfile.csv will be created on the server in the same directory this database's files are stored in. However, when I change my query to:

        SELECT column1, column2
        INTO OUTFILE '/data/outfile.csv'
        FIELDS TERMINATED BY ','
        FROM table_name;

    I get:

        ERROR 1 (HY000): Can't create/write to file '/data/outfile.csv' (Errcode: 13)

    Errcode 13 is a permissions error, and it persists even if I change ownership of /data to mysql:mysql and give it 777 permissions. MySQL is running as user "mysql". Strangely, I can create the file in /tmp, just not in any other directory I've tried, even with permissions set such that user mysql should be able to write to the directory. This is MySQL 5.0.75 running on Ubuntu.
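
    On Ubuntu, the "/tmp works but nothing else does" symptom is characteristic of AppArmor confining mysqld regardless of filesystem permissions. A sketch of loosening the profile; the exact file path varies across releases, so treat it as an assumption:

        # add inside the profile block in /etc/apparmor.d/usr.sbin.mysqld:
        /data/ rw,
        /data/** rw,

        # then reload AppArmor:
        $ sudo /etc/init.d/apparmor reload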


  • MEB: Taking Incremental Backup using last successful backup

    - by Sagar Jauhari
    Introduction

    In MySQL Enterprise Backup v3.7.0 (MEB 3.7.0) a new option '--incremental-base' was introduced. Using this option a user can take an incremental backup without specifying the '--start-lsn' option. Description of this option can be found here. Instead of '--start-lsn' the user can provide the location of the last full backup or incremental backup using the 'dir:' prefix. MEB would extract the end LSN of this backup from the mysql.backup_history table as well as the backup_variables.txt file (for verification) to use it as the start LSN of the incremental backup.

    Because of popular demand, in MEB 3.7.1 the option '--incremental-base' has been extended further. The idea is to allow the user to take an incremental backup as easily as possible using the '--incremental-base' option. With the new option MEB queries the backup_history table for the last successful backup and uses its end LSN as the start LSN for the new incremental backup. It should be noted that the last successful backup is used irrespective of the location of the backup.

    Details

    A new prefix 'history:' has been introduced for the --incremental-base option, and currently the only permissible value is the string "last_backup". So using the new option an incremental backup can be taken with the following command:

        $ mysqlbackup --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup

    When MEB attempts to extract the end LSN of the last successful backup from the mysql.backup_history table, it also scans the corresponding backup destination for the old backup and tries to read the meta files at this backup destination. If a valid backup still exists at the backup destination and the meta files can be read, MEB compares the end LSN found in the mysql.backup_history table with the end LSN found in the backup meta files of the old backup. Assuming that the host MySQL server is alive and mysql.backup_history can be accessed by MEB, the behaviour of MEB with respect to verification of the old end LSN can be summarized as follows, where 'BD' is the backup destination of the last successful backup in the mysql.backup_history table and 'BHT' is the mysql.backup_history table:

        if can_read_files_at_BD:
            if end_lsn_found_at_BD == end_lsn_of_last_backup_in_BHT:
                continue_with_backup()
            else:
                return_with_error()
        else:
            continue_with_backup()

    Advantages

    Apart from ease of use, an important advantage of this option is that the user can do repeated incremental backups without changing the command line. This is possible using the '--with-timestamp' option along with this new option. For example, the following command

        $ mysqlbackup --with-timestamp --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup

    can be used to perform successive incremental backups in the directory /media/mysqlbackup-repo.

    Limitations

    The option '--incremental-base=history:last_backup':

        - should not be used when the user takes different kinds of concurrent backups on the same MySQL server (say different partial backups at multiple locations);
        - should not be used after any temporary or experimental backups performed on the server (which were successful!);
        - needs to be used with precaution, since any intermediate successful backup without the '--no-connection' option will be used as the base backup for the next incremental backup;
        - will give an error in case a valid backup exists at the location of the last successful backup whose end LSN is different from that of the last successful backup found in the backup_history table.


  • Testing performance of queries in MySQL

    - by Unreason
    I am trying to set up a script that would test the performance of queries on a development MySQL server. Here are more details:

        - I have root access
        - I am the only user accessing the server
        - Mostly interested in InnoDB performance
        - The queries I am optimizing are mostly search queries (SELECT ... LIKE '%xy%')

    What I want to do is to create a reliable testing environment for measuring the speed of a single query, free from dependencies on other variables. Till now I have been using SQL_NO_CACHE, but sometimes the results of such tests also show caching behaviour: taking much longer to execute on the first run and taking less time on subsequent runs. If someone can explain this behaviour in full detail I might stick to using SQL_NO_CACHE; I do believe that it might be due to the file system cache and/or caching of indexes used to execute the query, as this post explains. It is not clear to me when the Buffer Pool and Key Buffer get invalidated or how they might interfere with testing.

    So, short of restarting the MySQL server, how would you recommend setting up an environment that would be reliable in determining if one query performs better than the other?
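
    The lingering warm-up effect comes from layers SQL_NO_CACHE never touches: the InnoDB buffer pool and MyISAM key buffer inside mysqld, plus the OS page cache underneath. The buffer pool only truly empties on a server restart, so a fully cold, repeatable timing run ends up looking something like the sketch below (Linux-specific, and admittedly it bends the "short of restarting" constraint):

        sudo /etc/init.d/mysql restart                            # empties buffer pool / key buffer
        sync && sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'    # drops the OS page cache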


  • Why are transactions not rolling back when using SpringJUnit4ClassRunner/MySQL/Spring/Hibernate

    - by Trevor
    I am doing unit testing and I expect that all data committed to the MySQL database will be rolled back... but this isn't the case. The data is being committed, even though my log was showing that the rollback was happening. I've been wrestling with this for a couple of days, so my setup has changed quite a bit; here's my current setup.

    LoginDAOTest.java:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations={"file:web/WEB-INF/applicationContext-test.xml",
                                         "file:web/WEB-INF/dispatcher-servlet-test.xml"})
        @TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
        public class UserServiceTest {
            private UserService userService;

            @Test
            public void should_return_true_when_user_is_logged_in () throws Exception {
                String[] usernames = {"a","b","c","d"};
                for (String username : usernames) {
                    userService.logUserIn(username);
                    assertThat(userService.isUserLoggedIn(username), is(equalTo(true)));
                }
            }
        }

    applicationContext-test.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                   http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                   http://www.springframework.org/schema/aop
                   http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
                   http://www.springframework.org/schema/tx
                   http://www.springframework.org/schema/tx/spring-tx-2.5.xsd">

            <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
                <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
                <property name="url" value="jdbc:mysql://localhost:3306/test"/>
                <property name="username" value="root"/>
                <property name="password" value="Ecosim07"/>
            </bean>

            <tx:annotation-driven transaction-manager="transactionManager"/>

            <bean id="userService" class="Service.UserService">
                <property name="userDAO" ref="userDAO"/>
            </bean>

            <bean id="userDAO" class="DAO.UserDAO">
                <property name="hibernateTemplate" ref="hibernateTemplate"/>
            </bean>

            <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
                <property name="dataSource" ref="dataSource"/>
                <property name="mappingResources">
                    <list>
                        <value>/himapping/User.hbm.xml</value>
                        <value>/himapping/setup.hbm.xml</value>
                        <value>/himapping/UserHistory.hbm.xml</value>
                    </list>
                </property>
                <property name="hibernateProperties">
                    <props>
                        <prop key="hibernate.dialect">org.hibernate.dialect.SQLServerDialect</prop>
                        <prop key="hibernate.show_sql">true</prop>
                    </props>
                </property>
            </bean>

            <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"
                  p:sessionFactory-ref="sessionFactory"/>

            <bean id="hibernateTemplate" class="org.springframework.orm.hibernate3.HibernateTemplate">
                <property name="sessionFactory">
                    <ref bean="sessionFactory"/>
                </property>
            </bean>
        </beans>

    I have been reading about the issue, and I've already checked to ensure that the MySQL database tables are set up to use InnoDB. Also, I have been able to successfully implement rolling back of transactions outside of my testing suite. So this must be some sort of incorrect setup on my part. Any help would be greatly appreciated :)
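
    Two things stand out, offered as guesses rather than a diagnosis. First, the test class carries @TransactionConfiguration but no @Transactional, and without @Transactional the SpringJUnit4ClassRunner never opens a transaction around each test, so there is nothing to roll back. Second, the configured dialect is SQL Server's even though the datasource is MySQL; org.hibernate.dialect.MySQL5InnoDBDialect would match. The annotation fix, sketched:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations={"file:web/WEB-INF/applicationContext-test.xml",
                                         "file:web/WEB-INF/dispatcher-servlet-test.xml"})
        @TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
        @Transactional  // the missing piece: wraps each @Test in a transaction that is rolled back
        public class UserServiceTest {
            // body unchanged from the question
        }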

