Search Results

Search found 40281 results on 1612 pages for 'mysql query browser'.

  • How do indices in MySQL tables (MyISAM) work?

    - by understack
    A few basic questions I have: 1. Is the primary key column automatically indexed? 2. What criteria should I use to choose index columns? 3. When should I combine multiple columns in one index? 4. Does MyISAM vs. InnoDB have any effect on this? 5. Are explicit indexes really required, especially if the primary key column is automatically indexed? Thanks.
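
    Short answers, with a hedged sketch (table and index names below are made up): yes, the primary key is indexed automatically in both MyISAM and InnoDB; good index candidates are selective columns used in WHERE, JOIN, and ORDER BY; combine columns into one composite index when queries filter on them together; and extra indexes are only needed for queries the primary key can't serve.

        -- Hypothetical table: the PRIMARY KEY is indexed automatically in
        -- both MyISAM and InnoDB, so no extra index on `id` is needed.
        CREATE TABLE orders (
            id          INT NOT NULL AUTO_INCREMENT,
            customer_id INT NOT NULL,
            status      VARCHAR(20) NOT NULL,
            created_at  DATETIME NOT NULL,
            PRIMARY KEY (id)
        ) ENGINE=MyISAM;

        -- Index columns that appear in WHERE/JOIN/ORDER BY and are selective:
        CREATE INDEX idx_customer ON orders (customer_id);

        -- A composite index serves queries filtering on (customer_id) or on
        -- (customer_id, status), but not on (status) alone:
        CREATE INDEX idx_customer_status ON orders (customer_id, status);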

  • MySQL constraint problem

    - by Bramjam
    This is my table:

        /* oefenreeks leerplan */
        CREATE TABLE leerplan_oefenreeks (
            leerplan_oefenreeks_id INT PRIMARY KEY AUTO_INCREMENT NOT NULL,
            leerplan_id   INT NOT NULL,
            oefenreeks_id INT NOT NULL,
            plaats        INT NOT NULL
        );

        /* foreign keys */
        ALTER TABLE leerplan_oefenreeks ADD CONSTRAINT fk_leerp_oefenr_leerplan
            FOREIGN KEY (leerplan_id) REFERENCES leerplan (leerplan_id) ON DELETE CASCADE;
        ALTER TABLE leerplan_oefenreeks ADD CONSTRAINT fk_leerp_oefenr_oefenreeks
            FOREIGN KEY (oefenreeks_id) REFERENCES oefenreeks (oefenreeks_id) ON DELETE CASCADE;

        /* unique constraints - when I execute the next line, my
           fk_leerp_oefenr_leerplan constraint vanishes/disappears */
        ALTER TABLE leerplan_oefenreeks ADD CONSTRAINT un_leerp_oefenr
            UNIQUE (leerplan_id, oefenreeks_id);
        ALTER TABLE leerplan_oefenreeks ADD CONSTRAINT un_leerp_oefenr_plaats
            UNIQUE (leerplan_id, plaats);

    When I go and check, only 3 constraints exist (fk_leerp_oefenr_leerplan is gone). I don't understand why this happens; please tell me (if you need more info, just ask ;)
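
    What's probably happening (a hedged explanation based on InnoDB's documented behaviour): a foreign key needs an index on the referencing column, and MySQL creates one automatically when the constraint is added. The new UNIQUE (leerplan_id, oefenreeks_id) index has leerplan_id as its leftmost column, so it can serve the foreign key, and MySQL silently drops the auto-generated index - which many tools display as the constraint disappearing. One workaround is to create the unique constraints first, so no auto-generated index exists to be dropped:

        -- Hedged workaround: unique constraints first, foreign keys second.
        ALTER TABLE leerplan_oefenreeks
            ADD CONSTRAINT un_leerp_oefenr UNIQUE (leerplan_id, oefenreeks_id),
            ADD CONSTRAINT un_leerp_oefenr_plaats UNIQUE (leerplan_id, plaats);

        ALTER TABLE leerplan_oefenreeks
            ADD CONSTRAINT fk_leerp_oefenr_leerplan FOREIGN KEY (leerplan_id)
                REFERENCES leerplan (leerplan_id) ON DELETE CASCADE,
            ADD CONSTRAINT fk_leerp_oefenr_oefenreeks FOREIGN KEY (oefenreeks_id)
                REFERENCES oefenreeks (oefenreeks_id) ON DELETE CASCADE;

        -- Then verify what actually exists:
        SHOW CREATE TABLE leerplan_oefenreeks;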

  • MySQL ORDER BY date_time field not sorting as expected

    - by undefined
    I have a field in my database that stores the datetime that an item was added to the database. If I want to sort the items in reverse chronological order, I would expect that doing ORDER BY date_added DESC would do the trick. But this seems not to work. I also tried ORDER BY UNIX_TIMESTAMP(date_added), but this still did not sort the results as I would expect. I also have an auto-increment field that I can use to sort items, so I will use this, but I am curious as to why ORDER BY datetime was not behaving as expected. Any ideas?
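
    The usual culprit (a hedged guess, since the column type isn't shown) is that date_added is stored as a string rather than a real DATETIME, so it sorts lexically ('02/01/2010' lands before '12/31/2009'). The table name `items` below is an assumption:

        -- Check the actual column type first:
        SHOW COLUMNS FROM items LIKE 'date_added';

        -- If it really is DATETIME, this sorts newest-first as expected:
        SELECT * FROM items ORDER BY date_added DESC;

        -- If it is a VARCHAR like 'MM/DD/YYYY hh:mm', convert while sorting:
        SELECT * FROM items
        ORDER BY STR_TO_DATE(date_added, '%m/%d/%Y %H:%i') DESC;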

  • PHP: parse error on MySQL query

    - by dwstein
    I'm getting the following error: Parse error: syntax error, unexpected T_VARIABLE in /home/a4999406/public_html/willingLog.html on line 48 on the following code (line 48 is the first row of this code):

        $rows = mysql_num_rows($result);
        for ($j=0; $j<$rows: ++$j) {
            echo 'ID: ' . mysql_result($result, $j, 'id') . '<br />';
            echo 'First: ' . mysql_result($result, $j, 'first') . '<br />';
            echo 'Last: ' . mysql_result($result, $j, 'last') . '<br />';
            echo 'Email: ' . mysql_result($result, $j, 'email') . '<br />';
        }

    Anyone know what I'm doing wrong?
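
    The parse error is the colon in the for loop: `$j<$rows:` must be `$j < $rows;`. The corrected loop (keeping the question's mysql_* calls, which are long deprecated):

        <?php
        // Fixed: the loop condition must end with ';', not ':'.
        $rows = mysql_num_rows($result);
        for ($j = 0; $j < $rows; ++$j) {
            echo 'ID: '    . mysql_result($result, $j, 'id')    . '<br />';
            echo 'First: ' . mysql_result($result, $j, 'first') . '<br />';
            echo 'Last: '  . mysql_result($result, $j, 'last')  . '<br />';
            echo 'Email: ' . mysql_result($result, $j, 'email') . '<br />';
        }
        ?>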

  • MySQL question - AND or OR?

    - by U22199
    Which is the better way to select the ans and quest rows from the table?

        SELECT * FROM tablename WHERE option='ans' OR option='quest';

    or

        SELECT * FROM tablename WHERE option='ans' AND option='quest';

    Thanks so much!
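
    Only the OR version can ever return rows: option holds a single value per row, so option='ans' AND option='quest' is a contradiction and matches nothing. An equivalent, often clearer form of the OR query:

        -- Matches rows whose option is either value (same result as the OR form):
        SELECT * FROM tablename WHERE `option` IN ('ans', 'quest');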

  • Selecting more than 1 table to get content from [PHP/MySQL]

    - by SAFSOF
    Hi there. I want to get the latest threads/messages. I wrote my code, then made a function that calls it to show the last messages in a specific board, and it works great. Now I want to get the latest messages from 2 or more boards with the same function. This is the part that chooses the board:

        AND b.id_board = t.id_board' . (empty($vars) ? '' : ' AND b.id_board = ' . $vars . '') . '

    I tried to use functionname(1.2.3); but it says there is no board with id 1.2.3. I tried ("1,2,3"), with the same result. I hope I've made it clear; I appreciate your help.
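
    Passing several board IDs needs an IN (...) clause rather than an equality test. A hedged sketch of how the function could accept an array (table and function names are invented; the WHERE fragment mirrors the question's):

        <?php
        // Hedged sketch: accept an array of board IDs, build an IN (...) clause.
        function latest_messages(array $board_ids)
        {
            // Cast to int so user input can't break the SQL.
            $ids = implode(',', array_map('intval', $board_ids));
            $boardFilter = ($ids === '') ? '' : ' AND b.id_board IN (' . $ids . ')';

            $sql = 'SELECT t.* FROM boards AS b, topics AS t
                    WHERE b.id_board = t.id_board' . $boardFilter;
            return $sql;
        }

        // Usage: latest_messages(array(1, 2, 3));
        ?>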

  • MySQL database populated dropdown box and PHP search

    - by Sanel Bajric
    I have a question regarding a search on a web page with a textbox and a dropdown box. I have a table with the fields: ID, First name, Last name, Company name, Occupation, Description. Now I need to make a search form with a dropdown populated from the database (the Occupation field) and a textbox where I can type whatever I want, and then show matching results from the database on the web page. I am really sorry, but I am a total beginner and just need some examples of this kind of code, and much help :) Thank you
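
    A minimal hedged sketch of both halves - populating the dropdown from the Occupation column and searching with the two inputs. All table, column, and connection names are assumptions:

        <?php
        // Hedged sketch; adjust names and credentials to your schema.
        $db = new mysqli('localhost', 'user', 'pass', 'dbname');

        // 1. Populate the dropdown from the distinct Occupation values.
        echo '<form method="post"><select name="occupation">';
        $res = $db->query('SELECT DISTINCT occupation FROM people ORDER BY occupation');
        while ($row = $res->fetch_assoc()) {
            $occ = htmlspecialchars($row['occupation']);
            echo "<option value=\"$occ\">$occ</option>";
        }
        echo '</select> <input type="text" name="q"> <input type="submit"></form>';

        // 2. Search using both inputs (a prepared statement avoids injection).
        if (isset($_POST['occupation'], $_POST['q'])) {
            $stmt = $db->prepare(
                'SELECT * FROM people
                 WHERE occupation = ? AND CONCAT(first_name, " ", last_name) LIKE ?');
            $like = '%' . $_POST['q'] . '%';
            $stmt->bind_param('ss', $_POST['occupation'], $like);
            $stmt->execute();
            $result = $stmt->get_result();  // requires the mysqlnd driver
            while ($row = $result->fetch_assoc()) {
                echo htmlspecialchars($row['first_name'] . ' ' . $row['last_name']), '<br>';
            }
        }
        ?>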

  • LINQ Query with 3 levels

    - by BahaiResearch.com
    I have a business object structured like this: Country has States, State has Cities. So Country[2].States[7].Cities[5].Name would be New York. OK, I need to get a list of all the Country objects which have at least one City with IsNice == true. How do I get that?
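
    A hedged LINQ sketch, assuming a collection of Country objects named `countries` shaped as in the question:

        using System.Linq;

        // Countries whose states contain at least one nice city.
        var niceCountries = countries
            .Where(c => c.States.Any(s => s.Cities.Any(city => city.IsNice)))
            .ToList();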

  • DELETE taking a really long time! MySQL

    - by every_answer_gets_a_point
    I am doing this:

        DELETE calibration_2009
        FROM calibration_2009
        JOIN batchinfo_2009 ON calibration_2009.rowid = batchinfo_2009.rowid
        WHERE batchinfo_2009.reporttime LIKE '%2010%';

    Both tables have about 500k rows of data, and I suspect that 250k match the criteria to be deleted. So far it has been running for 2 hours!!! Is there something wrong?
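
    A few hedged things to check: the join column rowid should be indexed on both tables (without an index the join degenerates into comparing every row with every row), and the leading-wildcard LIKE '%2010%' cannot use an index on reporttime - if reporttime is a date/datetime column, a range condition on the year is far cheaper. Deleting in chunks also keeps each transaction small:

        -- Make sure the join column is indexed on both sides (skip if it is):
        CREATE INDEX idx_cal_rowid   ON calibration_2009 (rowid);
        CREATE INDEX idx_batch_rowid ON batchinfo_2009 (rowid);

        -- Chunked variant: multi-table DELETE does not allow LIMIT, so use a
        -- single-table DELETE with a subquery; repeat until 0 rows affected.
        DELETE FROM calibration_2009
        WHERE rowid IN (SELECT rowid FROM batchinfo_2009
                        WHERE reporttime LIKE '%2010%')
        LIMIT 10000;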

  • PHP & MySQL delete image link problem

    - by IMAGE
    I'm trying to create a delete-image link that appears when an image is present; when the user clicks it, the image should be deleted. But for some reason this is not working. Can someone help me fix the delete-image link problem? Thanks! Here is the PHP code:

        if (isset($_POST['delete_image'])) {
            $img_dir = "../members/" . $user_id . "/images/thumbs/";
            $img_thmb = "../members/" . $user_id . "/images/";
            $image_name = $row['image'];
            if (file_exists($img_dir . $image_name)) {
                if (unlink($img_dir . $image_name) && unlink($img_thmb . $image_name)) {
                    $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                    $dbc = mysqli_query($mysqli, "DELETE FROM users* WHERE image_id = '.$image_id.' AND user_id = '$user_id'");
                } else {
                    echo '<p class="error">Sorry unable to delete image file!</p>';
                }
            }
        }
        if (isset($_POST['image']) || !empty($image)) {
            echo '<a href="' . $_POST['delete_image'] . '">Delete Image</a>';
        }
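
    Two likely problems, on a hedged reading: DELETE FROM users* is not valid SQL, and a plain <a href="..."> link issues a GET request, so $_POST['delete_image'] is never set. A corrected sketch using a small POST form (the table name user_images is an assumption, since the original users* is unusable as written):

        <?php
        // Hedged sketch: a POST form instead of a bare link, and valid SQL.
        if (isset($_POST['delete_image'])) {
            $image_name = basename($_POST['delete_image']);  // never trust raw input
            $img_dir  = "../members/" . $user_id . "/images/thumbs/";
            $img_full = "../members/" . $user_id . "/images/";
            if (file_exists($img_dir . $image_name)
                && unlink($img_dir . $image_name)
                && unlink($img_full . $image_name)) {
                $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                $stmt = mysqli_prepare($mysqli,
                    "DELETE FROM user_images WHERE image_id = ? AND user_id = ?");
                mysqli_stmt_bind_param($stmt, 'ii', $image_id, $user_id);
                mysqli_stmt_execute($stmt);
            } else {
                echo '<p class="error">Sorry, unable to delete image file!</p>';
            }
        }
        if (!empty($image)) {
            // The button posts back to this script, so the branch above runs.
            echo '<form method="post">'
               . '<input type="hidden" name="delete_image" value="'
               . htmlspecialchars($image) . '">'
               . '<button type="submit">Delete Image</button></form>';
        }
        ?>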

  • MySQL: Select remaining rows

    - by Bjork24
    I've searched everywhere for this, but I can't seem to find a solution. Perhaps I'm using the wrong terms. Either way, I'm turning to good ol' trusty S.O. to help me find the answer. I have two tables, we'll call them 'tools' and 'installs':

        tools    = id, name, version
        installs = id, tool_id, user_id

    The 'tools' table records available tools, which are then installed by a user and recorded in the 'installs' table. Selecting the installed tools is simple enough:

        SELECT tools.name
        FROM tools
        LEFT JOIN installs ON tools.id = installs.tool_id
        WHERE user_id = 99;

    How do I select the remaining tools - the ones that have yet to be installed by user #99? I'm sorry if this is painfully obvious, but I just can't seem to figure it out! Thanks for the help!
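
    A hedged aside first: with WHERE user_id = 99 applied after the join, that LEFT JOIN behaves like an inner join anyway. For the remaining tools, move the user filter into the join condition and keep the rows with no match:

        -- Tools NOT yet installed by user 99: left join, keep unmatched rows.
        SELECT tools.name
        FROM tools
        LEFT JOIN installs
               ON tools.id = installs.tool_id AND installs.user_id = 99
        WHERE installs.id IS NULL;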

  • Bulk update in MySQL

    - by user351806
    I have a site which has a client side and an admin side. There is a table called accountHistory, which contains fields like uid | accountBalance | PaymentStatus | Date. Now this table has to be updated every month for all the paid users, and the table is large. So what is the best way to update the table every month? Do I need to select all the uids and update them one by one?
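
    A single set-based UPDATE is usually far cheaper than selecting every uid and updating rows one by one; the join picks out the paid users in one pass. A hedged sketch (`users`, `is_paid`, and the new values are assumed names):

        -- One statement updates every paid user's row; no per-uid loop needed.
        UPDATE accountHistory AS ah
        JOIN users AS u ON u.uid = ah.uid
        SET ah.PaymentStatus = 'due',
            ah.Date = CURDATE()
        WHERE u.is_paid = 1;

    If it has to run unattended, MySQL's event scheduler (CREATE EVENT ... ON SCHEDULE EVERY 1 MONTH, available from 5.1) or an external cron job can fire the statement monthly.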

  • Using query to change table mapping

    - by crapbag
    I have a table mytable(id, key, value). I realize that key is generating a lot of data redundancy, since my key is a string (my keys are really long, but repetitive). How do I build a separate table that has (key, keyID), and then alter my table so that I end up with mytable(id, keyID, value) and keyTable(keyID, key)?
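
    A hedged migration sketch (assuming the keys fit in VARCHAR(255); use a longer type if not - note `key` needs backticks because KEY is a reserved word):

        -- 1. Build the lookup table with one row per distinct key.
        CREATE TABLE keyTable (
            keyID INT PRIMARY KEY AUTO_INCREMENT,
            `key` VARCHAR(255) NOT NULL UNIQUE
        );
        INSERT INTO keyTable (`key`)
        SELECT DISTINCT `key` FROM mytable;

        -- 2. Add the new column and fill it by joining on the string.
        ALTER TABLE mytable ADD COLUMN keyID INT;
        UPDATE mytable m
        JOIN keyTable k ON k.`key` = m.`key`
        SET m.keyID = k.keyID;

        -- 3. Drop the redundant string column.
        ALTER TABLE mytable DROP COLUMN `key`;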

  • MySQL query for change in values in a logging table

    - by kiasectomondo
    I have a table like this:

        Index | PersonID | ItemCount | UnixTimeStamp
            1 |        1 |         1 |    1296000000
            2 |        1 |         2 |    1296000100
            3 |        2 |         4 |    1296003230
            4 |        2 |         6 |    1296093949
            5 |        1 |         0 |    1296093295

    Time and index always go up. It's basically a logging table that logs the item count each time it changes. I get the most recent ItemCount for each person like this:

        SELECT *
        FROM table a
        INNER JOIN (SELECT MAX(index) AS i FROM table GROUP BY PersonID) b
                ON a.index = b.i;

    What I want to do is get the most recent record for each PersonID that is at least 24 hours older than the most recent record for that PersonID. Then I want to take the difference in ItemCount between these two to get a change in item count for each person over the last 24 hours:

        PersonID | ChangeInItemCountOverAtLeast24Hours
               1 | 3
               2 | -11
               3 | 6

    I'm sort of stuck on what to do next. How can I join another ItemCount based on the latest adjusted timestamp of the individual rows?
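
    A hedged sketch of one way to do it (86400 seconds = 24 hours; the table is called `log` and the Index column `idx` here, since INDEX is a reserved word; people with no row older than 24 hours simply drop out of the result):

        SELECT cur.PersonID,
               cur.ItemCount - old.ItemCount AS ChangeInItemCount
        FROM
          -- newest row per person
          (SELECT l.* FROM log l
           JOIN (SELECT PersonID, MAX(idx) AS i FROM log GROUP BY PersonID) m
             ON l.idx = m.i) cur
        JOIN
          -- newest row per person at least 24h older than that newest row
          -- (time and idx both only increase, so MAX(idx) is the newest)
          (SELECT l.* FROM log l
           JOIN (SELECT a.PersonID, MAX(b.idx) AS i
                 FROM (SELECT l2.PersonID, MAX(l2.UnixTimeStamp) AS ts
                       FROM log l2 GROUP BY l2.PersonID) a
                 JOIN log b ON b.PersonID = a.PersonID
                           AND b.UnixTimeStamp <= a.ts - 86400
                 GROUP BY a.PersonID) m
             ON l.idx = m.i) old
          ON old.PersonID = cur.PersonID;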

  • MySQL: averaging with nulls...

    - by Zombies
    Is there a simple way I can exclude NULLs from affecting the AVG? They appear to count as 0, which is not what I want. I simply don't want to take them into account, yet here is the catch: I can't drop those records from the result set, as they have other data on them that I do need.
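
    One thing worth knowing: AVG() in MySQL (and standard SQL) already ignores NULLs - it divides by the count of non-NULL values only - so if the average looks dragged down, the "missing" values are probably stored as 0 rather than NULL. Both cases, sketched with assumed names (`results`, `score`):

        -- AVG() skips NULLs by itself; non-NULL rows still return all columns:
        SELECT AVG(score) FROM results;

        -- If "missing" is stored as 0 rather than NULL, map 0 to NULL first:
        SELECT AVG(NULLIF(score, 0)) FROM results;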

  • MySQL, PHP, How Many in GROUP

    - by 0Neji
    I'm trying to create a table which outputs a list of users and how many times they've logged in. A new row in the table is created every time someone logs in, so there are multiple rows for one user. Now, I'm using the following statement to pull the data out:

        SELECT * FROM logins GROUP BY user ORDER BY timestamp DESC

    This is working fine, but now there is a column in my HTML table which should show how many times the user has logged in. How do I go about counting the number of rows in each group?
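
    COUNT(*) per group is the way. A hedged caveat: SELECT * combined with GROUP BY makes MySQL pick arbitrary values for the non-grouped columns, so it's safer to select only the grouped and aggregated columns:

        SELECT `user`,
               COUNT(*)         AS login_count,
               MAX(`timestamp`) AS last_login
        FROM logins
        GROUP BY `user`
        ORDER BY last_login DESC;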

  • Problem with PHP & MySQL

    - by Shahd
    I wrote these statements but they do not work :( ... Can you tell me why?

    HTML:

        <form action="join.php" method="post">
            <label name="RoomName">Room1</label>
        </form>

    PHP:

        $roomName = $_POST['RoomName'];
        $roomID = "SELECT RoomID FROM rooms WHERE RoomName = $roomName";
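
    Three problems, on a hedged reading: a <label> is not a form control, so nothing named RoomName is ever posted; the form has no submit button; and the PHP only builds the SQL string without executing it (and interpolating $roomName unquoted would be an injection risk anyway). A minimal corrected sketch, with assumed connection details:

        <?php
        // join.php - hedged sketch: an <input> (not a <label>) carries the
        // value, and the query is actually executed, with the string bound.
        $mysqli = new mysqli('localhost', 'user', 'pass', 'dbname');
        if (isset($_POST['RoomName'])) {
            $stmt = $mysqli->prepare('SELECT RoomID FROM rooms WHERE RoomName = ?');
            $stmt->bind_param('s', $_POST['RoomName']);
            $stmt->execute();
            $stmt->bind_result($roomID);
            $stmt->fetch();
        }
        ?>
        <form action="join.php" method="post">
            <input type="text" name="RoomName" value="Room1">
            <input type="submit" value="Join">
        </form>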

  • Reduce query time (optimise query)

    - by user2527657
    SELECT a.userid,
           (SELECT firstName FROM user WHERE userid = NOTUSED.userid) AS z,
           (SELECT MAX(login_time)
            FROM userLoginTime AS b
            WHERE userid = a.user_id
            GROUP BY b.user_id
            ORDER BY b.user_id) AS y
    FROM (SELECT DISTINCT a.user_id
          FROM user AS a
          LEFT OUTER JOIN (SELECT userid
                           FROM userlogintime
                           WHERE serialid = 15400012) AS b
            ON user.user_id = b.user_id
          WHERE a.Serialid = 15400012
            AND b.userid IS NULL) NOTUSED,
         Relation r,
         user a
    WHERE r.childuserid = NOTUSED.userid
      AND guarduserid = a.userid
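
    Without the schema it is hard to rewrite this safely, but two hedged first steps usually pay off: run EXPLAIN to see which parts scan whole tables, and index the columns used for filtering and joining (index names below are made up):

        -- Run EXPLAIN on the statement above to find the full-table scans:
        --   EXPLAIN SELECT a.userid, ... ;

        -- Then index the filter and join columns:
        CREATE INDEX idx_ult_serial_user ON userlogintime (serialid, userid);
        CREATE INDEX idx_user_serial     ON user (Serialid);
        CREATE INDEX idx_rel_child       ON Relation (childuserid);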

  • How do you fix a MySQL “Incorrect key file” error when you can’t repair the table?

    - by Wayne M
    I'm trying to run a rather large query that is supposed to run nightly to populate a table. I'm getting an error saying Incorrect key file for table '/var/tmp/#sql_201e_0.MYI'; try to repair it, but the storage engine I'm using (whatever the default is, I guess?) doesn't support repairing tables. How do I fix this so I can run the query? We are under pressure to get this table loaded for a client.
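
    The path /var/tmp/#sql_201e_0.MYI belongs to an internal temporary table MySQL creates while executing the query, not to one of your tables, so there is nothing to repair. The usual cause (hedged) is that the temp directory ran out of space, or hit a filesystem file-size limit, mid-query. Some things to try:

        # Check how much space the server's temp directory has:
        df -h /var/tmp

        # Give mysqld a roomier temp directory (my.cnf; restart required):
        [mysqld]
        tmpdir = /path/to/bigger/volume

    If more temp space isn't an option, shrinking the temporary result can help: select fewer columns, tighten the WHERE clause, or add an index matching the GROUP BY / ORDER BY so the query doesn't spill to disk at all.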

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that would take a couple of minutes to register on 10g or 11g was taking upwards of 30 minutes on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        -------------------------------------------------------------------------------
        | Id  | Operation                    | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT             |                        |  108K |  939 |
        |   1 |  SORT ORDER BY               |                        |  108K |  939 |
        |   2 |   NESTED LOOPS OUTER         |                        |  108K |  938 |
        |*  3 |    HASH JOIN RIGHT OUTER     |                        |  103K |  762 |
        |   4 |     VIEW                     | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 |     HASH JOIN RIGHT OUTER    |                        | 73472 |  759 |
        |  21 |      VIEW                    | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 |      HASH JOIN RIGHT OUTER   |                        | 39920 |  755 |
        |  35 |       VIEW                   | ALL_MVIEWS             |    51 |    7 |
        |  58 |       NESTED LOOPS OUTER     |                        | 39104 |  748 |
        |  59 |        VIEW                  | ALL_TABLES             |  6704 |  668 |
        |  89 |        VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 |      VIEW                    | ALL_PART_TABLES        |   277 |   11 |
        -------------------------------------------------------------------------------

    And the same query on 9i:

        -------------------------------------------------------------------------------
        | Id  | Operation                  | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT           |                        |   16P |  55G |
        |   1 |  SORT ORDER BY             |                        |   16P |  55G |
        |   2 |   NESTED LOOPS OUTER       |                        |   16P | 862M |
        |   3 |    NESTED LOOPS OUTER      |                        | 5251G | 992K |
        |   4 |     NESTED LOOPS OUTER     |                        | 4243M | 2578 |
        |   5 |      NESTED LOOPS OUTER    |                        | 2669K | 1440 |
        |*  6 |       HASH JOIN OUTER      |                        |  398K |  302 |
        |   7 |        VIEW                | ALL_TABLES             |  342K |  276 |
        |  29 |        VIEW                | ALL_MVIEWS             |    51 |   20 |
        |* 50 |      VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 |     VIEW PUSHED PREDICATE  | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 |    VIEW PUSHED PREDICATE   | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 |   VIEW                     | ALL_PART_TABLES        |  852K |      |
        -------------------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939; 9i's is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
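
    For reference, the 10g-and-later mechanism the documentation describes is a single call, run here from SQL*Plus (it does not exist on 9i, which is the whole problem):

        -- Collect optimizer statistics on the data dictionary (10g+ only):
        EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
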
    On 9i, no statistics are present on the system tables, so Oracle has to use the rule-based optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        -------------------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        | 7587K | 3704 |
        |   1 |  SORT ORDER BY        |                        | 7587K | 3704 |
        |*  2 |   HASH JOIN OUTER     |                        | 7587K |  822 |
        |*  3 |    HASH JOIN OUTER    |                        | 5262K |  616 |
        |*  4 |     HASH JOIN OUTER   |                        | 2980K |  465 |
        |*  5 |      HASH JOIN OUTER  |                        |  710K |  432 |
        |*  6 |       HASH JOIN OUTER |                        |  398K |  302 |
        |   7 |        VIEW           | ALL_TABLES             |  342K |  276 |
        |  29 |        VIEW           | ALL_MVIEWS             |    51 |   20 |
        |  50 |       VIEW            | ALL_PART_TABLES        |  852K |  104 |
        |  78 |      VIEW             | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 |     VIEW              | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 |    VIEW               | ALL_EXTERNAL_TABLES    | 1777K |   28 |
        -------------------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that it will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we wanted a solution that provides a speedup whatever the input.

    To try and get some ideas, we asked some Oracle performance specialists whether they had any tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we preferred not to take that approach; as well as a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend time optimizing the queries for their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner and name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements specific to the query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
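
    As an illustration of that client-side approach, here is a minimal C# sketch of a left hash join keyed on (owner, name), where one hash table built over the base rows is re-used for every subsidiary view. This is an assumption-laden sketch of the technique, not the actual SCfO code; all names are invented.

        // Hedged illustration only: a left hash join keyed on (owner, name)
        // where ONE hash table over the base rows (e.g. ALL_TABLES) is
        // re-used for every subsidiary view, instead of re-hashing per join.
        using System.Collections.Generic;

        class ClientSideHashJoin
        {
            static Dictionary<(string, string), Dictionary<string, object>> BuildBaseMap(
                IEnumerable<Dictionary<string, object>> baseRows)
            {
                var map = new Dictionary<(string, string), Dictionary<string, object>>();
                foreach (var row in baseRows)
                    map[((string)row["OWNER"], (string)row["TABLE_NAME"])] = row;
                return map;  // built once, re-used for every left join below
            }

            static void LeftJoin(
                Dictionary<(string, string), Dictionary<string, object>> baseMap,
                IEnumerable<Dictionary<string, object>> subsidiaryRows,
                string prefix)  // e.g. "PART_" for ALL_PART_TABLES columns
            {
                foreach (var row in subsidiaryRows)
                {
                    var key = ((string)row["OWNER"], (string)row["TABLE_NAME"]);
                    if (baseMap.TryGetValue(key, out var baseRow))
                        foreach (var kv in row)
                            baseRow[prefix + kv.Key] = kv.Value;  // fill in the gaps
                    // unmatched subsidiary rows are ignored, as in a LEFT JOIN
                }
            }
        }
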
    This allows us to achieve a database population time as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables, that means an extra 1.4MB of memory used during population). Next: fun with the 9i dictionary views.
