Search Results

Search found 33911 results on 1357 pages for 'mysql select'.


  • MYSQL - Retrieve Results by Date Proximity

    - by aSeptik
    Hi all! ;-) My question is: assuming we have a calendar_table with a UNIX timestamp date_column, I want to retrieve the event closest to today's date. So, for example, if there is no event today, take the closest one based on its date! Thanks, all you guys! ;-)
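
    A minimal sketch of one common approach, assuming the calendar_table and date_column names above and that date_column really holds a UNIX timestamp: sort by the absolute distance from now and keep the first row.

        -- Closest event in either direction; add WHERE date_column >= UNIX_TIMESTAMP()
        -- and drop the ABS() if only upcoming events should count.
        SELECT *
        FROM calendar_table
        ORDER BY ABS(date_column - UNIX_TIMESTAMP())
        LIMIT 1;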

    Read the article

  • Sorting MySQL results within a resultset

    - by InnateDev
    I have a resultset of, let's say, 10 results. 3 of them have type 'Pears', the next 3 have type 'Apples', and the next three have type 'Bananas'. The last record has type 'Squeezing Equipment', unrelated to the fruits. How do I return this set of results (for pagination too) in a GROUPED order that I specify, WITHOUT using any inherent sort factor like alphabetical order or ID? I have all the types at my disposal before running the code, so they can be specified. I.e.:

        ID | Bananas
        ID | Bananas
        ID | Bananas
        ID | Apples
        ID | Apples
        ID | Apples
        ID | Pears
        ID | Pears
        ID | Pears
        ID | Squeezing Equipment
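
    One common way to impose a hand-specified order in MySQL is ORDER BY FIELD(), which sorts rows by the position of a value in an explicit list; a sketch, with the table name results assumed:

        -- Rows whose type is not in the list get FIELD() = 0 and sort first,
        -- so list every expected type.
        SELECT id, type
        FROM results
        ORDER BY FIELD(type, 'Bananas', 'Apples', 'Pears', 'Squeezing Equipment');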

    Read the article

  • Problem importing mysql triggers generated from mysqldump

    - by OM The Eternity
    I am using phpMyAdmin to run mysqldump. As per my requirement, I have to create a new database which is a clone of the previous one. When I import the dump into the new database, the triggers are imported as well, but their definitions still reference the old database name: the TRIGGER_SCHEMA is not changed to match the new DB. What can be done to resolve this problem?
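
    One workaround, sketched here with hypothetical names (trg_example, my_table; the real ones come from SHOW TRIGGERS), is to drop and recreate each trigger while connected to the new database, so it is registered under the new schema:

        USE new_db;
        SHOW TRIGGERS;  -- lists each trigger and its definition

        DROP TRIGGER IF EXISTS trg_example;
        CREATE TRIGGER trg_example BEFORE INSERT ON my_table
        FOR EACH ROW SET NEW.created_at = NOW();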

    Read the article

  • mysql - count rows by field

    - by Qiao
    All rows in the table have a type field, which is either 0 or 1. I need to count the rows with 0 and the rows with 1 in one query, so that the result is something like:

        type0 | type1
        1234  | 4211

    How can this be implemented?
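
    One way, relying on MySQL evaluating boolean expressions as 0 or 1 (a sketch; my_table is a placeholder name):

        SELECT SUM(type = 0) AS type0,
               SUM(type = 1) AS type1
        FROM my_table;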

    Read the article

  • Incorrect table name, php/mysql

    - by user296516
    Hi guys, I've got this code:

        mysqli_query($userdatabase, 'CREATE TABLE `user_'.$emailreg.'` (
            ID int NOT NULL AUTO_INCREMENT PRIMARY KEY,
            IP varchar(10),
            FLD1 varchar(20),
            FLD2 varchar(40),
            FLD3 varchar(25),
            FLD4 varchar(25),
            FLD5 varchar(25)
        )');
        echo (mysqli_error($userdatabase));

    That works fine on my localhost, but when I upload it to the server, it starts giving me an "Incorrect table name '[email protected]'" error. Any idea? Thanks!

    Read the article

  • mySQL one-to-many query

    - by Stomped
    I've got 3 tables that are something like this (simplified here, of course):

        users:    user_id, user_name
        info:     info_id, user_id, rate
        contacts: contact_id, user_id, contact_data

    users has a one-to-one relationship with info, although info doesn't always have a related entry. users has a one-to-many relationship with contacts, although contacts doesn't always have related entries. I know I can grab the proper 'users' + 'info' with a left join; is there a way to get all the data I want at once? For example, one returned record might be:

        user_id: 5
        user_name: tom
        info_id: 1
        rate: 25.00
        contact_id: 7, contact_data: 555-1212
        contact_id: 8, contact_data: 555-1315
        contact_id: 9, contact_data: 555-5511

    Is this possible with a single query? Or must I use multiple?
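
    A single-query sketch against the simplified schema above: LEFT JOIN everything and collapse the contact rows with GROUP_CONCAT, then split the list in application code. (Plain joins would instead repeat the user/info columns once per contact row.)

        SELECT u.user_id, u.user_name, i.info_id, i.rate,
               GROUP_CONCAT(c.contact_data SEPARATOR ', ') AS contact_list
        FROM users u
        LEFT JOIN info i     ON i.user_id = u.user_id
        LEFT JOIN contacts c ON c.user_id = u.user_id
        GROUP BY u.user_id, u.user_name, i.info_id, i.rate;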

    Read the article

  • PHP Serialize Function - Adding serialized data to mysql and then fetch and display

    - by Abhilash Shukla
    I want to know whether the PHP serialize function is 100% secure, and also whether storing serialized data in a database and then fetching it to work with is a good approach. For example: I have a website with different user privileges, and I want to store the permission settings for a particular privilege in my database (I want to store this data through PHP's serialize function). When a user logs in, I want to fetch this data and set the privileges for that user. I am able to do this; what I want to know is whether it is the best way, or whether something more efficient can be done. Also, I was going through the PHP manual and found this code; can anybody explain a bit what's happening in it? (Specifically, why is base64_encode used?)

        <?php
        function mySerialize($obj) {
            return base64_encode(gzcompress(serialize($obj)));
        }
        function myUnserialize($txt) {
            return unserialize(gzuncompress(base64_decode($txt)));
        }
        ?>

    Also, if somebody can provide their own code showing the most efficient way to do this, that would help. Thanks.

    Read the article

  • mysql result set joining existing table

    - by Yang
    Is there any way to avoid using a temp table? I am using a query with an aggregate function (SUM) to generate the total quantity of each product; the result looks like this:

        product_name | sum(qty)
        product_1    | 100
        product_2    | 200
        product_5    | 300

    Now I want to join the above result to another table called products, so that I get a summary like this:

        product_name | sum(qty)
        product_1    | 100
        product_2    | 200
        product_3    | 0
        product_4    | 0
        product_5    | 300

    I know one way of doing this is to dump the first query's result into a temp table and then join it with the products table. Is there a better way?
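
    A temp-table-free sketch: LEFT JOIN products against the aggregate query as a derived table. (The order_items name and its columns are assumptions, since the source table isn't shown.)

        SELECT p.product_name,
               COALESCE(s.total_qty, 0) AS total_qty
        FROM products p
        LEFT JOIN (
            SELECT product_id, SUM(qty) AS total_qty
            FROM order_items
            GROUP BY product_id
        ) s ON s.product_id = p.product_id;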

    Read the article

  • getting a date array from a mysql database?

    - by user296516
    Hi guys, I have a database with a date field in this format: "2010.06.11. | 10:26 13". What I need is a PHP array that holds all the distinct dates, i.e.

        array[0] = "2010.06.09."
        array[1] = "2010.06.10."
        array[2] = "2010.06.11."

    Currently I am doing it by selecting the whole table, then looping through the result and adding the date substring to the array if it is not already there. But maybe there is a faster way? Thanks.
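
    One way to push that work into MySQL (a sketch; my_table and date_field are placeholder names, and it assumes the field is a string in exactly the format shown, with ' | ' between date and time):

        SELECT DISTINCT SUBSTRING_INDEX(date_field, ' | ', 1) AS day
        FROM my_table
        ORDER BY day;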

    Read the article

  • MySQL: updating a row and deleting the original in case it becomes a duplicate

    - by Silvio Donnini
    I have a simple table made up of two columns: col_A and col_B. The primary key is defined over both. I need to update some rows and assign to col_A values that may generate duplicates, for example:

        UPDATE `table` SET `col_A` = 66 WHERE `col_B` = 70

    This statement sometimes yields a duplicate key error. I don't want to simply ignore the error with UPDATE IGNORE, because then the rows that generate the error would remain unchanged. Instead, I want them to be deleted when they would conflict with another row after being updated. I'd like to write something like:

        UPDATE `table` SET `col_A` = 66 WHERE `col_B` = 70 ON DUPLICATE KEY REPLACE

    which unfortunately isn't legal SQL, so I need help finding another way around it. Also, I'm using PHP and could consider a hybrid solution (i.e. part query, part PHP code), but keep in mind that I have to perform this updating operation many millions of times. Thanks for your attention, Silvio. Reminder: UPDATE's syntax has problems with joins against the same table that is being updated.
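
    For this particular statement, one workaround (a sketch against the two-column table described above) is to first make sure the target row exists, then delete every row that would have collided; the end state is the same as "update and drop duplicates":

        -- Create (66, 70) if any col_B = 70 row exists and (66, 70) doesn't yet.
        INSERT IGNORE INTO `table` (col_A, col_B)
        SELECT DISTINCT 66, col_B FROM `table` WHERE col_B = 70;

        -- Every other col_B = 70 row would now be a duplicate of (66, 70).
        DELETE FROM `table` WHERE col_B = 70 AND col_A <> 66;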

    Read the article

  • Mysql storing quotes as &#39;

    - by Click Upvote
    I have some PHP code which stores whatever is typed in a textbox in the database. If I type in bob's apples, it gets stored in the database as bob&#39;s apples. What could be the problem? The table storing this has the latin1_swedish_ci collation.

    Read the article

  • MySQL search Chinese characters

    - by Jasie
    Hello, let's say I have a row: ??????? Someone enters as a query: ?? Should I break up the characters in the query and perform a LIKE '%...%' match on each character individually against the row, or is there an easier way to get a row that contains either of the two characters? FULLTEXT won't work with CJK characters. Thanks!
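
    The per-character OR is one straightforward approach (a sketch; the articles/title names and the two sample characters are placeholders, since the original characters didn't survive here, and it assumes the column uses a CJK-capable character set such as utf8):

        SELECT *
        FROM articles
        WHERE title LIKE '%字%'
           OR title LIKE '%典%';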

    Read the article

  • mysql union query

    - by Sergio
    The table that contains information about members has a structure like:

        id | fname | pic   | status
        ---+-------+-------+-------
        1  | john  | a.jpg | 1
        2  | mike  | b.jpg | 1
        3  | any   | c.jpg | 1
        4  | jacky | d.jpg | 1

    The table for the list of friends looks like:

        myid | date       | user
        -----+------------+-----
        1    | 01-01-2011 | 4
        2    | 04-01-2011 | 3

    I want to make a query whose result lists the users from the "friendlist" table together with the photos and names of those users from the "members" table, for both myid (those who add) and user (those who are added). The result in this example would look like:

        myid | myidname | myidpic | user | username | userpic | status
        -----+----------+---------+------+----------+---------+-------
        1    | john     | a.jpg   | 4    | jacky    | d.jpg   | 1
        2    | mike     | b.jpg   | 3    | any      | c.jpg   | 1
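
    Despite the title, this shape doesn't need a UNION: joining the members table twice, once per side of the friendship, produces it directly (a sketch, assuming the tables are named members and friendlist as described):

        SELECT f.myid,
               m1.fname AS myidname,
               m1.pic   AS myidpic,
               f.user,
               m2.fname AS username,
               m2.pic   AS userpic,
               m2.status
        FROM friendlist f
        JOIN members m1 ON m1.id = f.myid
        JOIN members m2 ON m2.id = f.user;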

    Read the article

  • mySQL not saving data?

    - by tony noriega
    I have a PHP contact form that submits data, and an email:

        <?php
        $dbh = mysql_connect("localhost", "username", "password")
            or die('I cannot connect to the database because: ' . mysql_error());
        mysql_select_db("guest");

        if (isset($_POST['submit'])) {
            if (!$_POST['name'] || !$_POST['email']) {
                echo "<div class='error'>Error<br />Please provide your Name and Email Address so we may properly contact you.</div>";
            } else {
                $age      = $_POST['age'];
                $name     = $_POST['name'];
                $gender   = $_POST['gender'];
                $email    = $_POST['email'];
                $phone    = $_POST['phone'];
                $comments = $_POST['comments'];

                $query = "INSERT INTO contact_us (age,name,gender,email,phone,comments)
                          VALUES ('$age','$name','$gender','$email','$phone','$comments')";
                mysql_query($query);
                mysql_close();

                $yoursite  = "Mysite ";
                $youremail = $email;
                $subject   = "Website Guest Contact Us Form";
                $message   = "$name would like you to contact them
        Contact PH: $phone
        Email: $email
        Age: $age
        Gender: $gender
        Comments: $comments";
                $email2 = "[email protected]";
                mail($email2, $subject, $message, "From: $email");

                echo "<div class='thankyou'>Thank you for contacting us,<br /> we will respond as soon as we can.</div>";
            }
        }
        ?>

    The email is coming through fine, but the data is not being stored in the database. Am I missing something? It's the same script I use on another contact-us page; the only difference is that instead of parsing the data on the same page, I now send the data to a "thankyou.php" page. I tried changing $_POST to $_GET, but that killed the page. What am I doing wrong?

    Read the article

  • MySQL database design question

    - by Greelmo
    I'm trying to weigh the pros and cons of a database design, and would like to get some feedback as to the best approach. Here is the situation: users of my system have only a few required items (username, password). They can then supply a lot of optional information. This optional information continues to grow as the system grows, so I want to handle it in such a way that adding new optional information is easy. Currently, I have a separate table for each piece of information. For example, there's a table called 'names' that holds 'user_id', 'first_name', and 'last_name'. There's 'address', 'occupation', etc. You get the drift. In most cases, when I talk to my database, I'm looking only for users with one particular qualifier (name, address, etc.). However, there are instances when I want to see what information a user has set. The 'edit account' page, for example, must run queries for each piece of information it wants. Is this wasteful? Is there a way I can structure my queries or my database so that I never have to do one query per piece of information like that, without letting my tables get too huge? If I want to add 'marital status', how hard will that be if I don't have a one-table-per-attribute system? Thanks in advance.
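
    One common alternative to one-table-per-attribute for sparse optional data is a single key-value (EAV-style) table; a sketch with hypothetical names, shown only to make the trade-off concrete: adding 'marital status' needs no schema change and the 'edit account' page reads one query, but per-attribute types and constraints are lost.

        CREATE TABLE user_attributes (
            user_id    INT UNSIGNED NOT NULL,
            attr_name  VARCHAR(50)  NOT NULL,   -- e.g. 'first_name', 'marital_status'
            attr_value VARCHAR(255),
            PRIMARY KEY (user_id, attr_name)
        );

        -- Everything a user has set, in one query:
        SELECT attr_name, attr_value FROM user_attributes WHERE user_id = 5;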

    Read the article

  • Non-distinct unique ID in a MySQL database table

    - by Geoff
    First off, a simplified version: I am wondering if I can create a trigger that activates during INSERT (it's actually LOAD DATA INFILE) and does NOT enter records for an RMA that is already in my table. I have a table in which no records are unique. Some may be duplicates, but there is one field I can use to know whether the data has been entered or not. For instance:

        RMA | Op     | Days
        ----+--------+-----
        213 | Repair | 0.10
        213 | Test   | 0.20
        213 | Repair | 0.10

    So I could create an index on the three columns together, but as you can see, it's possible for an RMA to be in the same step for the same amount of time twice, so duplicate records are legitimate. This data comes from a report that I cannot edit, and this is all it provides. The key is that an RMA's data appears in the report only once, so if my database already has that RMA in its records, I want to skip loading that RMA's records from the report. By all means, let me know if that didn't make sense; I'll explain as needed. I'm sure it's not uncommon, but I couldn't find anything on the net.
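
    One trigger-free workaround (a sketch; rma_steps and rma_staging are hypothetical names): LOAD DATA INFILE into an empty staging table, then copy over only the RMAs the main table hasn't seen yet.

        TRUNCATE rma_staging;
        -- LOAD DATA INFILE 'report.txt' INTO TABLE rma_staging ...;

        INSERT INTO rma_steps (rma, op, days)
        SELECT s.rma, s.op, s.days
        FROM rma_staging s
        WHERE NOT EXISTS (
            SELECT 1 FROM rma_steps t WHERE t.rma = s.rma
        );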

    Read the article

  • mysql concat all fields in a table

    - by hafizan
    Is there a way to concatenate all the fields of a table in one SQL statement, automatically? The reason is that before a user updates or deletes a record, the record should be pushed to another table for future reference.
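
    With an explicit column list, CONCAT_WS does this in one statement (a sketch; the customers table and its columns are placeholders). Doing it "automatically" for an arbitrary table would mean generating the statement dynamically from information_schema.columns. For the stated audit use case, a plain INSERT ... SELECT * into an identically structured history table may be simpler than concatenating at all.

        SELECT CONCAT_WS('|', id, name, email, created_at) AS full_row
        FROM customers
        WHERE id = 42;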

    Read the article

  • JDBC with MySQL

    - by Josh K
    I'm working on getting my database to talk to my Java programs. What do I need to get started? Having already read through (and been thoroughly confused by, something that does not happen often) some other tutorials, I figured I'd best ask here. How do I import a jar file from the local directory? Can someone give me a quick and dirty sample program using JDBC?

    Read the article

  • Newbie question - MySQL index size

    - by Tommy
    I've just started investigating how I should optimize my database. Indexing seems to be a good idea, so I want to index a VARCHAR column; the engine is MyISAM. From what I've read, I understand that an index is limited to a size of 1000 bytes, and a VARCHAR character is 3 bytes in size. Does this mean that if I want to index a VARCHAR column with 50 rows, I need an index prefix of 6 characters? I came to that number by dividing 1000 by the row count of 50, then by the byte size per character, which is 3: 1000/50/3 = 6.66. It seems a little complicated, so I'm just wondering if I'm thinking right? It seems weird to me that you'd only be able to index 333 rows in a VARCHAR column using a prefix of 1 character.
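
    For reference, MyISAM's 1000-byte limit applies to each individual key value (one index entry per row), not to the sum across all rows, so the row count doesn't enter into it. A prefix index over the first N characters is declared like this (hypothetical names):

        -- Index only the first 10 characters of name_col.
        ALTER TABLE my_table ADD INDEX idx_name (name_col(10));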

    Read the article

  • Problems getting foreign keys working in MySQL

    - by thehuby
    I've been trying to get a delete to cascade and it just doesn't seem to work. I'm sure I am missing something obvious; can anyone help me find it? I would expect a delete on the 'articles' table to trigger a delete on the corresponding rows in the 'article_section_lt' table.

        CREATE TABLE articles (
            id INTEGER UNSIGNED PRIMARY KEY AUTO_INCREMENT,
            url_stub VARCHAR(255) NOT NULL UNIQUE,
            h1 VARCHAR(60) NOT NULL UNIQUE,
            title VARCHAR(60) NOT NULL,
            description VARCHAR(150) NOT NULL,
            summary VARCHAR(150) NOT NULL DEFAULT "",
            html_content TEXT,
            date DATE NOT NULL,
            updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
        ) ENGINE=INNODB;

        CREATE TABLE article_sections ( /* blog, news etc */
            id INTEGER UNSIGNED PRIMARY KEY AUTO_INCREMENT,
            url_stub VARCHAR(255) NOT NULL UNIQUE,
            h1 VARCHAR(60) NOT NULL,
            title VARCHAR(60) NOT NULL,
            description VARCHAR(150) NOT NULL,
            summary VARCHAR(150) NOT NULL DEFAULT "",
            html_content TEXT NOT NULL DEFAULT ""
        ) ENGINE=INNODB;

        CREATE TABLE article_section_lt (
            fk_article_id INTEGER UNSIGNED NOT NULL REFERENCES articles(id) ON DELETE CASCADE,
            fk_article_section_id INTEGER UNSIGNED NOT NULL
        ) ENGINE=INNODB;
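
    One detail that commonly bites here: MySQL parses inline REFERENCES clauses like the one on fk_article_id but silently ignores them; InnoDB only enforces constraints declared with an explicit FOREIGN KEY clause. A sketch of the join table written that way:

        CREATE TABLE article_section_lt (
            fk_article_id INTEGER UNSIGNED NOT NULL,
            fk_article_section_id INTEGER UNSIGNED NOT NULL,
            FOREIGN KEY (fk_article_id)
                REFERENCES articles(id) ON DELETE CASCADE
        ) ENGINE=INNODB;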

    Read the article

  • Mysql Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB, and most of it goes to one particular column, which is a TEXT column holding the text of PDF files. We expect to have a 20-30 GB, or maybe 50 GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup.

    1) We are using InnoDB on every table, including the users table etc. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory usage / performance perspective)?

    2) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of having 300,000-350,000 rows, we'll have 10 million rows storing the text version of these PDF files. However, we will retrieve fewer pages: instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and that will have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; however, if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory.

    Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k, but it will save memory and improve performance. Is this a good approach to the problem, and do you have any ideas how to overcome it? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks,
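
    A sketch of the chunked layout described in point 2, with all names hypothetical: one row per 5-page chunk, so highlighting only has to pull the chunks that actually matched.

        CREATE TABLE pdf_chunks (
            doc_id    INT UNSIGNED NOT NULL,   -- the PDF this chunk belongs to
            chunk_no  INT UNSIGNED NOT NULL,   -- each chunk covers 5 pages
            body      MEDIUMTEXT   NOT NULL,   -- text of those 5 pages
            PRIMARY KEY (doc_id, chunk_no)
        ) ENGINE=InnoDB;

        -- Fetch only the matching chunks of document 123 for highlighting:
        SELECT body FROM pdf_chunks WHERE doc_id = 123 AND chunk_no IN (0, 7);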

    Read the article
