Search Results

Search found 14310 results on 573 pages for 'mysql sock'.


  • Select distinct... in fulltext search

    - by lam3r4370
        <?php
        session_start();
        $user = $_GET['user'];
        $conn = mysql_connect("localhost", "...", "...");
        mysql_select_db("...");
        $sql = "SELECT filter FROM userfilter WHERE user='$user'";
        $mksql = mysql_query($sql);
        while ($row = mysql_fetch_assoc($mksql)) {
            $filter = $row['filter'];
            $sql2 = "SELECT DISTINCT * FROM rss WHERE MATCH(content,title) AGAINST ('$filter')";
            $mksql2 = mysql_query($sql2) or die(mysql_error());
            while ($rows = mysql_fetch_assoc($mksql2)) {
                echo .....
            }
        }
        ?>

    If the content of two rows matches $filter, that content is output, but it repeats. For example:

        title | content
        asd   | This is a sample content, number one
        das   | This is a sample content, number two
        ....

    If my keywords are "sample" and "number", the title and content are output twice. How can I prevent that?
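
    One way to avoid the duplicates is to run a single fulltext query instead of one query per filter. A minimal SQL sketch, assuming the user's filters have first been collected into one boolean-mode search string (the keywords here are illustrative):

        -- one pass over rss: each row is matched once, no matter how many
        -- of the user's keywords it contains
        SELECT DISTINCT title, content
        FROM rss
        WHERE MATCH(content, title) AGAINST ('sample number' IN BOOLEAN MODE);

    Because each row is examined once rather than once per keyword, a row matching both "sample" and "number" is returned only once.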

    Read the article

  • Need code to show a result inside an HTML table

    - by klox
    Dear all, I have a text field:

        <tr>
          <td><input type="text" id="model_name"></td>
        </tr>

    and a cell:

        <tr>
          <td><div id="value"><!-- I want the data to show here after the text field is filled --></div></td>
        </tr>

    Besides that, I have a table "settingdata" in the database. It consists of two fields, itemdata and remark; itemdata's value is "UD" and remark's value is "FM=87.5-108.0MHZ". What must I do so that, after typing the model name "car01UD" into the text field, "FM=87.5-108.0MHZ" shows up inside <div id="value"></div>?
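
    The front end needs JavaScript/AJAX to react to the typing, but the database side can be a single lookup. A sketch of that query, matching the typed model name against the itemdata suffix (table and column names are from the question; the literal model name is illustrative):

        -- returns 'FM=87.5-108.0MHZ' because 'car01UD' ends with 'UD'
        SELECT remark
        FROM settingdata
        WHERE 'car01UD' LIKE CONCAT('%', itemdata);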

    Read the article

  • Is the time cost constant when bulk inserting data into an indexed table?

    - by SiLent SoNG
    I have created an archive table which will store data for selecting only. Daily, a program will transfer a batch of records into the archive table. Several columns are indexed, while others are not. I am concerned with the time cost per batch insertion:

        1st batch insertion: N1
        2nd batch insertion: N2
        3rd batch insertion: N3

    The question is: will N1, N2, and N3 be roughly the same, or will N3 > N2 > N1? That is, will the time cost be constant or incremental, given the existence of several indexes? All indexes are non-clustered. The archive table structure is this:

        create table document (
          doc_id   int unsigned primary key,
          owner_id int,  -- indexed
          title    text,
          country  char(2),
          year     year(4),
          time     datetime,
          key ix_owner(owner_id)
        )
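
    Index maintenance cost grows with index depth, roughly logarithmically in table size, so batch times typically creep upward slowly rather than staying perfectly flat. One common mitigation for large batches, sketched below, is to defer non-unique index maintenance during the load; note the caveat that DISABLE KEYS only affects non-unique indexes on MyISAM tables, so whether it helps here depends on the table's actual engine:

        ALTER TABLE document DISABLE KEYS;
        -- ... run the daily batch INSERTs here ...
        ALTER TABLE document ENABLE KEYS;  -- non-unique indexes are rebuilt in one pass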

    Read the article

  • How to name uploaded files in php to prevent them from being overwritten?

    - by user156814
    I'm trying to add user-submitted articles to my website (only for admins). With each article comes an option to upload up to 3 images. My database is set up like this:

        Articles
          id
          user_id
          title
          body
          date_added
          last_edited

        Photos
          id (auto_increment)
          article_id

    First I save the article in the database, then I upload the photo (temporarily), then I create a new photo record in the database saving the article_id. Then I rename the uploaded photo to match the primary key of the photo record, with a .png extension:

        $filename = $photo->id . '.png';

    I figured this would be a good way to prevent files from being overwritten, but the approach still seems flawed to me. Any suggestions on how I should save my records and photos? Thanks

    Read the article

  • Query multiple currencies

    - by TiuTalk
    I need to store multiple currencies in my database. Here's the problem. Example tables:

        [ Products ]
        id (INT, PK)
        name (VARCHAR)
        price (DECIMAL)
        currency (INT, FK)

        [ Currencies ]
        id (INT, PK)
        name (VARCHAR)
        conversion (DECIMAL)  # to U$

    I'll store the product price with the currency selected by the user. Later I need to search the products using a price interval like "Search products with price from U$ 50 to U$ 100", and I need the system to convert the stored values "on the fly" in the SQL query to filter the products. And I really don't know how to make this query... :/
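
    A minimal sketch of the on-the-fly conversion, using the tables above and assuming "conversion" is the multiplier that turns a stored price into U$:

        SELECT p.id, p.name, p.price * c.conversion AS price_usd
        FROM Products p
        JOIN Currencies c ON c.id = p.currency
        WHERE p.price * c.conversion BETWEEN 50 AND 100;

    One design note: an expression in the WHERE clause can't use an index on price directly; rewriting the filter as p.price BETWEEN 50 / c.conversion AND 100 / c.conversion keeps the comparison on the bare column, which matters once the table grows.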

    Read the article

  • Concatenating Multiple Rows in SQL

    - by Dave C
    Hello, I have a table structure that looks like this:

        ID  String
        ----------
        1   A
        1   Test
        1   String
        2   Dear
        2   Person

    I need the final output to look like this:

        ID  FullString
        -------------------
        1   A, Test, String
        2   Dear, Person

    I am really lost on how to approach this. I looked at a couple of examples online, but they seemed to be VERY complex... this seems like it should be a really easy problem to solve in SQL. Thank you for all assistance!
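
    In MySQL this is a one-liner with GROUP_CONCAT; a sketch against the structure above (the table name "t" is illustrative):

        SELECT ID,
               GROUP_CONCAT(String SEPARATOR ', ') AS FullString
        FROM t
        GROUP BY ID;

    Note that the order of values within each group is not guaranteed unless you add an ORDER BY inside GROUP_CONCAT, and that other database engines use different mechanisms for this kind of aggregation.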

    Read the article

  • Define keys in temporary table creation

    - by imperium2335
    How do I define the keys for a temporary table that is being created from a SELECT statement? I have:

        CREATE TEMPORARY TABLE _temp_unique_parts_trading ENGINE=MEMORY AS (
          SELECT parts_trading.enquiryref, sellingcurrency, jobs.id AS jobID
          FROM parts_trading, jobs
          WHERE jobs.enquiryref = parts_trading.enquiryref
          GROUP BY parts_trading.enquiryref
        )

    But where do I define the keys?
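
    MySQL accepts index definitions between the table name and the AS SELECT. A sketch using the statement above (the choice of indexed column is an assumption based on the join predicate):

        CREATE TEMPORARY TABLE _temp_unique_parts_trading (
          KEY ix_enquiryref (enquiryref)   -- key declared here, columns come from the SELECT
        ) ENGINE=MEMORY
        AS (
          SELECT parts_trading.enquiryref, sellingcurrency, jobs.id AS jobID
          FROM parts_trading, jobs
          WHERE jobs.enquiryref = parts_trading.enquiryref
          GROUP BY parts_trading.enquiryref
        );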

    Read the article

  • Trouble creating stored procedure

    - by MatW
    I'm messing around with stored procedures for the first time, but can't even create a simple select! I'm using phpMyAdmin and this is my SQL:

        DELIMITER //
        CREATE PROCEDURE test_select()
        BEGIN
            SELECT * FROM products LIMIT 10;
        END //
        DELIMITER ;

    After submitting that, my localhost thinks for a very long time and eventually loads an empty page called /phpmyadmin/import.php. After reloading phpMyAdmin and trying to invoke the procedure:

        CALL test_select();

    I get a "PROCEDURE doesn't exist" error. Any ideas?
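
    One likely culprit (an assumption, but a common one): DELIMITER is a mysql command-line client directive, not server SQL, and submitting it through phpMyAdmin can confuse the statement splitter. phpMyAdmin's SQL tab has its own "Delimiter" input box; a sketch of what to submit with that box set to //, and no DELIMITER lines at all:

        CREATE PROCEDURE test_select()
        BEGIN
            SELECT * FROM products LIMIT 10;
        END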

    Read the article

  • Scalable way of doing self join with many to many table

    - by johnathan
    I have a table structure like the following:

        user
          id
          name

        profile_stat
          id
          name

        profile_stat_value
          id
          name

        user_profile
          user_id
          profile_stat_id
          profile_stat_value_id

    My question is: how do I write a query that finds all users matching a given (profile_stat_id, profile_stat_value_id) pair for many stats at once? I've tried doing an inner self join, but that quickly gets crazy when searching for many stats. I've also tried doing a count on the actual user_profile table, and that's much better, but still slow. Is there some magic I'm missing? I have about 10 million rows in the user_profile table and want the query to take no longer than a few seconds. Is that possible?
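
    The count-based approach the question mentions can be written in one pass over user_profile; a sketch (the pairs and their count are illustrative):

        SELECT user_id
        FROM user_profile
        WHERE (profile_stat_id, profile_stat_value_id) IN ((1, 10), (2, 20), (3, 30))
        GROUP BY user_id
        HAVING COUNT(*) = 3;  -- must match all 3 requested pairs

    With a composite index on (profile_stat_id, profile_stat_value_id, user_id) the query can be resolved from the index alone. HAVING COUNT(*) = 3 assumes at most one row per (user, stat, value); use COUNT(DISTINCT profile_stat_id) if duplicates are possible.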

    Read the article

  • How do I replace NOT EXISTS with JOIN?

    - by YelizavetaYR
    I've got the following query:

        select distinct a.id, a.name
        from Employee a
        join Dependencies b on a.id = b.eid
        where not exists (
            select * from Dependencies d
            where d.eid = b.eid and d.name = 'Apple'
        )
        and exists (
            select * from Dependencies c
            where c.eid = b.eid and c.name = 'Orange'
        );

    I have two tables, relatively simple. The first, Employee, has an id column and a name column. The second, Dependencies, has 3 columns: an id, an eid (the employee id to link on), and a name (Apple, Orange, etc.). The data looks like this:

        Employee
        id | name
        ---------
        1  | Pat
        2  | Tom
        3  | Rob
        4  | Sam

        Dependencies
        id | eid | name
        ---------------
        1  | 1   | Orange
        2  | 1   | Apple
        3  | 2   | Strawberry
        4  | 2   | Apple
        5  | 3   | Orange
        6  | 3   | Banana

    As you can see, Pat has both Orange and Apple, so he needs to be excluded. It has to be done via joins, and I can't seem to get it to work. Ultimately the query should return only Rob.
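
    A sketch of the join-only form, replacing NOT EXISTS with a LEFT JOIN anti-join (table and column names are from the question):

        SELECT DISTINCT a.id, a.name
        FROM Employee a
        JOIN Dependencies o ON o.eid = a.id AND o.name = 'Orange'   -- must have Orange
        LEFT JOIN Dependencies x ON x.eid = a.id AND x.name = 'Apple'
        WHERE x.id IS NULL;                                         -- must NOT have Apple

    The inner join keeps only employees with Orange; the LEFT JOIN looks for an Apple row, and WHERE x.id IS NULL keeps only the employees where none was found, so the result is Rob.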

    Read the article

  • How to do an additional search on an archive in Rails if a record is not found, by extending the model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following. Suppose I have a table that holds orders, with a bunch of data. I'm at 1M records, and searches are beginning to take time. I want to speed things up by archiving data that is more than 3 years old: saving it into a table called orders-archive, then purging those rows from the orders table. If we need to research something, or a customer wants to pull older information, they still can; but 99% of lookups are done on orders no older than a year and a half, so there is no reason to keep searching through the older data all the time. These move-and-purge operations can then be cron'd to run on a weekly basis. I have already done some tests and I know this will cut my search times roughly by a factor of 4. So far so good, right? However, I was thinking about how to implement the archival lookups, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. The catch is that I have about 20 tables I want to archive, and who knows how many searches/finds are done throughout the code that I don't want to modify. So I was wondering if there is an elegant Rails-way solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.
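
    One database-side alternative worth weighing against the model-extension idea (a sketch, not the Rails approach the question asks for; it assumes orders_archive has the same structure as orders):

        CREATE VIEW orders_all AS
            SELECT * FROM orders
            UNION ALL
            SELECT * FROM orders_archive;

    Fast-path queries keep hitting orders; the rare deep lookup is pointed at orders_all, so only the handful of "search everything" call sites need changing rather than every find in the code.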

    Read the article

  • Incorrect string encodings

    - by James
    Note: I have read all of the related PHP, UTF-8, and character encoding articles that are usually suggested, but my question relates to data inserted before I applied such techniques. I wish to retrospectively fix all character encoding problems. All connections are now set to utf8 using PDO:

        PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8'

    Unfortunately, a large amount of data of questionable encoding was inserted before I had implemented correct character encoding practices. As displayed by:

        $sql = "SELECT name FROM data LIMIT 3";
        foreach ($pdo->query($sql) as $row) {
            $name = $row['name'];
            echo $name . "\n";
            echo utf8_encode($name) . "\n";
            echo utf8_decode($name) . "\n";
            echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8') . "\n";
            echo htmlspecialchars(utf8_encode($name), ENT_QUOTES, 'UTF-8') . "\n";
            echo htmlspecialchars(utf8_decode($name), ENT_QUOTES, 'UTF-8') . "\n";
            echo '<hr/>';
        }

    Which produces:

        Antonín Dvořák
        AntonÃÆÃ­n DvoÃâ¦Ãâ¢ÃÆÃ¡k
        Anton??­n Dvo??????¡k
        Antonín Dvořák
        AntonÃÆÃ­n DvoÃâ¦Ãâ¢ÃÆÃ¡k
        ----------
        Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶
        ñÃâ¬Ã¡Ã´ ýáùáÿÃâ¬ÃµÃ¡Ã¶
        ????? ??????????
        Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶
        ñÃâ¬Ã¡Ã´ ýáùáÿÃâ¬ÃµÃ¡Ã¶
        ----------
        Tiësto
        Tiësto
        Tiësto
        Tiësto
        Tiësto
        Tiësto
        ----------

    When removing 'SET NAMES utf8' with PDO, it produces:

        Antonín DvoÅák
        Antonín DvoÃÂák
        Antonín Dvorák
        Antonín DvoÅák
        Antonín DvoÃÂák
        Antonín Dvorák
        ----------
        ???? ?????????
        Ô±ÖÕ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿ÖÕµÕ¡Õ¶
        ???? ?????????
        ???? ?????????
        Ô±ÖÕ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿ÖÕµÕ¡Õ¶
        ???? ?????????
        ----------
        Tiësto
        Tiësto
        Ti?sto
        Tiësto
        Tiësto
        ----------

    And here is a dump of the database rows concerned:

        DROP TABLE IF EXISTS `data`;
        CREATE TABLE IF NOT EXISTS `data` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(80) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `name` (`name`(10))
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=0;

        INSERT INTO `data` (`id`, `name`) VALUES
        (0, 'Antonín Dvořák'),
        (1, 'Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶'),
        (2, 'Tiësto');

    The 3rd and 6th lines of the 3rd row, "Tiësto", are then correctly echoed. I'm just unsure of the best way to correct the encodings, or to detect the encodings of the bad strings and correct them, etc.
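
    For values that were stored as UTF-8 bytes through a latin1 connection (the classic single double-encoding), a commonly used repair is to round-trip the column through latin1 and back. A sketch, and definitely one to test on a copy first, since some of the rows above look like they were mangled more than once and may need more than one pass:

        UPDATE data
        SET name = CONVERT(BINARY CONVERT(name USING latin1) USING utf8)
        WHERE id = 2;  -- illustrative: restrict to rows identified as double-encoded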

    Read the article

  • Currently using a view; should I use a hard table instead?

    - by 1001010101
    I am currently debating whether my table mapping_uGroups_uProducts, which is a view formed by the following statement, should be a hard table instead:

        CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `db`.`mapping_uGroups_uProducts` AS
        select distinct `X`.`upID` AS `upID`, `Z`.`ugID` AS `ugID`
        from ((`db`.`mapping_uProducts_Products` `X`
          join `db`.`productsInfo` `Y` on ((`X`.`pID` = `Y`.`pID`)))
          join `db`.`mapping_uGroups_Groups` `Z` on ((`Y`.`gID` = `Z`.`gID`)));

    My current query is:

        SELECT upID FROM uProductsInfo
          JOIN fs_uProducts USING (upID)
          JOIN mapping_uGroups_uProducts USING (upID)  -- could be faster if we use a hard table and index
          JOIN mapping_fs_key USING (fsKeyID)
        WHERE fsName="OVERALL"
          AND ugID=1
        ORDER BY score DESC
        LIMIT 0,30;

    which is pretty slow (about 10 seconds for 30 results). I think the query is slow because it relies on a view that has no index to speed things up:

        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        | id | select_type | table          | type   | possible_keys  | key     | key_len | ref                            | rows  | Extra                           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        |  1 | PRIMARY     | mapping_fs_key | const  | PRIMARY,fsName | fsName  | 386     | const                          |     1 | Using temporary; Using filesort |
        |  1 | PRIMARY     | <derived2>     | ALL    | NULL           | NULL    | NULL    | NULL                           | 19706 | Using where                     |
        |  1 | PRIMARY     | uProductsInfo  | eq_ref | PRIMARY        | PRIMARY | 4       | mapping_uGroups_uProducts.upID |     1 | Using index                     |
        |  1 | PRIMARY     | fs_uProducts   | ref    | upID           | upID    | 4       | db.uProductsInfo.upID          |   221 | Using where                     |
        |  2 | DERIVED     | X              | ALL    | PRIMARY        | NULL    | NULL    | NULL                           | 40772 | Using temporary                 |
        |  2 | DERIVED     | Y              | eq_ref | PRIMARY        | PRIMARY | 4       | db.X.pID                       |     1 | Distinct                        |
        |  2 | DERIVED     | Z              | ref    | PRIMARY        | PRIMARY | 4       | db.Y.gID                       |     2 | Using index; Distinct           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        7 rows in set (0.48 sec)

    The EXPLAIN here looks pretty cryptic, and I don't know whether I should drop the view and write a script that just inserts everything from the view into a hard table (obviously losing the flexibility of the view, since the mapping changes quite frequently). Does anyone have an idea of how I can optimize this schema better?
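
    A sketch of materializing the view into an indexed hard table, rebuilt by a script on the same schedule the mappings change (the table name and index choice are assumptions based on the query above):

        CREATE TABLE mapping_uGroups_uProducts_hard (
          upID INT NOT NULL,
          ugID INT NOT NULL,
          PRIMARY KEY (ugID, upID)   -- ugID first, to match the "ugID = 1" filter
        )
        SELECT DISTINCT X.upID, Z.ugID
        FROM mapping_uProducts_Products X
        JOIN productsInfo Y ON X.pID = Y.pID
        JOIN mapping_uGroups_Groups Z ON Y.gID = Z.gID;

    With (ugID, upID) as the index, both the ugID filter and the join on upID can be served from the primary key, instead of materializing the whole unindexed derived table on every query.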

    Read the article

  • Complex query with two tables and multiple data and price ranges

    - by TiuTalk
    Let's suppose that I have these tables:

        [ properties ]
        id (INT, PK)
        name (VARCHAR)

        [ properties_prices ]
        id (INT, PK)
        property_id (INT, FK)
        date_begin (DATE)
        date_end (DATE)
        price_per_day (DECIMAL)
        price_per_week (DECIMAL)
        price_per_month (DECIMAL)

    And my visitor runs a search like: list the first 10 (pagination) properties where the price per day (the price_per_day field) is between 10 and 100 for the period from 1st May until 31st December. I know that's a huge query, and I need to paginate the results, so I must do all the calculation and logic in only one query... that's why I'm here! :)
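
    A minimal sketch of such a query, assuming a property qualifies if any of its price rows overlaps the requested period with a matching price_per_day (the year in the dates is illustrative):

        SELECT p.id, p.name
        FROM properties p
        JOIN properties_prices pp ON pp.property_id = p.id
        WHERE pp.price_per_day BETWEEN 10 AND 100
          AND pp.date_begin <= '2010-12-31'   -- the price row's period overlaps ...
          AND pp.date_end   >= '2010-05-01'   -- ... the requested period
        GROUP BY p.id, p.name
        LIMIT 0, 10;

    GROUP BY collapses properties with several overlapping price rows into one result each, and LIMIT 0, 10 is the first pagination page.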

    Read the article

  • Hibernate deletion issue

    - by muffytyrone
    I'm trying to write a Java app that imports a data file. The process is as follows:

        1. Create a transaction
        2. Delete all rows from the data table
        3. Load the data file into the data table
        4. Commit, or roll back if any errors were encountered

    The data loaded in step 3 is mostly the same as the data deleted in step 2. The deletion is performed using the following:

        DetachedCriteria criteria = DetachedCriteria.forClass(myObject.class);
        List<myObject> myObjects = hibernateTemplate.findByCriteria(criteria);
        hibernateTemplate.deleteAll(myObjects);

    When I then load the data file, I get the following exception:

        nested exception is org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session

    The whole process needs to take place in one transaction, and I don't really want to have to compare the import file against the data table and then perform inserts/updates/deletes to get them into sync. Any help would be appreciated.

    Read the article

  • alter mysqldump file before import

    - by julio
    Hi, I have a mysqldump file created from an earlier version of a product that can't be imported into a new version of the product, since the db structure has changed slightly (mainly, a column that was NOT NULL DEFAULT 0 is now UNIQUE KEY DEFAULT NULL). If I just import the old dump file, it errors out, since the column's default values of 0 now break the UNIQUE constraint. It would be easy enough to either manually alter the mysqldump file, or import into a temp table, change it there, then copy to the new table. However, is there a way to do this programmatically, so it will be repeatable and not manual? (This will need to happen for many instances of this product.) I'm thinking something like disabling key constraints for the import, then setting all values that = 0 to NULL, then re-enabling the key constraints. Is this possible? Any help appreciated.
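
    Disabling the UNIQUE check for the import is risky (the engine may accept duplicates it can never re-verify), so the staging-table variant mentioned above is easier to make repeatable in a script. A sketch, with every table, column, and index name hypothetical:

        -- staging table shaped like the new schema, minus the unique constraint
        CREATE TABLE product_staging LIKE product;
        ALTER TABLE product_staging DROP INDEX uq_legacy_col;

        -- (script redirects the old dump's inserts into product_staging here)

        -- translate the old sentinel values, then move rows into the real table
        UPDATE product_staging SET legacy_col = NULL WHERE legacy_col = 0;
        INSERT INTO product SELECT * FROM product_staging;
        DROP TABLE product_staging;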

    Read the article

  • Query execution problem

    - by srini-r85
    Hi, I tried to execute the following query in a PHP script:

        $db_selected = mysql_select_db("lumiinc1_sndemo1", $con);
        if ($db_selected) {
            echo "database connected";
        } else {
            die("Can't use db: " . mysql_error());
        }
        $sql = "INSERT INTO `markers` ( `name`, `address`, `lat`, `lng`, `id` )
                SELECT `name`, `street`, `latitude`, `longitude`, `lid`
                FROM `location`
                WHERE NOT EXISTS (
                    SELECT * FROM `markers` WHERE `location`.`lid` = `markers`.`id`
                )";
        $result = mysql_query($sql);
        if ($result) {
            echo "Query executed OK";
        } else {
            die("Invalid query: " . mysql_error());
        }

    The script does not show any error and the query executes, but I don't get the expected result. When I run the same query in phpMyAdmin, I do get the expected result. I don't know the cause of this problem. Please, can anyone spot it? Thanks.

    Read the article

  • I can't insert data into my database

    - by Ken
    I don't know why, but my data doesn't go into my database 'users' with the table 'data'.

        <html>
        <body>
        <?php
        date_default_timezone_set("America/Los_Angeles");
        include("mainmenu.php");
        $con = mysql_connect("localhost", "root", "g00dfor@boy");
        if (!$con) {
            die(mysql_error());
        }
        $usrname = $_POST['usrname'];
        $fname = $_POST['fname'];
        $lname = $_POST['lname'];
        $password = $_POST['password'];
        $email = $_POST['email'];
        mysql_select_db("users", $con) or die(mysql_error());
        mysql_query("INSERT INTO `users`.`data` (`id`, `usrname`, `fname`, `lname`, `email`, `password`)
                     VALUES (NULL, '$usrname', '$fname', '$lname', '$email', '$password')") or die(mysql_error());
        mysql_close($con);
        echo("Thank you for registering!");
        ?>
        </body>
        </html>

    All I get is a blank page.

    Read the article

  • Flexible forms and supporting database structure

    - by sunwukung
    I have been tasked with creating an application that allows administrators to alter the content of the user input form (i.e. add arbitrary fields), the contents of which get stored in a database. Think Modx/Wordpress/Expression Engine template variables. The approach I've been looking at is implementing concrete tables where the specification is consistent (i.e. user profiles, user content, etc.) plus some generic field-data tables (i.e. text, boolean) to store the non-specific values; a sketch of that layout follows below. Forms (and model fields) would be generated by first querying the tables and retrieving the relevant columns, although I've yet to think about how I would set up validation. I've taken a look at this problem, and it seems to point toward an EAV-type approach, which, from my brief research, looks like it could be a greater burden than the blessings its flexibility would bring. I've read a couple of posts here, however, which suggest this is a dangerous route: "How to design a generic database whose layout may change over time?" and "Dynamic Database Schema". I'd appreciate some advice on this matter if anyone has some to give. Regards, SWK
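
    A minimal sketch of the hybrid layout described above (all table and column names are illustrative): the concrete tables stay as they are, while admin-defined fields get a definition table plus one value table per data type:

        CREATE TABLE form_fields (
          id    INT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
          name  VARCHAR(80) NOT NULL,
          type  ENUM('text', 'boolean') NOT NULL
        );

        CREATE TABLE field_values_text (
          field_id INT UNSIGNED NOT NULL,  -- FK to form_fields
          user_id  INT UNSIGNED NOT NULL,  -- FK to the concrete users table
          value    TEXT,
          PRIMARY KEY (field_id, user_id)
        );

    Keeping one value table per type is what separates this from full EAV: columns stay typed and indexable, which avoids the everything-in-one-varchar querying pain the linked posts warn about.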

    Read the article
