Search Results

Search found 33911 results on 1357 pages for 'mysql select'.


  • Reading an XML file and storing its data in a MySQL database

    - by Jack Brown
    Hi, I need the following PHP script to do a currency conversion using a different XML file. It's a script from White Hat Web Design: http://www.white-hat-web-design.co.uk/articles/php-currency-conversion.php The script needs to be amended to do the following: 1) every 24 hours, download an XML file from rss.timegenie.com/foreign_exchange_rates_forex (rss.timegenie.com/forex.xml or rss.timegenie.com/forex2.xml); 2) store the XML file's contents in a MySQL database, i.e. currency and rate. Any advice would be appreciated. (A sketch of the storage side follows after this entry.)

    Read the article
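
    A minimal SQL sketch of the storage side, with assumed table and column names (currency_rates, code, rate) that are not in the original post. The PHP cron job would parse the feed and run an upsert like this for each currency:

        CREATE TABLE currency_rates (
          code        CHAR(3)       NOT NULL,   -- ISO currency code, e.g. 'EUR'
          rate        DECIMAL(18,6) NOT NULL,   -- rate against the feed's base currency
          updated_at  TIMESTAMP     NOT NULL DEFAULT CURRENT_TIMESTAMP
                                    ON UPDATE CURRENT_TIMESTAMP,
          PRIMARY KEY (code)
        ) ENGINE=InnoDB;

        -- Refresh a rate each time the feed is re-downloaded (run once per code/rate pair).
        INSERT INTO currency_rates (code, rate)
        VALUES ('EUR', 0.820000)
        ON DUPLICATE KEY UPDATE rate = VALUES(rate);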

  • MySQL table structure for thumbs up/down on a comments system?

    - by Axel
    Hello, I have already created a table for comments, but I want to add a thumbs up/down feature for comments, like Digg and YouTube. I use PHP and MySQL, and I'm wondering what the best table schema is to implement this so that comments with many likes end up at the top. This is my current comments table: comments(id, user, article, comment, stamp). Note: only registered users will be able to vote, so there is no need to restrict votes by IP. Thanks. (One possible schema is sketched after this entry.)

    Read the article
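
    One common approach, sketched below with assumed names (comment_votes, value): store one row per user per comment with a +1/-1 value, enforce one vote per user with the primary key, and order comments by the summed score. This is an illustration, not the poster's schema:

        CREATE TABLE comment_votes (
          comment_id INT      NOT NULL,
          user_id    INT      NOT NULL,
          value      TINYINT  NOT NULL,          -- +1 = thumb up, -1 = thumb down
          stamp      DATETIME NOT NULL,
          PRIMARY KEY (comment_id, user_id)      -- one vote per user per comment
        ) ENGINE=InnoDB;

        -- Comments for one article, best-scored first.
        SELECT c.*, COALESCE(SUM(v.value), 0) AS score
        FROM comments c
        LEFT JOIN comment_votes v ON v.comment_id = c.id
        WHERE c.article = 123
        GROUP BY c.id
        ORDER BY score DESC;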

  • Is it possible to keep a MySQL migration running without keeping the connection open?

    - by taw
    An ALTER TABLE can easily take a few days, and during this time there's a non-negligible chance of the connection getting terminated due to network problems. Is it possible to start an ALTER TABLE (or CREATE TABLE ... SELECT ...; or some other very long-running query) and leave it running without keeping the connection open all the time? (The obvious solution of screen plus the console mysql client won't easily work, as there's no ssh running on that server, only mysqld.) One server-side possibility is sketched after this entry.

    Read the article
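
    One possibility, assuming the MySQL Event Scheduler is available and DDL is permitted in the event body (worth verifying for your version), is to hand the statement to the server as a one-off event, so the server's own event thread runs it regardless of whether the client stays connected. Table and column names below are placeholders:

        -- Enable the scheduler (requires SUPER; it can also be set in my.cnf).
        SET GLOBAL event_scheduler = ON;

        -- One-off event: the server runs the ALTER, not your client connection.
        CREATE EVENT run_big_alter
          ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 MINUTE
          ON COMPLETION NOT PRESERVE
        DO
          ALTER TABLE big_table ADD COLUMN new_col INT NULL;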

  • How do I start mysqld with options?

    - by xiankai
    I need to start mysqld with command-line options, as described here: http://dev.mysql.com/doc/refman/5.1/en/server-options.html#option_mysqld_skip-grant-tables I normally do sudo service mysqld start, but passing the option as sudo service mysqld start --skip-grant-tables does not seem to work. Alternatively, I have tried starting it as a daemon with sudo mysqld_safe --skip-grant-tables & but it seems to terminate too soon:
        131101 04:59:57 mysqld_safe Logging to '/var/lib/mysql/vagrant.example.com.err'.
        131101 04:59:57 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        131101 05:00:03 mysqld_safe mysqld from pid file /var/lib/mysql/vagrant.example.com.pid ended
    My last resort seems to be specifying the option in /etc/my.cnf instead, but is there any way to do it via the command line?

    Read the article

  • Zend Framework: How to download a file from a MySQL BLOB field

    - by Awan
    I am uploading files (of any type) into a MySQL table's BLOB field. I am now able to get the binary data from that field, and when I print it, it shows binary data in the Firebug console. But I want to download the file exactly as it was uploaded. How can I convert this binary data back into the original file? How do I do this in Zend? Thanks. (What the table needs to store for this is sketched after this entry.)

    Read the article
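
    To rebuild the original file on download you generally also need its name and MIME type, which a bare BLOB column does not carry; the PHP/Zend side then sends those as Content-Type and Content-Disposition headers before echoing the data. A hypothetical table layout (names are assumptions, not from the post):

        CREATE TABLE uploads (
          id        INT UNSIGNED  NOT NULL AUTO_INCREMENT,
          filename  VARCHAR(255)  NOT NULL,   -- original client file name
          mime_type VARCHAR(100)  NOT NULL,   -- e.g. 'application/pdf'
          data      LONGBLOB      NOT NULL,   -- raw file bytes
          PRIMARY KEY (id)
        ) ENGINE=InnoDB;

        -- Fetch everything the download response needs in one query.
        SELECT filename, mime_type, data FROM uploads WHERE id = 42;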

  • Help with a MySQL sum-and-group query and formatting the results for a jQuery graph

    - by Scarface
    I am trying to design a system that retrieves information from a database so that it can be plotted in a jQuery graph. I need to retrieve the information and put it into the required coordinates format (for example, two coordinates: var d = [[1269417600000, 10],[1269504000000, 15]];). The table I am selecting from stores user votes, with fields: points_id (1 = vote up, 2 = vote down), user_id, timestamp, topic_id. What I need to do is select all the votes, group them into their respective days, and then sum the difference between the 1 votes and the 2 votes for each day. I then need to output the data in the plotting format shown earlier, for example April 1, 4 votes. The data points need to be separated by commas, except for the last entry, so I am not sure how to approach that. Below is an example of the kind of thing I need, but it is not correct (a sketch of the grouping query is given after this entry):
        echo "var d=[";
        $query = mysql_query("SELECT *, SUM(IF(points_id = '1', 1, 0)) - SUM(IF(points_id = '2', 1, 0)) AS total
                              FROM points
                              LEFT JOIN topic ON topic.topic_id = points.topic_id
                              WHERE topic.creator = '$user'
                              GROUP BY timestamp
                              HAVING certain time interval");
        while ($row = mysql_fetch_assoc($query)) {
            $timestamp = $row['timestamp'];
            $votes = $row['total'];
            echo "[$timestamp,$votes],";
        }
        echo "];";

    Read the article
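
    A sketch of the grouping part, assuming the timestamp column holds a Unix epoch in seconds (if it is a DATETIME, drop FROM_UNIXTIME). The JavaScript-side formatting (commas, milliseconds, no trailing comma) would still be handled in PHP, for example by collecting rows into an array and using implode() or json_encode():

        -- Net votes (ups minus downs) per calendar day for one creator's topics.
        SELECT DATE(FROM_UNIXTIME(points.timestamp)) AS vote_day,
               SUM(CASE points_id WHEN 1 THEN 1 WHEN 2 THEN -1 ELSE 0 END) AS net_votes
        FROM points
        JOIN topic ON topic.topic_id = points.topic_id
        WHERE topic.creator = 'some_user'
        GROUP BY vote_day
        ORDER BY vote_day;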

  • Convert an MS Access project management application to PHP/MySQL: which methodology?

    - by zzapper
    I've got to convert a not terribly complicated bespoke project management system from an MS Access application to PHP/MySQL. I've been programming for donkey's years but embarrassingly know practically nothing about modern methodologies. So the old 'learning curve' versus 'improved efficiency' conundrum rears its ugly head once again. Although I've Googled up some material, I don't want to prejudice your suggestions: where would you start? I'm at your mercy.

    Read the article

  • MySQL question: is there something like an 'IN ALL' query?

    - by jaycode
    For example, this query: SELECT `variants`.* FROM `variants` INNER JOIN `variant_attributes` ON variant_attributes.variant_id = variants.id WHERE (variant_attributes.id IN ('2','5')) where variant has_many variant_attributes. What I actually want to do is find the variants that have BOTH variant attributes, with ID = 2 AND 5. Is this possible with MySQL? Bonus question: is there a quick way to do this with Ruby on Rails, perhaps with SearchLogic? (One standard approach is sketched after this entry.)

    Read the article
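
    The usual relational-division trick for this kind of "matches all of these" requirement: keep the IN filter, group by the variant, and require that the number of distinct matched attribute IDs equals the number of IDs asked for. A sketch, not taken from the post:

        -- Variants that have BOTH attribute 2 and attribute 5.
        SELECT v.*
        FROM variants v
        JOIN variant_attributes va ON va.variant_id = v.id
        WHERE va.id IN (2, 5)
        GROUP BY v.id
        HAVING COUNT(DISTINCT va.id) = 2;   -- must match every listed attribute ID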

  • INSERT DELAYED on locked tables blocks PHP processes from continuing

    - by sw0x2A
    Our web servers write some tracking information into a MySQL database (using INSERT DELAYED into a MyISAM table). When a huge SELECT query is executed on this table, or when it is locked for another reason, the web server processes (doing the INSERT DELAYED) end up waiting for the database, and in some cases the MaxServer limit is reached in Apache, so it stops serving requests. We use INSERT DELAYED because "The DELAYED option for the INSERT statement is a MySQL extension to standard SQL that is very useful if you have clients that cannot or need not wait for the INSERT to complete. This is a common situation when you use MySQL for logging and you also periodically run SELECT and UPDATE statements that take a long time to complete." (quote from the MySQL documentation). I am wondering why the Apache processes are waiting for the INSERT DELAYED to finish, and what I can do to just send the data and forget about it (since this is logging data, I do not care if we lose some entries). Even when the table is locked, the PHP script should just carry on and not wait for an answer from MySQL. We do not want to set up master-slave replication for this table, and we are thinking about moving this data to some NoSQL database, but for now I would like to know why INSERT DELAYED is not working as expected. Some diagnostics that might help are sketched after this entry.

    Read the article
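
    A few MySQL statements that can help narrow down where the wait happens. These are diagnostics, not a fix; whether the delayed-insert handler or its queue is the bottleneck depends on the server's configuration:

        -- Are delayed inserts actually being queued, and is the handler thread alive?
        SHOW GLOBAL STATUS LIKE 'Delayed_%';        -- Delayed_insert_threads, Delayed_writes, Delayed_errors
        SHOW GLOBAL STATUS LIKE 'Not_flushed_delayed_rows';

        -- Which connections are blocked, and on what?
        SHOW FULL PROCESSLIST;

        -- Size of the per-table delayed-insert queue (default 1000 rows);
        -- once it fills up, INSERT DELAYED clients wait like ordinary inserts.
        SHOW VARIABLES LIKE 'delayed_queue_size';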

  • How to solve 'Illegal mix of collations' in MySQL?

    - by rocksolid
    I am getting the error below when trying to do a SELECT through a stored procedure in MySQL: Illegal mix of collations (latin1_general_cs,IMPLICIT) and (latin1_general_ci,IMPLICIT) for operation '=' Any idea what might be going wrong here? The collation of the table is latin1_general_ci and that of the column in the WHERE clause is latin1_general_cs. Thanks! (A workaround is sketched after this entry.)

    Read the article
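
    When the two sides of a comparison carry different collations of the same character set, one common workaround is to force an explicit collation on one side with COLLATE, or to ALTER the column so both sides agree. A sketch with placeholder table, column, and parameter names; note that picking _cs versus _ci decides whether the comparison is case-sensitive:

        -- Force both sides of the '=' to the same collation for this comparison only.
        SELECT *
        FROM my_table
        WHERE my_column = my_param COLLATE latin1_general_cs;

        -- Or fix it permanently by giving the column the collation you actually want.
        ALTER TABLE my_table
          MODIFY my_column VARCHAR(255)
          CHARACTER SET latin1 COLLATE latin1_general_ci;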

  • mysqldump is not dumping my data

    - by oompahloompah
    I am running mysqldump on Ubuntu Linux (10.04 LTS). My MySQL version info is: mysql Ver 14.14 Distrib 5.1.41, for debian-linux-gnu (i486) using readline 6.1. I used the following command: mysqldump -u username -p dbname > dbname_backup.sql However, when I opened the generated .sql file, I saw that most of the tables had only their schema dumped, and in the few cases where actual data was dumped, only one or two records were there (there are at least several tens of records in each table). Does anyone know what may be going on?

    Read the article

  • MySQL - Calculate the net time difference between two date-times while excluding breaks?

    - by John M
    In a MySQL query I am using the TIMEDIFF/TIME_TO_SEC functions to calculate the total minutes between two date-times. For example: 2010-03-23 10:00:00 - 2010-03-23 08:00:00 = 120 minutes. What I would like to do is exclude any breaks that occur during the selected time range. For example: 2010-03-23 10:00:00 - 2010-03-23 08:00:00 - (break 08:55:00 to 09:10:00) = 105 minutes. Is there a good method to do this without resorting to a long list of nested IF statements? (One join-based approach is sketched after this entry.)

    Read the article
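
    One way to avoid nested IFs, assuming the breaks live in their own table (a hypothetical breaks(start_dt, end_dt) is used below): take the gross duration and subtract the summed overlap of every break with the selected range, clamping each break to the range with GREATEST/LEAST:

        -- Net minutes between @range_start and @range_end, minus overlapping breaks.
        SET @range_start = '2010-03-23 08:00:00';
        SET @range_end   = '2010-03-23 10:00:00';

        SELECT TIMESTAMPDIFF(MINUTE, @range_start, @range_end)
             - COALESCE(SUM(
                 TIMESTAMPDIFF(MINUTE,
                               GREATEST(b.start_dt, @range_start),  -- clamp break to the range
                               LEAST(b.end_dt, @range_end))
               ), 0) AS net_minutes
        FROM breaks b
        WHERE b.start_dt < @range_end
          AND b.end_dt   > @range_start;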

  • Scaling a LAMP website hosted on EC2

    - by Gublooo
    Hello, I'm very new to all this; I've recently managed to launch my website on EC2. As a next step, I want to learn how to scale the website. I have a general idea, but I wanted some input from the experts about how to go about it. My website is based on LAMP but also has a Red5 server, which allows users to record messages and is also used for playing them back. Currently this is the architecture I'm planning to set up for initial scaling, deploying four small EC2 instances:
        Instance-1: runs the MySQL database.
        Instance-2: runs the Red5 server.
        Instance-3 and Instance-4: used to deploy the website, with Apache running on them. They will communicate with the MySQL server on Instance-1 and the Red5 server on Instance-2 using the internal IP addresses. As and when required, I will launch another instance of the same kind.
        EBS: a volume of, say, 50 GB where all the MySQL data will be stored; Red5 will also use this EBS to store the video messages.
        Load balancer: use the load balancer provided by Amazon to balance Instance-3 and Instance-4.
    This is what I have in mind; I could be way off, so please bear with me. Also, I have not taken into account scaling the MySQL server, as I currently have no idea how that will be done or whether it is necessary initially. I am aware that Amazon provides auto scaling and MySQL scaling as well, but I don't want to get into that right now. Your feedback is appreciated. Thanks. (The MySQL user setup needed for the web instances to reach Instance-1 is sketched after this entry.)

    Read the article
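
    For Instance-3 and Instance-4 to reach MySQL on Instance-1 over the internal address, the MySQL side needs a user whose host part matches the internal network, and bind-address in my.cnf must not be limited to 127.0.0.1. A sketch with assumed names and an assumed 10.x internal range:

        -- On Instance-1: allow the application user to connect from the internal network.
        CREATE USER 'webapp'@'10.%' IDENTIFIED BY 'choose-a-strong-password';
        GRANT SELECT, INSERT, UPDATE, DELETE ON myappdb.* TO 'webapp'@'10.%';
        FLUSH PRIVILEGES;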

  • MySQL - What is wrong with this query or my database? Terrible performance.

    - by Moss
    SELECT * FROM `employees` a LEFT JOIN (SELECT phone1 p1, COUNT(*) c FROM `employees` GROUP BY phone1) b ON a.phone1 = b.p1;
    I'm not sure if it is this query in particular that has the problem; I have been getting terrible performance in general with this database. The table in question has 120,000 rows. I have tried this particular query remotely and locally, with the MyISAM and InnoDB engines, with different types of joins, and with and without an index on phone1. I can get it to complete in about 4 minutes on a 10,000-row table, but performance drops exponentially with larger tables. Remotely it loses the connection to the server, and locally it brings my system to its knees and seems to go on forever. This query is only a smaller step I was trying when a larger query couldn't complete, so maybe I should explain the whole scenario. I have one big, flat, ugly table that lists a bunch of people, their contact info, and the info of the companies they work for. I'm trying to normalize the database and intelligently determine which phone numbers apply to individual people and which apply to an office location. My reasoning is that if a phone number occurs multiple times, and the number of occurrences equals the number of times that the street address it is attached to occurs, then it must be an office number. So the first step is to count each phone number, grouping by phone number. Normally if you just use COUNT() ... GROUP BY it will only list the first record it finds in each group, so I figured I had to join the full table to the count table where the phone number matches. This does work, but as I said, I can't successfully complete it on any table much larger than 10,000 rows. This seems pathetic, and this doesn't seem like a crazy query to run. Is there a better way to achieve what I want, or do I have to break my large table into 12 pieces, or is there something wrong with the table or the database? (A sketch of the address-versus-phone comparison is given after this entry.)

    Read the article
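
    One way to express the underlying goal directly (phone numbers whose occurrence count equals the occurrence count of the street address they are attached to), sketched with an assumed address1 column that is not named in the post; indexes on phone1 and address1 should keep both derived tables cheap:

        -- Phone numbers that appear exactly as often as the address they are attached to,
        -- i.e. likely office numbers rather than personal ones.
        SELECT p.phone1, p.phone_count, a.addr_count
        FROM (SELECT phone1, address1, COUNT(*) AS phone_count
              FROM employees
              GROUP BY phone1, address1) p
        JOIN (SELECT address1, COUNT(*) AS addr_count
              FROM employees
              GROUP BY address1) a ON a.address1 = p.address1
        WHERE p.phone_count > 1
          AND p.phone_count = a.addr_count;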
