Search Results

Search found 5 results on 1 page for 'gmemon'.


  • Linux filesystem suggestion for MySQL with a 100% SELECT workload

    - by gmemon
    I have a MySQL database that contains millions of rows per table, with 9 tables in total. The database is fully populated, and all I am doing is reads, i.e., there are no INSERTs or UPDATEs. Data is stored in MyISAM tables. Given this scenario, which Linux file system would work best? Currently I have XFS, but I read somewhere that XFS has horrible read performance. Is that true? Should I move the database to an ext3 file system? Thanks

    Read the article
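
    The question above turns largely on how MyISAM uses the filesystem: MyISAM caches only index blocks in its key buffer, while data rows are read through the OS page cache, so the filesystem choice matters most for the data portion of the tables. Below is a minimal sketch of how to size the two parts before benchmarking filesystems; the schema name mydb is a placeholder, not taken from the post:

        -- Data vs. index footprint per MyISAM table ('mydb' is a hypothetical schema name)
        SELECT table_name,
               ROUND(data_length  / 1024 / 1024) AS data_mb,
               ROUND(index_length / 1024 / 1024) AS index_mb
        FROM   information_schema.tables
        WHERE  table_schema = 'mydb'
          AND  engine = 'MyISAM';

        -- How much index the MyISAM key buffer can hold without touching the filesystem
        SHOW VARIABLES LIKE 'key_buffer_size';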

  • Problem with Sphinx resultset larger than 16 MB in MySQL

    - by gmemon
    Hello all, I am accessing a large indexed text dataset through SphinxSE via MySQL. The size of the result set is on the order of gigabytes. However, I have noticed that MySQL aborts the query with the following error whenever the result set is larger than 16 MB: 1430 (HY000): There was a problem processing the query on the foreign data source. Data source error: bad searchd response length (length=16777523). Here, length shows the size of the result set that MySQL rejected. I have tried the same query with Sphinx's standalone search program, and it works fine. I have tried every relevant variable in both MySQL and Sphinx, but nothing helps. I am using Sphinx 0.9.9-rc2 and MySQL 5.1.46. Thanks

    Read the article
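
    For readers unfamiliar with SphinxSE, the query in question would look roughly like the sketch below; the table name, index name, and searchd port are assumptions, not taken from the post. Every SELECT against the SPHINX engine is forwarded to searchd and the whole match set comes back as a single response, which is why a multi-gigabyte result set is a problem; paging with offset/limit in the query string keeps each individual response small:

        -- Hypothetical SphinxSE table; 9312 is the default searchd port in 0.9.9
        CREATE TABLE sphinx_search
        (
            id     INTEGER UNSIGNED NOT NULL,
            weight INTEGER NOT NULL,
            query  VARCHAR(3072) NOT NULL,
            INDEX(query)
        ) ENGINE=SPHINX CONNECTION="sphinx://localhost:9312/myindex";

        -- Pull matches in pages instead of one huge response
        SELECT * FROM sphinx_search
        WHERE  query='some keywords;mode=extended;offset=0;limit=10000';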

  • Resuming MySQL indexing

    - by gmemon
    Hello all, I had been building an index on a 200 million row table for almost 14 hours. Due to resource over-consumption on the machine (because of a separate incident), the machine crashed. Clearly, I want to avoid spending another 14 hours reconstructing the index. Is there a way to resume the index build from the point (or slightly before the point) where the machine crashed? I can see the temporary files that were created. Thanks

    Read the article
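
    As far as I know, MySQL cannot resume a half-built index: the #sql-* temporary files left behind by the crash are only usable by the ALTER that created them and can simply be removed, so the build has to be rerun from the start. A sketch of the rerun with hypothetical table, column, and index names; the settings shown assume a MyISAM table, where building the index by sorting (rather than through the key cache) is the fast path:

        -- Hypothetical names. Larger sort buffers push MyISAM toward
        -- "Repair by sorting" instead of the much slower "Repair with keycache".
        SET SESSION myisam_sort_buffer_size   = 256 * 1024 * 1024;
        SET GLOBAL  myisam_max_sort_file_size = 100 * 1024 * 1024 * 1024;

        ALTER TABLE big_table ADD INDEX idx_lookup (lookup_col);

        -- The State column shows which of the two rebuild paths is in use
        SHOW PROCESSLIST;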

  • Are indexes good or bad for a large database?

    - by gmemon
    Hello all, I read on the MySQL Performance Blog that when tables are large, it is better to do full table scans instead of using indexes. I have a table with tens of millions of rows, and when I run queries without indexes, they are 24 times slower than with indexes. I know a lot of things could cause this (e.g., whether rows are stored sequentially), but can you give me some hints about what might be happening, or how I should start examining this issue? I want to understand when using indexes is preferred and when it is not. Thanks

    Read the article
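
    The usual rule of thumb behind that advice is selectivity: an index wins when the query touches a small fraction of the rows, while a sequential full scan wins when the index would turn a large fraction of the table into random row lookups. One way to start examining the issue is to compare the two plans for the same query; a sketch with hypothetical table, index, and column names:

        -- Plan with the index disabled (full table scan)
        EXPLAIN SELECT * FROM orders IGNORE INDEX (idx_created)
        WHERE  created_at < '2010-01-01';

        -- Plan with the index forced (index range scan plus row lookups)
        EXPLAIN SELECT * FROM orders FORCE INDEX (idx_created)
        WHERE  created_at < '2010-01-01';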

  • Best Linux filesystem for MySQL with a 100% SELECT workload

    - by gmemon
    I have a MySQL database that contains millions of rows per table, with 9 tables in total. The database is fully populated, and all I am doing is reads, i.e., there are no INSERTs or UPDATEs. Data is stored in MyISAM tables. Given this scenario, which Linux file system would work best? Currently I have XFS, but I read somewhere that XFS has horrible read performance. Is that true? Should I move the database to an ext3 file system? Thanks

    Read the article
