Search Results

Search found 278 results on 12 pages for 'mysqldump'.

Page 6/12 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Stored Procedure to create INSERT statements in MySQL?

    - by karthik
    I need a stored procedure that gets the records of a table and returns them as INSERT statements for the selected rows. The stored procedure should have three input parameters: 1. Table name, 2. Column name, 3. Column value. For instance, if 1. Table name = "EMP", 2. Column name = "EMPID" and 3. Column value = "15", it should select all the values of EMP where EMPID is 15. Once the values are selected for the above condition, the stored procedure must return the script for inserting the selected values. The purpose of this is to take a backup of selected values: when the SP returns the INSERT statements, C# will just write them to a .sql file. I have no idea how to write this SP; any sample code is appreciated. Thanks.
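
    A minimal sketch of such a procedure (the name dump_inserts and the parameter sizes are illustrative): it builds the column list from information_schema, uses QUOTE() so values are escaped correctly, and runs the generated SELECT through a prepared statement, since table names cannot be bound as parameters:

      DELIMITER //
      CREATE PROCEDURE dump_inserts(IN p_table  VARCHAR(64),
                                    IN p_column VARCHAR(64),
                                    IN p_value  VARCHAR(255))
      BEGIN
        -- comma-separated column list, e.g. `EMPID`,`ENAME`,...
        SELECT GROUP_CONCAT(CONCAT('`', column_name, '`') ORDER BY ordinal_position)
          INTO @collist
          FROM information_schema.columns
         WHERE table_schema = DATABASE() AND table_name = p_table;

        -- matching list of QUOTE()d values, joined by literal commas
        SELECT GROUP_CONCAT(CONCAT('QUOTE(`', column_name, '`)')
                            ORDER BY ordinal_position SEPARATOR ", ', ', ")
          INTO @vallist
          FROM information_schema.columns
         WHERE table_schema = DATABASE() AND table_name = p_table;

        SET @sql = CONCAT(
          "SELECT CONCAT('INSERT INTO `", p_table, "` (", @collist, ") VALUES (', ",
          @vallist, ", ');') FROM `", p_table,
          "` WHERE `", p_column, "` = ", QUOTE(p_value));

        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
      END //
      DELIMITER ;

    Calling CALL dump_inserts('EMP', 'EMPID', '15'); returns one INSERT statement per matching row, which C# can write straight to the .sql file. For wide tables, group_concat_max_len may need to be raised.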

  • Can't delete a MySQL table (Error 1051)

    - by doublejosh
    I have a pesky table that will not delete and it's holding up my dev environment refresh :( I know this table exists. Example...

      mysql> select * from uc_order_products_qty_vw limit 10;
      +-----+-------------+---------+---------+---------+---------+
      | nid | order_count | avg_qty | sum_qty | max_qty | min_qty |
      +-----+-------------+---------+---------+---------+---------+
      | 105 |           1 |  1.0000 |       1 |       1 |       1 |
      | 110 |           5 |  1.0000 |       5 |       1 |       1 |
      | 111 |           1 |  1.0000 |       1 |       1 |       1 |
      | 113 |           5 |  1.0000 |       5 |       1 |       1 |
      | 114 |           1 |  1.0000 |       1 |       1 |       1 |
      | 115 |           1 |  1.0000 |       1 |       1 |       1 |
      | 117 |           2 |  1.0000 |       2 |       1 |       1 |
      | 119 |           3 |  1.3333 |       4 |       2 |       1 |
      | 190 |           5 |  1.0000 |       5 |       1 |       1 |
      | 199 |           2 |  1.0000 |       2 |       1 |       1 |
      +-----+-------------+---------+---------+---------+---------+
      10 rows in set (0.00 sec)

    However, when I try to drop it...

      mysql> DROP TABLE IF EXISTS uc_order_products_qty_vw;
      Query OK, 0 rows affected, 1 warning (0.00 sec)

    ...it doesn't work. The table is still there, and the warning says this:

      mysql> show warnings limit 1;
      +-------+------+------------------------------------------+
      | Level | Code | Message                                  |
      +-------+------+------------------------------------------+
      | Note  | 1051 | Unknown table 'uc_order_products_qty_vw' |
      +-------+------+------------------------------------------+
      1 row in set (0.00 sec)

    Feeling pretty dumbfounded.
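
    One thing worth checking (a sketch, not a confirmed diagnosis): the _vw suffix suggests the object may be a view rather than a base table, and DROP TABLE reports "unknown table" for views:

      SHOW FULL TABLES LIKE 'uc_order_products_qty_vw';   -- Table_type column: BASE TABLE or VIEW
      DROP VIEW IF EXISTS uc_order_products_qty_vw;       -- if it turns out to be a view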

  • Moving MySQL files across servers

    - by tesmar
    I have a massive MySQL database (around 10 GB), and I need to copy it to a different server (Slicehost). I don't want to do a DB dump and reimport because I think that would take forever. Is it possible to just move the raw MySQL data files from one machine to the next, set up an identical MySQL server, and flip the switch?
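
    A sketch of the raw-copy approach (paths and service names vary by distro; this assumes both servers run the same MySQL version, and the source server is stopped so the files are consistent):

      # on the old server
      sudo /etc/init.d/mysql stop
      rsync -avz /var/lib/mysql/ user@newserver:/var/lib/mysql/
      # on the new server
      sudo chown -R mysql:mysql /var/lib/mysql
      sudo /etc/init.d/mysql start

    For InnoDB, the ibdata and ib_logfile* files must travel with the table files, and settings such as innodb_log_file_size in my.cnf need to match on both sides.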

  • GROUP BY with 3 different columns

    - by NN
    I have 2 tables and I want a query with 3 result columns: two of them with view count and title, and the other with type_. I want to group by type_ with MAX(view count) and show the matching title, but I don't have any idea how to write the grouping expression. I think it can be solved using a subquery, but I don't know which column to use in the GROUP BY. The 2 tables join on class pk = resource key. I tried this query:

      SELECT t.title, j.type_
      FROM tags asset t, journal article j
      WHERE type_ IN (SELECT type_
                      FROM journal article, tags asset
                      WHERE class pk = resource key
                      GROUP BY type_)

    but the answer was wrong.
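
    A sketch of a greatest-value-per-group query (the underscored identifiers are guesses at the real names, which the question renders with spaces): the subquery finds the maximum view count per type_, and joining it back recovers the title that owns that maximum:

      SELECT t.title, j.type_, t.view_count
        FROM tags_asset t
        JOIN journal_article j ON j.class_pk = t.resource_key
        JOIN (SELECT j2.type_, MAX(t2.view_count) AS max_views
                FROM tags_asset t2
                JOIN journal_article j2 ON j2.class_pk = t2.resource_key
               GROUP BY j2.type_) m
          ON m.type_ = j.type_ AND m.max_views = t.view_count;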

  • Generating a set of files containing dumps of individual tables in a way that guarantees database consistency

    - by intuited
    I'd like to dump a MySQL database in such a way that a file is created for the definition of each table, and another file is created for the data in each table. I'd like this to be done in a way that guarantees database integrity by locking the entire database for the duration of the dump. What is the best way to do this? Similarly, what's the best way to lock the database while restoring a set of these dump files?

    EDIT: I can't assume that mysql will have permission to write to files.
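
    A sketch of one way to get per-table schema and data files under a single global read lock. (mysqldump --tab writes per-table files directly, but it makes the server write the files, which the edit rules out.) Holding the lock in a background session is crude, but it keeps every per-table dump consistent with the others; names and the 600-second ceiling are placeholders:

      DB=mydb
      mysql "$DB" -e "FLUSH TABLES WITH READ LOCK; DO SLEEP(600);" &   # lock holder
      LOCKER=$!
      sleep 2   # give the lock a moment to be acquired
      for t in $(mysql -N -e "SHOW TABLES" "$DB"); do
        mysqldump --no-data        "$DB" "$t" > "schema-$t.sql"
        mysqldump --no-create-info "$DB" "$t" > "data-$t.sql"
      done
      kill $LOCKER   # ending the session releases the lock

    For the restore side, loading everything in one session with SET autocommit=0 ... COMMIT, or simply restoring while the application is stopped, serves the same purpose.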

  • MySQL Import Database Error because of Extended Inserts

    - by Castgame
    Hello all, I'm importing a 400MB (uncompressed) MySQL database. I'm using BigDump, and I am getting this error:

      Stopped at the line 387. At this place the current query includes more than 300 dump lines. That can happen if your dump file was created by some tool which doesn't place a semicolon followed by a linebreak at the end of each query, or if your dump contains extended inserts. Please read the BigDump FAQs for more infos.

    I believe the file does contain extended inserts, but I have no way to regenerate the dump, as the database has been deleted from the old server. How can I import this database, or convert it so that it can be imported? Thanks for any help. Best, Nick

    EDIT: It appears the only viable answer is to separate the extended inserts, but I still need help figuring out how to split the file as the answer below suggests. Please help. Thank you.
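
    Two hedged workarounds, assuming a stock bigdump.php (the 300 in the error message is BigDump's per-query line limit): raise that limit so the long extended INSERTs fit, or bypass BigDump entirely if any shell access is available:

      # in bigdump.php, raise the limit the error message refers to:
      #   $max_query_lines = 300;   ->   $max_query_lines = 3000;

      # or, with shell access, import directly:
      mysql -u user -p dbname < dump.sql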

  • Faster way to dump MySQL

    - by japancheese
    This may be a dumb question, but I was just watching a screencast on MySQL replication, and I learned that a master database doesn't send SQL over to a slave for replication; it actually sends data over in binary, which makes importing extremely fast. I started wondering: if a database can export and import binary, why do mysqldumps and imports take so long? Is there a way to get MySQL to dump a database in binary in a similar fashion to speed that process up as well?
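
    Two binary-level alternatives, sketched (what applies depends on the storage engine and version):

      # MyISAM tables can be copied hot with mysqlhotcopy:
      mysqlhotcopy mydb /backups/mydb
      # or stop the server and copy the raw data directory wholesale:
      sudo /etc/init.d/mysql stop
      cp -a /var/lib/mysql /backups/mysql-raw
      sudo /etc/init.d/mysql start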

  • Edit a very large SQL dump/text file (on Linux)

    - by geo
    I have to import a large MySQL dump (up to 10G). However, the dump already contains the database structure, with index definitions. I want to speed up the insert by removing the indexes and table definitions. That means I have to remove/edit the first few lines of a 10G text file. What is the most efficient way to do this on Linux? Programs that require loading the entire file into RAM would be overkill for me.
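
    A sketch of stream-editing the head of the file without loading it into RAM (the line count 40 is illustrative):

      # keep everything from line 41 onward:
      tail -n +41 huge_dump.sql > trimmed.sql
      # or delete a line range in a stream with sed:
      sed '1,40d' huge_dump.sql > trimmed.sql

    Both read the file sequentially, so memory use stays constant no matter how large the dump is.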

  • How to restore a slave from a MySQL backup?

    - by robsf
    I'm running MySQL 5.1. I have a master and a slave on 2 machines, and I set up replication. I do periodic backups on my slave server: I stop mysql, I copy all the files and I restart mysql. In case I lose the master, I can set up a new one from the last backup. What if I lose the slave? Can I restart the slave from the last backup? Am I supposed to keep track of the replication position every time I do a backup?
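
    A note and a sketch: because the file copy is taken with mysqld stopped, master.info and relay-log.info are part of the backup and already record the slave's position, so restoring the backup resumes replication from that point (as long as the master still has the needed binlogs). Capturing the coordinates explicitly is cheap insurance:

      mysql -e "STOP SLAVE; SHOW SLAVE STATUS\G" > slave-position.txt
      /etc/init.d/mysql stop
      # ... copy the data files as usual ...
      /etc/init.d/mysql start

    If the position ever needs to be set by hand, the saved Relay_Master_Log_File / Exec_Master_Log_Pos values feed CHANGE MASTER TO MASTER_LOG_FILE=..., MASTER_LOG_POS=... on the rebuilt slave.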

  • How do I split the output from mysqldump into smaller files?

    - by lindelof
    I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access. I can only upload (compressed) SQL files smaller than 2MB. But the compressed output from a mysqldump of the first database's tables is larger than 10MB. Is there a way to split the output from mysqldump into smaller files? I cannot use split(1) since I cannot cat(1) the files back together on the remote server. Or is there another solution I have missed?

    EDIT: The --extended-insert=FALSE option to mysqldump suggested by the first poster yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out how many lines of SQL code correspond roughly to 40MB.
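
    The workflow from the edit, sketched end to end (the 50000-line chunk size is illustrative; the right number is whatever keeps each compressed piece under the 2MB limit):

      mysqldump --extended-insert=FALSE mydb > dump.sql
      split --lines=50000 dump.sql chunk_      # chunk_aa, chunk_ab, ...
      for f in chunk_*; do bzip2 "$f"; done    # compress each piece for upload

    One caveat: CREATE TABLE statements span several lines, so it's worth checking the split points (or splitting per table) so that no statement is cut in half.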

  • Is copying /var/lib/mysql a good alternative to mysqldump?

    - by kemp
    Since I'm making a full backup of my entire Debian system, I was wondering whether having a copy of the /var/lib/mysql directory is a viable alternative to dumping tables with mysqldump.

    - Is all the information needed contained in that directory?
    - Can single tables be imported into another MySQL instance?
    - Can there be problems when restoring those files on a (probably slightly) different MySQL server version?

  • What's the quickest way to dump & load a MySQL InnoDB database using mysqldump?

    - by Josh Schwartzman
    I would like to create a copy of a database with approximately 40 InnoDB tables and around 1.5GB of data with mysqldump and MySQL 5.1. What are the best parameters (i.e. --single-transaction) that will result in the quickest dump and load of the data? As well, when loading the data into the second DB, is it quicker to: 1) pipe the results directly to the second MySQL server instance and use the --compress option, or 2) load it from a text file (i.e. mysql < my_sql_dump.sql)?
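
    A sketch of option 1 (hostnames are placeholders): --single-transaction gives a consistent InnoDB snapshot without locking, and --quick streams rows instead of buffering whole tables in memory:

      mysqldump --single-transaction --quick mydb \
        | mysql --host=second-server --compress mydb

    A common companion trick is disabling checks for the duration of the load (SET foreign_key_checks=0; SET unique_checks=0;) and re-enabling them afterwards.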

  • Empty files generated from running `mysqldump` using PHP

    - by alex
    I keep getting empty files generated from running:

      $command = 'mysqldump --opt -h localhost -u username -p \'password\' dbname > \'backup 2009-04-15 09-57-13.sql\'';
      command($command);

    Anyone know what might be causing this? My password has strange characters in it, but works fine when connecting to the db. I've run exec($command, $return) and output the $return array, and it is finding the command. I've also run it with mysqldump > file.sql, and the file contains:

      Usage: mysqldump [OPTIONS] database [tables]
      OR     mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
      OR     mysqldump [OPTIONS] --all-databases [OPTIONS]
      For more options, use mysqldump --help

    So it would seem like the command is working.
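
    One likely culprit, sketched: mysqldump wants the password glued to -p with no space, so -p 'password' makes it prompt for a password (which fails without a terminal) and treat 'password' itself as the database name. A corrected command line looks like:

      mysqldump --opt -h localhost -u username -p'password' dbname > 'backup 2009-04-15 09-57-13.sql'

    When building the string in PHP, passing the user, password, and filename through escapeshellarg() keeps the strange characters in the password from breaking the shell quoting.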

  • Am I correctly extracting JPEG binary data from this mysqldump?

    - by Glenn
    I have a very old .sql backup of a vBulletin site that I ran around 8 years ago. I am trying to see the file attachments that are stored in the DB. The script below extracts them all, and the output is verified to be JPEG by hex dumping and checking the SOI (start of image) and EOI (end of image) bytes (FFD8 and FFD9, respectively) according to the JPEG wiki page. But when I try to open them with evince, I get this message: "Error interpreting JPEG image file (JPEG datastream contains no image)". What could be going on here? Some background info:

      - the SQL dump is around 8 years old
      - vBulletin 2.x was the software that stored the info
      - most likely PHP 4 was used
      - most likely MySQL 4.0, possibly even 3.x
      - the column datatype these attachments are stored in is mediumtext

    My Python 3.1 script:

      #!/usr/bin/env python3.1
      import re

      trim_l = re.compile(b"""^INSERT INTO attachment VALUES\('\d+', '\d+', '\d+', '(.+)""")
      trim_r = re.compile(b"""(.+)', '\d+', '\d+'\);$""")
      extractor = re.compile(b"""^(.*(?:\.jpe?g|\.gif|\.bmp))', '(.+)$""")

      with open('attachments.sql', 'rb') as fh:
          for line in fh:
              data = trim_l.findall(line)[0]
              data = trim_r.findall(data)[0]
              data = extractor.findall(data)
              if data:
                  name, data = data[0]
                  try:
                      filename = 'files/%s' % str(name, 'UTF-8')
                      ah = open(filename, 'wb')
                      ah.write(data)
                  except UnicodeDecodeError:
                      continue
                  finally:
                      ah.close()
      fh.close()

    UPDATE: The JPEG wiki page says FF bytes are section markers, with the next byte indicating the section type. I see some that are not listed in the wiki page (specifically, I see a lot of 5C bytes, so FF5C). But the list is of "common markers", so I'm trying to find a more complete list. Any guidance here would also be appreciated.
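
    A plausible explanation for the 5C bytes (0x5C is the backslash): mysqldump escapes bytes inside string literals (\0, \n, \r, \Z, \', \", \\), so the extracted blobs aren't raw JPEG until those sequences are undone. A sketch of an unescape pass (a hypothetical helper, not part of the original script):

      import re

      def mysql_unescape(data):
          # undo mysqldump's string-literal escaping
          table = {b'0': b'\x00', b'n': b'\n', b'r': b'\r', b'Z': b'\x1a',
                   b"'": b"'", b'"': b'"', b'\\': b'\\'}
          return re.sub(br'\\(.)', lambda m: table.get(m.group(1), m.group(1)), data)

    Running each extracted attachment through something like this before writing it out should turn the FF5C... sequences back into the marker bytes evince expects.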

  • Optimal way to make MySQL backups for fairly large databases (MyISAM / InnoDB)

    - by WinkyWolly
    Currently we have one beefy MySQL database that runs a couple of high-traffic Django-based websites, as well as some e-commerce websites of decent size. As a result we have a fair number of large databases using both InnoDB and MyISAM tables. Unfortunately we've recently hit a wall due to the amount of traffic, so I've set up another master server to help alleviate reads / backups. At the moment I simply use mysqldump with a few arguments and it's proven to be fine... until now. Obviously mysqldump is a slow, quick-and-dirty method, and I believe we've outgrown its use. I now need a good alternative and have been looking into utilizing Maatkit's mk-parallel-dump utility or an LVM snapshot solution. The succinct short version:

      - I have fairly large MySQL databases I need to back up
      - the current method using mysqldump is inefficient and slow (causing issues)
      - I'm looking into something such as mk-parallel-dump or LVM snapshots

    Any recommendations or ideas would be appreciated; since I have to redo how we're doing things, I'd rather have it done properly / most efficiently :).
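
    A sketch of the LVM-snapshot flow (volume, mount point, and path names are hypothetical; the read lock must stay held while the snapshot is created, which is why tools such as mylvmbackup wrap these steps):

      mysql> FLUSH TABLES WITH READ LOCK;
      mysql> system lvcreate --snapshot --size 10G --name mysql_snap /dev/vg0/mysql
      mysql> UNLOCK TABLES;

      # then, from a shell:
      mount /dev/vg0/mysql_snap /mnt/mysql_snap
      rsync -a /mnt/mysql_snap/ /backups/mysql/
      umount /mnt/mysql_snap && lvremove -f /dev/vg0/mysql_snap

    The lock is only held for the second or two lvcreate takes, which is the appeal over a long mysqldump run.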

  • Back up all Plesk MySQL databases to individual files

    - by Michael
    Hi, because I'm new to shell scripting I need a hand. I currently back up all my databases to a single file, which makes restores pretty hard. The second problem is that my MySQL password doesn't work because of a Plesk bug, and I have to get the password from "/etc/psa/.psa.shadow". Here is the code that I use to back up all my databases to a single file:

      mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` --all-databases | bzip2 -c > /root/21.10.2013.sql.bz2

    I found some scripts on the web that back up each database to an individual file, but I don't know how to make them work for my situation. Here is an example script:

      for db in $(mysql -e 'show databases' -s --skip-column-names); do
          mysqldump $db | gzip > "/backups/mysqldump-$(hostname)-$db-$(date +%Y-%m-%d-%H.%M.%S).gz";
      done

    Can someone help me make the script above work for my situation? Requirements: back up each database to an individual file, using the Plesk password location.
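
    A sketch that merges the two, using the same Plesk credentials trick as the single-file version (information_schema is skipped because dumping it normally fails):

      PSA_PASS=$(cat /etc/psa/.psa.shadow)
      for db in $(mysql -uadmin -p"$PSA_PASS" -e 'show databases' -s --skip-column-names \
                  | grep -v '^information_schema$'); do
          mysqldump -uadmin -p"$PSA_PASS" "$db" \
              | gzip > "/backups/mysqldump-$(hostname)-$db-$(date +%Y-%m-%d-%H.%M.%S).gz"
      done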

  • MySQL: table is marked as crashed

    - by DrStalker
    After a disk-full issue, one of the MySQL DBs on the server is coming up with the following error when I try to back it up:

      [root@mybox ~]# mysqldump -p --result-file=/tmp/dbbackup.sql --database myDBname
      Enter password:
      mysqldump: Got error: 145: Table './myDBname/myTable1' is marked as crashed and should be repaired when using LOCK TABLES

    A bit of investigation shows that two tables have this issue. What needs to be done to fix up the damaged tables?
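
    Error 145 is the MyISAM crash flag, and the standard fix is a repair (a sketch; copying the table's .MYD/.MYI files first is cheap insurance):

      mysql -p myDBname -e "REPAIR TABLE myTable1;"
      # or, with mysqld stopped, directly against the files:
      myisamchk --recover /var/lib/mysql/myDBname/myTable1.MYI

    mysqlcheck --repair myDBname runs the same repair across every table in the database, which covers the second damaged table too.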

  • MySQL Backup - incremental

    - by Tiffany Walker
    I know that you can use mysqldump. I am currently dumping the following way:

      ${MYSQLDUMP} --single-transaction -u ${MUSER} -h ${MHOST} -p${MPASS} $db | ${GZIP} -9 > $FILE

    From my understanding this locks the database and prevents any type of use of it, and can even lock up websites. Is there a better way to do daily/hourly backups of the MySQL database once the database is in the 100MBs and even 1GBs in size?
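
    A sketch of an incremental scheme built on the binary log (it requires log-bin to be enabled in my.cnf; note also that for InnoDB tables, --single-transaction as used above does not block reads or writes):

      # occasional full dump, recording the binlog position it corresponds to:
      mysqldump --single-transaction --flush-logs --master-data=2 mydb | gzip > full.sql.gz
      # hourly/daily increments: rotate the binlog and archive the closed ones:
      mysqladmin flush-logs
      cp /var/lib/mysql/mysql-bin.0* /backups/binlogs/

    Restore is the full dump plus a mysqlbinlog replay of the archived logs.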

  • Calling svn update from a PHP script via a browser is not working

    - by hbt
    Hey guys, I have two scripts:

      1. one running an update, calling shell_exec('svn update') and shell_exec('svn st')
      2. one running a mysqldump, calling shell_exec('mysqldump params')

    The svn script is not running the update command: the svn st call prints results, but svn update does nothing. I tried to declare parameters when calling svn update, e.g. 'svn update ' . $dir . ' --username myuser --password mypasswd --non-interactive'; still nothing. I played with most of the params. If this is something related to binaries/permissions/groups, I don't see it. The mysqldump command works fine and produces a file, so why isn't svn updating the filesystem? Please do not advise using core SVN classes in PHP. This is not an option; I don't have complete control over the server and the module is not available. Thanks for your help, -hbt. PS: an important thing to mention here: the scripts work when called via the command line. They only fail when called via a web browser.
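
    A sketch for pinning down the CLI-versus-browser difference: shell_exec() discards stderr, and under the web server the command runs as a different user (www-data here is an assumption; the account name varies) with a different HOME, which svn needs for its ~/.subversion auth cache:

      # reproduce the browser environment from a shell:
      sudo -u www-data svn update /path/to/working/copy 2>&1

      # in the PHP script, make stderr visible:
      #   echo shell_exec('svn update /path/to/working/copy 2>&1');

    The usual fix is giving the web server user write access to the working copy (including the .svn directories) or setting HOME explicitly before the call.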

  • How to assign a value to a key in the App.config of a C# Windows app?

    - by karthik
    I am using the below string in my code:

      string AAR_FilePath = "\"C:\\MySQL\\MySQL Server 5.0\\bin\\mysqldump\"";

    which I don't want to hardcode, so I need to put it in my app.config. I tried to give the same value as:

      <add key="Path_SqlDump" value="\"C:\\MySQL\\MySQL Server 5.0\\bin\\mysqldump\""></add>

    But the above gives me an error because of the quotes. All I need is to be able to assign "\"C:\MySQL\MySQL Server 5.0\bin\mysqldump\"" to a string. HOW?
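
    A sketch of the config-side fix: XML has no backslash escaping, so the backslashes go in literally, and the embedded double quotes become &quot; entities:

      <add key="Path_SqlDump" value="&quot;C:\MySQL\MySQL Server 5.0\bin\mysqldump&quot;"/>

    Reading it back with ConfigurationManager.AppSettings["Path_SqlDump"] then yields the same quoted path as the hardcoded string did.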

  • Automating the backup of my databases and files with cron

    - by Patrick
    Hi, I want to automate the backup of my databases and files with cron. Should I add the following lines to crontab?

      mysqldump -u root -pPASSWORD database_name | gzip > /home/backup/database_`date +\%m-\%d-\%Y`.sql.gz
      svn commit -m "Committing the working copy containing the database dump"

    1) First of all, is this a good approach?
    2) It is not clear how to specify the repository and the working copy with svn.
    3) How can I run svn only when the mysqldump is done, and not before, to avoid conflicts?

    Any other tips? Thanks.
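
    A sketch of a single crontab entry addressing 2) and 3): cd into the working copy, which fixes what svn operates on, and chain with && so the commit only runs if the dump succeeded (schedule, paths, and credentials are placeholders; % must stay escaped inside crontab):

      30 2 * * * cd /home/backup && mysqldump -u root -pPASSWORD database_name | gzip > database_`date +\%m-\%d-\%Y`.sql.gz && svn add --force . && svn commit -m "Nightly database dump"

    One caveat: the && after the pipeline tests gzip's exit status, not mysqldump's; catching a mysqldump failure too needs pipefail or dumping to a temporary file first.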
