Search Results

Search found 1864 results on 75 pages for 'dump'.

Page 10 of 75

  • Any freeware/ideas for getting Windows 2008's backups to dump to tape after backing up to disk?

    - by TheCleaner
    I have a Windows 2008 R2 server that is being backed up to an external iSCSI drive nightly. The problem is, we'd like to use our tape drive (VXA-320), which Windows sees just fine, to take those backups in the "WindowsImageBackup" folder and dump them to tape once a month so that we at least have something offsite. I really don't want to go through the hassle of licensing Backup Exec or similar if possible. All I'm really after is some kind of copy utility that can copy the "WindowsImageBackup" folder over to the tape drive. Ideas? P.S. If a copy made this way would be useless for a restore anyway, let me know, but I would assume I could copy the folder back onto the server and have Windows Backup find it again.

    Read the article

  • How can I speed up a MySQL restore from a dump file?

    - by Dave Forgac
    I am restoring a 30GB database from a mysqldump file to an empty database on a new server. When running the SQL from the dump file, the restore starts very quickly and then starts to get slower and slower. Individual inserts are now taking 15+ seconds. The tables are MyISAM. The server has no other active connections. SHOW PROCESSLIST; only shows the insert from the restore (and the show processlist itself). Does anyone have any ideas what could be causing the dramatic slowdown? Are there any MySQL variables that I can change to speed the restore while it is progressing?
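
    A hedged sketch of the knobs that usually matter for a large MyISAM import; the file name dump.sql, the database name mydb and the buffer sizes below are placeholders, not recommendations:

      # MyISAM index maintenance is the usual bottleneck: enlarge the key cache and the
      # bulk-insert buffer before starting (or restarting) the import
      mysql -e "SET GLOBAL key_buffer_size = 256*1024*1024;"
      mysql -e "SET GLOBAL bulk_insert_buffer_size = 64*1024*1024;"
      mysql mydb < dump.sql

    A dump made with default mysqldump options already wraps each table in ALTER TABLE ... DISABLE KEYS / ENABLE KEYS, so when inserts crawl like this it is often the key cache being too small for the growing indexes. key_buffer_size takes effect immediately for all connections; bulk_insert_buffer_size only applies to connections opened afterwards, so set it before launching the restore.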

    Read the article

  • ELF: Dump symbol value in BSS, DATA or RODATA?

    - by noloader
    I have an ELF shared object with a symbol initialized to a value, and I want to know what that value is. I know objdump -T will give me the symbol's address and length, but I need the value: $ arm-linux-androideabi-objdump -T libcrypto.so.1.0.0 | grep -i FIPS_signature 001a9668 g DO .bss 00000014 FIPS_signature However, hexdump knows nothing about ELF sections, offsets and virtual addresses, so I can't use that information directly: $ hexdump -v -x -n 0x14 -s 0x001a9668 libcrypto.so.1.0.0 $ How do I dump the value of the symbol? Jeff
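
    A sketch of the usual virtual-address-to-file-offset translation, assuming binutils readelf is available; the offsets below are made up for illustration, and note the .bss caveat at the end:

      # get the containing section's load address (Addr) and file offset (Off)
      readelf -S libcrypto.so.1.0.0
      # file offset of the symbol = section Off + (symbol Addr - section Addr), for example:
      #   0x1a8000 + (0x1a9668 - 0x1a9000) = 0x1a8668        (made-up numbers)
      hexdump -C -n 0x14 -s $((0x1a8668)) libcrypto.so.1.0.0

    The catch here is that FIPS_signature sits in .bss, which occupies no bytes in the file; it is zero-filled when the library is loaded, so its value only exists in a running process (readable with gdb, for instance). For a symbol in .data or .rodata the offset arithmetic above does apply.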

    Read the article

  • What's the quickest way to dump & load a MySQL InnoDB database using mysqldump?

    - by Josh Schwartzman
    I would like to create a copy of a database with approximately 40 InnoDB tables and around 1.5GB of data using mysqldump and MySQL 5.1. What are the best parameters (e.g. --single-transaction) that will result in the quickest dump and load of the data? Also, when loading the data into the second DB, is it quicker to: 1) pipe the results directly to the second MySQL server instance and use the --compress option, or 2) load it from a text file (i.e. mysql < my_sql_dump.sql)?
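
    A minimal sketch of piping the dump straight across, assuming credentials live in ~/.my.cnf and target-host stands in for the second server:

      # --single-transaction: consistent InnoDB snapshot without locking
      # --quick: stream rows instead of buffering whole tables in memory
      # --extended-insert: multi-row INSERTs, which load much faster
      # --compress on the receiving client compresses the protocol over the wire
      mysqldump --single-transaction --quick --extended-insert mydb \
          | mysql --compress -h target-host mydb

    Piping avoids writing and re-reading a 1.5GB file, so it is usually at least as fast as the dump-to-file route; --compress only pays off when the network, rather than the CPU, is the bottleneck.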

    Read the article

  • How do I get callgrind to dump source line information?

    - by Jeremybub
    I'm trying to profile a shared library on GNU/Linux which does real-time audio processing, so performance is important. I run another program which hooks it up to the audio input and output of my system, and profile that with callgrind. Looking at the results in KCachegrind, I get great information about what functions are taking up most of my time. However, it won't let me look at the line-by-line information, and instead says I need to compile with debugging symbols and run the profiling again. The program I am profiling is not compiled with debug symbols, but the library is; I know this because, interestingly, source-code annotation for cachegrind works fine. When I run callgrind, it says the default is to dump source line information, but it just isn't doing that. Is there some way I could force it to, or figure out what's stopping it?
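
    For reference, a sketch of the setup that normally produces line-level data, assuming the host program (called ./host here, hypothetically) can be rebuilt with debug info:

      # rebuild the program with -g (leaving optimization on is fine) and keep the library un-stripped
      valgrind --tool=callgrind --dump-instr=yes --collect-jumps=yes ./host
      # open the resulting callgrind.out.<pid> in KCachegrind; line and instruction costs are only
      # recorded for code that carries debug info, and KCachegrind also needs the source
      # directories to be configured (or reachable via the paths stored in the debug info)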

    Read the article

  • How to restore a file system level copy of a PostgreSQL database (not dump) to a different PC

    - by user782224
    I am new to PostgreSQL. I have to recover a database that was running on a Windows XP machine, and I have a zip archive of that old PostgreSQL directory. I extracted a PostgreSQL installation on a different PC, created a new cluster with initdb and a new database, and I was able to log in, but I cannot see any of the old tables. Could you please post the steps to start the server on another Windows XP machine and recover the tables and data from the old data folder?
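
    A sketch of how a file-system-level copy is normally brought back, assuming the zip contains the old cluster's data directory and that you install the same major PostgreSQL version the old server ran; a data directory cannot be imported into a cluster created by a fresh initdb, the server has to be started directly on it (paths and archive name below are placeholders):

      pg_ctl stop -D /path/to/new/data        # stop the freshly initdb'ed cluster
      unzip old_backup.zip -d /restore        # hypothetical archive name and target
      pg_ctl start -D /restore/data           # start the server on the old data directory
      psql -l                                 # the old databases and tables should now be visible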

    Read the article

  • How to get stack dump from crashing ASP.NET process?

    - by Dylan
    An unhandled exception ('System.Net.Sockets.SocketException') occurred in w3wp.exe [9740]. Just-In-Time debugging this exception failed with the following error: Debugger could not be started because no user is logged on. We're getting the above error in the Application log. Is there a way to capture a .NET stack trace that doesn't require user interactivity?
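
    One commonly used approach (an assumption on my part, the question does not mention it) is Sysinternals ProcDump, which can watch the w3wp.exe worker process and write a full memory dump when the exception fires, with no interactive debugger and no logged-on user required:

      procdump -ma -e 1 -f SocketException 9740 c:\dumps

    Here -ma requests a full memory dump, -e 1 also catches first-chance exceptions, -f filters on the exception name, 9740 is the PID from the error message, and c:\dumps is a placeholder output folder; the resulting .dmp file can then be opened in WinDbg or Visual Studio to read the managed call stack.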

    Read the article

  • Is it possible to dump the names of all the open files in notepad++ to a file?

    - by mark
    So, I dragged and dropped multiple files onto Notepad++. The files came from different directories and were selected using different criteria, so I now have many files open in Notepad++ and I need a list of all the open files in another file. Right now, my only option is to script the decisions that guided me in selecting the files in the first place. That is probably best in the long term, but I wonder if there is a quick way in Notepad++, some plugin magic or whatever. Suggesting another free editor which has this function is a good option too (not that I am going to ditch Notepad++, God forbid).

    Read the article

  • Migrate servers without losing any data / time-limited MySQL dump?

    - by inac
    Is there a way to migrate from an old dedicated server to a new one without losing any data in between, and with no downtime? In the past, I've had to lose MySQL data between the time the new server goes up (i.e., all files transferred, system up and ready) and the time I take the old server down (data is still being written to the old one until the new one takes over). There is also a short period where both are down while DNS, etc., refreshes. Is there a way for MySQL/root to easily transfer all data that was updated/inserted within a certain time frame?
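
    One hedged outline (not the only way) is the standard replication-based cutover: seed the new server from a dump that records the binlog coordinates, let it replicate until cutover, then repoint DNS/clients and retire the old box. This assumes binary logging is enabled on the old server; host names, credentials and the log file/position below are placeholders:

      # on the old server: dump with the binlog position embedded (as a comment, because of =2)
      mysqldump --all-databases --single-transaction --master-data=2 > seed.sql
      # on the new server: load the seed, then replicate everything that happened since the dump
      mysql < seed.sql
      mysql -e "CHANGE MASTER TO MASTER_HOST='old-server', MASTER_USER='repl',
                MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000123',
                MASTER_LOG_POS=4567; START SLAVE;"

    Once SHOW SLAVE STATUS shows the replica caught up, the switchover window shrinks to however long it takes to point the application at the new server.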

    Read the article

  • Join multiple consecutive SQLite database dump files into 1 common database? Purpose: Search through ENTIRE Chrome Browsing History

    - by porg
    Google Chrome's default web-browsing-history search only lets you access the records of the most recent 100 days. Nevertheless, in your application data Chrome keeps your entire browsing history in SQLite database files, with the file naming scheme "History Index YYYY-MM". I am looking for a way to search through my entire browsing history with sophisticated filters (limit search terms to certain fields such as URL, domain, title or body text; wildcard or regex terms; date ranges), using either some ready-made software or a command line that joins the database files. On the software side, eHistory came close, as it can limit terms to fields, but it lacks wildcards/regexes and has the same limited time horizon as the default search; beyond that, I could not find any suitable Chrome extension or standalone (Mac) app. On the command-line side, I am looking for a way to join multiple SQLite database files into one database, which I can then query with the full SQL syntax, in the spirit of the pseudo code below. Preferred this way: sqlite --targetDatabase ChromeHistoryAll --importFiles /path/to/ChromeAppData/History\ Index* --importOnlyYetUnknownFiles Or, if my desired feature --importOnlyYetUnknownFiles is not possible (the feature could also be called "avoid duplicate imports by checking UIDs"), then by explicitly importing only those files which I know have not yet been imported into the ChromeHistoryAll database: cd ChromeAppData; sqlite --databaseTarget ChromeHistoryAll --importFiles YetNotImported1 YetNotImported2 YetNotImported3 I would then run all my queries against the database "ChromeHistoryAll". P.S.: An additional question of general interest: is there a way to perform a database query in a temporary database created on the fly from multiple files? Like: sqlite --query="SQL query" --targetDatabase DbAll --DBtemporaryInRAM --importFiles db1 db2 db3 This is surely not applicable to my Chrome question, as these History Index files have a combined size of about 500MB, so such a query would perform badly, but it could come in handy in other situations.
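
    For the command-line route: the flags in the pseudo code above don't exist in the sqlite3 shell, but ATTACH gives much the same effect. A sketch only; the table name "pages" and the file names are hypothetical, since I don't know the actual History Index schema:

      sqlite3 ChromeHistoryAll.db
      sqlite> ATTACH DATABASE 'History Index 2011-01' AS src;
      sqlite> INSERT OR IGNORE INTO pages SELECT * FROM src.pages;  -- repeat per table and per monthly file
      sqlite> DETACH DATABASE src;

    INSERT OR IGNORE only skips duplicates if the target table has a suitable unique key. The same ATTACH trick answers the P.S.: open an in-memory database with sqlite3 :memory:, ATTACH the files, and query across them using db-prefixed table names, though with roughly 500MB of data a one-off merged file will query faster.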

    Read the article

  • What's the best practice for taking MySQL dump, encrypting it and then pushing to s3?

    - by HalogenCreative
    This current project requires that the DB be dumped, encrypted and pushed to S3. I'm wondering what some "best practices" might be for such a task. As of now I'm using a pretty straightforward method but would like to have some better ideas where security is concerned. Here is the start of my script: mysqldump -u root --password="lepass" --all-databases --single-transaction > db.backup.sql tar -c db.backup.sql | openssl des3 -salt --passphrase foopass > db.backup.tarfile s3put backup/db.backup.tarfile db.backup.tarfile # Let's pull it down again and untar it for kicks s3get surgeryflow-backup/db/db.backup.tarfile db.backup.tarfile cat db.backup.tarfile | openssl des3 -d -salt --passphrase foopass |tar -xvj Obviously the problem is that this script contains everything an attacker would need to raise hell. Any thoughts, critiques and suggestions for this task would be appreciated.
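
    A sketch of a slightly hardened variant; the key file path and bucket name are placeholders, and it assumes the AWS CLI (or any equivalent S3 upload tool) is available. The main changes are keeping the passphrase off the command line (and thus out of ps output and shell history) and using a modern cipher:

      # passphrase stored in a root-only file instead of on the command line
      # (MySQL credentials can likewise live in ~/.my.cnf rather than in the script)
      mysqldump --all-databases --single-transaction \
          | gzip \
          | openssl enc -aes-256-cbc -salt -pass file:/root/.backup_passphrase \
          > db.backup.sql.gz.enc
      aws s3 cp db.backup.sql.gz.enc s3://my-backup-bucket/db/
      # to restore:
      aws s3 cp s3://my-backup-bucket/db/db.backup.sql.gz.enc .
      openssl enc -d -aes-256-cbc -pass file:/root/.backup_passphrase -in db.backup.sql.gz.enc | gunzip > db.backup.sql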

    Read the article

  • Is it possible (and if so, how) to dump two concatenated disks onto a new disk using dd?

    - by pedromarce
    Hi, I have a LaCie enclosure containing two 500GB disks configured as a single 1TB drive. The only partition created on the whole drive is HFS+ journaled, but the controller in the enclosure has died, so the drive refuses to mount anymore. I was able to remove the two disks from the enclosure, connect them via USB, and verify with a program called R-Studio (a RAID recovery program) that the enclosure's controller had the disks configured as concatenated (not striped), so by selecting that option in R-Studio I can get all the information back. But rather than buying an R-Studio license for a single use, I would prefer to buy a new 1TB disk and write all the information from those two disks onto it. I can use Mac or Linux machines to do it, and I think it should be possible to use the dd command on Linux to concatenate the two drives onto the new one in the right order, get it working again on the new disk, and then reformat the old ones, but I am not sure. So, is it possible in this scenario to write both disks onto a new one using dd? Any hints on how the command would look? Thanks,
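
    In principle yes, provided the new disk is at least as big as the two old ones combined and they are copied in the order the controller concatenated them. A sketch with made-up device names; double-check the names with lsblk (or diskutil list on the Mac) first, because dd will happily overwrite the wrong disk:

      # /dev/sdb = first member, /dev/sdc = second member, /dev/sdd = new empty 1TB disk (all hypothetical)
      ( dd if=/dev/sdb bs=4M; dd if=/dev/sdc bs=4M ) | dd of=/dev/sdd bs=4M

    Whether the HFS+ volume then mounts depends on the enclosure having used a plain back-to-back concatenation with no extra metadata, which is what R-Studio appeared to confirm; any leftover space on a larger new disk simply stays unused until you repartition.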

    Read the article

  • How to use bzdiff to find the difference between two bzipped files with the diff -I option?

    - by englebip
    I'm trying to do a diff on MySQL dumps (created with mysqldump and piped to bzip2), to see if there are changes between consecutive dumps. The following are the tails of two dumps: tmp1: /*!40101 SET SQL_MODE=@OLD_SQL_MODE */; /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */; /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */; /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */; /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */; /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */; /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */; -- Dump completed on 2011-03-11 1:06:50 tmp2: /*!40101 SET SQL_MODE=@OLD_SQL_MODE */; /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */; /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */; /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */; /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */; /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */; /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */; -- Dump completed on 2011-03-11 0:40:11 When I bzdiff their bzipped versions: $ bzdiff tmp?.bz2 10c10 < -- Dump completed on 2011-03-11 1:06:50 --- > -- Dump completed on 2011-03-11 0:40:11 According to the bzdiff manual, any option passed to bzdiff is passed on to diff. I therefore looked at the -I option, which lets me define a regexp; lines matching it are ignored in the diff. When I then try: $ bzdiff -I'Dump' tmp1.bz2 tmp2.bz2 I get an empty diff. I would like to match as much as possible of the "Dump completed" line, though, but when I then try: $ bzdiff -I'Dump completed' tmp1.bz2 tmp2.bz2 diff: extra operand `/tmp/bzdiff.miCJEvX9E8' diff: Try `diff --help' for more information. The same thing happens for some variations: $ bzdiff '-IDump completed' tmp1.bz2 tmp2.bz2 $ bzdiff '-I"Dump completed"' tmp1.bz2 tmp2.bz2 $ bzdiff -'"IDump completed"' tmp1.bz2 tmp2.bz2 If I diff the un-bzipped files there is no problem: $ diff -I'^[-][-] Dump completed on' tmp1 tmp2 also gives an empty diff. bzdiff is a shell script usually placed in /bin/bzdiff. Essentially, it parses the command options and passes them on to diff as follows: OPTIONS= FILES= for ARG do case "$ARG" in -*) OPTIONS="$OPTIONS $ARG";; *) if test -f "$ARG"; then FILES="$FILES $ARG" else echo "${prog}: $ARG not found or not a regular file" exit 1 fi ;; esac done [...] bzip2 -cdfq "$1" | $comp $OPTIONS - "$tmp" I think the problem stems from the spaces in $OPTIONS not being escaped when they are passed to diff, but I couldn't figure out how to get them interpreted correctly. Any ideas? EDIT @DerfK: Good point with the ., I had forgotten about them... I tried the suggestion with multiple levels of quotes, but that is still not recognized: $ bzdiff "-I'\"Dump.completed.on\"'" tmp1.bz2 tmp2.bz2 diff: extra operand `/tmp/bzdiff.Di7RtihGGL'
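
    As a workaround that sidesteps the wrapper's option parsing entirely, diff can be pointed at the decompressed streams via process substitution (bash/zsh); a sketch, not a fix for bzdiff itself:

      diff -I'^-- Dump completed on' <(bzcat tmp1.bz2) <(bzcat tmp2.bz2)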

    Read the article

  • xbindkeys escape quotes

    - by Danilo Bargen
    How can I escape quotes in .xbindkeysrc commands? None of these work: "pacmd dump|awk --non-decimal-data '$1~/set-sink-volume/{system ("pacmd "$1" "$2" "$3+2500)}'" "pacmd dump|awk --non-decimal-data '\$1~/set-sink-volume/{system ("pacmd "\$1" "\$2" "\$3+2500)}'" "pacmd dump|awk --non-decimal-data '\$1~/set-sink-volume/{system (\"pacmd \"\$1\" \"\$2\" \"\$3+2500)}'" "pacmd dump|awk --non-decimal-data '$1~/set-sink-volume/{system (\"pacmd \"$1\" \"$2\" \"$3+2500)}'" (The command raises the PulseAudio volume level.)
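
    One way to dodge the nested quoting altogether (a workaround rather than an answer to the escaping question) is to put the pipeline in a small helper script and have xbindkeys call that; the path and keysym below are just an example:

      #!/bin/sh
      # ~/bin/volume-up.sh  (make it executable with chmod +x)
      pacmd dump | awk --non-decimal-data '$1~/set-sink-volume/{system("pacmd "$1" "$2" "$3+2500)}'

    and then in ~/.xbindkeysrc:

      "$HOME/bin/volume-up.sh"
          XF86AudioRaiseVolume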

    Read the article

  • How Do I Automatically Update My Database Nightly?

    - by Russ
    Currently, every day before I start work, I complete the following procedure: ssh to the production server gzip our daily database dump file scp the gzipped dump file over to my computer gunzip the dump file dropdb mydatabase createdb mydatabase psql mydatabase < dump.sql Is it possible (I'm sure it is) to automate this process on Mac OS X so that it is done by the time I get to work in the morning? If so, what is the quickest and easiest way?
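
    A minimal sketch of the script plus a cron entry; the host, paths and database name are placeholders, and it assumes passwordless SSH keys so it can run unattended (launchd is the more Mac-native scheduler, but cron works on OS X too):

      #!/bin/sh
      # ~/bin/refresh_db.sh  (make it executable with chmod +x)
      set -e
      ssh deploy@production 'gzip -c /path/to/daily/dump.sql' > /tmp/dump.sql.gz
      gunzip -f /tmp/dump.sql.gz
      dropdb mydatabase
      createdb mydatabase
      psql mydatabase < /tmp/dump.sql

    and in crontab -e, to run it at 6am every day:

      0 6 * * * $HOME/bin/refresh_db.sh >> $HOME/refresh_db.log 2>&1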

    Read the article
