Search Results

Search found 298 results on 12 pages for 'truncate'.

Page 4/12

  • How to run MySQL DROP and CREATE SYNONYM in a shell script

    - by bgrif
    I have added this command to a script I am writing and I am running into an issue with it not logging onto MySQL and running the commands. How can I fix this and make it run?

    #!/bin/bash
    Subject: Please stage the following TFL09143 Locator Bulletin to all TF90 staging environments:
    # This next section is to go to the mysql server and make changes. You can drop and create synonyms, truncate a table and insert into a different one. You will be able to verify the counts to the different locations. #
    $ mysql --host=app03-bsi --u "" --p "" "TF90BPS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90BP.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90BP.TFBPS3.BTXSUPB && TRUNCATE TABLE TF90BP.TFBPS3.BTXSUPB SELECT * FROM TF90BP.TFBPS2.BTXSUPB; select count () from TF90BP.TF90.BTXADDR select count() from TF90BPS.TF90.BTXADDR; select count() from TF90BP.TF90.BTXSUPB; select count() from TF90BPS.TF90.BTXSUPB;"
    $ mysql --host=app03-bsi --u "" --p "" "TF90LMS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90LM.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90LM.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90LM.TFLMS2.BTXADDR; TRUNCATE TABLE TF90LM.TFLMS3.BTXSUPB; INSERT INTO TF90LM.TFLMS3.BTXSUPB SELECT * FROM TF90LM.TFLMS2.BTXSUPB; Verify select count() from TF90LM.TF90.BTXADDR; select count() from TF90LMS.TF90.BTXADDR; select count() from TF90LM.TF90.BTXSUPB; select count() from TF90LMS.TF90.BTXSUPB"
    $ mysql --host=app03-bsi --u "" --p "" "TF90NCS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90NC.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90NC.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90NC.TFNCS2.BTXADDR; TRUNCATE TABLE TF90NC.TFNCS3.BTXSUPB; INSERT INTO TF90NC.TFNCS3.BTXSUPB SELECT * FROM TF90NC.TFNCS2.BTXSUPB; Verify select count() from TF90NC.TF90.BTXADDR; select count() from TF90NCS.TF90.BTXADDR; select count() from TF90NC.TF90.BTXSUPB; select count() from TF90NCS.TF90.BTXSUPB"
    $ mysql --host=app03-bsi --u "" --p "" "TF90PVS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90PV.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90PV.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90PV.TFPVS2.BTXADDR; TRUNCATE TABLE TF90PV.TFPVS3.BTXSUPB; INSERT INTO TF90PV.TFPVS3.BTXSUPB SELECT * FROM TF90PV.TFPVS2.BTXSUPB; Verify select count() from TF90PV.TF90.BTXADDR; select count() from TF90PVS.TF90.BTXADDR; select count() from TF90PV.TF90.BTXSUPB; select count() from TF90PVS.TF90.BTXSUPB"

    TFL09143 Staging
    cd \ntsrv\common\To\IT-CERT-TEST\TFL09143 # change to mapped network drive
    cp -p TFL09143.pkg /d:/tf90/code_stg && /tf90bp/code_stg && /tf90lm/code_stg && /tf90pv/code_stg # Copies the package from the networked folder and then copies to the location(s) needed. #
    InvalidInput="true"
    if [ $# -eq 0 ] ; then
        echo "This script sets up TF90 Staging"
        echo -n "Which production do you want to run? (RB/TaxLocator/Cyclic)"
        read ProductionDistro
    else
        ProductionDistro="$1"
    fi
    while [ "$InvalidInput" = "true" ]
    do
        if [ "$ProductionDistro" = "RB" -o "$ProductionDistro" = "TaxLocator" -o "$ProductionDistro" = "Cyclic" ] ; then
            InvalidInput="false"
            break
        else
            echo "You have entered an error"
            echo "You must type RB or TaxLocator or Cyclic"
            echo "you typed $ProductionDistro"
            echo "This script sets up TF90 Staging"
            read ProductionDistro
        fi
    done
    InvalidInput="true"
    if [ $# -eq 0 ] ; then
        echo "This script sets up RB TF90 Staging"
        echo -n "Which Element do you want to run? (TF90/TF90BP/TF90LM/TF90PV/ALL)"
        read ElementDistro
    else
        ElementDistro="$1"
    fi
    while [ "$InvalidInput" = "true" ]
    do
        if [ "$ElementDistro" = "TF90" -o "$ElementDistro" = "TF90BP" -o "$ElementDistro" = "TF90LM" -o "$ElementDistro" = "TF90PV" -o "$ElementDistro" = "ALL" ] ; then
            InvalidInput="false"
            break
        else
            echo "You have entered an error"
            echo "You must type TF90 or TF90BP or TF90LM or TF90PV"
            echo "you typed $ElementDistro"
            echo "This script sets up TF90 Staging"
            read ElementDistro
        fi
    done
    if [ "$ElementDistro" = "TF90" ] ; then
        cd /d/tf90/code_stg
        vim TFL09143.pkg
        export var=TF90_CONNECT_STRING=DSN=TF90NCS;export Description=TF90NCS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90NCS; export DATASET=DEFAULT
        pkgintall -l -v ../TFL09143.pkg
    fi
    if [ "$ElementDistro" = "$TF90BP" ] ; then
        cd /d/tf90bp/code_stg
        vim TFL09143.pkg
        export TF90_CONNECT_STRING=DSN=TF90BPS;export Description=TF90BPS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90BPS;
        start tfloader -l –v ../TFL09143.pkg
    fi
    if [ "$ElementDistro" = "$TF90LM" ] ; then
        cd /d/tf90lm/code_stg
        vim TFL09143.pkg
        export TF90_CONNECT_STRING=DSN=TF90LMS;export Description=TF90LMS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90LMS;
        start tfloader -l -v ../TFL09143.pkg
    fi
    if [ "$ElementDistro" = "TF90PV" ] ; then
        cd /d/tf90pv/code_stg
        vim TFL09143.pkg
        export TF90_CONNECT_STRING=DSN=TF90PVS;Description=TF90PVS;Trusted_Connection=Yes;WSID=APP03-BSI;DATABASE=TF90PVS;
        start tfloader -l –v ../TFL09143.pkg
    fi
    exit 0
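
    Note: the most visible problems above are that `--u ""` and `--p ""` are not valid mysql client options (the client takes `--user=` and `--password=`, or `-u`/`-p`), that the interactive `$ ` prompts should not appear inside a script, and that MySQL has no CREATE SYNONYM statement or three-part db.schema.table names; that syntax belongs to SQL Server, so those statements need rewriting or a SQL Server client such as sqlcmd. A minimal sketch of a working non-interactive invocation, with credentials and SQL as placeholders:

    #!/bin/bash
    # A sketch only: $DB_USER/$DB_PASS and the statements are assumptions,
    # shown to illustrate valid mysql client flags for batch execution.
    mysql --host=app03-bsi --user="$DB_USER" --password="$DB_PASS" TF90BPS \
        --batch --silent \
        --execute="TRUNCATE TABLE some_table; SELECT COUNT(*) FROM some_table;"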

    Read the article

  • Strange type-related error

    - by vsb
    I wrote the following program:

    isPrime x = and [x `mod` i /= 0 | i <- [2 .. truncate (sqrt x)]]
    primes = filter isPrime [1 .. ]

    It should construct a list of prime numbers, but I got this error:

    [1 of 1] Compiling Main ( 7/main.hs, interpreted )
    7/main.hs:3:16:
        Ambiguous type variable `a' in the constraints:
          `Floating a' arising from a use of `isPrime' at 7/main.hs:3:16-22
          `RealFrac a' arising from a use of `isPrime' at 7/main.hs:3:16-22
          `Integral a' arising from a use of `isPrime' at 7/main.hs:3:16-22
        Possible cause: the monomorphism restriction applied to the following:
          primes :: [a] (bound at 7/main.hs:3:0)
        Probable fix: give these definition(s) an explicit type signature or use -XNoMonomorphismRestriction
    Failed, modules loaded: none.

    If I specify a signature for the isPrime function explicitly:

    isPrime :: Integer -> Bool
    isPrime x = and [x `mod` i /= 0 | i <- [2 .. truncate (sqrt x)]]

    I can't even compile the isPrime function:

    [1 of 1] Compiling Main ( 7/main.hs, interpreted )
    7/main.hs:2:45:
        No instance for (RealFrac Integer) arising from a use of `truncate' at 7/main.hs:2:45-61
        Possible fix: add an instance declaration for (RealFrac Integer)
        In the expression: truncate (sqrt x)
        In the expression: [2 .. truncate (sqrt x)]
        In a stmt of a list comprehension: i <- [2 .. truncate (sqrt x)]
    7/main.hs:2:55:
        No instance for (Floating Integer) arising from a use of `sqrt' at 7/main.hs:2:55-60
        Possible fix: add an instance declaration for (Floating Integer)
        In the first argument of `truncate', namely `(sqrt x)'
        In the expression: truncate (sqrt x)
        In the expression: [2 .. truncate (sqrt x)]
    Failed, modules loaded: none.

    Can you help me understand why I am getting these errors?
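
    The root cause is that sqrt needs a Floating argument while mod needs an Integral one, and no standard type is both, so GHC cannot pick a single type for x. A minimal sketch of the usual fix: convert explicitly with fromIntegral before taking the square root (and start the search at 2, since 1 is not prime):

    isPrime :: Integer -> Bool
    isPrime x = and [x `mod` i /= 0 | i <- [2 .. truncate (sqrt (fromIntegral x :: Double))]]

    primes :: [Integer]
    primes = filter isPrime [2 ..]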

    Read the article

  • Why does output of fltk-config truncate arguments to gcc?

    - by James Morris
    I'm trying to build an application I've downloaded which uses the SCons "make replacement" and the Fast Light Tool Kit GUI. The SConstruct code to detect the presence of FLTK is:

    guienv = Environment(CPPFLAGS = '')
    guiconf = Configure(guienv)
    if not guiconf.CheckLibWithHeader('lo', 'lo/lo.h','c'):
        print 'Did not find liblo for OSC, exiting!'
        Exit(1)
    if not guiconf.CheckLibWithHeader('fltk', 'FL/Fl.H','c++'):
        print 'Did not find FLTK for the gui, exiting!'
        Exit(1)

    Unfortunately, on my (Gentoo Linux) system, and many other Linux distributions, this can be quite troublesome if the package manager allows the simultaneous install of FLTK-1 and FLTK-2. I have attempted to modify the SConstruct file to use fltk-config --cflags and fltk-config --ldflags (or fltk-config --libs, which might be better than ldflags) by adding them like so:

    guienv.Append(CPPPATH = os.popen('fltk-config --cflags').read())
    guienv.Append(LIBPATH = os.popen('fltk-config --ldflags').read())

    But this causes the test for liblo to fail! Looking in config.log shows how it failed:

    scons: Configure: Checking for C library lo...
    gcc -o .sconf_temp/conftest_4.o -c "-I/usr/include/fltk-1.1 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_THREAD_SAFE -D_REENTRANT"
    gcc: no input files
    scons: Configure: no

    How should this really be done? And to complete my answer, how do I remove the quotes from the result of os.popen('command').read()? EDIT: The real question here is why does appending the output of fltk-config cause gcc to not receive the filename argument it is supposed to compile?
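
    The config.log line answers the EDIT: Append was handed one whole string instead of a list, so SCons quotes the entire flag string as a single argument, and gcc sees one giant "-I... -D..." token and no input file. A sketch of the usual approach, letting SCons itself run fltk-config and distribute the individual flags into the right construction variables:

    guienv = Environment()
    # ParseConfig runs the command, splits its output, and appends each flag
    # to the matching variable (CPPPATH for -I, LIBS for -l, LIBPATH for -L, ...)
    guienv.ParseConfig('fltk-config --cflags --ldflags')

    Alternatively, splitting by hand also removes the quoting problem, e.g. guienv.Append(CPPFLAGS = os.popen('fltk-config --cflags').read().split()).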

    Read the article

  • Is it okay to truncate a SHA256 hash to 128 bits?

    - by Sunny Hirai
    MD5 and SHA-1 hashes have weaknesses against collision attacks. SHA256 does not, but it outputs 256 bits. Can I safely take the first or last 128 bits and use that as the hash? I know it will be weaker (because it has fewer bits) but otherwise will it work? Basically I want to use this to uniquely identify files in a file system that might one day contain a trillion files. I'm aware of the birthday problem, and a 128 bit hash should yield about a 1 in a trillion chance on a trillion files that there would be two different files with the same hash. I can live with those odds. What I can't live with is if somebody could easily, deliberately insert a new file with the same hash and the same beginning characters of the file. I believe this is possible with MD5 and SHA-1.
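
    For what it's worth, truncating a SHA-256 digest is a standard construction (SHA-224 is essentially a truncated variant of SHA-256 with different initial values), and the first 128 bits are believed to retain the full birthday-bound collision resistance of about 2^64 work. A sketch of the file-identification idea in Python, reading in chunks so large files need not fit in memory:

    import hashlib

    def file_id(path):
        """Return the first 128 bits of the file's SHA-256 digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.digest()[:16]  # 16 bytes = 128 bits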

    Read the article

  • How to use jQuery to truncate the contents of option tags?

    - by George
    Hello! Please take a look here: http://www.binarymark.com/Products/FLVDownloader/order.aspx What I am trying to do is to get rid of the prices inside the option tags. On that page you can see a drop-down box under Order Information, Product. I want to remove the prices from all the options that contain them in that box, so get rid of " - $75.98" for example. I am not used to jQuery, but I realize it should be possible; I'm just not sure how to do it, so your help would be greatly appreciated. Thanks. George
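
    A minimal sketch (the select element's id is an assumption; adjust the selector to match the actual page): iterate over the options and strip a trailing " - $price" with a regular expression.

    $('#productSelect option').each(function () {
        var $opt = $(this);
        // remove a trailing " - $75.98"-style suffix, leaving other text intact
        $opt.text($opt.text().replace(/\s*-\s*\$[\d,.]+\s*$/, ''));
    });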

    Read the article

  • How can I truncate the mangled C++ identifiers shown by GDB's disassemble command?

    - by Rhys Ulerich
    GDB's disassemble command is nice for short C identifiers, e.g. main. For long, mangled C++ identifiers the verbosity is overkill. For example, using icpc I see results like

    (gdb) disassemble 0x49de2f 0x49de5b
    Dump of assembler code from 0x49de2f to 0x49de5b:
    0x000000000049de2f <_ZN5pecos8suzerain16fftw_multi_array6detail18c2c_buffer_processIPA2_dPKSt7complexIdEilNS2_26complex_copy_differentiateIS4_EEEEvT_T1_T2_T0_SD_SE_RKT3_+167>: mov 0x18(%rsp),%rsi

    Displays that long are annoying in the CLI. They make GDB's TUI assembly display all but useless. Is there a way to tell GDB to show a truncated identifier? Say, clip all but 50 characters?

    Read the article

  • How to truncate a string to a required length in the iPhone SDK?

    - by monish
    Hi guys, I have a string which contains more than 25 characters:

    NSString *str = @"This is the string to be truncated to 15 characters only";

    From the above string I need only the first 15 characters to be stored in another variable after truncation. Can anyone suggest how to do this? Anyone's help will be much appreciated. Thank you, Monish
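
    A minimal sketch in Objective-C: substringToIndex: does the truncation, and the length check guards against the NSRangeException raised when the string is shorter than 15 characters.

    NSString *str = @"This is the string to be truncated to 15 characters only";
    NSString *truncated = ([str length] > 15) ? [str substringToIndex:15] : str;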

    Read the article

  • XML/PHP : Content is not allowed in prolog

    - by Tristan
    Hello, I get this error message and I don't know where the problem comes from:

    <?php
    include "DBconnection.class.php";
    $sql = DBConnection::getInstance();
    $requete = "SELECT g.siteweb, g.offreDedie, g.coupon, g.only_dedi, g.transparence, g.abonnement,
        s.GSP_nom as nom, COUNT(s.GSP_nom) as nb_votes,
        TRUNCATE(AVG(vote), 2) as qualite,
        TRUNCATE(AVG(prix), 2) as rapport,
        TRUNCATE(AVG(serviceClient), 2) as serviceCli,
        TRUNCATE(AVG(interface), 2) as interface,
        TRUNCATE(AVG(services), 2) as services
        FROM votes_serveur AS v
        INNER JOIN serveur AS s ON v.idServ = s.idServ
        INNER JOIN gsp AS g ON s.GSP_nom = g.nom
        WHERE s.valide = 1
        GROUP BY s.GSP_nom";
    $sql->query($requete);
    $xml = '<?xml version="1.0" encoding="UTF-8" ?>';
    $xml .= '<GamerCertified>';
    while($row = $sql->fetchArray()){
        $moyenne_services = ($row['services'] + $row['serviceCli'] + $row['interface']) / 3;
        $moyenne_services = round($moyenne_services, 2);
        $moyenne_ge = ($row['services'] + $row['serviceCli'] + $row['interface'] + $row['qualite'] + $row['rapport']) / 5;
        $moyenne_ge = round($moyenne_ge, 2);
        $xml .= '<GSP>';
        $xml .= '<nom>'.$row["nom"].'</nom>';
        $xml .= '<nombre-votes>'.$row["nb_votes"].'</nombre-votes>';
        $xml .= '<services>'.$moyenne_services.'</services>';
        $xml .= '<qualite>'.$row["qualite"].'</qualite>';
        $xml .= '<prix>'.$row["rapport"].'</prix>';
        $xml .= '<label-transparence>'.$row["transparence"].'</label-transparence>';
        $xml .= '<moyenne-generale>'.$moyenne_ge.'</moyenne-generale>';
        $xml .= '<serveurs-dedies>'.$row["offreDedie"].'</serveurs-dedies>';
        $xml .= '</GSP>';
    }
    $xml .= '</GamerCertified>';
    echo $xml;

    Thanks
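
    "Content is not allowed in prolog" means the XML parser saw bytes before the <?xml ...?> declaration. With code shaped like this, the usual culprits are a UTF-8 BOM or stray whitespace emitted before <?php, either in this file or in DBconnection.class.php (classically, whitespace after its closing ?> tag), or the response not being served as XML at all. A sketch of a defensive version, assuming the rest of the script stays as above:

    <?php
    // Nothing, not even a space or a UTF-8 BOM, may precede this opening tag,
    // and the included file must not emit output either (consider omitting
    // its closing ?> tag so trailing whitespace cannot leak out).
    include "DBconnection.class.php";
    // ... run the query and build $xml exactly as above ...
    header('Content-Type: text/xml; charset=UTF-8'); // declare the payload as XML
    echo $xml;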

    Read the article

  • Disabling foreign key constraints on all tables didn't work?

    - by Space Cracker
    I have tried a lot of commands to disable table constraints in my database so that I can truncate all tables, but I still get the same error:

    Cannot truncate table '' because it is being referenced by a FOREIGN KEY constraint.

    I tried:

    EXEC sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
    EXEC sp_MSforeachtable "TRUNCATE TABLE ?"

    and I tried this for each table:

    ALTER TABLE [Table Name] NOCHECK CONSTRAINT ALL
    TRUNCATE TABLE [Table Name]
    ALTER TABLE [Table Name] CHECK CONSTRAINT ALL

    Every time I get the previous error message. Could anyone please help me solve such a problem?
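
    This is by design: TRUNCATE TABLE always fails on a table referenced by a FOREIGN KEY constraint, even when that constraint is disabled, because truncation deallocates pages without row-level checks. The usual workarounds are to drop and recreate the constraints, or, as sketched below, fall back to DELETE plus an identity reseed:

    EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
    EXEC sp_MSforeachtable 'DELETE FROM ?';   -- DELETE is allowed where TRUNCATE is not
    -- reseed identities so tables behave as if truncated (skips tables without one)
    EXEC sp_MSforeachtable 'IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableHasIdentity'') = 1 DBCC CHECKIDENT (''?'', RESEED, 0)';
    EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';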

    Read the article

  • Working for free

    - by truncate
    Finances are making me take an extended period off of my college education. In my current state, I don't feel fully qualified to be employed by an iPhone software company. While I work on getting things back together, I'd like to try and work for a software company for free in my local area (I'm going to college out of state and have to move back as well). The economy has forced employers to be very picky about who they hire, if they hire at all. Since I'd like to continue refining my abilities, I was wondering what the consensus is on working for free. It can't be considered an internship, as I would no longer be in school; I guess an apprenticeship is more appropriate. Like I said, I don't think I'm qualified to be paid for my services, and I don't want to be. I just don't know how to ask, or if it's even appropriate to ask them to show me how to develop software in the real world. My thinking is that they would be willing to get some work done for free and, if I prove myself, they could hire me. If not, there was no major loss: they get some free development and lose a bit of time showing me the ropes, and I get either a job or valuable experience that I need. The other alternative is to try to work things out by myself on the iPhone platform, but that sounds terrifying. I appreciate any input the community has to offer.

    Read the article

  • % confuses Python raw SQL query

    - by Jonathan
    Following this SO question, I'm trying to "truncate" all tables related to a certain Django application using the following raw SQL commands in Python:

    cursor.execute("set foreign_key_checks = 0")
    cursor.execute("select concat('truncate table ',table_schema,'.',table_name,';') as sql_stmt from information_schema.tables where table_schema = 'my_db' and table_type = 'base table' AND table_name LIKE 'some_prefix%'")
    for sql in [sql[0] for sql in cursor.fetchall()]:
        cursor.execute(sql)
    cursor.execute("set foreign_key_checks = 1")

    Alas, I receive the following error:

    C:\dev\my_project>my_script.py
    Traceback (most recent call last):
      File "C:\dev\my_project\my_script.py", line 295, in <module>
        cursor.execute(r"select concat('truncate table ',table_schema,'.',table_name,';') as sql_stmt from information_schema.tables where table_schema = 'my_db' and table_type = 'base table' AND table_name LIKE 'some_prefix%'")
      File "C:\Python26\lib\site-packages\django\db\backends\util.py", line 18, in execute
        sql = self.db.ops.last_executed_query(self.cursor, sql, params)
      File "C:\Python26\lib\site-packages\django\db\backends\__init__.py", line 216, in last_executed_query
        return smart_unicode(sql) % u_params
    TypeError: not enough arguments for format string

    Is the % in the LIKE making trouble? How can I work around it?
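
    Yes: the traceback shows Django running the SQL through Python %-formatting to bind parameters, so a bare % in the string is read as a placeholder. Either double it, or pass the pattern as a bound parameter; both sketched here:

    # Option 1: escape the literal percent sign
    cursor.execute("... AND table_name LIKE 'some_prefix%%'")

    # Option 2: let the driver quote the pattern as a parameter
    cursor.execute(
        "select table_name from information_schema.tables "
        "where table_schema = 'my_db' and table_type = 'base table' "
        "and table_name like %s",
        ["some_prefix%"],
    )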

    Read the article

  • What's the best practice for log truncation in SQL Server?

    - by kacalapy
    I have a production DB in SQL Server and want to put the final touches on it now that the functionality is completed. Prior to shipping it out I want to make sure I do some cleanup in the SQL Server DB and truncate and shrink the log files. Can I have a nightly job run to truncate logs and shrink files? This is what I have so far:

    ALTER proc [dbo].[UTIL_ShrinkDB_TruncateLog]
    as
    -- exec sp_helpfile
    BACKUP LOG PMIS WITH TRUNCATE_ONLY
    DBCC SHRINKFILE (PMIS, 1)
    DBCC SHRINKFILE (PMIS, 1)
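
    Worth noting before shipping: BACKUP LOG ... WITH TRUNCATE_ONLY was deprecated and then removed in SQL Server 2008, and a nightly shrink is generally discouraged because the log just regrows (with file fragmentation). If point-in-time recovery isn't needed, the usual alternative is SIMPLE recovery; the sketch below assumes the log file's logical name is PMIS_log (check with sp_helpfile):

    ALTER DATABASE PMIS SET RECOVERY SIMPLE;
    -- shrink once, after an unusual log-growth event, not on a schedule
    DBCC SHRINKFILE (PMIS_log, 1);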

    Read the article

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
Earlier, I had written two articles related to shrinking a database and why it is not good: "SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server" and "SQL SERVER – What the Business Says Is Not What the Business Wants". I received many comments on why database shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example:

- Create a test database
- Create two tables and populate them with data
- Check the size of both the tables (the size of the database is very low)
- Check the fragmentation of one table (fragmentation will be very low)
- Truncate the other table
- Check the size of the table and the fragmentation of the first table (fragmentation will be very low)
- SHRINK the database
- Check the size of the table and the fragmentation of the first table (fragmentation will be very HIGH)
- REBUILD the index on one table
- Check the size of the table (the size of the database is very HIGH) and the fragmentation of the first table (fragmentation will be very low)

Here is the script for the same:

USE MASTER
GO
CREATE DATABASE ShrinkIsBed
GO
USE ShrinkIsBed
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Create FirstTable
CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
GO
-- Create Clustered Index on ID
CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable
( [ID] ASC ) ON [PRIMARY]
GO
-- Create SecondTable
CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
GO
-- Create Clustered Index on ID
CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable
( [ID] ASC ) ON [PRIMARY]
GO
-- Insert One Hundred Thousand Records
INSERT INTO FirstTable (ID,FirstName,LastName,City)
SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
'Bob',
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
     WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
     WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
     ELSE 'Houston' END
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Insert One Hundred Thousand Records
INSERT INTO SecondTable (ID,FirstName,LastName,City)
SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
'Bob',
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
     WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
     WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
     ELSE 'Houston' END
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Check Fragmentations in the database
SELECT avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
GO

Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and fragmentation.
You can clearly see that after TRUNCATE, the size of the database is not reduced; it is still the same as before the TRUNCATE operation. After the shrinking operation, we were able to reduce the size of the database, but if you notice the fragmentation, it is considerably high. The major problem with the shrink operation is that it increases fragmentation of the database to a very high value, and higher fragmentation reduces the performance of the database, as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index. Let us rebuild the index and observe fragmentation and database size.

-- Rebuild Index on FirstTable
ALTER INDEX IX_SecondTable_ID ON SecondTable
REBUILD
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Check Fragmentations in the database
SELECT avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
GO

You can notice that after rebuilding, fragmentation reduces to a very low value (almost the same as the original value); however, the database size increases way beyond the original. Before rebuilding, the size of the database was 5 MB; after rebuilding, it is around 20 MB. A regular index rebuild happens in the same user database where the index is placed, and this usually increases the size of the database. Look at the irony of shrinking the database:
One person shrinks the database to gain space (thinking it will help performance), which leads to an increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds the index, which grows the database well beyond its original size (before shrinking). So by shrinking, one usually does not gain what one was looking for, and rebuilding the index is not the best suggestion either, as that will make the database grow again. I have always remembered the excellent post from Paul Randal regarding why shrinking the database is bad; I suggest everyone read it for accuracy and interesting conversation. Let us run the following script, where we shrink the database and REORGANIZE:

-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Check Fragmentations in the database
SELECT avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
GO
-- Shrink the Database
DBCC SHRINKDATABASE (ShrinkIsBed);
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Check Fragmentations in the database
SELECT avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
GO
-- Rebuild Index on FirstTable
ALTER INDEX IX_SecondTable_ID ON SecondTable
REORGANIZE
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Check Fragmentations in the database
SELECT avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
GO

You can see that REORGANIZE does not increase the size of the database or remove the fragmentation. Again, I in no way suggest that REORGANIZE is the solution here; this is purely an observation using the demo. Read the blog post of Paul Randal. The following script will clean up the database:

-- Clean up
USE MASTER
GO
ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
DROP DATABASE ShrinkIsBed
GO

There are a few valid cases for shrinking a database as well, but that is not covered in this blog post; we will cover that area some other time. Additionally, one can rebuild the index in tempdb as well, and we will also talk about that in the future. Brent has written a good summary blog post as well. Are you shrinking your database? Well, when are you going to stop shrinking it? Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • How to warehouse data that is not needed from MS SQL Server

    - by I__
    I have been asked to truncate a large table in MS SQL Server 2008. The data is not needed but might be needed once every two years. It will NEVER have to be changed, only viewed. The question is, since I don't need the data on a day-to-day basis, what do I do with it to protect and back it up? Please keep in mind that I will need to have it accessible maybe once every two years, and it is FINE for us if the recovery process takes a few hours. The entire table is about 3 million rows and I need to truncate it to about 1 million rows.
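
    One common pattern, sketched below with assumed table, column, and path names: copy the cold rows into an archive table (or a separate archive database), back that up and set the backup aside, verify the copy, then trim the live table. Since a third of the rows must survive, the trim is a DELETE rather than a TRUNCATE.

    -- copy rows older than the cutoff into an archive table (names assumed)
    SELECT * INTO dbo.BigTable_Archive
    FROM dbo.BigTable
    WHERE CreatedOn < '20060101';

    -- back up the database holding the archive and store the .bak offline
    BACKUP DATABASE MyDb TO DISK = 'E:\Backups\MyDb_archive.bak';

    -- then trim the live table
    DELETE FROM dbo.BigTable WHERE CreatedOn < '20060101';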

    Read the article

  • MySQL privileges - DROP tables, not databases

    - by Michal M
    Hi, can someone help me with privileges here? I need to create a user that can DROP tables within databases but cannot DROP the databases themselves. From what I understand of the MySQL docs, you cannot simply do this:

    The DROP privilege enables you to drop (remove) existing databases, tables, and views. Beginning with MySQL 5.1.10, the DROP privilege is also required in order to use the statement ALTER TABLE ... DROP PARTITION on a partitioned table. Beginning with MySQL 5.1.16, the DROP privilege is required for TRUNCATE TABLE (before that, TRUNCATE TABLE requires the DELETE privilege).

    Any ideas?
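
    One approach worth testing (the exact semantics vary by server version, so verify on yours): grant privileges per table rather than per database. A table-level DROP grant only covers the named table and, unlike a database-level grant, should not satisfy the check that DROP DATABASE performs; the obvious cost is that every new table needs its own grant. User and table names below are hypothetical:

    -- repeat for each table the user may drop
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP
        ON mydb.mytable TO 'appuser'@'localhost';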

    Read the article

  • SQL SERVER – Three Methods to Insert Multiple Rows into Single Table – SQL in Sixty Seconds #024 – Video

    - by pinaldave
One of the biggest requests I have always received from developers is for a way to insert multiple rows into a single table in a single statement. Currently, when developers have to insert values into a table, they write multiple INSERT statements. Not only is this boring, it is also time consuming, and one has to repeat the same syntax so many times that the word boring becomes an understatement. In the following quick video we have demonstrated three different methods to insert multiple values into a single table.

-- Insert Multiple Values into SQL Server
CREATE TABLE #SQLAuthority (ID INT, Value VARCHAR(100));

Method 1: Traditional Method of INSERT…VALUE

-- Method 1 - Traditional Insert
INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First');
INSERT INTO #SQLAuthority (ID, Value) VALUES (2, 'Second');
INSERT INTO #SQLAuthority (ID, Value) VALUES (3, 'Third');

Clean up:

-- Clean up
TRUNCATE TABLE #SQLAuthority;

Method 2: INSERT…SELECT

-- Method 2 - Select Union Insert
INSERT INTO #SQLAuthority (ID, Value)
SELECT 1, 'First'
UNION ALL
SELECT 2, 'Second'
UNION ALL
SELECT 3, 'Third';

Clean up:

-- Clean up
TRUNCATE TABLE #SQLAuthority;

Method 3: SQL Server 2008+ Row Construction

-- Method 3 - SQL Server 2008+ Row Construction
INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'), (2, 'Second'), (3, 'Third');

Clean up:

-- Clean up
DROP TABLE #SQLAuthority;

Related Tips in SQL in Sixty Seconds: SQL SERVER – Insert Multiple Records Using One Insert Statement – Use of UNION ALL; SQL SERVER – 2008 – Insert Multiple Records Using One Insert Statement – Use of Row Constructor. I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can, and if we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • When is it better to offload work to the RDBMS rather than to do it in code?

    - by GeminiDomino
    Okay, I'll cop to it: I'm a better coder than I am at databases, and I'm wondering where thoughts on "best practices" lie on the subject of doing "simple" calculations in the SQL query vs. in the code, such as this MySQL example (I didn't write it, I just have to maintain it!). This returns the username, and the user's age as of the last event:

    SELECT u.username as user,
        IF ((DAY(max(e.date)) - DAY(u.DOB)) < 0,
            TRUNCATE(((((YEAR(max(e.date))*12)+MONTH(max(e.date))) - ((YEAR(u.DOB)*12)+MONTH(u.DOB)))-1)/12, 0),
            TRUNCATE((((YEAR(max(e.date))*12)+MONTH(max(e.date))) - ((YEAR(u.DOB)*12)+MONTH(u.DOB)))/12, 0)) AS age
    FROM users as u
    JOIN events as e ON u.id = e.uid
    ...

    Compared to doing the "heavy" lifting in code. Query:

    SELECT u.username, u.DOB as dob, e.event_date as edate
    FROM users as u
    JOIN events as e ON u.id = e.uid

    Code:

    function ageAsOfDate($birth, $aod)
    {
        // expects dates in MySQL Y-m-d format...
        list($by,$bm,$bd) = explode('-',$birth);
        list($ay,$am,$ad) = explode('-',$aod);
        // Insert Calculations here ...
        return $Dy; // Difference in years
    }

    echo "Hey! ". $row['user'] ." was ". ageAsOfDate($row['dob'], $row['edate']) . " when we last saw him.";

    I'm pretty sure in a simple case like this it wouldn't make much difference (other than the creeping feeling of horror when I have to make changes to queries like the first one), but I think it makes it clearer what I'm looking for. Thanks!
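
    For comparison, a minimal sketch of the elided calculation done in code with PHP's built-in date arithmetic, which is easier to audit than the nested TRUNCATE/IF expression and keeps the query trivial:

    function ageAsOfDate($birth, $aod)
    {
        // whole years between two Y-m-d dates
        $b = new DateTime($birth);
        $a = new DateTime($aod);
        return $b->diff($a)->y;
    }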

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update, or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then to run a stored proc in the warehouse, the MERGE statement, that took the rows from the working table and updated the real fact table.

USE Warehouse

CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))

CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)

CREATE PROC Integration.MergePolicy
as
begin
begin tran
Merge fact.Policy as tgt
Using Integration.MergePolicy as Src
On (tgt.PolicyId = Src.PolicyId)
When not matched by Target then
    Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
    values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
When matched and src.Operation = 'U' then
    Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate
When matched and src.Operation = 'D' then
    Delete
;
delete from Integration.MergePolicy
commit
end

Notice that my worktable (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, it was getting about 1.5 million rows inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much: 55% on the MERGE statement, 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.
(I was beginning now to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATS IO and ran the sproc again. The output was quite interesting:

Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

I've reproduced the above from memory; the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages for tables stored as heaps, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long; instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
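
The fix described above, as a short sketch (the index name is an assumption): empty the heap first so the index build is instant, then give the work table a clustered index so emptied pages are actually deallocated and scans stay cheap.

TRUNCATE TABLE Integration.MergePolicy;
CREATE CLUSTERED INDEX IX_MergePolicy_PolicyId
    ON Integration.MergePolicy (PolicyId);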

    Read the article

  • SQL Server – Undelete a Table and Restore a Single Table from Backup

    - by Mladen Prajdic
This post is part of the monthly community event called T-SQL Tuesday started by Adam Machanic (blog|twitter) and hosted by someone else each month. This month the host is Sankar Reddy (blog|twitter) and the topic is Misconceptions in SQL Server. You can follow posts for this theme on Twitter by looking at the #TSQL2sDay hashtag. Let me start by saying: this code is a crazy hack that is never to be used unless you really, really have to. Really! And I don't think there's a time when you would really have to use it for real. Because it's a hack, there are a number of things that can go wrong, so play with it knowing that. I've managed to totally corrupt one database. :) Oh… and for those saying "yeah yeah, you have a single table in a file group and you're restoring that", I say "nay nay" to you. As we all know, SQL Server can't do single table restores from backup. This is kind of an obvious thing due to relational integrity (RI) concerns: since we have to maintain RI, we have to restore all tables represented in an RI graph. For this exercise I say BAH! to those concerns. Note that this method "works" only for simple tables that don't have LOB and off-row data. The code can be expanded to include those, but I've tried to keep things "simple". Note also that for this to work our table needs to be relatively static data-wise; this doesn't work for OLTP tables. Products are a perfect example of static data: they don't change much between backups, pretty much everything depends on them, and their table is one of those tables that are relatively easy to accidentally delete everything from. This only works if the database is in Full or Bulk-Logged recovery mode, for tables whose contents have been deleted or truncated, but NOT when a table was dropped. Everything we'll talk about has to be done before the data pages are reused for other purposes: after deletion or truncation the pages are marked as reusable, so you have to act fast. The best thing probably is to put the database into single user mode ASAP while you're performing this procedure and return it to multi user after you're done. How do we do it? We will be using an undocumented but well-known DBCC command, DBCC PAGE, an undocumented function, sys.fn_dblog, and the little-known RESTORE DATABASE ... PAGE option. All tests will be on a copy of the Production.Product table in the AdventureWorks database, called Production.Product1, because the original table has FK constraints that prevent us from truncating it for testing.

-- create a duplicate table. This doesn't preserve indexes!
SELECT *
INTO AdventureWorks.Production.Product1
FROM AdventureWorks.Production.Product

After we run this code, take a full backup to perform further testing.

First let's see what the difference between DELETE and TRUNCATE is when it comes to logging. With DELETE, every row deletion is logged in the transaction log. With TRUNCATE, only whole data-page deallocations are logged in the transaction log. Getting deleted data pages is simple: all we have to look for is the row delete entry in the sys.fn_dblog output. But getting data pages that were truncated presents a bit of an interesting problem. I will not go into the depths of IAM (Index Allocation Map) and PFS (Page Free Space) pages, but suffice to say that every IAM page has intervals that tell us which data pages are allocated for a table and which aren't.
If we deep dive into the sys.fn_dblog output, we can see that once you truncate a table all the pages in all the intervals are deallocated, and this is shown in the PFS page transaction log entry as a deallocation of pages. For every 8 pages in the same extent there is one PFS page row in the transaction log. This row holds information about all 8 pages in CSV format, which means we can get to this data with some parsing. A great help for parsing this stuff is Peter Debetta's handy function dbo.HexStrToVarBin, which converts a hexadecimal string into a varbinary value that can be easily converted to an integer, thus giving us a readable page number. The shortened (columns removed) sys.fn_dblog output for a PFS page with CSV data for 1 extent (8 data pages) looks like this:

-- [Page ID] is displayed in hex format.
-- To convert it to a readable int we'll use the dbo.HexStrToVarBin function found at
-- http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx
-- This function must be installed in the master database
SELECT Context, AllocUnitName, [Page ID], Description
FROM sys.fn_dblog(NULL, NULL)
WHERE [Current LSN] = '00000031:00000a46:007d'

The pages at the end marked with 0x00--> are pages that are allocated in the extent but are not part of a table. We can inspect the raw content of each data page with a DBCC PAGE command:

-- we need this trace flag to redirect output to the query window.
DBCC TRACEON (3604);
-- WITH TABLERESULTS gives us data in table format instead of message format
-- we use format option 3 because it's the easiest to read and manipulate further on
DBCC PAGE (AdventureWorks, 1, 613, 3) WITH TABLERESULTS

Since the DBCC PAGE output can be quite extensive, I won't put it here. You can see an example of it in the link at the beginning of this section.

Getting deleted data back

When we run a delete statement, every row to be deleted is marked as a ghost record. A background process periodically cleans up those rows. A huge misconception is that the data is actually removed: it's not. Only the pointers to the rows are removed, while the data itself is still on the data page. We just can't access it with normal means. To get those pointers back we need to restore every deleted page using the RESTORE PAGE option mentioned above. This restore must be done from a full backup, followed by any differential and log backups that you may have. This is necessary to bring the pages up to the same point in time as the rest of the data. However, the restore doesn't magically connect the restored page back to the original table. It simply replaces the current page with the one from the backup. After the restore we use DBCC PAGE to read data directly from all data pages and insert that data into a temporary table. To finish the RESTORE PAGE procedure we finally have to take a tail log backup (a simple backup of the transaction log) and restore it back. We can then insert data from the temporary table into our original table by hand.

Getting truncated data back

When we run a truncate, the truncated data pages aren't touched at all. Even the pointers to rows stay unchanged. Because of this, getting data back from a truncated table is simple: we just have to find out which pages belonged to our table and use DBCC PAGE to read data off them. No restore is necessary, so the problem we had with finding the data pages is alleviated by not having to do a RESTORE PAGE procedure.

Stop stalling… show me The Code!
This is the code for getting deleted and truncated data back. It's commented in all the right places, so don't be afraid to take a closer look. Make sure you have a full backup before trying this out. Also, I suggest that the last step, backing up and restoring the tail log, is performed by hand.

USE master
GO
IF OBJECT_ID('dbo.HexStrToVarBin') IS NULL
    RAISERROR ('No dbo.HexStrToVarBin installed. Go to http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx and install it in master database' , 18, 1)
SET NOCOUNT ON
BEGIN TRY
    DECLARE @dbName VARCHAR(1000), @schemaName VARCHAR(1000), @tableName VARCHAR(1000),
            @fullBackupName VARCHAR(1000), @undeletedTableName VARCHAR(1000),
            @sql VARCHAR(MAX), @tableWasTruncated bit;
    /* THE FIRST LINE ARE OUR INPUT PARAMETERS
       In this case we're trying to recover the Production.Product1 table in the AdventureWorks database.
       My full backup of the AdventureWorks database is at e:\AW.bak */
    SELECT @dbName = 'AdventureWorks',
           @schemaName = 'Production',
           @tableName = 'Product1',
           @fullBackupName = 'e:\AW.bak',
           @undeletedTableName = '##' + @tableName + '_Undeleted',
           @tableWasTruncated = 0,
           -- copy the structure from original table to a temp table that we'll fill with restored data
           @sql = 'IF OBJECT_ID(''tempdb..' + @undeletedTableName + ''') IS NOT NULL DROP TABLE ' + @undeletedTableName +
                  ' SELECT *' +
                  ' INTO ' + @undeletedTableName +
                  ' FROM [' + @dbName + '].[' + @schemaName + '].[' + @tableName + ']' +
                  ' WHERE 1 = 0'
    EXEC (@sql)

    IF OBJECT_ID('tempdb..#PagesToRestore') IS NOT NULL DROP TABLE #PagesToRestore

    /* FIND DATA PAGES WE NEED TO RESTORE */
    CREATE TABLE #PagesToRestore ([ID] INT IDENTITY(1,1), [FileID] INT, [PageID] INT, [SQLtoExec] VARCHAR(1000)) -- DBCC PAGE statement to run later

    RAISERROR ('Looking for deleted pages...', 10, 1)
    -- use T-LOG direct read to get deleted data pages
    INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec])
    EXEC('USE [' + @dbName + '];
    SELECT FileID, PageID, ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExec
    FROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID,
                 CONVERT(VARCHAR(100), ' + 'CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID
          FROM sys.fn_dblog(NULL, NULL)
          WHERE AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'' ' +
          'AND Context IN (''LCX_MARK_AS_GHOST'', ''LCX_HEAP'') AND Operation in (''LOP_DELETE_ROWS''))t');
    SELECT * FROM #PagesToRestore

    -- if upper EXEC returns 0 rows it means the table was truncated so find truncated pages
    IF (SELECT COUNT(*) FROM #PagesToRestore) = 0
    BEGIN
        RAISERROR ('No deleted pages found. Looking for truncated pages...', 10, 1)
        -- use T-LOG read to get truncated data pages
        INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec])
        -- dark magic happens here
        -- because truncation simply deallocates pages we have to find out which pages were deallocated.
        -- we can find this out by looking at the PFS page row's Description column.
        -- for every deallocated extent the Description has a CSV of 8 pages in that extent.
        -- then it's just a matter of parsing it.
        -- we also remove the pages in the extent that weren't allocated to the table itself
        -- marked with '0x00-->00'
        EXEC ('USE [' + @dbName + '];
        DECLARE @truncatedPages TABLE(DeallocatedPages VARCHAR(8000), IsMultipleDeallocs BIT);
        INSERT INTO @truncatedPages
        SELECT REPLACE(REPLACE(Description, ''Deallocated '', ''Y''), ''0x00-->00 '', ''N'') + '';'' AS DeallocatedPages,
               CHARINDEX('';'', Description) AS IsMultipleDeallocs
        FROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID,
                     CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID,
                     Description
              FROM sys.fn_dblog(NULL, NULL)
              WHERE Context IN (''LCX_PFS'') AND Description LIKE ''Deallocated%''
                AND AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'') t;
        SELECT FileID, PageID,
               ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExec
        FROM (SELECT LEFT(PageAndFile, 1) as WasPageAllocatedToTable,
                     SUBSTRING(PageAndFile, 2, CHARINDEX('':'', PageAndFile) - 2) as FileID,
                     CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING(PageAndFile, CHARINDEX('':'', PageAndFile) + 1, LEN(PageAndFile))))) as PageID
              FROM (SELECT SUBSTRING(DeallocatedPages, delimPosStart, delimPosEnd - delimPosStart) as PageAndFile, IsMultipleDeallocs
                    FROM (SELECT *,
                                 CHARINDEX('';'', DeallocatedPages)*(N-1) + 1 AS delimPosStart,
                                 CHARINDEX('';'', DeallocatedPages)*N AS delimPosEnd
                          FROM @truncatedPages t1
                          CROSS APPLY (SELECT TOP (case when t1.IsMultipleDeallocs = 1 then 8 else 1 end) ROW_NUMBER() OVER(ORDER BY number) as N
                                       FROM master..spt_values) t2)t)t)t
        WHERE WasPageAllocatedToTable = ''Y''')
        SELECT @tableWasTruncated = 1
    END

    DECLARE @lastID INT, @pagesCount INT
    SELECT @lastID = 1, @pagesCount = COUNT(*) FROM #PagesToRestore
    SELECT @sql = 'Number of pages to restore: ' + CONVERT(VARCHAR(10), @pagesCount)
    IF @pagesCount = 0
        RAISERROR ('No data pages to restore.', 18, 1)
    ELSE
        RAISERROR (@sql, 10, 1)

    -- If the table was truncated we'll read the data directly from data pages without restoring from backup
    IF @tableWasTruncated = 0
    BEGIN
        -- RESTORE DATA PAGES FROM FULL BACKUP IN BATCHES OF 200
        WHILE @lastID <= @pagesCount
        BEGIN
            -- create CSV string of pages to restore
            SELECT @sql = STUFF((SELECT ',' + CONVERT(VARCHAR(100), FileID) + ':' + CONVERT(VARCHAR(100), PageID)
                                 FROM #PagesToRestore
                                 WHERE ID BETWEEN @lastID AND @lastID + 200
                                 ORDER BY ID FOR XML PATH('')), 1, 1, '')
            SELECT @sql = 'RESTORE DATABASE [' + @dbName + '] PAGE = ''' + @sql + ''' FROM DISK = ''' + @fullBackupName + ''''
            RAISERROR ('Starting RESTORE command:' , 10, 1) WITH NOWAIT;
            RAISERROR (@sql , 10, 1) WITH NOWAIT;
            EXEC(@sql);
            RAISERROR ('Restore DONE' , 10, 1) WITH NOWAIT;
            SELECT @lastID = @lastID + 200
        END
        /* If you have any differential or transaction log backups you should restore them here
           to bring the previously restored data pages up to date */
    END

    DECLARE @dbccSinglePage TABLE
    (
        [ParentObject] NVARCHAR(500),
        [Object] NVARCHAR(500),
        [Field] NVARCHAR(500),
        [VALUE] NVARCHAR(MAX)
    )
    DECLARE @cols NVARCHAR(MAX), @paramDefinition NVARCHAR(500), @SQLtoExec VARCHAR(1000),
            @FileID VARCHAR(100), @PageID VARCHAR(100), @i INT = 1
    -- Get deleted table columns from information_schema view
    -- Need sp_executeSQL because database name can't be passed in as variable
    SELECT @cols = 'select @cols = STUFF((SELECT '', ['' + COLUMN_NAME + '']''
    FROM ' + @dbName + '.INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = ''' + @tableName + ''' AND TABLE_SCHEMA = ''' + @schemaName + '''
    ORDER BY ORDINAL_POSITION
    FOR XML PATH('''')), 1, 2, '''')',
           @paramDefinition = N'@cols nvarchar(max) OUTPUT'
    EXECUTE sp_executesql @cols, @paramDefinition, @cols = @cols OUTPUT

    -- Loop through all the restored data pages,
    -- read data from them and insert them into a temp table
    -- which you can then insert into the original deleted table
    DECLARE dbccPageCursor CURSOR GLOBAL FORWARD_ONLY FOR
        SELECT [FileID], [PageID], [SQLtoExec] FROM #PagesToRestore ORDER BY [FileID], [PageID]
    OPEN dbccPageCursor;
    FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        RAISERROR ('---------------------------------------------', 10, 1) WITH NOWAIT;
        SELECT @sql = 'Loop iteration: ' + CONVERT(VARCHAR(10), @i);
        RAISERROR (@sql, 10, 1) WITH NOWAIT;
        SELECT @sql = 'Running: ' + @SQLtoExec
        RAISERROR (@sql, 10, 1) WITH NOWAIT;
        -- if something goes wrong with DBCC execution or data gathering, skip it but print the error
        BEGIN TRY
            INSERT INTO @dbccSinglePage EXEC (@SQLtoExec)
            -- make the data insert magic happen here
            IF (SELECT CONVERT(BIGINT, [VALUE]) FROM @dbccSinglePage WHERE [Field] LIKE '%Metadata: ObjectId%') = OBJECT_ID('['+@dbName+'].['+@schemaName +'].['+@tableName+']')
            BEGIN
                DELETE @dbccSinglePage WHERE NOT ([ParentObject] LIKE 'Slot % Offset %' AND [Object] LIKE 'Slot % Column %')
                SELECT @sql = 'USE tempdb; ' +
                              'IF (OBJECTPROPERTY(object_id(''' + @undeletedTableName + '''), ''TableHasIdentity'') = 1) ' +
                              'SET IDENTITY_INSERT ' + @undeletedTableName + ' ON; ' +
                              'INSERT INTO ' + @undeletedTableName + '(' + @cols + ') ' +
                              STUFF((SELECT ' UNION ALL SELECT ' +
                                            STUFF((SELECT ', ' + CASE WHEN VALUE = '[NULL]' THEN 'NULL' ELSE '''' + [VALUE] + '''' END
                                                   FROM (
                                                        -- the unicorn helps here to correctly set ordinal numbers of columns in a data page
                                                        -- it's turning STRING order into INT order (1,10,11,2,21 into 1,2,..10,11...21)
                                                        SELECT [ParentObject], [Object], Field, VALUE,
                                                               RIGHT('00000' + O1, 6) AS ParentObjectOrder,
                                                               RIGHT('00000' + REVERSE(LEFT(O2, CHARINDEX(' ', O2)-1)), 6) AS ObjectOrder
                                                        FROM (SELECT [ParentObject], [Object], Field, VALUE,
                                                                     REPLACE(LEFT([ParentObject], CHARINDEX('Offset', [ParentObject])-1), 'Slot ', '') AS O1,
                                                                     REVERSE(LEFT([Object], CHARINDEX('Offset ', [Object])-2)) AS O2
                                                              FROM @dbccSinglePage
                                                              WHERE t.ParentObject = ParentObject)t)t
                                                   ORDER BY ParentObjectOrder, ObjectOrder
                                                   FOR XML PATH('')), 1, 2, '')
                                     FROM @dbccSinglePage t
                                     GROUP BY ParentObject
                                     FOR XML PATH('')), 1, 11, '') + ';'
                RAISERROR (@sql, 10, 1) WITH NOWAIT;
                EXEC (@sql)
            END
        END TRY
        BEGIN CATCH
            SELECT @sql = 'ERROR!!!' + CHAR(10) + CHAR(13) + 'ErrorNumber: ' + ERROR_NUMBER() + '; ErrorMessage' + ERROR_MESSAGE() +
                          CHAR(10) + CHAR(13) + 'FileID: ' + @FileID + '; PageID: ' + @PageID
            RAISERROR (@sql, 10, 1) WITH NOWAIT;
        END CATCH
        DELETE @dbccSinglePage
        SELECT @sql = 'Pages left to process: ' + CONVERT(VARCHAR(10), @pagesCount - @i) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13),
               @i = @i + 1
        RAISERROR (@sql, 10, 1) WITH NOWAIT;
        FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec;
    END
    CLOSE dbccPageCursor;
    DEALLOCATE dbccPageCursor;
    EXEC ('SELECT ''' + @undeletedTableName + ''' as TableName; SELECT * FROM ' + @undeletedTableName)
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage
    IF CURSOR_STATUS ('global', 'dbccPageCursor') >= 0
    BEGIN
        CLOSE dbccPageCursor;
        DEALLOCATE dbccPageCursor;
    END
END CATCH

-- if the table was deleted we need to finish the restore page sequence
IF @tableWasTruncated = 0
BEGIN
    -- take a log tail backup and then restore it to complete the page restore process
    DECLARE @currentDate VARCHAR(30)
    SELECT @currentDate = CONVERT(VARCHAR(30), GETDATE(), 112)
    RAISERROR ('Starting Log Tail backup to c:\Temp ...', 10, 1) WITH NOWAIT;
    PRINT ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''')
    EXEC ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''')
    RAISERROR ('Log Tail backup done.', 10, 1) WITH NOWAIT;
    RAISERROR ('Starting Log Tail restore from c:\Temp ...', 10, 1) WITH NOWAIT;
    PRINT ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''')
    EXEC ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''')
    RAISERROR ('Log Tail restore done.', 10, 1) WITH NOWAIT;
END

-- The last step is manual. Insert data from our temporary table into the original deleted table.

The misconception here is that you can do a single table restore properly in SQL Server. You can't. But with a little experimentation you can get pretty close to it. One way to possibly remove the dependency on a backup for retrieving deleted pages is to quickly run a script similar to the one above that gets data directly from data pages while the rows are still marked as ghost records. It could be done if we could beat the ghost record cleanup task.

    Read the article

  • Utility that helps in file locking - expert tips wanted

    - by maix
    I've written a subclass of file that a) provides methods to conveniently lock it (using fcntl, so it only supports Unix, which is however OK for me atm) and b) when reading or writing asserts that the file is appropriately locked. Now I'm not an expert at such stuff (I've just read one paper [de] about it) and would appreciate some feedback: Is it secure, are there race conditions, are there other things that could be done better? Here is the code:

    from fcntl import flock, LOCK_EX, LOCK_SH, LOCK_UN, LOCK_NB

    class LockedFile(file):
        """
        A wrapper around `file` providing locking. Requires a shared lock to
        read and an exclusive lock to write.

        Main differences:
          * Additional methods: lock_ex, lock_sh, unlock
          * Refuses to read when not locked, refuses to write when not locked
            exclusively.
          * mode cannot be `w` since then the file would be truncated before
            it could be locked.

        You have to lock the file yourself, it won't be done for you
        implicitly. Only you know what lock you need.

        Example usage::

            def get_config():
                f = LockedFile(CONFIG_FILENAME, 'r')
                f.lock_sh()
                config = parse_ini(f.read())
                f.close()

            def set_config(key, value):
                f = LockedFile(CONFIG_FILENAME, 'r+')
                f.lock_ex()
                config = parse_ini(f.read())
                config[key] = value
                f.truncate()
                f.write(make_ini(config))
                f.close()
        """
        def __init__(self, name, mode='r', *args, **kwargs):
            if 'w' in mode:
                raise ValueError('Cannot open file in `w` mode')
            super(LockedFile, self).__init__(name, mode, *args, **kwargs)
            self.locked = None

        def lock_sh(self, **kwargs):
            """
            Acquire a shared lock on the file. If the file is already locked
            exclusively, do nothing.

            :returns: Lock status from before the call (one of 'sh', 'ex', None).
            :param nonblocking: Don't wait for the lock to be available.
            """
            if self.locked == 'ex':
                return  # would implicitly remove the exclusive lock
            return self._lock(LOCK_SH, **kwargs)

        def lock_ex(self, **kwargs):
            """
            Acquire an exclusive lock on the file.

            :returns: Lock status from before the call (one of 'sh', 'ex', None).
            :param nonblocking: Don't wait for the lock to be available.
            """
            return self._lock(LOCK_EX, **kwargs)

        def unlock(self):
            """
            Release all locks on the file. Flushes if there was an exclusive lock.

            :returns: Lock status from before the call (one of 'sh', 'ex', None).
            """
            if self.locked == 'ex':
                self.flush()
            return self._lock(LOCK_UN)

        def _lock(self, mode, nonblocking=False):
            flock(self, mode | bool(nonblocking) * LOCK_NB)
            before = self.locked
            self.locked = {LOCK_SH: 'sh', LOCK_EX: 'ex', LOCK_UN: None}[mode]
            return before

        def _assert_read_lock(self):
            assert self.locked, "File is not locked"

        def _assert_write_lock(self):
            assert self.locked == 'ex', "File is not locked exclusively"

        def read(self, *args):
            self._assert_read_lock()
            return super(LockedFile, self).read(*args)

        def readline(self, *args):
            self._assert_read_lock()
            return super(LockedFile, self).readline(*args)

        def readlines(self, *args):
            self._assert_read_lock()
            return super(LockedFile, self).readlines(*args)

        def xreadlines(self, *args):
            self._assert_read_lock()
            return super(LockedFile, self).xreadlines(*args)

        def __iter__(self):
            self._assert_read_lock()
            return super(LockedFile, self).__iter__()

        def next(self):
            self._assert_read_lock()
            return super(LockedFile, self).next()

        def write(self, *args):
            self._assert_write_lock()
            return super(LockedFile, self).write(*args)

        def writelines(self, *args):
            self._assert_write_lock()
            return super(LockedFile, self).writelines(*args)

        def flush(self):
            self._assert_write_lock()
            return super(LockedFile, self).flush()

        def truncate(self, *args):
            self._assert_write_lock()
            return super(LockedFile, self).truncate(*args)

        def close(self):
            self.unlock()
            return super(LockedFile, self).close()

    (The example in the docstring is also my current use case for this.) Thanks for having read until down here, and possibly even answering :)

    Read the article

  • Can I import SQL with PHP?

    - by shin
    Please bear with me (beginner). I am planning to set up an application where visitors can log in and play/mess around with my application. So I want to refresh my PHP application once a day with a cron job, but I have never written a cron job script before. I understand that I have to truncate all the tables/data and add the initial data. With phpMyAdmin, I can import data. Is there any way I can import a SQL file after truncating the tables? What is the best way? Do I have to create the tables and insert all the data? Help and resources will be appreciated. Thanks in advance.
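
    One minimal way to do this (paths, credentials, and filenames below are placeholders): export a dump from phpMyAdmin that already contains the DROP/CREATE/INSERT statements for the initial state, then have a small PHP script feed it to the mysql client from cron. No per-table truncation code is needed, since the dump recreates the tables.

    #!/usr/bin/env php
    <?php
    // Reload the demo database from a canned dump once a day.
    $cmd = 'mysql -u appuser -pSECRET demo_db < /var/www/reset/initial.sql';
    exec($cmd, $output, $status);
    if ($status !== 0) {
        error_log('demo reset failed: ' . implode("\n", $output));
    }

    A crontab line such as "0 4 * * * php /var/www/reset/reset.php" would then run it nightly at 4am.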

    Read the article
