Search Results

Search found 298 results on 12 pages for 'truncate'.

Page 10 of 12

  • SQL Query to return maximums over decades

    - by Abraham Lincoln
    My question is the following. I have a baseball database, and in that baseball database there is a master table which lists every player that has ever played. There is also a batting table, which tracks every players' batting statistics. I created a view to join those two together; hence the masterplusbatting table. CREATE TABLE `Master` ( `lahmanID` int(9) NOT NULL auto_increment, `playerID` varchar(10) NOT NULL default '', `nameFirst` varchar(50) default NULL, `nameLast` varchar(50) NOT NULL default '', PRIMARY KEY (`lahmanID`), KEY `playerID` (`playerID`), ) ENGINE=MyISAM AUTO_INCREMENT=18968 DEFAULT CHARSET=latin1; CREATE TABLE `Batting` ( `playerID` varchar(9) NOT NULL default '', `yearID` smallint(4) unsigned NOT NULL default '0', `teamID` char(3) NOT NULL default '', `lgID` char(2) NOT NULL default '', `HR` smallint(3) unsigned default NULL, PRIMARY KEY (`playerID`,`yearID`,`stint`), KEY `playerID` (`playerID`), KEY `team` (`teamID`,`yearID`,`lgID`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; Anyway, my first query involved finding the most home runs hit every year since baseball began, including ties. The query to do that is the following.... select f.yearID, f.nameFirst, f.nameLast, f.HR from ( select yearID, max(HR) as HOMERS from masterplusbatting group by yearID )as x inner join masterplusbatting as f on f.yearID = x.yearId and f.HR = x.HOMERS This worked great. However, I now want to find the highest HR hitter in each decade since baseball began. Here is what I tried. select f.yearID, truncate(f.yearid/10,0) as decade,f.nameFirst, f.nameLast, f.HR from ( select yearID, max(HR) as HOMERS from masterplusbatting group by yearID )as x inner join masterplusbatting as f on f.yearID = x.yearId and f.HR = x.HOMERS group by decade You can see that I truncated the yearID in order to get 187, 188, 189 etc instead of 1897, 1885,. I then grouped by the decade, thinking that it would give me the highest per decade, but it is not returning the correct values. For example, it's giving me Adrian Beltre with 48 HR's in 2004 but everyone knows that Barry Bonds hit 73 HR in 2001. Can anyone give me some pointers?
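    One way to restructure it, sketched here without being tested against the real data, is to compute the per-decade maximum inside the derived table and then join back on the decade, so ties within a decade are still returned:

        select f.yearID, x.decade, f.nameFirst, f.nameLast, f.HR
        from (
            -- highest single-season HR total within each decade
            select truncate(yearID / 10, 0) as decade, max(HR) as HOMERS
            from masterplusbatting
            group by decade
        ) as x
        inner join masterplusbatting as f
            on truncate(f.yearID / 10, 0) = x.decade
            and f.HR = x.HOMERS
        order by x.decade;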

    Read the article

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem, I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can. Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to dvd weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kinda pointless without a witness server. The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate a log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this: USE dbname GO CHECKPOINT GO BACKUP LOG dbname TO DISK='NULL' WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD GO DBCC SHRINKFILE('dbname_Log', 2048) GO But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work. My Question (TL;DR) How can I shrink my transaction log file without disabling the mirror?
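    For context, a mirrored database must stay in the FULL recovery model, so the inactive portion of the log is only freed up by taking real log backups; a backup to a NUL-style device discards the log records and breaks the backup chain. A hedged sketch of the usual backup-then-shrink sequence, with a placeholder backup path:

        USE dbname;
        GO
        BACKUP LOG dbname
            TO DISK = N'D:\Backups\dbname_log.trn';   -- a real backup device, not NUL
        GO
        DBCC SHRINKFILE (N'dbname_Log', 2048);        -- target size is in MB (about 2 GB here)
        GO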

    Read the article

  • Best way to migrate export/import from SQL Server to oracle

    - by matao
    Hi guys! I'm faced with needing access for reporting to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from sql server to oracle and I'd like some advice on the best way to go about it... The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (oracle) side as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting. What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file using the sql server equivalent of sqlplus as a scheduled task, dump into a Well Known Location, then on the oracle side have a cron job that copies down the file and loads it with sql loader and rebuilds indexes etc. This is all doable, but very manual. Is there one or a combination of FOSS or standard oracle/SQL Server tools that could automate this for me? the Irreducible complexity is the query on one side and building indexes on the other, but I would love to not have to write the CSV dumping detail or the SQL loader script, just say dump this view out to CSV on one side, and on the other truncate and insert into this table from CSV and not worry about mapping column names and all other arcane sqlldr voodoo... best practices? thoughts? comments? edit: I have about 50+ columns all of varying types and lengths in my dataset, which is why I'd prefer to not have to write out how to generate and map each single column...
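    As one possible shape for the Oracle side (the table names and directory object below are invented for illustration), an external table over the nightly CSV avoids hand-writing SQL*Loader control files, and the refresh itself becomes a plain truncate-and-insert:

        -- one-time setup: map the copied-down CSV dump as an external table
        CREATE TABLE report_stage_ext (
            col1 VARCHAR2(4000),
            col2 VARCHAR2(4000),
            col3 VARCHAR2(4000)
        )
        ORGANIZATION EXTERNAL (
            TYPE ORACLE_LOADER
            DEFAULT DIRECTORY import_dir
            ACCESS PARAMETERS (
                RECORDS DELIMITED BY NEWLINE
                FIELDS TERMINATED BY ','
                MISSING FIELD VALUES ARE NULL
            )
            LOCATION ('report_dump.csv')
        );

        -- nightly refresh once the new file is in place
        TRUNCATE TABLE report_table;
        INSERT INTO report_table SELECT * FROM report_stage_ext;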

    Read the article

  • Truncating a string while storing it in an array in c

    - by Nick
    I am trying to create an array of 20 character strings with a maximum of 17 characters that are obtained from a file named "words.dat". After that the program should truncate the string only showing the first 17 characters and completely ignore the rest of that string. However My question is: I am not quite sure how to accomplish this, can anyone give me some insight on how to accomplish this task? Here is my current code as is: #include <stdio.h> #include <stdlib.h> #define WORDS 20 #define LENGTH 18 char function1(char[WORDS][LENGTH]); int main( void ) { char word_array [WORDS] [LENGTH]; function1(word_array); return ( 0 ) ; } char function1(char word_array[WORDS][LENGTH]) { FILE *wordsfile = fopen("words.dat", "r"); int i = 0; if (wordsfile == NULL) printf("\nwords.dat was not properly opened.\n"); else { for (i = 0; i < WORDS; i++) { fscanf(wordsfile, "%17s", word_array[i]); printf ("%s \n", word_array[i]); } fclose(wordsfile); } return (word_array[WORDS][LENGTH]); } words.dat file: Ninja DragonsFury failninja dragonsrage leagueoflegendssurfgthyjnu white black red green yellow green leagueoflegendssughjkuj dragonsfury Sword sodas tiger snakes Swords Snakes sage Sample output: blahblah@fang:~>a.out Ninja DragonsFury failninja dragonsrage leagueoflegendssu rfgthyjnu white black red green yellow green leagueoflegendssu ghjkuj dragonsfury Sword sodas tiger snakes Swords blahblah@fang:~> What will be accomplished afterwards with this program is: After function1 works properly I will then create a second function name "function2" that will look throughout the array for matching pairs of words that match "EXACTLY" including case . After I will create a third function that displays the 20 character strings from the words.dat file that I previously created and the matching words.

    Read the article

  • Python: eliminating stack traces into library code?

    - by Mark Harrison
    When I get a runtime exception from the standard library, it's almost always a problem in my code and not in the library code. Is there a way to truncate the exception stack trace so that it doesn't show the guts of the library package? For example, I would like to get this: Traceback (most recent call last): File "./lmd3-mkhead.py", line 71, in <module> main() File "./lmd3-mkhead.py", line 66, in main create() File "./lmd3-mkhead.py", line 41, in create headver1[depotFile]=rev TypeError: Data values must be of type string or None. and not this: Traceback (most recent call last): File "./lmd3-mkhead.py", line 71, in <module> main() File "./lmd3-mkhead.py", line 66, in main create() File "./lmd3-mkhead.py", line 41, in create headver1[depotFile]=rev File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/__init__.py", line 276, in __setitem__ _DeadlockWrap(wrapF) # self.db[key] = value File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/dbutils.py", line 68, in DeadlockWrap return function(*_args, **_kwargs) File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/__init__.py", line 275, in wrapF self.db[key] = value TypeError: Data values must be of type string or None.

    Read the article

  • Modular Inverse and BigInteger division

    - by dano82
    I've been working on the problem of calculating the modular inverse of an large integer i.e. a^-1 mod n. and have been using BigInteger's built in function modInverse to check my work. I've coded the algorithm as shown in The Handbook of Applied Cryptography by Menezes, et al. Unfortunately for me, I do not get the correct outcome for all integers. My thinking is that the line q = a.divide(b) is my problem as the divide function is not well documented (IMO)(my code suffers similarly). Does BigInteger.divide(val) round or truncate? My assumption is truncation since the docs say that it mimics int's behavior. Any other insights are appreciated. This is the code that I have been working with: private static BigInteger modInverse(BigInteger a, BigInteger b) throws ArithmeticException { //make sure a >= b if (a.compareTo(b) < 0) { BigInteger temp = a; a = b; b = temp; } //trivial case: b = 0 => a^-1 = 1 if (b.equals(BigInteger.ZERO)) { return BigInteger.ONE; } //all other cases BigInteger x2 = BigInteger.ONE; BigInteger x1 = BigInteger.ZERO; BigInteger y2 = BigInteger.ZERO; BigInteger y1 = BigInteger.ONE; BigInteger x, y, q, r; while (b.compareTo(BigInteger.ZERO) == 1) { q = a.divide(b); r = a.subtract(q.multiply(b)); x = x2.subtract(q.multiply(x1)); y = y2.subtract(q.multiply(y1)); a = b; b = r; x2 = x1; x1 = x; y2 = y1; y1 = y; } if (!a.equals(BigInteger.ONE)) throw new ArithmeticException("a and n are not coprime"); return x2; }

    Read the article

  • hibernate: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException

    - by user121196
    When saving an object to the database using Hibernate, the save sometimes fails because certain fields of the object exceed the maximum varchar length defined in the database. Therefore I am using the following approach: 1. attempt to save; 2. if I get a DataException, truncate the fields in the object to the max length specified in the DB definition, then try to save again. However, in the second save after truncation, I'm getting the following exception: hibernate: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Cannot add or update a child row: a foreign key constraint fails Here's the relevant code; what's wrong? public static void saveLenientObject(Object obj){ try { save2(rec); } catch (org.hibernate.exception.DataException e) { // TODO Auto-generated catch block e.printStackTrace(); saveLenientObject(rec, e); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); } } private static void saveLenientObject(Object rec, DataException e) { Util.truncateObject(rec); System.out.println("after truncation "); save2(rec); } public static void save2(Object obj) throws Exception{ try{ beginTransaction(); getSession().save(obj); commitTransaction(); }catch(Exception e){ e.printStackTrace(); rollbackTransaction(); //closeSession(); throw e; }finally{ closeSession(); } }

    Read the article

  • Retain numerical precision in an R data frame?

    - by David
    When I create a dataframe from numeric vectors, R seems to truncate the value below the precision that I require in my analysis: data.frame(x=0.99999996) returns 1 (see update 1) I am stuck when fitting spline(x,y) and two of the x values are set to 1 due to rounding while y changes. I could hack around this but I would prefer to use a standard solution if available. example Here is an example data set d <- data.frame(x = c(0.668732936336141, 0.95351462456867, 0.994620622127435, 0.999602102672081, 0.999987126195509, 0.999999955814133, 0.999999999999966), y = c(38.3026509783688, 11.5895099585560, 10.0443344234229, 9.86152339768516, 9.84461434575695, 9.81648333804257, 9.83306725758297)) The following solution works, but I would prefer something that is less subjective: plot(d$x, d$y, ylim=c(0,50)) lines(spline(d$x, d$y),col='grey') #bad fit lines(spline(d[-c(4:6),]$x, d[-c(4:6),]$y),col='red') #reasonable fit Update 1 Since posting this question, I realize that this will return 1 even though the data frame still contains the original value, e.g. > dput(data.frame(x=0.99999999996)) returns structure(list(x = 0.99999999996), .Names = "x", row.names = c(NA, -1L), class = "data.frame") Update 2 After using dput to post this example data set, and some pointers from Dirk, I can see that the problem is not in the truncation of the x values but the limits of the numerical errors in the model that I have used to calculate y. This justifies dropping a few of the equivalent data points (as in the example red line).

    Read the article

  • Need help with transferring data between MySQL db's using PHP

    - by JM4
    In one of the sites I manage, the client has decided to take on ACH/Bank Account administration where it was previously outsourced. As a result, the information submitted in our online form which used to simply store in a single database for processing now must sit in 'limbo' until the funds used for payment have been verified. My original plan is as follows: At the end of an enrollment, all form data is collected and stored in a single MySQL database. Our internal administrator will receive an email notification reminding him enrollments have taken place. He will process the ACH information collected and wait the 3-4 business days needed for payment to clear. Once the payment information has been returned as Good (haven't considered what I will do with the 'bad' yet), the administrator can log into a secure portal which allows him to click a button to 'process' the full information once compared and verified. the process is simplified as: Enrollment complete: data stored in DB 'A' Funds verified and link clicked: data from 'A' is copied to DB 'B' and 'A' is deleted. I have run similar processes with CSV output before and simply used //transfers old data to archive $transfer = mysql_query('INSERT INTO '.$archive.' SELECT * FROM '.$table) or die(mysql_error()); //empties existing table $query = mysql_query('TRUNCATE TABLE '.$table) or die(mysql_error()); but in those cases, ALL data returned was copied and deleted. I only want to copy and delete a single record. Any idea how to accomplish this?
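    A minimal sketch of the single-record variant of that pattern, assuming both tables share the same column layout and a primary key column named id (both of which are assumptions here):

        -- copy only the verified enrollment, then remove it from the pending table
        INSERT INTO archive_table
        SELECT * FROM pending_table WHERE id = 123;

        DELETE FROM pending_table WHERE id = 123;

    If the two tables live in different databases on the same MySQL server, the same statements work with database-qualified names, for example dbB.archive_table and dbA.pending_table.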

    Read the article

  • Web UI element to represent two different micro-views of data in the same spot?

    - by Chris McCall
    I've been tasked with laying out a portion of a screen for a customer care (call center) app that serves as sort of a header/summary block at the top of the screen. Here's what it looks like: The important part is in the red box. That little tooltip is the biz's vision for how to represent both the numeric SiteId and the textual Site Name all in the same piece of screen real estate. I asked, and the business thinks the Name is more important than the ID, but lists the Id by default, because the Name can't be truncated in the display, and there's only so much horizontal room to put the data. So they go with the Id, because it's fewer characters, and then they have the user mouse-over the Id to display the name (presumably because the tooltip can be of unlimited width and since it's floating over the rest of the screen, the full name will always be displayed. So, here's my question: Is there some better UI metaphor that I don't know about that could get this job done, while meeting the following constraints?: Does not require the mouse (uses a keyboard shortcut to do the "reveal") Allows the user to copy and paste the name Will not truncate the name Provides for the display of both the ID and name in the same spot Works with IE7

    Read the article

  • what happens when you stop VS debugger?

    - by mare
    If I have a line like this ContentRepository.Update(existing); that goes into datastore repository to update some object and I have a try..catch block in this Update function like this: string file = XmlProvider.DataStorePhysicalPath + o.GetType().Name + Path.DirectorySeparatorChar + o.Slug + ".xml"; DataContractSerializer dcs = new DataContractSerializer(typeof (BaseContentObject)); using ( XmlDictionaryWriter myWriter = XmlDictionaryWriter.CreateTextWriter(new FileStream(file, FileMode.Truncate, FileAccess.Write), Encoding.UTF8)) { try { dcs.WriteObject(myWriter, o); myWriter.Close(); } catch (Exception) { // if anything goes wrong, delete the created file if (File.Exists(file)) File.Delete(file); if(myWriter.WriteState!=WriteState.Closed) myWriter.Close(); } } then why would Visual Studio go on with calling Update() if I click "Stop" in debugging session on the above line? For instance, I came to that line by going line by line pressing F10 and now I'm on that line which is colored yellow and I press Stop. Apparently what happens is, VS goes to execute the Update() method and somehow figures out something gone wrong and goes into "catch" and deletes the file, which is wrong, because I want my catch to work when there is a true exception not when I debug a program and force to stop it.

    Read the article

  • JavaScript: Is there a better way to retain your array but efficiently concat or replace items?

    - by Michael Mikowski
    I am looking for the best way to replace or add to elements of an array without deleting the original reference. Here is the set up: var a = [], b = [], c, i, obj; for ( i = 0; i < 100000; i++ ) { a[ i ] = i; b[ i ] = 10000 - i; } obj.data_list = a; Now we want to concatenate b INTO a without changing the reference to a, since it is used in obj.data_list. Here is one method: for ( i = 0; i < b.length; i++ ) { a.push( b[ i ] ); } This seems to be a somewhat terser and 8x (on V8) faster method: a.splice.apply( a, [ a.length, 0 ].concat( b ) ); I have found this useful when iterating over an "in-place" array and don't want to touch the elements as I go (a good practice). I start a new array (let's call it keep_list) with the initial arguments and then add the elements I wish to retain. Finally I use this apply method to quickly replace the truncated array: var keep_list = [ 0, 0 ]; for ( i = 0; i < a.length; i++ ){ if ( some_condition ){ keep_list.push( a[ i ] ); } // truncate array a.length = 0; // And replace contents a.splice.apply( a, keep_list ); There are a few problems with this solution: there is a max call stack size limit of around 50k on V8 I have not tested on other JS engines yet. This solution is a bit cryptic Has anyone found a better way?

    Read the article

  • How to store and synchronize a big list of strings

    - by Joel
    I have a large database table in SQLExpress on Windows, with a particular field of interest 'code'. I have an Apache web server with MySQL on Linux. The web application on the Linux box needs access to the list of all codes. The only thing it will use the list for is checking for the existence of a given code. Having the Linux server call out to the Windows server is impractical as the Windows server is behind a NAT'ed office internet connection, and it may not always be accessible. I have set it so the Windows server will push the list of codes to the web server by means of a simple HTTP POST request. However, at this point I have not implemented the storage of the codes on the Linux box. Should I store them in a MySQL table with a single field 'code'? Then I get fast indexed lookups O(1), however I think synchronization will be an issue - given an updated list of codes, pushed from the Windows box, how would I optimally synchronize the list with the database? TRUNCATE, followed by INSERT? Should I instead store them in a flat file? Then I have O(n) look up time rather than O(1). Additionally an extra constant-time overhead too, as I will be processing the file in Ruby. However, synchronization is easy - simply replace the file.
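    One sketch of an approach that keeps lookups indexed and avoids the brief window where the table is empty during a TRUNCATE-and-reload: load the pushed list into a shadow table and swap it in with an atomic rename (table names here are placeholders):

        CREATE TABLE codes_new LIKE codes;

        -- bulk-insert the list received in the HTTP POST
        INSERT INTO codes_new (code) VALUES ('ABC123'), ('DEF456'), ('GHI789');

        -- RENAME TABLE is atomic in MySQL, so readers never see a half-loaded list
        RENAME TABLE codes TO codes_old, codes_new TO codes;
        DROP TABLE codes_old;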

    Read the article

  • What db fits me?

    - by afvasd
    Dear Everyone I am currently using mysql. I am finding that my schema is getting incredibly complicated. I seek to find a new db that will suit my needs: Let's assume I am building a news aggregrator (which collects news from multiple website). I then run algorithms to determine if two news from different sites are actually referring to the same topic. I run this algorithm to cluster news together. The relationship is depicted below: cluster \--news1 \--word1 \--word2 \--news2 \--word3 \--news3 \--word1 \--word3 And then I will apply some magic and determine the importance of each word. Summing all the importance of each word gives me the importance of a news article. Summing the importance of each news article gives me the importance of a cluster. Note that above cluster there are also subgroups( like split by region etc), and categories (like sports, etc) which I have to determine the importance of that in a particular day per se. I have used views in the past to do so, but I realized that views are very slow. So i will normally do an insert into an actual table and index them for better performance. As you can see this leads to multiple tables derived like (cluster, importance), (news, importance), (words, importance) etc which can get pretty messy. Also the "importance" metric will change. It has become increasingly difficult to alter tables, update data (which I am using TRUNCATE TABLE) and then inserting from null. I am currently looking into something schemaless like Mongodb. I do not need distributedness. I would very much want something that is reasonably fast (which can be indexed) and something that is a lot more flexible that traditional RDMBS. Also, I need something that has some kind of ORM because I personally like ORM a lot. I am currently using sqlalchemy Please help!
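    In SQL terms, the refresh being described looks roughly like the sketch below (table and column names are invented here), which is exactly the pattern that becomes painful to maintain once the importance metric keeps changing:

        -- rebuild the derived news-level scores from the word-level scores
        TRUNCATE TABLE news_importance;
        INSERT INTO news_importance (news_id, importance)
        SELECT news_id, SUM(importance)
        FROM word_importance
        GROUP BY news_id;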

    Read the article

  • Undefined method 'size' for #<File:text.txt> (NoMethodError)

    - by user1354456
    This is the code I am running; it works fine until I get to line 15. The command I run is: ruby ex16.rb text.txt. This is a practice sample I'm writing which is meant to be a simple little text editor: filename = ARGV.first script = $0 puts "We're going to erase #{filename}." puts "if you don't want that, hit CTRL-C (^C)." puts "If you do want that, hit RETURN." print "? " STDIN.gets puts "Opening the file..." target = File.open(filename, 'w') puts "Truncating the file. Goodbye!" target.truncate(target.size) puts "Now I'm going to ask you for three lines." print "line 1: "; line1 = STDIN.gets.chomp() print "line 2: "; line2 = STDIN.gets.chomp() print "line 3: "; line3 = STDIN.gets.chomp() puts "I'm going to write these to the file." target.write(line1) target.write("\n") target.write(line2) target.write("\n") target.write(line3) target.write("\n") puts "And finally, we close it." target.close()

    Read the article

  • PHP bcompiler install

    - by dobs
    How do I install bcompiler on Fedora? I get this error: [root@server server]# pecl install channel://pecl.php.net/bcompiler-0.9.1 downloading bcompiler-0.9.1.tgz ... Starting to download bcompiler-0.9.1.tgz (47,335 bytes) .............done: 47,335 bytes 10 source files, building running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 building in /var/tmp/pear-build-server/bcompiler-0.9.1 running: /var/tmp/bcompiler/configure checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for a sed that does not truncate output... /bin/sed checking for cc... cc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes .... /var/tmp/bcompiler/bcompiler.c:2174: note: expected ‘struct zend_arg_info *’ but argument is of type ‘const struct _zend_arg_info *’ /var/tmp/bcompiler/bcompiler.c: In function ‘apc_serialize_zend_class_entry’: /var/tmp/bcompiler/bcompiler.c:3100: warning: assignment discards qualifiers from pointer target type /var/tmp/bcompiler/bcompiler.c:3107: warning: passing argument 1 of ‘apc_serialize_zend_function_entry’ discards qualifiers from pointer target type /var/tmp/bcompiler/bcompiler.c:2874: note: expected ‘struct zend_function_entry *’ but argument is of type ‘const struct _zend_function_entry *’ /var/tmp/bcompiler/bcompiler.c: In function ‘apc_deserialize_zend_class_entry’: /var/tmp/bcompiler/bcompiler.c:3324: warning: passing argument 1 of ‘apc_deserialize_zend_function_entry’ discards qualifiers from pointer target type /var/tmp/bcompiler/bcompiler.c:2900: note: expected ‘struct zend_function_entry *’ but argument is of type ‘const struct _zend_function_entry *’ make: *** [bcompiler.lo] Error 1 ERROR: 'make' failed yum install bzip2-libs bzip2-devel - Fix it...

    Read the article

  • Very different I/O performance in C++ on Windows

    - by Mr.Gate
    Hi all, I'm a new user and my English is not so good, so I hope to be clear. We're facing a performance problem using large files (1 GB or more), especially (as it seems) when you try to grow them in size. Anyway... to verify our impressions we tried the following (on Win 7 64-bit, 4 cores, 8 GB RAM, 32-bit code compiled with VC2008): a) Open a nonexistent file. Write it from the beginning up to 1 GB in 1 MB slots. Now you have a 1 GB file. Now randomize 10000 positions within that file, seek to each position and write 50 bytes there, no matter what you write. Close the file and look at the results. Time to create the file is quite fast (about 0.3"), and the time to write 10000 times is fast as well (about 0.03"). Very good, this is the beginning. Now try something else... b) Open a nonexistent file, seek to 1 GB minus 1 byte and write just 1 byte. Now you have another 1 GB file. Follow the next steps exactly the same way as in case 'a', close the file and look at the results. Time to create the file is the fastest you can imagine (about 0.00009"), but the write time is something you can't believe... about 90"!!!!! b.1) Open a nonexistent file, don't write any byte. Act as before, randomizing, seeking and writing, close the file and look at the result. The write time is just as long: about 90"!!!!! OK... this is quite amazing. But there's more! c) Open again the file you created in case 'a', don't truncate it... randomize again 10000 positions and act as before. You're as fast as before, about 0.03" to write 10000 times. This sounds OK... try another step. d) Now open the file you created in case 'b', don't truncate it... randomize again 10000 positions and act as before. You're slow again, but the time is reduced to... 45"!! Maybe, trying again, the time will reduce further. I actually wonder why... Any idea? The following is part of the code I used to test what I described in the previous cases (you'll have to change something in order to get a clean compilation, I just cut & pasted from some source code, sorry). The sample can read and write in random, ordered or reverse-ordered mode, but write-only in random order is the clearest test. We tried using std::fstream but also used CreateFile(), WriteFile() and so on directly; the results are the same (even if std::fstream is actually a little slower). Parameters for case 'a' = -f_tempdir_\casea.dat -n10000 -t -p -w Parameters for case 'b' = -f_tempdir_\caseb.dat -n10000 -t -v -w Parameters for case 'b.1' = -f_tempdir_\caseb.dat -n10000 -t -w Parameters for case 'c' = -f_tempdir_\casea.dat -n10000 -w Parameters for case 'd' = -f_tempdir_\caseb.dat -n10000 -w Run the test (and even others) and see... // iotest.cpp : Defines the entry point for the console application.
// #include <windows.h> #include <iostream> #include <set> #include <vector> #include "stdafx.h" double RealTime_Microsecs() { LARGE_INTEGER fr = {0, 0}; LARGE_INTEGER ti = {0, 0}; double time = 0.0; QueryPerformanceCounter(&ti); QueryPerformanceFrequency(&fr); time = (double) ti.QuadPart / (double) fr.QuadPart; return time; } int main(int argc, char* argv[]) { std::string sFileName ; size_t stSize, stTimes, stBytes ; int retval = 0 ; char *p = NULL ; char *pPattern = NULL ; char *pReadBuf = NULL ; try { // Default stSize = 1<<30 ; // 1Gb stTimes = 1000 ; stBytes = 50 ; bool bTruncate = false ; bool bPre = false ; bool bPreFast = false ; bool bOrdered = false ; bool bReverse = false ; bool bWriteOnly = false ; // Comsumo i parametri for(int index=1; index < argc; ++index) { if ( '-' != argv[index][0] ) throw ; switch(argv[index][1]) { case 'f': sFileName = argv[index]+2 ; break ; case 's': stSize = xw::str::strtol(argv[index]+2) ; break ; case 'n': stTimes = xw::str::strtol(argv[index]+2) ; break ; case 'b':stBytes = xw::str::strtol(argv[index]+2) ; break ; case 't': bTruncate = true ; break ; case 'p' : bPre = true, bPreFast = false ; break ; case 'v' : bPreFast = true, bPre = false ; break ; case 'o' : bOrdered = true, bReverse = false ; break ; case 'r' : bReverse = true, bOrdered = false ; break ; case 'w' : bWriteOnly = true ; break ; default: throw ; break ; } } if ( sFileName.empty() ) { std::cout << "Usage: -f<File Name> -s<File Size> -n<Number of Reads and Writes> -b<Bytes per Read and Write> -t -p -v -o -r -w" << std::endl ; std::cout << "-t truncates the file, -p pre load the file, -v pre load 'veloce', -o writes in order mode, -r write in reverse order mode, -w Write Only" << std::endl ; std::cout << "Default: 1Gb, 1000 times, 50 bytes" << std::endl ; throw ; } if ( !stSize || !stTimes || !stBytes ) { std::cout << "Invalid Parameters" << std::endl ; return -1 ; } size_t stBestSize = 0x00100000 ; std::fstream fFile ; fFile.open(sFileName.c_str(), std::ios_base::binary|std::ios_base::out|std::ios_base::in|(bTruncate?std::ios_base::trunc:0)) ; p = new char[stBestSize] ; pPattern = new char[stBytes] ; pReadBuf = new char[stBytes] ; memset(p, 0, stBestSize) ; memset(pPattern, (int)(stBytes&0x000000ff), stBytes) ; double dTime = RealTime_Microsecs() ; size_t stCopySize, stSizeToCopy = stSize ; if ( bPre ) { do { stCopySize = std::min(stSizeToCopy, stBestSize) ; fFile.write(p, stCopySize) ; stSizeToCopy -= stCopySize ; } while (stSizeToCopy) ; std::cout << "Creating time is: " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl ; } else if ( bPreFast ) { fFile.seekp(stSize-1) ; fFile.write(p, 1) ; std::cout << "Creating Fast time is: " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl ; } size_t stPos ; ::srand((unsigned int)dTime) ; double dReadTime, dWriteTime ; stCopySize = stTimes ; std::vector<size_t> inVect ; std::vector<size_t> outVect ; std::set<size_t> outSet ; std::set<size_t> inSet ; // Prepare vector and set do { stPos = (size_t)(::rand()<<16) % stSize ; outVect.push_back(stPos) ; outSet.insert(stPos) ; stPos = (size_t)(::rand()<<16) % stSize ; inVect.push_back(stPos) ; inSet.insert(stPos) ; } while (--stCopySize) ; // Write & read using vectors if ( !bReverse && !bOrdered ) { std::vector<size_t>::iterator outI, inI ; outI = outVect.begin() ; inI = inVect.begin() ; stCopySize = stTimes ; dReadTime = 0.0 ; dWriteTime = 0.0 ; do { dTime = RealTime_Microsecs() ; fFile.seekp(*outI) ; fFile.write(pPattern, stBytes) ; dWriteTime += 
RealTime_Microsecs() - dTime ; ++outI ; if ( !bWriteOnly ) { dTime = RealTime_Microsecs() ; fFile.seekg(*inI) ; fFile.read(pReadBuf, stBytes) ; dReadTime += RealTime_Microsecs() - dTime ; ++inI ; } } while (--stCopySize) ; std::cout << "Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " (Ave: " << xw::str::itoa(dWriteTime/stTimes, 10, 'f') << ")" << std::endl ; if ( !bWriteOnly ) { std::cout << "Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " (Ave: " << xw::str::itoa(dReadTime/stTimes, 10, 'f') << ")" << std::endl ; } } // End // Write in order if ( bOrdered ) { std::set<size_t>::iterator i = outSet.begin() ; dWriteTime = 0.0 ; stCopySize = 0 ; for(; i != outSet.end(); ++i) { stPos = *i ; dTime = RealTime_Microsecs() ; fFile.seekp(stPos) ; fFile.write(pPattern, stBytes) ; dWriteTime += RealTime_Microsecs() - dTime ; ++stCopySize ; } std::cout << "Ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime/stCopySize, 10, 'f') << ")" << std::endl ; if ( !bWriteOnly ) { i = inSet.begin() ; dReadTime = 0.0 ; stCopySize = 0 ; for(; i != inSet.end(); ++i) { stPos = *i ; dTime = RealTime_Microsecs() ; fFile.seekg(stPos) ; fFile.read(pReadBuf, stBytes) ; dReadTime += RealTime_Microsecs() - dTime ; ++stCopySize ; } std::cout << "Ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime/stCopySize, 10, 'f') << ")" << std::endl ; } }// End // Write in reverse order if ( bReverse ) { std::set<size_t>::reverse_iterator i = outSet.rbegin() ; dWriteTime = 0.0 ; stCopySize = 0 ; for(; i != outSet.rend(); ++i) { stPos = *i ; dTime = RealTime_Microsecs() ; fFile.seekp(stPos) ; fFile.write(pPattern, stBytes) ; dWriteTime += RealTime_Microsecs() - dTime ; ++stCopySize ; } std::cout << "Reverse ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime/stCopySize, 10, 'f') << ")" << std::endl ; if ( !bWriteOnly ) { i = inSet.rbegin() ; dReadTime = 0.0 ; stCopySize = 0 ; for(; i != inSet.rend(); ++i) { stPos = *i ; dTime = RealTime_Microsecs() ; fFile.seekg(stPos) ; fFile.read(pReadBuf, stBytes) ; dReadTime += RealTime_Microsecs() - dTime ; ++stCopySize ; } std::cout << "Reverse ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime/stCopySize, 10, 'f') << ")" << std::endl ; } }// End dTime = RealTime_Microsecs() ; fFile.close() ; std::cout << "Flush/Close Time is " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl ; std::cout << "Program Terminated" << std::endl ; } catch(...) { std::cout << "Something wrong or wrong parameters" << std::endl ; retval = -1 ; } if ( p ) delete []p ; if ( pPattern ) delete []pPattern ; if ( pReadBuf ) delete []pReadBuf ; return retval ; }

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kind of storage in database. Row Store and Column Store. Row store does exactly as the name suggests – stores rows of data on a page – and column store stores all the data in a column on the same page. These columns are much easier to search – instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need only to search much lesser number of the columns. This means major increases in search speed and hard drive use. Additionally, the column store indexes are heavily compressed, which translates to even greater memory and faster searches. I am sure this looks very exciting and it does not mean that you convert every single index from row store to column store index. One has to understand the proper places where to use row store or column store indexes. Let us understand in this article what is the difference in Columnstore type of index. Column store indexes are run by Microsoft’s VertiPaq technology. However, all you really need to know is that this method of storing data is columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don’t have to learn new syntax to create them. You just need to specify the keyword “COLUMNSTORE” and enter the data as you normally would. Keep in mind that once you add a column store to a table, though, you cannot delete, insert or update the data – it is READ ONLY. However, since column store will be mainly used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index. A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between column store and row store approaches is illustrated below: In case of the row store indexes multiple pages will contain multiple rows of the columns spanning across multiple pages. In case of column store indexes multiple pages will contain multiple single columns. This will lead only the columns needed to solve a query will be fetched from disk. Additionally there is good chance that there will be redundant data in a single column which will further help to compress the data, this will have positive effect on buffer hit rate as most of the data will be in memory and due to same it will not need to be retrieved. Let us see small example of how columnstore index improves the performance of the query on a large table. As a first step let us create databaseset which is large enough to show performance impact of columnstore index. The time taken to create sample database may vary on different computer based on the resources. 
USE AdventureWorks GO -- Create New Table CREATE TABLE [dbo].[MySalesOrderDetail]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [LineTotal] [numeric](38, 6) NOT NULL, [rowguid] [uniqueidentifier] NOT NULL, [ModifiedDate] [datetime] NOT NULL ) ON [PRIMARY] GO -- Create clustered index CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail] ( [SalesOrderDetailID]) GO -- Create Sample Data Table -- WARNING: This Query may run upto 2-10 minutes based on your systems resources INSERT INTO [dbo].[MySalesOrderDetail] SELECT S1.* FROM Sales.SalesOrderDetail S1 GO 100 Now let us do quick performance test. I have kept STATISTICS IO ON for measuring how much IO following queries take. In my test first I will run query which will use regular index. We will note the IO usage of the query. After that we will create columnstore index and will measure the IO of the same. -- Performance Test -- Comparing Regular Index with ColumnStore Index USE AdventureWorks GO SET STATISTICS IO ON GO -- Select Table with regular Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID GO -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0. -- Create ColumnStore Index CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore] ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID) GO -- Select Table with Columnstore Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID GO It is very clear from the results that query is performance extremely fast after creating ColumnStore Index. The amount of the pages it has to read to run query is drastically reduced as the column which are needed in the query are stored in the same page and query does not have to go through every single page to read those columns. If we enable execution plan and compare we can see that column store index performance way better than regular index in this case. Let us clean up the database. -- Cleanup DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] GO TRUNCATE TABLE dbo.MySalesOrderDetail GO DROP TABLE dbo.MySalesOrderDetail GO In future posts we will see cases where Columnstore index is not appropriate solution as well few other tricks and tips of the columnstore index. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge which people face is not taking backup, but the biggest challenge is to restore a backup successfully. I have seen so many different examples where users have failed to restore their database because they made some mistake while they take backup and were not aware of the same. Tail Log backup was such an issue in earlier version of SQL Server but in the latest version of SQL Server, Microsoft team has fixed the confusion with additional information on the backup and restore screen itself. Now they have additional information, there are a few more people confused as they have no clue about this. Previously they did not find this as a issue and now they are finding tail log as a new learning. Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Fields series database expert Tim Radney (partner at Linchpin People) explains in a very simple words, Backing Up and Recovering the Tail End of a Transaction Log. Many times when restoring a database over an existing database SQL Server will warn you about needing to make a tail end of the log backup. This might be your reminder that you have to choose to overwrite the database or could be your reminder that you are about to write over and lose any transactions since the last transaction log backup. You might be asking yourself “What is the tail end of the transaction log”. The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log. Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement however if your organization is taking transaction logs backup every 15 minutes, then your potential risk of data loss is up to 15 minutes. Depending on the extent of the issue causing you to have to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged then you wouldn’t be able to recover the tail end of the log. If you do have access to the physical log file then you can still back up the tail end of the log. In 2013 I presented a session at the PASS Summit called “The Ultimate Tail Log Backup and Restore” and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can’t become more corrupt than that. I attempt to bring the database back online to change the state to RECOVERY PENDING and then backup the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR. It as its name says, does not try to truncate the log. This is a great demo however how could I achieve backing up the tail end of the log if the failure destroys my entire instance of SQL and all I had was the LDF file? During my demonstration I also demonstrate that I can attach the log file to a database on another instance and then back up the tail end of the log. 
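    For reference, the tail-log backup being described boils down to something like this (the database name and backup path are placeholders):

        -- capture the committed transactions even though the data file is damaged or missing
        BACKUP LOG [dbname]
        TO DISK = N'D:\Backups\dbname_tail.trn'
        WITH NO_TRUNCATE;   -- per the article, equivalent to COPY_ONLY plus CONTINUE_AFTER_ERROR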
    If I am performing proper backups then my most recent full, differential and log files should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete my data file and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE just like in the first example. I encourage each of you to view my blog post and watch the video demonstration on how to perform these tasks. I really hope that none of you ever have to perform this in production, however it is a really good idea to know how to do it just in case. It really isn't a matter of "IF" you will have to perform a restore of a production system but more of a "WHEN". Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss and not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL Backup and Recovery, a must-have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Return Json causes save file dialog in asp.net mvc

    - by Eran
    Hi, I'm integrating jquery fullcalendar into my application. Here is the code i'm using: in index.aspx: <script type="text/javascript"> $(document).ready(function() { $('#calendar').fullCalendar({ events: "/Scheduler/CalendarData" }); }); </script> <div id="calendar"> </div> Here is the code for Scheduler/CalendarData: public ActionResult CalendarData() { IList<CalendarDTO> tasksList = new List<CalendarDTO>(); tasksList.Add(new CalendarDTO { id = 1, title = "Google search", start = ToUnixTimespan(DateTime.Now), end = ToUnixTimespan(DateTime.Now.AddHours(4)), url = "www.google.com" }); tasksList.Add(new CalendarDTO { id = 1, title = "Bing search", start = ToUnixTimespan(DateTime.Now.AddDays(1)), end = ToUnixTimespan(DateTime.Now.AddDays(1).AddHours(4)), url = "www.bing.com" }); return Json(tasksList,JsonRequestBehavior.AllowGet); } private long ToUnixTimespan(DateTime date) { TimeSpan tspan = date.ToUniversalTime().Subtract( new DateTime(1970, 1, 1, 0, 0, 0)); return (long)Math.Truncate(tspan.TotalSeconds); } public ActionResult Index() { return View("Index"); } I also have the following code inside head tag in site.master: <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server" /> <link href="<%= Url.Content("~/Content/jquery-ui-1.7.2.custom.css") %>" rel="stylesheet" type="text/css" /> <link href="~Perspectiva/Content/Site.css" rel="stylesheet" type="text/css" /> <link href="~Perspectiva/Content/fullcalendar.css" rel="stylesheet" type="text/css" /> <script src="~Perspectiva/Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="~Perspectiva/Scripts/fullcalendar.js" type="text/javascript"></script> <script src="/Scripts/MicrosoftAjax.debug.js" type="text/javascript"></script> <script src="/Scripts/MicrosoftMvcAjax.debug.js" type="text/javascript"></script> Everything I did was pretty much copied from http://szahariev.blogspot.com/2009/08/jquery-fullcalendar-and-aspnet-mvc.html When navigating to /scheduler/calendardata I get a prompt for saving the json data which contents are exactly what I created in the CalendarData function. What do I need to do in order to render the page correctly? Thanks in advance, Eran

    Read the article

  • Which functions in the C standard library commonly encourage bad practice?

    - by Ninefingers
    Hello all. This is inspired by this question and the comments on one particular answer, in that I learnt that strncpy is not a very safe string-handling function in C and that it pads zeros until it reaches n, something I was unaware of. Specifically, to quote R..:

    strncpy does not null-terminate, and does null-pad the whole remainder of the destination buffer, which is a huge waste of time. You can work around the former by adding your own null padding, but not the latter. It was never intended for use as a "safe string handling" function, but for working with fixed-size fields in Unix directory tables and database files. snprintf(dest, n, "%s", src) is the only correct "safe strcpy" in standard C, but it's likely to be a lot slower. By the way, truncation in itself can be a major bug and in some cases might lead to privilege elevation or DoS, so throwing "safe" string functions that truncate their output at a problem is not a way to make it "safe" or "secure". Instead, you should ensure that the destination buffer is the right size and simply use strcpy (or better yet, memcpy if you already know the source string length).

    And from Jonathan Leffler:

    Note that strncat() is even more confusing in its interface than strncpy() - what exactly is that length argument, again? It isn't what you'd expect based on what you supply strncpy() etc - so it is more error prone even than strncpy(). For copying strings around, I'm increasingly of the opinion that there is a strong argument that you only need memmove() because you always know all the sizes ahead of time and make sure there's enough space ahead of time. Use memmove() in preference to any of strcpy(), strcat(), strncpy(), strncat(), memcpy().

    So, I'm clearly a little rusty on the C standard library. Therefore, I'd like to pose the question: what C standard library functions are used inappropriately, or in ways that may cause or lead to security problems, code defects, or inefficiencies? In the interests of objectivity, I have a number of criteria for an answer:

    - Please, if you can, cite design reasons behind the function in question, i.e. its intended purpose.
    - Please highlight the misuse to which the code is currently put.
    - Please state why that misuse may lead towards a problem. I know that should be obvious, but it prevents soft answers.

    Please avoid:

    - Debates over naming conventions of functions (except where this unequivocally causes confusion).
    - "I prefer x over y" - preference is OK; we all have them, but I'm interested in actual unexpected side effects and how to guard against them.

    As this is likely to be considered subjective and has no definite answer, I'm flagging for community wiki straight away. I am also working as per C99.
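
    To make the quoted behaviour concrete, here is a small, hedged C sketch (buffer size and strings are invented for illustration) contrasting strncpy's lack of termination with the snprintf idiom mentioned above:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *src = "a string longer than the buffer";
        char dst[8];

        /* strncpy: when src does not fit, dst is left without a terminating NUL,
           and when src is shorter, every remaining byte is zero-padded. */
        strncpy(dst, src, sizeof dst);
        dst[sizeof dst - 1] = '\0';   /* the caller has to terminate by hand */

        /* snprintf: always NUL-terminates (for n > 0) and returns the length it
           would have written, so truncation can be detected and handled. */
        int needed = snprintf(dst, sizeof dst, "%s", src);
        if (needed >= (int)sizeof dst)
            fprintf(stderr, "truncated: needed %d bytes\n", needed + 1);

        printf("%s\n", dst);
        return 0;
    }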

    Read the article

  • Duplicate display of Posts from controller - conditions and joins problem - Ruby on Rails

    - by bgadoci
    I have built a blog application using Ruby on Rails. In the application I have posts and tags. Post has_many :tags and Tag belongs_to :post. In the /views/posts/index.html view I want to display two things. First is a listing of all posts ordered 'created_at DESC', and then in the sidebar I want to reference my Tags table, group its records, and display each group as a link that allows viewing all posts with that tag. With the code below, I am able to display the tag groups and successfully display all posts with a given tag. There are two problems with it, though:

    - In the /posts view, the code seems to be referencing the Tag table and displaying the post multiple times, directly correlated to how many tags that post has (i.e. if the post has 3 tags it will display the post 3 times).
    - The /posts view only displays posts that have tags. If the post doesn't have a tag, it is not displayed at all.

    /views/posts/index.html.erb

    <%= render :partial => @posts %>

    /views/posts/_post.html.erb

    <% div_for post do %>
      <h2><%= link_to_unless_current h(post.title), post %></h2>
      <i>Posted <%= time_ago_in_words(post.created_at) %></i> ago
      <%= simple_format h truncate(post.body, :length => 300) %>
      <%= link_to "Read More", post %> |
      <%= link_to "View & Add Comments (#{post.comments.count})", post %>
      <hr/>
    <% end %>

    /models/post.rb

    class Post < ActiveRecord::Base
      validates_presence_of :body, :title
      has_many :comments, :dependent => :destroy
      has_many :tags, :dependent => :destroy
      cattr_reader :per_page
      @@per_page = 10
    end

    posts_controller.rb

    def index
      @tag_counts = Tag.count(:group => :tag_name, :order => 'updated_at DESC', :limit => 10)
      @posts = Post.all(:joins => :tags,
                        :conditions => (params[:tag_name] ? { :tags => { :tag_name => params[:tag_name] } } : {})
                       ).paginate :page => params[:page], :per_page => 5
      respond_to do |format|
        format.html # index.html.erb
        format.xml  { render :xml => @posts }
        format.json { render :json => @posts }
        format.atom
      end
    end
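
    For what it's worth, the duplication described above is what an INNER JOIN does: each matching tag row yields another copy of the post, and an untagged post has no matching row at all, so it disappears. A hedged sketch of one possible rewrite — Rails 2.x-era syntax to match the question; the conditional join and the DISTINCT select are the assumed additions, not the poster's code — might look like:

    # Sketch only: join against tags only when a tag filter is requested, and
    # collapse the duplicate rows the join produces with a DISTINCT select.
    def index
      @tag_counts = Tag.count(:group => :tag_name,
                              :order => 'updated_at DESC', :limit => 10)

      find_options = { :order => 'created_at DESC' }
      if params[:tag_name]
        find_options[:joins]      = :tags
        find_options[:select]     = 'DISTINCT posts.*'
        find_options[:conditions] = { :tags => { :tag_name => params[:tag_name] } }
      end

      @posts = Post.paginate find_options.merge(:page => params[:page],
                                                :per_page => 5)
    end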

    Read the article

  • How can I take the first 100 characters of html content ( without stripping the TAGS! )

    - by Atomiton
    There are lots of questions on how to strip html tags, but not many on functions/methods to close them. Here's the situation. I have a 500-character Message summary (which includes html tags), but I only want the first 100 characters. The problem is that if I truncate the message, it could end in the middle of an html tag... which messes things up. Assuming the html is something like this:

    <div class="bd">"Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. <br/>
    <br/>Some Dates: April 30 - May 2, 2010 <br/>
    <p>Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. <em>Duis aute irure dolor in reprehenderit</em> in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. <br/>
    </p>
    For more information about Lorem Ipsum doemdloe, visit: <br/>
    <a href="http://www.somesite.com" title="Some Conference">Some text link</a><br/>
    </div>

    How would I take the first ~100 characters or so? (Although ideally that would be the first approximately 100 characters of "CONTENT", i.e. the text in between the html tags.) I'm assuming the best way to do this would be a recursive algorithm that keeps track of the open html tags and appends the closing tags that would otherwise be cut off, but that may not be the best approach. My first thought is to use recursion to count nested tags, and when we reach 100 characters, look for the next "<" and then use recursion to write the closing html tags needed from there. The reason for doing this is to make a short summary of existing articles without requiring the user to go back and provide summaries for all the articles. I want to keep the html formatting, if possible. NOTE: Please ignore that the html isn't totally semantic. This is what I have to deal with from my WYSIWYG. EDIT: I added a potential solution (that seems to work). I figure others will run into this problem as well. I'm not sure it's the best... and it's probably not totally robust (in fact, I know it isn't), but I'd appreciate any feedback.
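
    Along the lines the poster sketches, here is one possible (deliberately naive, non-recursive) C# illustration. The class name, the tag handling, and the void-element list are all assumptions made for the sake of the example; it is not robust against comments, CDATA, or '>' characters inside attribute values.

    using System;
    using System.Collections.Generic;
    using System.Text;

    static class HtmlTruncator
    {
        // Elements treated as self-contained, so they are never pushed as "open".
        static readonly HashSet<string> VoidTags =
            new HashSet<string> { "br", "hr", "img", "input", "meta", "link" };

        // Walks the markup character by character, counts only text outside of tags
        // toward the limit, tracks open element names on a stack, and closes
        // whatever is still open once the limit is reached.
        public static string TruncateHtml(string html, int maxTextChars)
        {
            var output = new StringBuilder();
            var open = new Stack<string>();
            int textCount = 0, i = 0;

            while (i < html.Length && textCount < maxTextChars)
            {
                if (html[i] == '<')
                {
                    int end = html.IndexOf('>', i);
                    if (end < 0) break;                      // malformed tail: stop here

                    string tag = html.Substring(i + 1, end - i - 1).Trim();
                    bool isClosing = tag.StartsWith("/");
                    string name = (isClosing ? tag.Substring(1) : tag)
                                  .Split(new[] { ' ', '/' }, 2)[0].ToLowerInvariant();

                    if (isClosing)
                    {
                        if (open.Count > 0) open.Pop();      // assumes well-nested input
                    }
                    else if (!VoidTags.Contains(name) && !tag.EndsWith("/"))
                    {
                        open.Push(name);
                    }

                    output.Append(html, i, end - i + 1);     // copy the whole tag through
                    i = end + 1;
                }
                else
                {
                    output.Append(html[i]);                  // visible content counts
                    textCount++;
                    i++;
                }
            }

            while (open.Count > 0)                           // close anything left open
                output.Append("</").Append(open.Pop()).Append('>');

            return output.ToString();
        }

        static void Main()
        {
            string html = "<div><p>Lorem ipsum <em>dolor</em> sit amet</p><p>second paragraph</p></div>";
            Console.WriteLine(TruncateHtml(html, 20));
        }
    }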

    Read the article

  • file.createNewFile() creates files with last-modified time before actual creation time

    - by Kaleb Pederson
    I'm using JPoller to detect changes to files in a specific directory, but it's missing files because they end up with a timestamp earlier than their actual creation time. Here's how I test:

    public static void main(String [] files) {
        for (String file : files) {
            File f = new File(file);
            if (f.exists()) {
                System.err.println(file + " exists");
                continue;
            }
            try {
                // find out the current time, I would hope to assume that the last-modified
                // time on the file will definitely be later than this
                System.out.println("-----------------------------------------");
                long time = System.currentTimeMillis();

                // create the file
                System.out.println("Creating " + file + " at " + time);
                f.createNewFile();

                // let's see what the timestamp actually is (I've only seen it < time)
                System.out.println(file + " was last modified at: " + f.lastModified());

                // well, ok, what if I explicitly set it to time?
                f.setLastModified(time);
                System.out.println("Updated modified time on " + file + " to " + time
                        + " with actual " + f.lastModified());
            } catch (IOException e) {
                System.err.println("Unable to create file");
            }
        }
    }

    And here's what I get for output:

    -----------------------------------------
    Creating test.7 at 1272324597956
    test.7 was last modified at: 1272324597000
    Updated modified time on test.7 to 1272324597956 with actual 1272324597000
    -----------------------------------------
    Creating test.8 at 1272324597957
    test.8 was last modified at: 1272324597000
    Updated modified time on test.8 to 1272324597957 with actual 1272324597000
    -----------------------------------------
    Creating test.9 at 1272324597957
    test.9 was last modified at: 1272324597000
    Updated modified time on test.9 to 1272324597957 with actual 1272324597000

    The result is a race condition:

    1. JPoller records the time of its last check as xyz...123
    2. File created at xyz...456
    3. File last-modified timestamp actually reads xyz...000
    4. JPoller looks for new/updated files with a timestamp greater than xyz...123
    5. JPoller ignores the newly added file because xyz...000 is less than xyz...123
    6. I pull my hair out for a while

    I tried digging into the code, but both lastModified() and createNewFile() eventually resolve to native calls, so I'm left with little information. For test.9, I lose 957 milliseconds. What kind of accuracy can I expect? Are my results going to vary by operating system or file system? Suggested workarounds? NOTE: I'm currently running Linux with an XFS filesystem. I wrote a quick program in C and the stat system call shows st_mtime as truncate(xyz...000/1000).
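
    For anyone hitting the same symptom: as the stat output above suggests, many filesystem/JVM combinations only expose mtime at one-second granularity, so the sub-second part of the creation time is simply dropped. A hedged workaround sketch follows — flooring the poller's checkpoint to whole seconds and comparing with >= is the idea; the class name, file name, and printing are illustrative scaffolding, not JPoller's actual code:

    import java.io.File;
    import java.io.IOException;

    public class PollerCheckpointSketch {
        public static void main(String[] args) throws IOException {
            // Millisecond-precision "last check" time, as a poller might record it.
            long checkpointMillis = System.currentTimeMillis();

            // Floor the checkpoint to whole seconds so it can never be "ahead of"
            // a filesystem timestamp that only stores seconds.
            long checkpointFloor = (checkpointMillis / 1000L) * 1000L;

            File f = new File("poller-test.tmp");
            f.createNewFile();
            long mtime = f.lastModified();

            System.out.println("checkpoint   = " + checkpointMillis);
            System.out.println("floored      = " + checkpointFloor);
            System.out.println("lastModified = " + mtime);

            // Compare against the floored value, and use >= rather than >, so a file
            // stamped in the same second as the check is not silently skipped.
            if (mtime >= checkpointFloor) {
                System.out.println("this file would be picked up on the next poll");
            }

            f.delete();  // clean up the illustration file
        }
    }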

    Read the article

< Previous Page | 6 7 8 9 10 11 12  | Next Page >