Search Results

Search found 2896 results on 116 pages for 'comparison operators'.

Page 7/116 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison

    - by pinaldave
    Earlier, I have written about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative. I got many emails asking for a performance analysis of paging. Here is a quick analysis. The real challenge of paging is all the unnecessary IO reads from the database. Network traffic is one of the reasons why paging has become a very expensive operation. I have seen many legacy applications where a complete resultset is brought back to the application and paging is done there. As you have read earlier, SQL Server 2011 offers a better alternative to an age-old solution. This article has been divided into two parts: Test 1: Performance Comparison of the Two Different Pages Using the SQL Server 2011 Method In this test, we will analyze the performance of two different pages, where one is at the beginning of the table and the other one is at its end. Test 2: Performance Comparison of the Two Different Pages Using CTE (Earlier Solution from SQL Server 2005/2008) and the New Method of SQL Server 2011 We will explore this in the next article. This article will tackle Test 1 first. Test 1: Retrieving a page from two different locations of the table. Run the following T-SQL script and compare the performance. SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO You will notice that when we are reading the page from the beginning of the table, the database pages read are much lower than when the page is read from the end of the table. This is very interesting: as the OFFSET changes, PAGE IO increases or decreases accordingly. In the typical search-engine scenario, people usually read only the first few pages, which means that IO increases as we navigate to higher page numbers. I am really impressed because, using the new method of SQL Server 2011, PAGE IO is much lower when the first few pages of the navigation are read. Test 2: Retrieving a page from two different locations of the table and comparing to earlier versions. In this test, we will compare the queries of Test 1 with the earlier solution via a Common Table Expression (CTE), which we utilized in SQL Server 2005 and SQL Server 2008.
    Test 2 A : Page early in the table -- Test with pages early in table USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 ;WITH CTE_SalesOrderDetail AS ( SELECT *, ROW_NUMBER() OVER( ORDER BY SalesOrderDetailID) AS RowNumber FROM Sales.SalesOrderDetail PC) SELECT * FROM CTE_SalesOrderDetail WHERE RowNumber >= @PageNumber*@RowsPerPage+1 AND RowNumber <= (@PageNumber+1)*@RowsPerPage ORDER BY SalesOrderDetailID GO SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO Test 2 B : Page later in the table -- Test with pages later in table USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 ;WITH CTE_SalesOrderDetail AS ( SELECT *, ROW_NUMBER() OVER( ORDER BY SalesOrderDetailID) AS RowNumber FROM Sales.SalesOrderDetail PC) SELECT * FROM CTE_SalesOrderDetail WHERE RowNumber >= @PageNumber*@RowsPerPage+1 AND RowNumber <= (@PageNumber+1)*@RowsPerPage ORDER BY SalesOrderDetailID GO SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO From the resultset, it is very clear that the pages read by the earlier solution are always much higher than with the new technique introduced in SQL Server 2011, even if we don't retrieve all the data to the screen. If you look carefully at both comparisons, the PAGE IO is much lower with the new technique introduced in SQL Server 2011, both when we read the page from the beginning of the table and when we read it from the end. I consider this a big improvement, as paging is one of the most used features in most applications. The solution introduced in SQL Server 2011 is very elegant because it also improves the performance of the query and, more broadly, of the database. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SSI: Failed String Comparison with CGI Environment Variable [migrated]

    - by Calyo Delphi
    I am currently working on developing a personal website. It's not my first time doing this, but this is my first major foray into implementing SSI. I've run myself into a wall, however, with an if-else directive that uses one of the CGI environment variables as part of its comparison. Even after some limited attempts at debugging, all of the output and documentation that I have means that the comparisons being made should fail outright. This is not the case, and the wrong evaluation is being made by the if-else directive. Here's the code in the file index.shtml: <head> <!--#set var="page" value="Home" --> <!--#include file="headlinks.shtml" --> <style> img#ref { float: right; margin-left: 8px; border-width: 0px; } </style> </head> Here's the code in the file headlinks.shtml: <title><!--#echo var="page" --> &ndash; <!--#echo var="HTTP_HOST" --></title> <!--#set var="docroot" value="${DOCUMENT_ROOT}" --> <!--#echo var="docroot" --> <!--#if expr="( $docroot != '/Applications/MAMP/htdocs' ) || ( $docroot != '/home/dragarch/public_html' )" --> <link rel="stylesheet" type="text/css" href="../style.css"> <link rel="shortcut icon" type="image/svg+xml" href="../favicon.svg" /> <!--#else --> <link rel="stylesheet" type="text/css" href="style.css"> <link rel="shortcut icon" type="image/svg+xml" href="favicon.svg" /> <!--#endif --> And here's the output for the file index.shtml: <title>Home &ndash; dragarch</title> /Applications/MAMP/htdocs <link rel="stylesheet" type="text/css" href="../style.css"> <link rel="shortcut icon" type="image/svg+xml" href="../favicon.svg" /> Both style.css and favicon.svg are in the document root with index.shtml, so the if directive should fail and default to the output of the else directive. As you can see, while the document root (which is currently the MAMP htdocs folder on my own notebook) is correct according to the output of the echo directive, the comparison in the if-else directive fails to compare the strings properly. I'm using this page for my documentation: http://httpd.apache.org/docs/2.2/mod/mod_include.html I'm at a complete loss as to why this is the case, and need a bit of help here. EDIT: I should note that dragarch is a hostname that I configured in /etc/hosts to point to 127.0.0.1 so I could test the site without having to use localhost. It has no real effect on the functionality of anything, other than to just act as a prettier hostname to use.
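
    A side observation, not part of the original post: when the two document roots differ, an expression of the form ( docroot != A ) || ( docroot != B ) is true for every possible value of docroot, so the else branch can never be taken. The same logic checked in Python, purely as an illustration:

        # For two distinct constants, "x != A or x != B" is always True
        for docroot in ("/Applications/MAMP/htdocs", "/home/dragarch/public_html", "/somewhere/else"):
            print(docroot, (docroot != "/Applications/MAMP/htdocs") or (docroot != "/home/dragarch/public_html"))
        # All three lines print True, matching the behaviour described above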

    Read the article

  • 1 to 1 Comparison and Ranking System

    - by David
    I'm looking to create a comparison and ranking system which allows users to view 2 items, click on the one that they feel is the better one, and then get presented with 2 more random items, continuing until they decide to stop. In the background, I want the system to use these wins and losses to rank each item in an overall ranking table so I can then see what is #1 and what isn't. I haven't got a clue where to begin with the formula, but I imagine I need to log wins and losses. Any help/direction appreciated!
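
    One common way to turn pairwise win/loss results into an overall ranking is an Elo-style rating (an assumption on my part, not something the question specifies); a minimal Python sketch with illustrative names:

        # Minimal Elo-style update; 'winner' and 'loser' are the items' current ratings
        def elo_update(winner, loser, k=32):
            expected = 1 / (1 + 10 ** ((loser - winner) / 400))  # chance the winner was favoured
            delta = k * (1 - expected)
            return winner + delta, loser - delta

        ratings = {"item_a": 1500.0, "item_b": 1500.0}
        ratings["item_a"], ratings["item_b"] = elo_update(ratings["item_a"], ratings["item_b"])
        # Repeat the update after every user click and sort by rating for the overall table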

    Read the article

  • Comparison between a value with static type Array and a possibly unrelated type Class

    - by Kaoru
    I got this error: Comparison between a value with static type Array and a possibly unrelated type Class. The error appeared after I split the code into many classes (before that, all of the functions were in one class). How do I solve this? I am using AS3 and the as3isolib library. Here is the code after I modified the function: if (Constant.dude.y < Constant._numY) { if (Constant.dude.sprites != marioBackClass) { Constant.dude.sprites = [marioBackClass]; Constant.dudeDir = "Up"; } } Here is the code before I split the function across many classes: if (dude.y < ._numY) { if (dude.sprites.toString() != marioBackClass.toString()) { dude.sprites = [marioBackClass]; dudeDir = "Up"; } }

    Read the article

  • Bitwise operators in DX9 ps_2_0 shader

    - by lapin
    I've got the following code in a shader: // v & y are both floats nPixel = v; nPixel << 8; nPixel |= y; and this gives me the following error in compilation: shader.fx(80,10): error X3535: Bitwise operations not supported on legacy targets. shader.fx(92,18): ID3DXEffectCompiler::CompileEffect: There was an error compiling expression ID3DXEffectCompiler: Compilation failed The error is on the following line: nPixel |= y; What am I doing wrong here?
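
    Since ps_2_0 is a legacy target without integer or bitwise instructions, one workaround is to express the packing arithmetically. A rough sketch of the equivalent math (shown in Python only for illustration, and assuming v and y are non-negative with y fitting in the low 8 bits):

        # Arithmetic equivalent of (v << 8) | y, valid when 0 <= y < 256
        def pack(v, y):
            return int(v) * 256 + int(y)

        print(pack(3, 7))  # 775, the same value as (3 << 8) | 7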

    Read the article

  • LINQ and conversion operators

    - by vik20000in
    LINQ has a habit of returning things as IEnumerable. But we have all been working with so many other list formats, like arrays, IList, Dictionary, etc., that most of the time, after getting the result set, we want it converted to one of those familiar formats. For this reason LINQ provides helper methods which can convert the result set into the desired format. Below is an example: var sortedDoubles = from d in doubles orderby d descending select d; var doublesArray = sortedDoubles.ToArray(); In the same way we can also transfer the data to IList and Dictionary objects. Let's say we have an array of Objects. The array contains many different types of data, like double, int, null, string, etc., and we want only one type of data back; in that case we can use the helper method OfType. Below is an example: object[] numbers = { null, 1.0, "two", 3, "four", 5, "six", 7.0 }; var doubles = numbers.OfType<double>(); Vikram

    Read the article

  • Compound assignment operators in Python's Numpy library

    - by Leonard
    The "vectorizing" of fancy indexing by Python's numpy library sometimes gives unexpected results. For example: import numpy a = numpy.zeros((1000,4), dtype='uint32') b = numpy.zeros((1000,4), dtype='uint32') i = numpy.random.random_integers(0,999,1000) j = numpy.random.random_integers(0,3,1000) a[i,j] += 1 for k in xrange(1000): b[i[k],j[k]] += 1 Gives different results in the arrays 'a' and 'b' (i.e. the appearance of tuple (i,j) appears as 1 in 'a' regardless of repeats, whereas repeats are counted in 'b'). This is easily verified as follows: numpy.sum(a) 883 numpy.sum(b) 1000 It is also notable that the fancy indexing version is almost two orders of magnitude faster than the for loop. My question is: "Is there an efficient way for numpy to compute the repeat counts as implemented using the for loop in the provided example?"

    Read the article

  • Learning about C Bitshift Operators

    - by Chris Hammond
    So I was doing some reading tonight on my Nerdkit; I had planned to actually do some playing around with it, but decided just to read a bit. I've never coded in C; I did C++ in college (not very well) and do most of my development in C# these days (when I'm writing code, mostly for fun). While they are all similar, there are a few differences, so doing things in C is a learning experience. There were some practice questions for AND and OR using binary. Here are some examples. When comparing binary with AND ...(read more)
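
    The post's own practice examples are cut off in this excerpt; as a generic illustration of the idea (in Python rather than C), AND keeps only the bits set in both operands while OR keeps the bits set in either:

        a, b = 0b1100, 0b1010
        print(bin(a & b))  # 0b1000: bits set in both a and b
        print(bin(a | b))  # 0b1110: bits set in either a or b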

    Read the article

  • Parent-child hierarchies and unary operators in PowerPivot

    - by Marco Russo (SQLBI)
    Alberto wrote an excellent post describing how to implement the Unary Operator feature (which is present in Analysis Services) in PowerPivot (there was a previous post about parent-child hierarchies, too). I have to say that the solution is not as easy to implement as in Analysis Services, but it just works and, from a practical point of view, it is not so difficult to implement if you understand how it works and accept its limitations (only sums and subtractions are supported). I think that many...(read more)

    Read the article

  • Is it possible to pass arithmetic operators to a method in java?

    - by drJames
    Right now I'm going to have to write a method that looks like this: public String Calculate(String Operator, Double Operand1, Double Operand2) { if (Operator.equals("+")) { return String.valueOf(Operand1 + Operand2); } else if (Operator.equals("-")) { return String.valueOf(Operand1 - Operand2); } else if (Operator.equals("*")) { return String.valueOf(Operand1 * Operand2); } else { return "error..."; } } It would be nice if I could write the code more like this: public String Calculate(String Operator, Double Operand1, Double Operand2) { return String.valueOf(Operand1 Operator Operand2); } So Operator would replace the Arithmetic Operators (+, -, *, /...) Does anyone know if something like this is possible in java?
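
    One common alternative to the if/else chain (not from the question itself) is a lookup table that maps the operator symbol to a function; the idea sketched in Python for brevity:

        import operator

        # Map each operator symbol to the corresponding arithmetic function
        OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

        def calculate(op, a, b):
            if op not in OPS:
                return "error..."
            return str(OPS[op](a, b))

        print(calculate("*", 2.0, 3.0))  # "6.0"

    In Java the same shape can be built with an enum or a Map of small strategy objects, but the language has no way to pass the bare +, -, * tokens themselves.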

    Read the article

  • Two '==' equality operators in same 'if' condition are not working as intended.

    - by Manav MN
    I am trying to establish equality of three equal variables, but the following code does not print the obviously true answer that it should print. Can someone explain how the compiler parses the given if condition internally? #include<stdio.h> int main() { int i = 123, j = 123, k = 123; if ( i == j == k) printf("Equal\n"); else printf("NOT Equal\n"); return 0; } Output: manav@workstation:~$ gcc -Wall -pedantic calc.c calc.c: In function ‘main’: calc.c:5: warning: suggest parentheses around comparison in operand of ‘==’ manav@workstation:~$ ./a.out NOT Equal manav@workstation:~$ EDIT: Going by the answers given below, is the following statement okay for checking the above equality? if ( (i==j) == (j==k))

    Read the article

  • How does the ? make a quantifier lazy in regex

    - by Uriel Katz
    I've been looking into regex lately and figured out that the ? operator makes the *, +, or ? lazy. My question is: how does it do that? Is it that *?, for example, is a special operator, or does the ? have an effect on the *? In other words, does regex recognize *? as one operator in itself, or does regex recognize *? as the two separate operators * and ?? If *? is being recognized as two separate operators, how does the ? affect the * to make it lazy? If ? means that the * is optional, shouldn't this mean that the * doesn't have to exist at all? If so, then in an expression like .*? wouldn't regex just match separate letters and the whole string instead of the shorter string? Please explain, I'm desperate to understand.
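
    To make the distinction concrete: the engine treats *? as a single lazy quantifier, not as * followed by an "optional" marker. A small Python illustration:

        import re

        text = "<em>one</em> and <em>two</em>"
        print(re.search(r"<.*>", text).group())   # greedy: <em>one</em> and <em>two</em>
        print(re.search(r"<.*?>", text).group())  # lazy: <em>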

    Read the article

  • Comparison of Extreme Programming (XP) to Traditional Programming Methodologies

    The comparison of extreme programming (XP) to traditional programming methodologies finds similarities with the historic biblical battle between David and Goliath. Goliath of Gath is a Philistine warrior renowned for his size, strength and battle-tested skills. Much like Goliath, traditional methodologies are known to be cumbersome due to large amounts of documentation, and time consuming due to the time needed to gather all the information. However, traditional methodologies have been widely accepted by the software development community for years because of their attention to detail regarding project development and maintenance. David is a male Israelite teenager, who was small, fearless, and untrained in any type of formal combat. In a similar fashion, extreme programming focuses on code over documentation so that time is spent on developing the project and not on cumbersome documentation. Typically, project managers and developers are fearless when they start this type of project because they usually start with little to no documentation, and they expect to be given changes to implement at the start of every new project iteration. Because of the lack of need or desire for documentation in extreme programming projects, they can appear to have no formal process behind their development. This is a misconception: through the consistent development iterations and interaction with clients and users, a process quickly takes form, because each iteration allows the project to be refined as the customer's needs and desires change. Ravikant Agarwal and David Umphress documented a new approach to extreme programming called personal extreme programming (PXP) at the ACM Southeast Regional Conference in 2008. PXP is the application of extreme programming core concepts in a single-developer environment. PXP focuses on how the main concepts and practices of extreme programming, which are typically centered on a group environment, can be adjusted to be beneficial for a single developer. Suzanne Smith and Sara Stoecklin are both advocates of extreme programming according to the Journal of Computing Sciences in Colleges, and in fact they feel that it should receive more attention in introductory programming classes to allow students to better understand the software development process. Reasons why extreme programming is a good thing: Developers get to do more of what they love: develop. Traditional software development methodologies tend to add additional demands to a project by requiring all requirements and project specifications to be fully defined prior to the start of the implementation phase. A standard 40-hour work week: limiting the work week to 40 hours prevents developers from getting burned out on projects.

    Read the article

  • Execute process conditionally in Windows PowerShell (e.g. the && and || operators in Bash)

    - by Dustin
    I'm wondering if anybody knows of a way to conditionally execute a program depending on the exit success/failure of the previous program. Is there any way for me to execute a program2 immediately after program1 if program1 exits successfully without testing the LASTEXITCODE variable? I tried the -band and -and operators to no avail, though I had a feeling they wouldn't work anyway, and the best substitute is a combination of a semicolon and an if statement. I mean, when it comes to building a package somewhat automatically from source on Linux, the && operator can't be beaten: # Configure a package, compile it and install it ./configure && make && sudo make install PowerShell would require me to do the following, assuming I could actually use the same build system in PowerShell: # Configure a package, compile it and install it .\configure ; if ($LASTEXITCODE -eq 0) { make ; if ($LASTEXITCODE -eq 0) { sudo make install } } Sure, I could use multiple lines, save it in a file and execute the script, but the idea is for it to be concise (save keystrokes). Perhaps it's just a difference between PowerShell and Bash (and even the built-in Windows command prompt which supports the && operator) I'll need to adjust to, but if there's a cleaner way to do it, I'd love to know.

    Read the article

  • Comparison of cloud hosting providers

    - by Abel
    Is there a place where we can compare* the many newly arising cloud hosting providers? From reading into each of them, they seem very different and range from just hosting applications (Google) to a semi-full enterprise web serving framework (Rackspace). Comparing "by hand" takes a lot of time. All have limitations and different pricing, but what are those and how do they compare? I'm looking for an unbiased comparison site, rather than a discussion on "which is the best". * I don't mean a hosting provider comparison site, of which there are many. The properties of cloud hosting providers are remarkably different and don't compare well on classical hosting provider comparison charts.

    Read the article

  • How can Swift be so much faster than Objective-C in these comparisons?

    - by Yellow
    Apple launched its new programming language Swift at WWDC14. In the presentation, they made some performance comparisons between Objective-C and Python. The following is a picture of one of their slides, showing a comparison of those three languages performing a complex object sort. There was an even more incredible graph about a performance comparison using the RC4 encryption algorithm. Obviously this is a marketing talk, and they didn't go into detail on how this was implemented in each. It leaves me wondering though: How can a new programming language be so much faster? Are the Objective-C results caused by a bad compiler, or is there something less efficient in Objective-C than in Swift? How would you explain a 40% performance increase? I understand that garbage collection/automatic reference counting might produce some additional overhead, but this much?

    Read the article

  • How can Swift be so much faster than Objective-C?

    - by Yellow
    Apple launched its new programming language Swift today. In the presentation, they made some performance comparisons between Objective-C and Python. The following is a picture of one of their slides, showing a comparison of those three languages performing a complex object sort. There was an even more incredible graph about a performance comparison on some encryption algorithm. Obviously this is a marketing talk, and they didn't go into detail on how this was implemented in each. It leaves me wondering though: how can a new programming language be so much faster? In this example, surely you just have a bad Objective-C compiler or you're doing something in a less efficient way? How else would you explain a 40% performance increase? I understand that garbage collection/automatic reference counting might produce some additional overhead, but this much?

    Read the article

  • Comparison between operating systems

    - by Gustavo Bandeira
    Some years ago, I heard people saying that OS X and Linux were better than Windows. I also remember reading something that said the Solaris operating system didn't fragment its files and that the Linux file system was almost at the same level, but none of these claims seemed to have any basis or references. I have two questions: When comparing operating systems, what are the main points for comparison? How do the main operating systems compare today?

    Read the article

  • How to detect if 2 news articles have the same topic? (Python language-comparison)

    - by resopollution
    I'm looking for ideas on a recommended approach. I'm trying to scrape headlines and body text from articles on a few specific sites, similar to what Google does with Google News. The problem is that across different sites, there may be articles on the exact same subject, worded slightly differently. Can anyone point me to what I need to know in order to write a comparison algorithm to auto-detect similar articles? Thanks very much in advance. I use Python.
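
    As one simple starting point (an illustration only, not from the post), you can tokenize both articles and compare their word sets; real systems typically go further with TF-IDF weighting, shingling, or topic models:

        import re

        def jaccard_similarity(text_a, text_b):
            # Lowercased word sets; stop-word removal and stemming would improve this
            words_a = set(re.findall(r"[a-z']+", text_a.lower()))
            words_b = set(re.findall(r"[a-z']+", text_b.lower()))
            if not words_a or not words_b:
                return 0.0
            return len(words_a & words_b) / len(words_a | words_b)

        print(jaccard_similarity("Apple unveils new iPhone at keynote event",
                                 "New iPhone unveiled by Apple during keynote"))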

    Read the article

  • Is it possible to implement bitwise operators using integer arithmetic?

    - by Statement
    Hello World! I am facing a rather peculiar problem. I am working on a compiler for an architecture that doesn't support bitwise operations. However, it handles signed 16-bit integer arithmetic, and I was wondering if it would be possible to implement bitwise operations using only: Addition (c = a + b) Subtraction (c = a - b) Division (c = a / b) Multiplication (c = a * b) Modulus (c = a % b) Minimum (c = min(a, b)) Maximum (c = max(a, b)) Comparisons (c = (a < b), c = (a == b), c = (a <= b), etc.) Jumps (goto, for, etc.) The bitwise operations I want to be able to support are: Or (c = a | b) And (c = a & b) Xor (c = a ^ b) Left Shift (c = a << b) Right Shift (c = a >> b) (All integers are signed so this is a problem) Signed Shift (c = a >>> b) One's Complement (a = ~b) (Already found a solution, see below) Normally the problem is the other way around: how to achieve arithmetic optimizations using bitwise hacks. However, not in this case. Writable memory is very scarce on this architecture, hence the need for bitwise operations. The bitwise functions themselves should not use a lot of temporary variables. However, constant read-only data & instruction memory is abundant. A side note here also is that jumps and branches are not expensive and all data is readily cached. Jumps cost half the cycles that arithmetic (including load/store) instructions do. In other words, all of the above supported functions cost twice the cycles of a single jump. Some thoughts that might help: I figured out that you can do one's complement (negate bits) with the following code: // Bitwise one's complement b = ~a; // Arithmetic one's complement b = -1 - a; I also remember the old shift hack for dividing by a power of two, so the bitwise shift can be expressed as: // Bitwise left shift b = a << 4; // Arithmetic left shift b = a * 16; // 2^4 = 16 // Signed right shift b = a >>> 4; // Arithmetic right shift b = a / 16; For the rest of the bitwise operations I am slightly clueless. I wish the architects of this architecture had supplied bit operations. I would also like to know if there is a fast/easy way of computing a power of two (for shift operations) without using a memory data table. A naive solution would be to jump into a field of multiplications: b = 1; switch (a) { case 15: b = b * 2; case 14: b = b * 2; // ... exploiting fallthrough (instruction memory is magnitudes larger) case 2: b = b * 2; case 1: b = b * 2; } Or a Set & Jump approach: switch (a) { case 15: b = 32768; break; case 14: b = 16384; break; // ... exploiting the fact that a jump is faster than one additional mul // at the cost of doubling the instruction memory footprint. case 2: b = 4; break; case 1: b = 2; break; }
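
    To illustrate the usual bit-by-bit approach for the remaining operators (sketched in Python rather than the target architecture's language, and handling only non-negative values; signed 16-bit inputs would need to be adjusted first, as the question notes):

        def bitwise_and(a, b, bits=16):
            # Peel off one bit per iteration using modulus and integer division
            result, weight = 0, 1
            for _ in range(bits):
                if a % 2 == 1 and b % 2 == 1:
                    result += weight
                a //= 2
                b //= 2
                weight *= 2
            return result

        print(bitwise_and(0b1100, 0b1010))  # 8, the same as 0b1100 & 0b1010

    OR and XOR follow the same loop, with the condition changed to "either bit is 1" and "exactly one bit is 1" respectively.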

    Read the article

  • Postfix and right-associative operators in LR(0) parsers

    - by Ian
    Is it possible to construct an LR(0) parser that could parse a language with both prefix and postfix operators? For example, if I had a grammar with the + (addition) and ! (factorial) operators with the usual precedence, then 1+3! should be 1 + 3! = 1 + 6 = 7, but surely if the parser were LR(0), then when it had 1+3 on the stack it would reduce rather than shift? Also, do right-associative operators pose a problem? For example, 2^3^4 should be 2^(3^4), but again, when the parser has 2^3 on the stack, how would it know whether to reduce or shift? If this isn't possible, is there still a way to use an LR(0) parser, possibly by converting the input into Polish or Reverse Polish notation or adding brackets in the appropriate places? Would this be done before, during or after the lexing stage?

    Read the article
