Search Results

Search found 36756 results on 1471 pages for 'mysql real query'.


  • Automatically make or update a copy real-time on another hard drive volume whenever files are saved to a particular folder

    - by mrblint
    Whenever I save or update a file in a particular designated folder on my C:\ drive, I would like to make or update a copy on my network-attached storage device, ideally saving the copy to the NAS as a version rather than overwriting a copy there, if possible. I have Windows 7 x64 Ultimate. Is there any built-in feature that can accomplish this? It has to be a real copy, not merely a pointer. I'm trying to achieve some redundancy for especially critical documents (in a variety of formats) that change frequently throughout the day. P.S. I am looking for folder-level granularity; I wouldn't want this to happen for every file on the C: volume.
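    If a small helper program is an acceptable substitute for a built-in Windows feature, the general idea can be sketched with a file-system watcher. This is only a minimal, hypothetical sketch (the folder paths and class name are made up, and real use would need error handling and a way to skip half-written files):

        // Hypothetical helper: watch one folder and write a timestamped copy of every
        // created/modified file to a NAS share, so each save becomes its own version.
        import java.io.IOException;
        import java.nio.file.*;
        import java.time.LocalDateTime;
        import java.time.format.DateTimeFormatter;

        public class FolderMirror {
            public static void main(String[] args) throws IOException, InterruptedException {
                Path watched = Paths.get("C:\\CriticalDocs");              // hypothetical source folder
                Path target  = Paths.get("\\\\NAS\\backup\\CriticalDocs"); // hypothetical NAS share
                Files.createDirectories(target);

                WatchService watcher = FileSystems.getDefault().newWatchService();
                watched.register(watcher,
                        StandardWatchEventKinds.ENTRY_CREATE,
                        StandardWatchEventKinds.ENTRY_MODIFY);

                DateTimeFormatter stamp = DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss");
                while (true) {
                    WatchKey key = watcher.take();                         // blocks until something changes
                    for (WatchEvent<?> event : key.pollEvents()) {
                        if (event.kind() == StandardWatchEventKinds.OVERFLOW) continue;
                        Path changed = watched.resolve((Path) event.context());
                        if (!Files.isRegularFile(changed)) continue;
                        // Keep each save as a separate version instead of overwriting one copy.
                        String versioned = LocalDateTime.now().format(stamp) + "-" + changed.getFileName();
                        Files.copy(changed, target.resolve(versioned), StandardCopyOption.REPLACE_EXISTING);
                    }
                    key.reset();
                }
            }
        }

    Each save would then land on the NAS as a separate timestamped file, which gives the versioning behavior asked for instead of overwriting a single copy.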

    Read the article

  • Is there any way that I can set the 'real' memory usage value while running my Java code?

    - by vira
    I'm running code on a server to generate a 10,000x10,000 matrix and save each value into a table (MySQL). I was informed by the administrator that I can use up to 32 GB of the physical memory of our server, but I have no idea how to do it. I googled around and so far only found information about setting the maximum heap size using -Xmx. I tried it anyway, and using the top command I got this:

        PID   USER  PR  NI  VIRT   RES   SHR  S  %CPU  %MEM  TIME+    COMMAND
        3981  gv    35  15  32.4g  304m  10m  S  1     0.5   9:54.84  java

    So it shows that -Xmx set the VIRT and not the RES value. Is there any way I can set the RES value to 32 GB?
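    A note on what those columns mean: -Xmx only caps the Java heap, and the resident set (RES) grows as the JVM actually touches pages. On HotSpot, starting the JVM with -Xms equal to -Xmx (optionally plus -XX:+AlwaysPreTouch) commits and touches the heap up front, which is the closest you can get to "setting" RES. A small sketch to see what the JVM thinks it was given (the class name is mine, purely illustrative):

        // HeapReport.java - print the heap limits the JVM is running with,
        // so they can be compared against the VIRT/RES columns in top.
        public class HeapReport {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                long mb = 1024L * 1024L;
                System.out.printf("max heap (-Xmx): %d MB%n", rt.maxMemory() / mb);
                System.out.printf("committed heap : %d MB%n", rt.totalMemory() / mb);
                System.out.printf("used heap      : %d MB%n",
                        (rt.totalMemory() - rt.freeMemory()) / mb);
            }
        }

    For example, running it as java -Xms32g -Xmx32g -XX:+AlwaysPreTouch HeapReport (assuming the box really has that much free RAM) should show a committed heap close to 32 GB, and RES in top should climb toward it.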

    Read the article

  • Using Recursive SQL and XML trick to PIVOT(OK, concat) a "Document Folder Structure Relationship" table, works like MySQL GROUP_CONCAT

    - by Kevin Shyr
    I'm in the process of building out a Data Warehouse and encountered this issue along the way. In the environment, there is a table that stores all the folders with the individual level. For example, if a document is created here: {App Path}\Level 1\Level 2\Level 3\{document}, then the DocumentFolder table would look like this:

        ID | ID_Parent | FolderName
        1  | NULL      | Level 1
        2  | 1         | Level 2
        3  | 2         | Level 3

    To my understanding, the table was built so that:

    - Each proposal can have multiple documents stored at various locations.
    - Different users working on the proposal will have different access levels to the folders; if one user is assigned access to a folder level, she/he can see all the sub folders and their content.

    Now we understand from an application point of view why this table was built this way. But you can quickly see the pain this causes the report writer when showing a document link on the report. I wasn't surprised to find the report query had 5 self outer joins, which is at the mercy of nobody creating a document that is buried 6 levels deep, not to mention the degradation in performance. With the help of 2 posts (at the end of this post), I was able to come up with this solution:

    - Use recursive SQL to build out the folder path.
    - Use the SQL XML trick to concatenate the strings.

    Code (a reminder: I built this code in a stored procedure. If you copy the syntax into a simple query window and execute it, you'll get an incorrect syntax error):

        -- Get all folders and group them by the original DocumentFolderID in PTSDocument table
        ;WITH DocFoldersByDocFolderID(PTSDocumentFolderID_Original, PTSDocumentFolderID_Parent, sDocumentFolder, nLevel)
        AS (
              -- first member
              SELECT 'PTSDocumentFolderID_Original' = d1.PTSDocumentFolderID
                    , PTSDocumentFolderID_Parent
                    , 'sDocumentFolder' = sName
                    , 'nLevel' = CONVERT(INT, 1000000)
              FROM (SELECT DISTINCT PTSDocumentFolderID
                    FROM dbo.PTSDocument_DY WITH(READPAST)
                   ) AS d1
                   INNER JOIN dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)
                         ON d1.PTSDocumentFolderID = df1.PTSDocumentFolderID
              UNION ALL
              -- recursive
              SELECT ddf1.PTSDocumentFolderID_Original
                    , df1.PTSDocumentFolderID_Parent
                    , 'sDocumentFolder' = df1.sName
                    , 'nLevel' = ddf1.nLevel - 1
              FROM dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)
                   INNER JOIN DocFoldersByDocFolderID AS ddf1
                         ON df1.PTSDocumentFolderID = ddf1.PTSDocumentFolderID_Parent)
        -- Flatten out folder path
        , DocFolderSingleByDocFolderID(PTSDocumentFolderID_Original, sDocumentFolder)
        AS (SELECT dfbdf.PTSDocumentFolderID_Original
                  , 'sDocumentFolder' = STUFF((SELECT '\' + sDocumentFolder
                                               FROM DocFoldersByDocFolderID
                                               WHERE (PTSDocumentFolderID_Original = dfbdf.PTSDocumentFolderID_Original)
                                               ORDER BY PTSDocumentFolderID_Original, nLevel
                                               FOR XML PATH ('')), 1, 1, '')
            FROM DocFoldersByDocFolderID AS dfbdf
            GROUP BY dfbdf.PTSDocumentFolderID_Original)

    And voila, I use the second CTE to join back to my original query (which is now a CTE for Source, as we can now use MERGE to do INSERT and UPDATE at the same time). Each part of this solution would not solve the problem by itself, because:

    - If I don't use recursion, I cannot build out the path properly. If I use the XML trick only, then I don't have the originating folder ID info that I need to link to the document.
    - If I don't use the XML trick, then I don't have one row per document to show in the report. I could conceivably do this in the report function, but I'd rather not deal with the beginning or ending backslash and how to attach the document name.
    - PIVOT doesn't do strings, and UNPIVOT runs into the same problem as the above.

    I'm excited that each version of SQL Server provides us new tools to solve old problems and/or enables us to solve problems in a more elegant way. The 2 posts that helped me along: "Recursive Queries Using Common Table Expressions" and "How to use GROUP BY to concatenate strings in SQL Server?"

    Read the article

  • JPA 2 and Hibernate 3.5.1 MEMBER OF query doesn't work.

    - by Ed_Zero
    I'm trying the following JPQL and it fails miserably:

        Query query = em.createQuery("SELECT u FROM User u WHERE 'admin' MEMBER OF u.roles");
        List users = query.getResultList();

    I get the following exception:

        ERROR [main] PARSER.error(454) | <AST>:0:0: unexpected end of subtree
        java.lang.IllegalArgumentException: org.hibernate.hql.ast.QuerySyntaxException: unexpected end of subtree [SELECT u FROM com.online.data.User u WHERE 'admin' MEMBER OF u.roles]
        ERROR [main] PARSER.error(454) | <AST>:0:0: expecting "from", found '<ASTNULL>'
        ...
        ...
        Caused by: org.hibernate.hql.ast.QuerySyntaxException: unexpected end of subtree [SELECT u FROM com.online.data.User u WHERE 'admin' MEMBER OF u.roles]

    I have Spring 3.0.1.RELEASE, Hibernate 3.5.1-Final and Maven to glue dependencies.

    User class:

        @Entity
        public class User {
            @Id
            @Column(name = "USER_ID")
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private long id;

            @Column(unique = true, nullable = false)
            private String username;

            private boolean enabled;

            @ElementCollection
            private Set<String> roles = new HashSet<String>();
            ...
        }

    Spring configuration:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xsi:schemaLocation="
                 http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                 http://www.springframework.org/schema/context http://www.springframework.org/schema/tx/spring-context-3.0.xsd
                 http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
                 http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd">

          <!-- Reading annotation driven configuration -->
          <tx:annotation-driven transaction-manager="transactionManager" />
          <bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor" />
          <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" />

          <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
            <property name="driverClassName" value="${jdbc.driverClassName}" />
            <property name="url" value="${jdbc.url}" />
            <property name="username" value="${jdbc.username}" />
            <property name="password" value="${jdbc.password}" />
            <property name="maxActive" value="100" />
            <property name="maxWait" value="1000" />
            <property name="poolPreparedStatements" value="true" />
            <property name="defaultAutoCommit" value="true" />
          </bean>

          <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
            <property name="entityManagerFactory" ref="entityManagerFactory" />
            <property name="dataSource" ref="dataSource" />
          </bean>

          <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
            <property name="dataSource" ref="dataSource" />
            <property name="jpaVendorAdapter">
              <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
                <property name="showSql" value="true" />
                <property name="databasePlatform" value="${hibernate.dialect}" />
              </bean>
            </property>
            <property name="loadTimeWeaver">
              <bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver" />
            </property>
            <property name="jpaProperties">
              <props>
                <prop key="hibernate.hbm2ddl.auto">update</prop>
                <prop key="hibernate.current_session_context_class">thread</prop>
                <prop key="hibernate.cache.provider_class">org.hibernate.cache.NoCacheProvider</prop>
                <prop key="hibernate.show_sql">true</prop>
                <prop key="hibernate.format_sql">false</prop>
                <prop key="hibernate.show_comments">true</prop>
              </props>
            </property>
            <property name="persistenceUnitName" value="punit" />
          </bean>

          <bean id="JpaTemplate" class="org.springframework.orm.jpa.JpaTemplate">
            <property name="entityManagerFactory" ref="entityManagerFactory" />
          </bean>
        </beans>

    Persistence.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence xmlns="http://java.sun.com/xml/ns/persistence"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://java.sun.com/xml/ns/persistence">
          <persistence-unit name="punit" transaction-type="RESOURCE_LOCAL" />
        </persistence>

    pom.xml Maven dependencies:

        <dependency>
          <groupId>org.hibernate</groupId>
          <artifactId>hibernate</artifactId>
          <version>${hibernate.version}</version>
          <type>pom</type>
        </dependency>
        <dependency>
          <groupId>org.hibernate</groupId>
          <artifactId>hibernate-core</artifactId>
          <version>${hibernate.version}</version>
        </dependency>
        <dependency>
          <groupId>org.hibernate</groupId>
          <artifactId>hibernate-annotations</artifactId>
          <version>${hibernate.version}</version>
        </dependency>
        <dependency>
          <groupId>org.hibernate</groupId>
          <artifactId>hibernate-entitymanager</artifactId>
          <version>${hibernate.version}</version>
        </dependency>
        <dependency>
          <groupId>commons-dbcp</groupId>
          <artifactId>commons-dbcp</artifactId>
          <version>1.2.2</version>
          <type>jar</type>
        </dependency>
        <dependency>
          <groupId>org.springframework.security</groupId>
          <artifactId>spring-security-web</artifactId>
          <version>${spring.version}</version>
        </dependency>
        <dependency>
          <groupId>org.springframework.security</groupId>
          <artifactId>spring-security-config</artifactId>
          <version>${spring.version}</version>
        </dependency>
        <dependency>
          <groupId>org.springframework.security</groupId>
          <artifactId>spring-security-taglibs</artifactId>
          <version>${spring.version}</version>
        </dependency>
        <dependency>
          <groupId>org.springframework.security</groupId>
          <artifactId>spring-security-acl</artifactId>
          <version>${spring.version}</version>
        </dependency>
        <dependency>
          <groupId>javax.annotation</groupId>
          <artifactId>jsr250-api</artifactId>
          <version>1.0</version>
        </dependency>

        <properties>
          <!-- Application settings -->
          <spring.version>3.0.1.RELEASE</spring.version>
          <hibernate.version>3.5.1-Final</hibernate.version>

    I'm running a unit test to check the configuration, and I am able to run other JPQL queries; the only ones that I am unable to run are those using the IS EMPTY and MEMBER OF conditions. The complete unit test is as follows:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = { "/spring/dataLayer.xml" })
        @Transactional
        @TransactionConfiguration
        public class TestUserDaoImplIntegration {

            @PersistenceContext
            private EntityManager em;

            @Test
            public void shouldTest() throws Exception {
                try {
                    // WORKS
                    Query query = em.createQuery("SELECT u FROM User u WHERE 'admin' in elements(u.roles)");
                    List users = query.getResultList();
                } catch (Exception e) {
                    e.printStackTrace();
                    throw e;
                }
                try {
                    // DOES NOT WORK
                    Query query = em.createQuery("SELECT u FROM User u WHERE 'admin' MEMBER OF u.roles");
                    List users = query.getResultList();
                } catch (Exception e) {
                    e.printStackTrace();
                    throw e;
                }
            }
        }

    Read the article

  • How to connect a new query script with SSMS add-in?

    - by squillman
    I'm trying to create a SSMS add-in. One of the things I want to do is to create a new query window and programmatically connect it to a server instance (in the context of a SQL Login). I can create the new query script window just fine, but I can't find how to connect it without first manually connecting to something else (like the Object Explorer). So in other words, if I connect Object Explorer to a SQL instance manually and then execute the method of my add-in that creates the query window, I can connect it using this code:

        ServiceCache.ScriptFactory.CreateNewBlankScript(
            Editors.ScriptType.Sql,
            ServiceCache.ScriptFactory.CurrentlyActiveWndConnectionInfo.UIConnectionInfo,
            null);

    But I don't want to rely on CurrentlyActiveWndConnectionInfo.UIConnectionInfo for the connection. I want to set a SQL Login username and password programmatically. Does anyone have any ideas? EDIT: I've managed to get the query window connected by setting the last parameter to an instance of System.Data.SqlClient.SqlConnection. However, the connection uses the context of the last login that was connected instead of what I'm trying to set programmatically. That is, the user it connects as is the one selected in the Connection Dialog that you get when you click the New Query button and don't have an Object Explorer connected. EDIT2: I'm writing (or hoping to write) an add-in to automatically send a SQL statement and the execution results to our case-tracking system when run against our production servers. One thought I had was to remove write permissions and assign logins through this add-in, which will also force the user to enter a case #, canceling the statement if it's not there. Another thought I've just had is to inspect the server name in ServiceCache.ScriptFactory.CurrentlyActiveWndConnectionInfo.UIConnectionInfo and compare it to our list of production servers. If it matches and there's no case #, then cancel the query.

    Read the article

  • What is the point in using a "real" database modeling tool?

    - by cdeszaq
    We currently have a 10-year-old nasty, spaghetti-code-style SQL Server database that we are soon looking to pretty much re-write from scratch as part of a re-write of a large web application. (The existing application will serve as the functional requirements for the next incarnation of the app.) Some have suggested we use Visio to do all the diagramming and to generate the DDL, but others have suggested we use a dedicated database design tool, rather than a diagramming tool that is able to export DDL. Is there any benefit to using "real" DB design tools, such as ModelRight, over general tools like Visio? If so, what are those specific benefits? Edit: In a nutshell, what can real/dedicated tools do that something like Visio can't, and how much do these capabilities matter (from a best-practices standpoint, for example)?

    Read the article

  • What is a practical, real world example of the Linked List?

    - by JStims
    I understand the definition of a Linked List, but how can it be represented and related to a common concept or item? For example, inheritance in OOP can be related to automobiles. All (most) automobiles in real life are essentially the same thing; an automobile has an Engine, you can start() it, you can make the car go(), stop() and so on. An automobile would typically have a maximum passenger capacity, but it would differ between a Bus and a SportsCar, which are both automobiles. Is there some real life, intuitive example of the plain ole' singly Linked List like we have with inheritance? The typical textbook Linked List example shows a node with an integer and a pointer to the next, and it just doesn't seem very useful. Your input is appreciated.
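    One everyday analogy (my own illustration, not part of the original question): a music playlist, or a train of cars, where each element only knows which element comes next and you reach anything by walking from the front. A minimal sketch:

        // Illustrative only: a playlist as a singly linked list, where each song
        // points to the next one and playback walks the chain from the head.
        public class Playlist {
            private static class Song {
                final String title;
                Song next;                 // the "pointer to the next node"
                Song(String title) { this.title = title; }
            }

            private Song head;             // first song, or null if the playlist is empty
            private Song tail;             // last song, kept so appending is O(1)

            public void append(String title) {
                Song s = new Song(title);
                if (head == null) { head = s; } else { tail.next = s; }
                tail = s;
            }

            public void play() {
                for (Song s = head; s != null; s = s.next) {
                    System.out.println("Now playing: " + s.title);
                }
            }

            public static void main(String[] args) {
                Playlist p = new Playlist();
                p.append("Track 1");
                p.append("Track 2");
                p.append("Track 3");
                p.play();
            }
        }

    Inserting a song after the current one only means rewiring a single next reference, which is the practical appeal over an array-backed list.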

    Read the article

  • Where can I get free real-time stock data?

    - by Jared
    Does anyone know of a way to obtain free real-time stock data, or near real-time stock data? I'd like to do this because I'm interested in the financial market, not for use in investment applications, which is why I'm looking for something free. I've tried the Perl module Finance::YahooQuote, but some of the fields, such as last trade time, appear to be broken. I've looked at the historical data, but it doesn't fit my needs since I'd like to monitor the movements of the markets and stocks during the trading day, not just open and close. I've also looked at http://www.opentick.com but they aren't accepting new accounts and I can't find a timeline for when their network upgrade will be complete.

    Read the article

  • How do you compare using .NET types in an NHibernate ICriteria query for an ICompositeUserType?

    - by gabe
    I have an answered StackOverflow question about how to combine two legacy CHAR database date and time fields into one .NET DateTime property in my POCO here (thanks much Berryl!). Now I am trying to get a custom ICriteria query to work against that very DateTime property, to no avail. Here's my query:

        ICriteria criteria = Session.CreateCriteria<InputFileLog>()
            .Add(Expression.Gt(MembersOf<InputFileLog>.GetName(x => x.FileCreationDateTime), DateTime.Now.AddDays(-14)))
            .AddOrder(Order.Desc(Projections.Id()))
            .CreateCriteria(typeof(InputFile).Name)
            .Add(Expression.Eq(MembersOf<InputFile>.GetName(x => x.Id), inputFileName));
        IList<InputFileLog> list = criteria.List<InputFileLog>();

    And here's the query it's generating:

        SELECT this_.input_file_token as input1_9_2_, this_.file_creation_date as file2_9_2_, this_.file_creation_time as file3_9_2_, this_.approval_ind as approval4_9_2_, this_.file_id as file5_9_2_, this_.process_name as process6_9_2_, this_.process_status as process7_9_2_, this_.input_file_name as input8_9_2_, gonogo3_.input_file_token as input1_6_0_, gonogo3_.go_nogo_ind as go2_6_0_, inputfile1_.input_file_name as input1_3_1_, inputfile1_.src_code as src2_3_1_, inputfile1_.process_cat_code as process3_3_1_
        FROM input_file_log this_
            left outer join go_nogo gonogo3_ on this_.input_file_token=gonogo3_.input_file_token
            inner join input_file inputfile1_ on this_.input_file_name=inputfile1_.input_file_name
        WHERE this_.file_creation_date > :p0 and this_.file_creation_time > :p1 and inputfile1_.input_file_name = :p2
        ORDER BY this_.input_file_token desc;
        :p0 = '20100401', :p1 = '15:15:27', :p2 = 'LMCONV_JR'

    The query is exactly what I would expect, actually, except it doesn't actually give me what I want (all the rows in the last 2 weeks), because in the DB it's doing a greater-than comparison using CHARs instead of DATEs. I have no idea how to get the query to convert the CHAR values into a DATE in the query without doing a CreateSQLQuery(), which I would like to avoid. Anyone know how to do this?

    Read the article

  • Is there any real world reason to use throw ex?

    - by Michael Stum
    In C#, throw ex is almost always wrong, as it resets the stack trace. I just wonder, is there any real world use for this? The only reason I can think of is to hide the internals of your closed library, but that's a really weak reason. Apart from that, I've never encountered it in the real world. Edit: I do mean throw ex, as in throwing the exact same exception that was caught but with an empty stack trace, as in doing it exactly wrong. I know that throw ex has to exist as a language construct to allow throwing a different exception (throw new DifferentException("ex as innerException", ex)) and was just wondering if there is ever a situation where a throw ex is not wrong.

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates the memory requirement for a query at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different sets of parameter values / predicates, and hence a different amount of memory can be estimated for each set of parameter values / predicates. Queries that commonly allocate memory are those that perform Sort and Hash Match operations such as Hash Join, Hash Aggregation, or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article, which covers Hash Match operations.

    When the plan is cached by using a stored procedure or other plan caching mechanisms like sp_executesql or prepared statements, SQL Server estimates the memory requirement based on the first set of execution parameters. Later, when the same stored procedure is called with a different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse. Overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to spill over tempdb, resulting in poor performance.

    This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance.

    To read additional articles I wrote, click here.

    In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of spill over tempdb, unless the memory requirement for a stored procedure does not change significantly based on predicates.

    The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Enough theory, let's see an example where we initially sort 1 month of data and then use the stored procedure to sort 6 months of data. Let's create a stored procedure that sorts customers by name within a certain date range.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime
        as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1)
        end
        go

    Let's execute the stored procedure initially with a 1 month date range.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-31'
        go

    The stored procedure took 48 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is similar to the actual number of rows, 43200, and hence the memory estimation should be OK. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 679 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is way different from the actual number of rows, 259200, because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 1 month in our case. This underestimation will lead to sort spill over tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler. To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts.

    Let's recompile the stored procedure and then first execute the stored procedure with a 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile; refer to our webcasts for further details.

        exec sp_recompile CustomersByCreationDate
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    Now the stored procedure took only 294 ms instead of 679 ms. The stored procedure was granted 26832 KB of memory. The estimated number of rows, 259200, is similar to the actual number of rows of 259200. Better performance of this stored procedure is due to better estimation of memory and avoiding sort spill over tempdb. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 1 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-31'
        go

    The stored procedure took 49 ms to complete, similar to our very first stored procedure execution. This stored procedure was granted more memory (26832 KB) than necessary (6656 KB), based on the 6 month data estimation (259200 rows) instead of the 1 month data estimation (43199.9 rows). This is because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 6 months in this case. This overestimation did not affect performance, but it might affect the performance of other concurrent queries requiring memory, and hence overestimation is not recommended. This overestimation might affect the performance of Hash Match operations; refer to the article Plan Caching and Query Memory Part II for further details.

    Let's recompile the stored procedure and then first execute the stored procedure with a 2 day date range.

        exec sp_recompile CustomersByCreationDate
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-02'
        go

    The stored procedure took 1 ms. The stored procedure was granted 1024 KB based on 1440 rows being estimated. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 955 ms to complete, way higher than the 679 ms or 294 ms we noticed before. The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we noticed in the past that this stored procedure with a 6 month date range needed 26832 KB of memory to execute optimally without spill over tempdb. This is clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler. Unlike before, this was a Multiple pass sort instead of a Single pass sort. This occurs when the granted memory is too low.

    Intermediate Summary: This issue can be avoided by not caching the plan for memory allocating queries. The other possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined date range.

    Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByCreationDate
        go
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime
        as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-30'
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 48 ms and 291 ms, in line with previous optimal execution times. The stored procedure with the 1 month date range has good estimation like before. The stored procedure with the 6 month date range also has good estimation and memory grant like before, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.

    Let's recreate the stored procedure with an optimize for hint of a 6 month date range.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByCreationDate
        go
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime
        as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo ='2001-06-30'))
        end
        go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-30'
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 48 ms and 291 ms, in line with previous optimal execution times. The stored procedure with the 1 month date range has overestimation of rows and memory; this is because we provided a hint to optimize for 6 months of data. The stored procedure with the 6 month date range has good estimation and memory grant because we provided a hint to optimize for 6 months of data.

    Let's execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-12-31'
        go

    The stored procedure took 1138 ms to complete. 2592000 rows were estimated based on the optimize for hint value for the 6 month date range. The actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range and not the 12 month date range, so there will be spill over tempdb. There were Sort Warnings in SQL Profiler. As we see above, the optimize for hint cannot guarantee enough memory and optimal performance compared to the recompile hint.

    This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance.

    Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that will lead to compilation overhead. However, in most cases it might be OK to pay for compilation rather than spilling sort over tempdb, which could be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but in case one sorts more data than hinted by the optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in the optimize for hint are archived from the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • My experience working with Teradata SQL Assistant

    - by Kevin Shyr
    Originally posted on: http://geekswithblogs.net/LifeLongTechie/archive/2014/05/28/my-experience-working-with-teradata-sql-assistant.aspx

    - To this date, I still haven't figured out how to "toggle" between my query windows. It seems like unless I click on that "new" button on top, whatever SQL I generate from right-click just overrides the current SQL in the window. I'm probably missing a "generate new SQL in new window" setting.
    - The default Teradata SQL Assistant doesn't execute just the SQL query I highlighted. There is a setting I have to change first.
    - I'm not really happy that the SQL assistant and SQL admin are different apps. Still trying to get used to the fact that I can't quickly look up a table's keys/relationships while writing a query. I have to switch between windows.
    - LOVE the execution plan / explanation. I think that part is better done than MS SQL in some ways.
    - The error messages can be better.
    - I feel that the Teradata .NET provider sends a smaller query command over than others. I don't have any hard data to support my claim.
    - One of my queries in SSRS was passing multi-valued parameters to another query, and got the error "Teradata 3577 row size or sort key size overflow". The search on this error says the solution is to cast the result column into a smaller data type, but I found that the problem was that the parameter passed into the where clause could not be too large.
    - I wish Teradata SQL Assistant would remember the window size I just adjusted to. Every time I execute the query, the result set, query, and exec log auto re-adjust back to the default size. In SSMS, if I adjust the result set area to be smaller, it stays like that if I execute a query in the same window.

    Read the article

  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    - by Marco Russo (SQLBI)
    If you use Report Builder with Reporting Services, you can use DAX queries even if the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor of Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use the DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it's a good idea to show the steps required. First, create a Data Source using the OLE DB connection type, and provide in the connection string the provider (Provider), the server name (Data Source), and the database name (Initial Catalog), such as:

        Provider=MSOLAP;Data Source=SERVERNAME\\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012

    Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane. You can also use the Query Designer window, which doesn't provide any particular help in writing the DAX query, but at least can show a preview of the result of the query execution. I hope DAX will get better editors in the future… in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability! If you want to learn the DAX Query Language, I suggest watching my video Data Analysis Expressions as a Query Language on Project Botticelli!

    Read the article

  • Fingerprint of PEM ssh key

    - by Unknown
    I have a PEM file which I add to a running ssh-agent:

        $ file query.pem
        query.pem: PEM RSA private key
        $ ssh-add ./query.pem
        Identity added: ./query.pem (./query.pem)
        $ ssh-add -l | grep query
        2048 ef:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX ./query.pem (RSA)

    My question is how I can get the key fingerprint I see in ssh-agent directly from the file. I know ssh-keygen -l -f some_key works for "normal" ssh keys, but not for PEM files. If I try ssh-keygen on the .pem file, I get:

        $ ssh-keygen -l -f ./query.pem
        key_read: uudecode PRIVATE KEY----- failed
        key_read: uudecode PRIVATE KEY----- failed
        ./query.pem is not a public key file.

    This key starts with:

        -----BEGIN RSA PRIVATE KEY-----
        MIIEp.... etc.

    as opposed to a "regular" private key, which looks like:

        -----BEGIN RSA PRIVATE KEY-----
        Proc-Type: 4,ENCRYPTED
        DEK-Info: AES-128-CBC,E15F2.... etc.

    Read the article
