Search Results

Search found 17583 results on 704 pages for 'query analyzer'.

Page 300/704

  • Mysql not starting - innodb not found

    - by Rob Guderian
    I have a fresh install of Ubuntu 12.04 server edition and the MySQL server is not starting properly. I did a simple install: apt-get install mysql-server But it's failing with this error message: root@test:~# mysqld 120618 20:57:32 [Warning] The syntax '--log-slow-queries' is deprecated and will be removed in a future release. Please use '--slow-query-log'/'--slow-query-log-file' instead. 120618 20:57:32 [Note] Plugin 'FEDERATED' is disabled. 120618 20:57:32 InnoDB: The InnoDB memory heap is disabled 120618 20:57:32 InnoDB: Mutexes and rw_locks use GCC atomic builtins 120618 20:57:32 InnoDB: Compressed tables use zlib 1.2.3.4 120618 20:57:32 InnoDB: Unrecognized value fdatasync for innodb_flush_method 120618 20:57:32 [ERROR] Plugin 'InnoDB' init function returned error. 120618 20:57:32 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 120618 20:57:32 [ERROR] Unknown/unsupported storage engine: InnoDB 120618 20:57:32 [ERROR] Aborting I can start the server with the "--skip-innodb --default-storage-engine=myisam" flags, but would like to use InnoDB. Does anyone know what the issue here is?
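
    For what it's worth, a likely culprit: MySQL 5.5 (the version shipped with Ubuntu 12.04) does not accept fdatasync as an explicit value for innodb_flush_method, even though it is the default behaviour, which would make InnoDB refuse to start exactly as in the log above. A hedged sketch of the fix, assuming the line appears in /etc/mysql/my.cnf:

        # /etc/mysql/my.cnf (hypothetical excerpt)
        [mysqld]
        # innodb_flush_method = fdatasync
        # Commenting out (or removing) the explicit value lets InnoDB fall back
        # to fdatasync, which is already the default on Linux.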

    Read the article

  • Remove a bad/erroneous WebPart from a SharePoint page

    - by KunaalKapoor
    If you've added a poorly written webpart to your 'default.aspx' page, the consequence of this action will be that you won't be able to load the page anymore... Don't be sad, there is still a way to remove the webpart from the page :) (Yes, even removing the webpart from the webpart gallery would not solve this issue.) Steps to fix this:
    1. Append the query string ?Contents=1 to the webpart page's URL. This will display the Web Part Maintenance Page. Example: http://mysharepointserver/default.aspx?contents=1
    2. On that page you can now see the webparts added to the page; delete the problematic webpart. Now try reloading the default.aspx page... Tadaaa!!! You can view your page again :)
    3. Leave a thank you note @ comments section :)

    Read the article

  • BizTalk 2009 - Messages: Last 50 suspended

    - by StuartBrierley
    Having previously talked about the lack of the traditional HAT in BizTalk 2009, the question then becomes: how do you replicate some of the functionality that was previously relied on? I have already covered the Last 100 Messages Received and the Last 100 Messages Sent queries, so what about suspended messages? In BizTalk 2004 we had a query in HAT to return the last 100 suspended message instances. Let's create a direct replacement in a BizTalk 2009 Hatless environment. Basically we are creating a query to search for the last fifty messages that were suspended by BizTalk: Coming up: Service instances - Last 100

    Read the article

  • When and How is an image cached for an ASPX with ContentType = image/jpeg?

    - by Aamir Hasan
    In ASP.NET you can cache your page. You can vary the output cache by the following:
    - The query string in an initial request (HTTP GET).
    - Control values passed on postback (HTTP POST values).
    - The HTTP headers passed with a request.
    - The major version number of the browser making the request.
    - A custom string in the page. In that case, you create custom code in the Global.asax file to specify the page's caching behavior.
    Link: http://msdn2.microsoft.com/en-us/library/xadzbzd6(VS.80).aspx
    You can set up output caching for your GetImage.aspx so that you don't have to re-query the database on every image request, but you must use VaryByParam so that you have a cached version for every arrangement of parameters. Set the output cache for your page like this, at the top of the ASPX page: <%@ OutputCache Duration="600" VaryByParam="ID,Height,Width" %> The VaryByParam attribute allows you to vary the cached output depending on the query string. Adding this will make your images cached for 600 seconds, so that if the image is requested within this period, the cached version will be returned.
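
    For orientation, here is a minimal sketch of what the code-behind of such a GetImage.aspx might look like (the page name and parameter names come from the directive above; the data-access call is an assumption, not a verified implementation):

        // Hypothetical GetImage.aspx.cs sketch, pairing with:
        // <%@ OutputCache Duration="600" VaryByParam="ID,Height,Width" %>
        using System;
        using System.Web.UI;

        public partial class GetImage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string id = Request.QueryString["ID"]; // part of VaryByParam, so each ID caches separately
                byte[] jpeg = LoadImageBytes(id);      // stand-in for the database query the cache avoids

                Response.ContentType = "image/jpeg";
                Response.BinaryWrite(jpeg);
            }

            private byte[] LoadImageBytes(string id)
            {
                // Placeholder: fetch the image bytes from the database here.
                throw new NotImplementedException();
            }
        }

    With the directive in place, ASP.NET serves the cached response for 600 seconds per distinct (ID, Height, Width) combination without running Page_Load at all.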

    Read the article

  • WHERE x = @x OR @x IS NULL

    - by steveh99999
    Every SQL DBA and developer should read the blog of MVP Erland Sommarskog – but particularly his article on dynamic search conditions in T-SQL. I've linked above to his SQL 2005 article but his 2008 version is also a must-read. I seem to regularly come across uses of the SQL in the title above… Erland's article explains in detail why this is inefficient, but I came across a nice example recently… A stored procedure contained the following code: WHERE @Name IS NULL OR [Name] LIKE @Name As a nonclustered index exists on the Name column, you might assume this would be handled efficiently by SQL Server. However, I got the following output from SET STATISTICS IO: Table 'xxxxx'. Scan count 15, logical reads 47760, physical reads 9, read-ahead reads 13872, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Note the high number of logical reads… After a bit of investigation, we found that @Name could never actually be set to NULL in this particular example, i.e. the @x IS NULL was spurious… So, we changed the call to WHERE [Name] LIKE @Name Now, how much more efficient is this code? Table 'xxxxx'. Scan count 3, logical reads 24, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0 A nice easy win in this case… a full index scan has been replaced by a significantly more efficient index seek. I managed to recreate the same behaviour on AdventureWorks – here's a quick query to demonstrate: USE AdventureWorks SET STATISTICS IO ON DECLARE @id INT = 51721 SELECT * FROM Sales.SalesOrderDetail WHERE @id IS NULL OR salesorderid = @id SELECT * FROM Sales.SalesOrderDetail WHERE salesorderid = @id Take a look at the STATISTICS IO output and compare the actual query plans used to prove the impact of WHERE @id IS NULL. And just to follow some of Erland's advice – here's how you could get similar performance if it was possible that @id could actually sometimes contain NULL: DECLARE @sql NVARCHAR(4000), @parameterlist NVARCHAR(4000) DECLARE @id INT = 51721 -- or change to NULL to prove the query is functionally correct SET @sql = 'SELECT * FROM Sales.SalesOrderDetail WHERE 1 = 1' IF @id IS NOT NULL SET @sql = @sql + ' AND salesorderid = @id' IF @id IS NULL SET @sql = @sql + ' AND salesorderid IS NULL' SET @parameterlist = '@id INT' EXEC sp_executesql @sql, @parameterlist, @id Sometimes I think we focus too much on hardware and SQL Server configuration – when really the answer is to focus on writing efficient SQL.

    Read the article

  • Using 'new' in a projection?

    - by davenewza
    I wish to project a collection from one type (Something) to another type (SomethingElse). Yes, this is a very open-ended question, but which of the two options below do you prefer? Creating a new instance using new: var result = query.Select(something => new SomethingElse(something)); Using a factory: var result = query.Select(something => SomethingElse.FromSomething(something)); When I think of a projection, I generally think of it as a conversion. Using new gives me this idea that I'm creating new objects during a conversion, which doesn't feel right. Semantically, SomethingElse.FromSomething() most definitely fits better. The second option does, however, require additional code to set up a factory, which could become unnecessary overhead.
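
    For reference, a rough sketch of the factory shape the second option implies (the type names come from the question; the mapping body is assumed):

        public class SomethingElse
        {
            public SomethingElse(Something something)
            {
                // copy/convert fields from 'something' here
            }

            // The factory simply names the conversion; it can delegate to the constructor.
            public static SomethingElse FromSomething(Something something)
            {
                return new SomethingElse(something);
            }
        }

    One small bonus of the static factory: it can be passed as a method group, i.e. query.Select(SomethingElse.FromSomething), which reads even closer to "a conversion" than a lambda does.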

    Read the article

  • SharePoint 2010 - Drives are running out of free space.

    - by Sahil Malik
    SharePoint 2010 Training: more information You might have seen the following dreadful message - As this post on blogs.msdn.com details, this is due to a health analyzer rule configured in SharePoint. While that blogpost does a great job explaining why this monitoring is necessary and how you can tweak it, it still becomes a nuisance on SharePoint virtual machines used for development. It also becomes a nuisance on production environments because SharePoint databases are set to auto grow. In other words, as the database is being used, it only grows, and grows, and GROWS! Seriously, how many of you have put in work to compact the database on a regular basis? Those of you who answered no, you're sitting on a time bomb. Shame on you!   Read full article ....

    Read the article

  • How important is index size when searching?

    - by Michael K
    My company has recently begun using Apache Solr to search its data. As we learn how to use it, we have gone down the path of indexing multiple fields to get the results we need. Most of these are either N-Grammed or Edge-N-Grammed. Gramming by nature takes up a lot of space, which in turn takes more time to search. Space is cheap, but time is less so. Index time is not too important, since a delta-import (only get the changes since the last index) is extremely quick and you only pay a penalty on the first import. What we've not been able to determine is what effect the index size has on query times. Obviously a larger index takes longer to search, but the time added by n-gramming a field is difficult to predict. How do you determine whether a field is worth gramming? Can you predict how much longer a query will take when you gram a field?
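
    To make the space blow-up concrete, here is an illustrative sketch of what edge-n-gramming a single term produces (a simplification of what Solr's edge-n-gram filter does, shown only to explain the index growth):

        using System.Collections.Generic;

        // One 8-character term yields 7 indexed tokens (minGram = 2):
        // "analyzer" -> an, ana, anal, analy, analyz, analyze, analyzer
        static IEnumerable<string> EdgeNGrams(string term, int minGram)
        {
            for (int len = minGram; len <= term.Length; len++)
                yield return term.Substring(0, len);
        }

    Every token becomes a posting in the index, which is why grammed fields dominate index size; whether they dominate query time as well is exactly the question above.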

    Read the article

  • Writing a spell checker similar to "did you mean"

    - by user888734
    I'm hoping to write a spellchecker for search queries in a web application - not unlike Google's "Did you mean?" The algorithm will be loosely based on this: http://catalog.ldc.upenn.edu/LDC2006T13 In short, it generates correction candidates and scores them on how often they appear (along with adjacent words in the search query) in an enormous dataset of known n-grams - Google Web 1T - which contains well over 1 billion 5-grams. I'm not using the Web 1T dataset, but building my n-gram sets from my own documents - about 200k docs, and I'm estimating tens or hundreds of millions of n-grams will be generated. This kind of process is pushing the limits of my understanding of basic computing performance - can I simply load my n-grams into memory in a hashtable or dictionary when the app starts? Is the only limiting factor the amount of memory on the machine? Or am I barking up the wrong tree? Perhaps putting all my n-grams in a graph database with some sort of tree query optimisation? Could that ever be fast enough?
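
    A back-of-envelope sketch of the in-memory option being asked about (the file name and format are assumptions; the sizing comment is a rough rule of thumb, not a measurement):

        using System.Collections.Generic;
        using System.IO;

        // n-gram -> count, loaded once at startup. A short .NET string key plus
        // its dictionary entry costs very roughly 60-100 bytes, so 100 million
        // n-grams lands in the multi-GB range: RAM is indeed the limiting factor.
        var counts = new Dictionary<string, long>();
        foreach (var line in File.ReadLines("ngrams.tsv"))   // hypothetical "ngram<TAB>count" file
        {
            var parts = line.Split('\t');
            counts[parts[0]] = long.Parse(parts[1]);
        }

        long score;
        counts.TryGetValue("did you mean", out score);       // 0 if absent

    If that footprint is too large for one machine, the natural middle ground is a disk-backed key-value store rather than a graph database: the lookup pattern here is exact-match, which hash-style stores serve well.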

    Read the article

  • Get script for every action in SQL Server Management Studio

    I am always conscious to keep a record of all operations performed on my database servers. Operations through T-SQL in an SSMS query pane can easily be saved in query files. For table modifications through the SSMS designer I have a predefined setting to generate T-SQL scripts. However, there are numerous database- and server-level tasks for which I use the SSMS GUI, and I would like to have a script of these changes for later reference. Examples of such actions through the SSMS GUI are backup/restore, changing the compatibility level of a database, manipulating permissions, dealing with database or log files, or creating/manipulating any login/user. I am looking for any way to generate T-SQL code for such actions, so that it may be kept for later reference.

    Read the article

  • A Better Way To Extract Date From DateTime Columns [SQL Server]

    - by Gopinath
    Quite a long time ago I wrote about a SQL Server programming tip on how to extract the date part from a DATETIME column. The post discusses using the T-SQL function convert() to get the date part. One of the readers of the post tipped me about a better way of extracting the date part, and here is the SQL query he sent to us: SELECT DateAdd(day, DateDiff(day, 0, getdate()), 0); This query cleanly trims the time part off the DATETIME value: DateDiff(day, 0, getdate()) counts the number of whole days from day zero (1900-01-01) to now, and DateAdd adds that count back to day zero, landing on midnight of the current day. I rate this solution better than the one I wrote long ago, as this one does not depend on any string operations. According to the commenter, this method is also faster. What do you say? Thanks Yamo. This article titled, A Better Way To Extract Date From DateTime Columns [SQL Server], was originally published at Tech Dreams.

    Read the article

  • E-Business Suite Applications Technology Group (ATG) Advisor Webcast Program – June 2014

    - by LuciaC-Oracle
    Webcast: EBS Patch Wizard Overview
    Date: Thursday June 12, 2014 at 18:00 UK / 10:00 PST / 11:00 MST / 13:00 EST
    Topics covered will include: What is the Oracle EBS Patch Wizard Tool? Benefits of the Patch Wizard utility; Patch Wizard Interface; Patch Impact Analysis.
    Details & Registration: Doc ID 1672371.1. Direct registration link.
    Webcast: EBS Proactive Analyzers Overview
    Date: Thursday June 26, 2014 at 18:00 UK / 10:00 PST / 11:00 MST / 13:00 EST
    Topics covered will include: What are Oracle Support Analyzers? How to identify the Analyzers available; Short Analyzers Overview; Patch Impact Analysis; How to use an Analyzer.
    Details & Registration: Doc ID 1672363.1. Direct registration link.
    If you have any questions about the schedules or if you have a suggestion for an Advisor Webcast to be planned in future, please send an E-Mail to Ruediger Ziegler.

    Read the article

  • WSS V3 and connections to its internal database

    - by ptahiliani
    Have you ever wanted to connect to the "Windows Internal" database that WSS V3 uses? While "Windows Internal Database" is Microsoft SQL Server 2005 in a limited edition (just like MSDE and WMSDE before it), the familiar access tools to the DB went missing, and connecting using standard ways doesn't work either. It doesn't work right out of the box. First, you need SQL Server Management Studio Express. Install and start it, then connect to the following server name: \\.\pipe\mssql$microsoft##ssee\sql\query Please note that, as implied by the pipe name, this connection only works locally. If you are looking for the connection string, then here it is: "Provider=Sqloledb;Data Source=\\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query;Database=SUSDB;Trusted_Connection=yes"
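
    A quick sketch of using that connection string from C# (the string and the SUSDB database name are taken verbatim from the post; as noted there, it only works on the machine itself):

        using System.Data.OleDb;

        var cs = @"Provider=Sqloledb;" +
                 @"Data Source=\\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query;" +
                 @"Database=SUSDB;Trusted_Connection=yes";

        using (var conn = new OleDbConnection(cs))
        {
            conn.Open();
            // ... query the Windows Internal Database here (local connections only)
        }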

    Read the article

  • Cannot save all of the property settings for this Web Part.

    - by ybbest
    I would like to display all the items of a custom content type, Animals, in a SharePoint Content Query Web Part. After choosing the appropriate values in the web part editor and clicking Save, I got this error: Cannot save all of the property settings for this Web Part. There is a problem with one or more of the field values below. But when I examined all the values below, it did not flag any error information. I finally managed to locate the error flag after I expanded the Presentation section. I then deleted the text in the Link textbox; now I can save the settings. However, I think the error message should have been more specific so that users can quickly locate the error. The worst part is that I did not even change anything in the Presentation section; I merely configured the Source in the Query section. Well, I guess I am still new to SharePoint, I just have to get used to these generic error messages ):

    Read the article

  • SSRS optional parameters settings

    - by Natasa Gavrilovic
    Recently I had to create a couple of SQL Server Reporting Services (SSRS) reports with optional parameters built in. It took me a while to refresh my memory on how this can be done. It was very simple to create the reports and the processes behind them, but connecting these two was a little bit challenging: the stored procedure was tested and worked fine, but when the report passed optional parameters it didn't return the expected results. After tweaking the SQL stored procedures and the reports' parameter options, the following approach turned out to be the winning one.
    1) Defining report parameters:
    - From the menu bar select 'View' and 'Report Data'.
    - The newly opened window should have a 'Parameters' folder displayed.
    - Right click on this folder and select 'Add new parameter...'.
    - Default values need to be added from a query.
    - The query values need to include '' (empty string), as highlighted.
    2) The SQL stored procedure should have CASE statements inside the WHERE clause (e.g. WHERE o.Status = CASE WHEN @status = '' THEN o.Status ELSE @status END), and it was the only way that the report was getting correct results back.

    Read the article

  • Oracle Solaris Preflight Applications Checker 11.2 now available

    - by CarylTakvorian-Oracle
    ISV Engineering is happy to announce the release of the latest version of our Solaris Preflight Checker tool supporting Solaris 11.2, which is now available for download. The Solaris Preflight Checker enables a developer to determine the Oracle Solaris 11.2 readiness of an application by analyzing a working application on Oracle Solaris 10. A successful check with this tool is a strong indicator that an application will run unmodified on the latest Oracle Solaris 11. This release includes: an updated symbol database which will help migration from Solaris 10 to Solaris 11.2; kernel binary and source scanners that now detect usage of data structures changed between Solaris 10 and Solaris 11.2; an application analyzer, which looks for usage of specific Solaris features and recommends better ways of implementing the same on Solaris 11.2 (e.g. suitability of high performance libraries shipped with Solaris, crypto offload for Java & C based applications, etc.); and bug fixes.

    Read the article

  • IXRepository and test problems

    - by Ridermansb
    Recently I had a doubt about how and where to test repository methods. Consider the following situation: I have an interface IRepository like this: public interface IRepository<T> where T : class, IEntity { IQueryable<T> Query(Expression<Func<T, bool>> expression); // ... omitted } And a generic implementation of IRepository: public class Repository<T> : IRepository<T> where T : class, IEntity { public IQueryable<T> Query(Expression<Func<T, bool>> expression) { return All().Where(expression).AsQueryable(); } } This is a base implementation that can be used by any repository. It contains the basic implementation of my ORM. Some repositories have specific filters; in that case we have IEmployeeRepository with a specific filter: public interface IEmployeeRepository : IRepository<Employee> { IQueryable<Employee> GetInactiveEmployees(); } And the implementation of IEmployeeRepository: public class EmployeeRepository : Repository<Employee>, IEmployeeRepository // TODO: I have a dependency on the ORM at this point in Repository<Employee>. How to solve this? How to test the GetInactiveEmployees method? { public IQueryable<Employee> GetInactiveEmployees() { return Query(p => p.Status != StatusEmployeeEnum.Active || p.StartDate < DateTime.Now); } } Questions: Is it right to inherit Repository<Employee>? The goal is to reuse code, since the whole implementation of IRepository has already been made. If EmployeeRepository inherited only IEmployeeRepository, I would have to literally copy and paste the code of Repository<T>. In our example, in EmployeeRepository : Repository<Employee>, our Repository lies in our ORM layer; we have a dependency on our ORM here that makes it impossible to perform a unit test. How do I create a unit test to ensure that the filter GetInactiveEmployees returns all Employees where Status != Active or StartDate < DateTime.Now? I cannot create a Fake/Mock of IEmployeeRepository, because then what would I be testing? I need to test the actual implementation of GetInactiveEmployees. The complete code can be found on GitHub.
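
    Not from the post, but one common way out of the dilemma in the second question: if All() in Repository<T> is (or can be made) virtual, a fake subclass can supply in-memory data, so the real GetInactiveEmployees filter is exercised without the ORM. A sketch, with the Employee shape assumed:

        public class FakeEmployeeRepository : EmployeeRepository
        {
            private readonly IQueryable<Employee> _data;
            public FakeEmployeeRepository(IEnumerable<Employee> data) { _data = data.AsQueryable(); }
            public override IQueryable<Employee> All() { return _data; }
        }

        // The filter itself is what gets tested; no mock of IEmployeeRepository involved:
        var repo = new FakeEmployeeRepository(new[]
        {
            new Employee { Status = StatusEmployeeEnum.Active,   StartDate = DateTime.Now.AddDays(1) }, // excluded
            new Employee { Status = StatusEmployeeEnum.Inactive, StartDate = DateTime.Now.AddDays(1) }, // included
        });
        Debug.Assert(repo.GetInactiveEmployees().Count() == 1);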

    Read the article

  • Slow in the Application, Fast in SSMS? Understanding Performance Mysteries

    When I read various forums about SQL Server, I frequently see questions from deeply mystified posters. They have identified a slow query or stored procedure in their application. They cull the SQL batch from the application and run it in SQL Server Management Studio (SSMS) to analyse it, only to find that the response is instantaneous. At this point they are inclined to think that SQL Server is all about magic. A similar mystery is when a developer has extracted a query in his stored procedure to run it stand-alone only to find that it runs much faster – or much slower – than inside the procedure.
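
    For readers who want something to try while they read: one frequently cited factor in this mystery (hedged here, and not necessarily the article's whole answer) is that SSMS runs with SET ARITHABORT ON by default while ADO.NET clients leave it OFF, so the application and SSMS get separate cached plans, each compiled by sniffing different parameter values. A quick experiment from application code:

        using System.Data.SqlClient;

        // Hypothetical diagnostic only: mimic SSMS's session setting from the app.
        // If the query suddenly runs fast, you are comparing two cached plans,
        // not magic.
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var cmd = new SqlCommand("SET ARITHABORT ON;", conn))
                cmd.ExecuteNonQuery();
            // ... now run the "slow" query or procedure exactly as the application does
        }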

    Read the article

  • Classless tables possible with Datamapper?

    - by barerd
    I have an Item class with the following attributes: itemId, name, weight, volume, price, required_skills, required_items. Since the last two attributes are going to be multivalued, I removed them and created new schemas like: itemID, required_skill (itemID is a foreign key; itemID and required_skill together are the primary key.) Now I'm confused about how to create/use this new table. Here are the options that came to my mind: 1) The relationship between Items and Required_skills is one-to-many, so I may create a RequiredSkill class, which belongs_to Item, which in turn has n RequiredSkills. Then I can do Item.get(1).requiredskills. This sounds most logical to me. 2) Since required_skills may well be thought of as constants (since they resemble rules), I may put them into a hash or gdbm database or another SQL table and query from there, which I don't prefer. My question is: is there something like a model-less table in DataMapper, where DataMapper is responsible for the creation and integrity of the table and allows me to query it in the DataMapper way, but does not require a class, like I may do it in SQL?

    Read the article

  • BIP 11g Dynamic SQL

    - by Tim Dexter
    Back in the 10g release, if you wanted something beyond the standard query for your report extract, you needed to break out your favorite text editor. You gotta love 'vi' and hate emacs, am I right? And get to building a data template; they were/are lovely to write, such fun ... not! It's not fun writing them by hand, but you do get to do some cool stuff around the data extract, including dynamic SQL. By that I mean the ability to add content dynamically to your query at runtime. With 11g, we spoiled you with a visual builder: no more vi or notepad sessions, but a friendly drag and drop interface allowing you to build hierarchical data sets, calculated columns, summary columns, etc. You can still create the dynamic SQL statements; it's not so well documented right now, so in lieu of doc updates here's the skinny. If you check out the 10g process to create dynamic SQL in the docs, you'll see you need to create a data trigger function where you assign the dynamic SQL to a global variable that's matched in your report SQL. In 11g, the process is really the same; BI Publisher just provides a bit more help to define what trigger code needs to be called. You still need to create the function and place it inside a package in the db. Here's a simple PL/SQL package with the 'beforedata' function trigger. Spec: create or replace PACKAGE BIREPORTS AS whereCols varchar2(2000); FUNCTION beforeReportTrig return boolean; end BIREPORTS; Body: create or replace PACKAGE BODY BIREPORTS AS FUNCTION beforeReportTrig return boolean AS BEGIN whereCols := ' and d.department_id = 100'; RETURN true; END beforeReportTrig; END BIREPORTS; You'll notice the additional where clause (whereCols, declared as a public variable) is hard coded. I'll cover parameterizing that in my next post; if you cannot wait, check the 10g docs for an example. I have my package compiling successfully in the db. Now, onto the BI Publisher data model definition.
    1. Create a new data model and go ahead and create your query(s) as you would normally.
    2. In the query dialog box, add in the variables you want replaced at runtime using an ampersand rather than a colon, e.g. &whereCols: select d.DEPARTMENT_NAME, ... from "OE"."EMPLOYEES" e, "OE"."DEPARTMENTS" d where d."DEPARTMENT_ID" = e."DEPARTMENT_ID" &whereCols Note that 'whereCols' matches the global variable name in our package. When you click OK to clear the dialog, you'll be asked for a default value for the variable; just use ' and 1=1'. That leading space is important to keep the SQL valid, i.e. required whitespace. This value will be used for the where clause in case it's not set by the function code.
    3. Now click on the Event Triggers tree node and create a new trigger of the type Before Data. Type in the default package name, in my example 'BIREPORTS', then hit the update button to get BIP to fetch the valid functions. In my case I get to see the following: Select the BEFOREREPORTTRIG function (or your own function name) and shuttle it across.
    4. Save your data model and now test it. For now, you can update the where clause via the PL/SQL package. Next time ... parameterizing the dynamic clause.

    Read the article

  • Free eBook: SQL Server Execution Plans, Second Edition

    Every day, out in the various online forums devoted to SQL Server, and on Twitter, the same types of questions come up repeatedly: Why is this query running slowly? Why is SQL Server ignoring my index? Why does this query run quickly sometimes and slowly at others? My response is the same in each case: have you looked at the execution plan?

    Read the article

  • What is the best way to INSERT a large dataset into a MySQL database (or any database in general)

    - by Joe
    As part of a PHP project, I have to insert a row into a MySQL database. I'm obviously used to doing this, but this required inserting into 90 columns in one query. The resulting query looks horrible and monolithic (especially with my PHP variables inserted as the values): INSERT INTO mytable (column1, column2, ..., column90) VALUES ('value1', 'value2', ..., 'value90') and I'm concerned that I'm not going about this in the right way. It also took me a long (boring) time just to type everything in, and I fear writing the test code will be equally tedious. How do professionals go about quickly writing and testing these queries? Is there a way I can speed up the process?
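
    One common tactic, whatever the language: derive the column list, the placeholder list, and the bound values from a single map, so the 90 names are typed exactly once. A hedged sketch of the idea (in C# rather than PHP, to match the rest of this page; names are illustrative, and column names must come from a trusted/whitelisted source, since identifiers cannot be parameterized):

        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        var row = new Dictionary<string, object>
        {
            ["column1"] = "value1",
            // ...
            ["column90"] = "value90",
        };

        var cols  = string.Join(", ", row.Keys);
        var parms = string.Join(", ", row.Keys.Select(k => "@" + k));

        using (IDbCommand cmd = connection.CreateCommand())   // any ADO.NET connection (assumed open)
        {
            cmd.CommandText = "INSERT INTO mytable (" + cols + ") VALUES (" + parms + ")";
            foreach (var kv in row)
            {
                var p = cmd.CreateParameter();
                p.ParameterName = "@" + kv.Key;
                p.Value = kv.Value;
                cmd.Parameters.Add(p);
            }
            cmd.ExecuteNonQuery();
        }

    Testing gets easier for the same reason: one loop over the map can assert that every column made it into the generated SQL.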

    Read the article
