Search Results

Search found 1650 results on 66 pages for 'indexes'.

  • two different virtual hosts, one page displayed

    - by majdal
    Hello! I have two different sites configured using virtual hosts (the content of the virtualhost files is posted below) i just copied the default file and edited a few lines... When i direct my browser to either of the two sites, only the content of the first of the two appears... Why? <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /var/www/hunterprojects.com/public_html <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/hunterprojects.com/public_html> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> AND THE SECOND ONE: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /var/www/dodolabarchive.ca/public_html <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/dodolabarchive.ca/public_html> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost>

    Read the article

  • Index fragmentation and reorganizing database pages

    - by TiQ
    Say you have a database with heavy index fragmentation. Say this database also has a lot of free space due to frequent deletes in its data file. This free space is not contiguous. If I rebuild all indexes to remove fragmentation and then reorganize the database pages so allocated pages and free pages are contiguous, would this cause further fragmentation in my indexes? I guess the question can be posed as: if it matters, which should I do first, reorganize or rebuild?
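
    For readers weighing the two operations, here is a minimal T-SQL sketch of the commands involved (object and file names are placeholders, not from the question). A rebuild recreates the index pages contiguously, a reorganize only compacts the existing leaf pages in place, and a file shrink moves pages around and is itself a common cause of renewed index fragmentation – which is why, if a shrink is used to consolidate the free space, it is normally run before the rebuild rather than after it.

        -- Reorganize: lightweight, in-place compaction of the leaf level
        ALTER INDEX ALL ON dbo.MyTable REORGANIZE;

        -- Rebuild: drops and recreates the index, producing freshly allocated, contiguous pages
        ALTER INDEX ALL ON dbo.MyTable REBUILD;

        -- Shrinking the data file consolidates free space but shuffles pages,
        -- re-fragmenting indexes in the process (target size in MB is illustrative)
        DBCC SHRINKFILE (N'MyDatabase_Data', 4096);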

    Read the article

  • mysql 5.0.23 vs 5.5 performance benefits and upgrade issues?

    - by WarDoGG
    I have been told that mysql 5.5 has a significant performance boost compared to 5.0 Our server handles a lot of data (around 30 million records processed per 5-10 seconds) and requires every drop of performance boost we can give. Will it be beneficial if we upgrade from 5.0.23 to mysql 5.5? Also, we have lots of database indexes setup on the tables and I've been told that sometimes the indexes become corrupt after a version upgrade and they have to be rebuilt. Is this true?
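
    For what it's worth, indexes can be verified and rebuilt explicitly after an upgrade; a hedged sketch with a placeholder table name is below (mysql_upgrade should also be run after moving to 5.5 to fix up the system tables).

        -- Check a table (and its indexes) for corruption after the upgrade
        CHECK TABLE orders;

        -- Rebuild an InnoDB table and all of its indexes in place
        ALTER TABLE orders ENGINE=InnoDB;

        -- For MyISAM tables, REPAIR TABLE rebuilds the indexes instead
        REPAIR TABLE orders;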

    Read the article

  • index help for a MySQL query using greater-than operator and ORDER BY

    - by Jaymon
    I have a table with at least a couple million rows and a schema of all integers that looks roughly like this: start stop first_user_id second_user_id The rows get pulled using the following queries: SELECT * FROM tbl_name WHERE stop >= M AND first_user_id=N AND second_user_id=N ORDER BY start ASC SELECT * FROM tbl_name WHERE stop >= M AND first_user_id=N ORDER BY start ASC I cannot figure out the best indexes to speed up these queries. The problem seems to be the ORDER BY because when I take that out the queries are fast. I've tried all different types of indexes using the standard index format: ALTER TABLE tbl_name ADD INDEX index_name (index_col_1,index_col_2,...) And none of them seem to speed up the queries. Does anyone have any idea what index would work? Also, should I be trying a different type of index? I can't guarantee the uniqueness of each row so I've avoided UNIQUE indexes. Any guidance/help would be appreciated. Thanks!
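
    One arrangement that is commonly tried for this shape of query (equality columns first, then the ORDER BY column) is sketched below using the column names from the question; the stop >= M range then becomes a post-filter, but the sort can be satisfied by the index. Whether it actually helps should be confirmed with EXPLAIN.

        -- Covers the first query: equality on both user ids, rows already ordered by start
        ALTER TABLE tbl_name ADD INDEX idx_users_start (first_user_id, second_user_id, start);

        -- Covers the second query, which filters on first_user_id only
        ALTER TABLE tbl_name ADD INDEX idx_user_start (first_user_id, start);

        -- Check whether the filesort disappears
        EXPLAIN SELECT * FROM tbl_name
        WHERE stop >= 100 AND first_user_id = 5 AND second_user_id = 7
        ORDER BY start ASC;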

    Read the article

  • How to cycle through matrix blocks?

    - by luiss
    I have a matrix whose blocks I want to cycle through. The matrix could be of many different sizes, but I know the size; is there a fast way to cycle through the blocks, i.e. to quickly output the indexes of each block? For a matrix of 4*4 I should have: Block1: (0,0),(0,1),(1,0),(1,1) Block2: (0,2),(0,3),(1,2),(1,3) Block3: (2,0),(2,1),(3,0),(3,1) Block4: (2,2),(2,3),(3,2),(3,3) where the indexes are (row,col). By blocks I mean submatrices of size sqrt(matrixSize)*sqrt(matrixSize), where the matrix is matrixSize*matrixSize. For example, a matrix of 4*4 has 4 blocks of 2*2, and a 9*9 has 9 blocks of 3*3... I'm working in C, but I think pseudocode is useful also; I only need the loop over the indexes... Thanks

    Read the article

  • Microsoft SQL Server 2008 - 99% fragmentation on non-clustered, non-unique index

    - by user550441
    I have a table with several indexes (defined below). One of the indexes (IX_external_guid_3) has 99% fragmentation regardless of rebuilding/reorganizing the index. Anyone have any idea as to what might cause this, or the best way to fix it? We are using Entity Framework 4.0 to query this; the EF queries on the other indexed fields are about 10x faster on average than on the external_guid_3 field, however an ADO.Net query is roughly the same speed on both (though 2x slower than the EF query to indexed fields). Table id(PK, int, not null) guid(uniqueidentifier, null, rowguid) external_guid_1(uniqueidentifier, not null) external_guid_2(uniqueidentifier, null) state(varchar(32), null) value(varchar(max), null) infoset(XML(.), null) -- usually 2-4K created_time(datetime, null) updated_time(datetime, null) external_guid_3(uniqueidentifier, not null) FK_id(FK, int, null) locking_guid(uniqueidentifier, null) locked_time(datetime, null) external_guid_4(uniqueidentifier, null) corrected_time(datetime, null) is_add(bit, not null) score(int, null) row_version(timestamp, null) Indexes PK_table(Clustered) IX_created_time(Non-Unique, Non-Clustered) IX_external_guid_1(Non-Unique, Non-Clustered) IX_guid(Non-Unique, Non-Clustered) IX_external_guid_3(Non-Unique, Non-Clustered) IX_state(Non-Unique, Non-Clustered)
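
    One detail often pointed out with uniqueidentifier keys is that random GUID values insert at arbitrary points in the index, splitting pages and re-fragmenting it quickly after each rebuild. A hedged sketch (the table name is a placeholder, and the fill factor value is illustrative) of rebuilding with room left on each page, then re-checking fragmentation:

        ALTER INDEX IX_external_guid_3 ON dbo.MyTable
            REBUILD WITH (FILLFACTOR = 80);

        SELECT avg_fragmentation_in_percent, page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED')
        WHERE index_id = INDEXPROPERTY(OBJECT_ID('dbo.MyTable'), 'IX_external_guid_3', 'IndexID');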

    Read the article

  • Approach for altering Primary Key from GUID to BigInt in SQL Server related tables

    - by Tom
    I have two tables with 10-20 million rows that have GUID primary keys and at least 12 tables related via foreign key. The base tables have 10-20 indexes each. We are moving from GUID to BigInt primary keys. I'm wondering if anyone has any suggestions on an approach. Right now this is the approach I'm pondering: Drop all indexes and fkeys on all the tables involved. Add a 'NewPrimaryKey' column to each table Make the new key an identity column on the two base tables Script the data change "update table x set NewPrimaryKey = y where OldPrimaryKey = z" Rename the original primary key to 'OldPrimaryKey' Rename the 'NewPrimaryKey' column 'PrimaryKey' Script back all the indexes and fkeys Does this seem like a good approach? Does anyone know of a tool or script that would help with this? TD: Edited per additional information. See this blog post that addresses an approach when the GUID is the Primary: http://www.sqlmag.com/blogs/sql-server-questions-answered/sql-server-questions-answered/tabid/1977/entryid/12749/Default.aspx
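
    As a rough illustration of the add-and-backfill steps above, here is a hedged T-SQL sketch with placeholder table and column names; on tables of 10-20 million rows the back-fill UPDATE is usually run in batches to keep the transaction log manageable.

        -- New identity key on a base table (existing rows are numbered automatically)
        ALTER TABLE dbo.BaseTable ADD NewPrimaryKey BIGINT IDENTITY(1,1) NOT NULL;

        -- Child tables get a plain BIGINT column...
        ALTER TABLE dbo.ChildTable ADD NewParentKey BIGINT NULL;

        -- ...which is back-filled by joining on the old GUID values
        UPDATE c
        SET    c.NewParentKey = b.NewPrimaryKey
        FROM   dbo.ChildTable AS c
        JOIN   dbo.BaseTable  AS b ON b.OldPrimaryKey = c.OldParentKey;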

    Read the article

  • what is the best algorithm to use for this problem

    - by slim
    Equilibrium index of a sequence is an index such that the sum of elements at lower indexes is equal to the sum of elements at higher indexes. For example, in a sequence A: A[0]=-7 A[1]=1 A[2]=5 A[3]=2 A[4]=-4 A[5]=3 A[6]=0 3 is an equilibrium index, because: A[0]+A[1]+A[2]=A[4]+A[5]+A[6] 6 is also an equilibrium index, because: A[0]+A[1]+A[2]+A[3]+A[4]+A[5]=0 (sum of zero elements is zero) 7 is not an equilibrium index, because it is not a valid index of sequence A. If you still have doubts, this is a precise definition: the integer k is an equilibrium index of a sequence A of length N if and only if 0 <= k < N and A[0] + A[1] + ... + A[k-1] = A[k+1] + ... + A[N-1]. Assume the sum of zero elements equals zero. Write a function int equi(int[] A); that, given a sequence, returns any of its equilibrium indexes, or -1 if no equilibrium index exists. Assume that the sequence may be very long.

    Read the article

  • Apache redirect when a user's home directory is completely empty

    - by Scott M
    I work for an ISP and I have a server holding thousands of users' 10MB of free storage. They get this free storage with every e-mail account they have with us. An example of a user's storage address: http://users.example.com/~username/ One problem I can see is someone scanning the server for user names to see which accounts are available, basically getting a list of all our customers' valid e-mail addresses. This would be very, very bad. So I want to redirect to our homepage if someone comes across a user's account that is empty (I'd say 90% of them are completely empty). I also do not want to simply set -Indexes on them and use a custom 403, because the few customers that do use them want +Indexes. I know I can always just tell the customers to put an .htaccess file in their directory with Options +Indexes if they want directory listing, but that's a last resort. How can I make it pretty much impossible to tell which accounts are on the server but not in use at all?

    Read the article

  • Bugzilla ./testserver.pl failing

    - by SomeKittens
    root@KittensTest:/var/www/Bugzilla/bugzilla-4.2.1# ./testserver.pl http://localhost/Bugzilla/bugzilla-4.2.1 TEST-OK Webserver is running under group id in $webservergroup. TEST-OK Got padlock picture. TEST-FAILED Webserver is fetching rather than executing CGI files. Check the AddHandler statement in your httpd.conf file. Well then. httpd.conf (from here[2.2.4.1.1]): <Directory /var/www/Bugzilla/bugzilla-4.2.1> AddHandler cgi-script .cgi .pl Options +Indexes +Includes +ExecCGI DirectoryIndex index.cgi AllowOverride Limit FileInfo Indexes </Directory> What am I doing wrong? I'm pretty new to this (first Bugzilla install), so I'll appreciate explanation.

    Read the article

  • SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28

    - by pinaldave
    It is very easy to say that you replace your hardware as that is not up to the mark. In reality, it is very difficult to implement. It is really hard to convince an infrastructure team to change any hardware because they are not performing at their best. I had a nightmare related to this issue in a deal with an infrastructure team as I suggested that they replace their faulty hardware. This is because they were initially not accepting the fact that it is the fault of their hardware. But it is really easy to say “Trust me, I am correct”, while it is equally important that you put some logical reasoning along with this statement. PAGEIOLATCH_XX is such a kind of those wait stats that we would directly like to blame on the underlying subsystem. Of course, most of the time, it is correct – the underlying subsystem is usually the problem. From Book On-Line: PAGEIOLATCH_DT Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Destroy mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_EX Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_KP Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Keep mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_SH Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_UP Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Update mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_XX Explanation: Simply put, this particular wait type occurs when any of the tasks is waiting for data from the disk to move to the buffer cache. ReducingPAGEIOLATCH_XX wait: Just like any other wait type, this is again a very challenging and interesting subject to resolve. Here are a few things you can experiment on: Improve your IO subsystem speed (read the first paragraph of this article, if you have not read it, I repeat that it is easy to say a step like this than to actually implement or do it). This type of wait stats can also happen due to memory pressure or any other memory issues. Putting aside the issue of a faulty IO subsystem, this wait type warrants proper analysis of the memory counters. If due to any reasons, the memory is not optimal and unable to receive the IO data. This situation can create this kind of wait type. Proper placing of files is very important. We should check file system for the proper placement of files – LDF and MDF on separate drive, TempDB on separate drive, hot spot tables on separate filegroup (and on separate disk), etc. Check the File Statistics and see if there is higher IO Read and IO Write Stall SQL SERVER – Get File Statistics Using fn_virtualfilestats. It is very possible that there are no proper indexes on the system and there are lots of table scans and heap scans. Creating proper index can reduce the IO bandwidth considerably. If SQL Server can use appropriate cover index instead of clustered index, it can significantly reduce lots of CPU, Memory and IO (considering cover index has much lesser columns than cluster table and all other it depends conditions). 
You can refer to the two articles’ links below previously written by me that talk about how to optimize indexes. Create Missing Indexes Drop Unused Indexes Updating statistics can help the Query Optimizer to render optimal plan, which can only be either directly or indirectly. I have seen that updating statistics with full scan (again, if your database is huge and you cannot do this – never mind!) can provide optimal information to SQL Server optimizer leading to efficient plan. Checking Memory Related Perfmon Counters SQLServer: Memory Manager\Memory Grants Pending (Consistent higher value than 0-2) SQLServer: Memory Manager\Memory Grants Outstanding (Consistent higher value, Benchmark) SQLServer: Buffer Manager\Buffer Hit Cache Ratio (Higher is better, greater than 90% for usually smooth running system) SQLServer: Buffer Manager\Page Life Expectancy (Consistent lower value than 300 seconds) Memory: Available Mbytes (Information only) Memory: Page Faults/sec (Benchmark only) Memory: Pages/sec (Benchmark only) Checking Disk Related Perfmon Counters Average Disk sec/Read (Consistent higher value than 4-8 millisecond is not good) Average Disk sec/Write (Consistent higher value than 4-8 millisecond is not good) Average Disk Read/Write Queue Length (Consistent higher value than benchmark is not good) Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All of the discussions of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it to a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
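
    A small sketch of the checks described above – accumulated PAGEIOLATCH waits and the per-file read/write stalls (using the DMV equivalent of the fn_virtualfilestats function referenced in the post); nothing here is specific to any one system.

        -- PAGEIOLATCH_* waits accumulated since the last restart
        SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type LIKE 'PAGEIOLATCH%'
        ORDER BY wait_time_ms DESC;

        -- Per-file IO stalls: high stall-to-operation ratios point at the files under pressure
        SELECT DB_NAME(database_id) AS database_name, file_id,
               num_of_reads, io_stall_read_ms,
               num_of_writes, io_stall_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL);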

    Read the article

  • NoSQL with RavenDB and ASP.NET MVC - Part 2

    - by shiju
    In my previous post, we have discussed on how to work with RavenDB document database in an ASP.NET MVC application. We have setup RavenDB for our ASP.NET MVC application and did basic CRUD operations against a simple domain entity. In this post, let’s discuss on domain entity with deep object graph and how to query against RavenDB documents using Indexes.Let's create two domain entities for our demo ASP.NET MVC appplication  public class Category {       public string Id { get; set; }     [Required(ErrorMessage = "Name Required")]     [StringLength(25, ErrorMessage = "Must be less than 25 characters")]     public string Name { get; set;}     public string Description { get; set; }     public List<Expense> Expenses { get; set; }       public Category()     {         Expenses = new List<Expense>();     } }    public class Expense {       public string Id { get; set; }     public Category Category { get; set; }     public string  Transaction { get; set; }     public DateTime Date { get; set; }     public double Amount { get; set; }   }  We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category.Let's create  ASP.NET MVC view model  for Expense transaction public class ExpenseViewModel {     public string Id { get; set; }       public string CategoryId { get; set; }       [Required(ErrorMessage = "Transaction Required")]            public string Transaction { get; set; }       [Required(ErrorMessage = "Date Required")]            public DateTime Date { get; set; }       [Required(ErrorMessage = "Amount Required")]     public double Amount { get; set; }       public IEnumerable<SelectListItem> Category { get; set; } } Let's create a contract type for Expense Repository  public interface IExpenseRepository {     Expense Load(string id);     IEnumerable<Expense> GetExpenseTransactions(DateTime startDate,DateTime endDate);     void Save(Expense expense,string categoryId);     void Delete(string id);  } Let's create a concrete type for Expense Repository for handling CRUD operations. 
public class ExpenseRepository : IExpenseRepository {   private IDocumentSession session; public ExpenseRepository() {         session = MvcApplication.CurrentSession; } public Expense Load(string id) {     return session.Load<Expense>(id); } public IEnumerable<Expense> GetExpenseTransactions(DateTime startDate, DateTime endDate) {             //Querying using the Index name "ExpenseTransactions"     //filtering with dates     var expenses = session.LuceneQuery<Expense>("ExpenseTransactions")         .WaitForNonStaleResults()         .Where(exp => exp.Date >= startDate && exp.Date <= endDate)         .ToArray();     return expenses; } public void Save(Expense expense,string categoryId) {     var category = session.Load<Category>(categoryId);     if (string.IsNullOrEmpty(expense.Id))     {         //new expense transaction         expense.Category = category;         session.Store(expense);     }     else     {         //modifying an existing expense transaction         var expenseToEdit = Load(expense.Id);         //Copy values to  expenseToEdit         ModelCopier.CopyModel(expense, expenseToEdit);         //set category object         expenseToEdit.Category = category;       }     //save changes     session.SaveChanges(); } public void Delete(string id) {     var expense = Load(id);     session.Delete<Expense>(expense);     session.SaveChanges(); }   }  Insert/Update Expense Transaction The Save method is used for both insert a new expense record and modifying an existing expense transaction. For a new expense transaction, we store the expense object with associated category into document session object and load the existing expense object and assign values to it for editing a existing record.  public void Save(Expense expense,string categoryId) {     var category = session.Load<Category>(categoryId);     if (string.IsNullOrEmpty(expense.Id))     {         //new expense transaction         expense.Category = category;         session.Store(expense);     }     else     {         //modifying an existing expense transaction         var expenseToEdit = Load(expense.Id);         //Copy values to  expenseToEdit         ModelCopier.CopyModel(expense, expenseToEdit);         //set category object         expenseToEdit.Category = category;       }     //save changes     session.SaveChanges(); } Querying Expense transactions   public IEnumerable<Expense> GetExpenseTransactions(DateTime startDate, DateTime endDate) {             //Querying using the Index name "ExpenseTransactions"     //filtering with dates     var expenses = session.LuceneQuery<Expense>("ExpenseTransactions")         .WaitForNonStaleResults()         .Where(exp => exp.Date >= startDate && exp.Date <= endDate)         .ToArray();     return expenses; }  The GetExpenseTransactions method returns expense transactions using a LINQ query expression with a Date comparison filter. The Lucene Query is using a index named "ExpenseTransactions" for getting the result set. In RavenDB, Indexes are LINQ queries stored in the RavenDB server and would be  executed on the background and will perform query against the JSON documents. Indexes will be working with a lucene query expression or a set operation. Indexes are composed using a Map and Reduce function. Check out Ayende's blog post on Map/Reduce We can create index using RavenDB web admin tool as well as programmitically using its Client API. The below shows the screen shot of creating index using web admin tool. 
We can also create Indexes using Raven Cleint API as shown in the following code documentStore.DatabaseCommands.PutIndex("ExpenseTransactions",     new IndexDefinition<Expense,Expense>() {     Map = Expenses => from exp in Expenses                     select new { exp.Date } });  In the Map function, we used a Linq expression as shown in the following from exp in docs.Expensesselect new { exp.Date };We have not used a Reduce function for the above index. A Reduce function is useful while performing aggregate functions based on the results from the Map function. Indexes can be use with set operations of RavenDB.SET OperationsUnlike other document databases, RavenDB supports set based operations that lets you to perform updates, deletes and inserts to the bulk_docs endpoint of RavenDB. For doing this, you just pass a query to a Index as shown in the following commandDELETE http://localhost:8080/bulk_docs/ExpenseTransactions?query=Date:20100531The above command using the Index named "ExpenseTransactions" for querying the documents with Date filter and  will delete all the documents that match the query criteria. The above command is equivalent of the following queryDELETE FROM ExpensesWHERE Date='2010-05-31' Controller & ActionsWe have created Expense Repository class for performing CRUD operations for the Expense transactions. Let's create a controller class for handling expense transactions.   public class ExpenseController : Controller { private ICategoryRepository categoyRepository; private IExpenseRepository expenseRepository; public ExpenseController(ICategoryRepository categoyRepository, IExpenseRepository expenseRepository) {     this.categoyRepository = categoyRepository;     this.expenseRepository = expenseRepository; } //Get Expense transactions based on dates public ActionResult Index(DateTime? StartDate, DateTime? 
EndDate) {     //If date is not passed, take current month's first and last dte     DateTime dtNow;     dtNow = DateTime.Today;     if (!StartDate.HasValue)     {         StartDate = new DateTime(dtNow.Year, dtNow.Month, 1);         EndDate = StartDate.Value.AddMonths(1).AddDays(-1);     }     //take last date of startdate's month, if endate is not passed     if (StartDate.HasValue && !EndDate.HasValue)     {         EndDate = (new DateTime(StartDate.Value.Year, StartDate.Value.Month, 1)).AddMonths(1).AddDays(-1);     }       var expenses = expenseRepository.GetExpenseTransactions(StartDate.Value, EndDate.Value);     if (Request.IsAjaxRequest())     {           return PartialView("ExpenseList", expenses);     }     ViewData.Add("StartDate", StartDate.Value.ToShortDateString());     ViewData.Add("EndDate", EndDate.Value.ToShortDateString());             return View(expenses);            }   // GET: /Expense/Edit public ActionResult Edit(string id) {       var expenseModel = new ExpenseViewModel();     var expense = expenseRepository.Load(id);     ModelCopier.CopyModel(expense, expenseModel);     var categories = categoyRepository.GetCategories();     expenseModel.Category = categories.ToSelectListItems(expense.Category.Id.ToString());                    return View("Save", expenseModel);          }   // // GET: /Expense/Create   public ActionResult Create() {     var expenseModel = new ExpenseViewModel();               var categories = categoyRepository.GetCategories();     expenseModel.Category = categories.ToSelectListItems("-1");     expenseModel.Date = DateTime.Today;     return View("Save", expenseModel); }   // // POST: /Expense/Save // Insert/Update Expense Tansaction [HttpPost] public ActionResult Save(ExpenseViewModel expenseViewModel) {     try     {         if (!ModelState.IsValid)         {               var categories = categoyRepository.GetCategories();                 expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);                               return View("Save", expenseViewModel);         }           var expense=new Expense();         ModelCopier.CopyModel(expenseViewModel, expense);          expenseRepository.Save(expense, expenseViewModel.CategoryId);                       return RedirectToAction("Index");     }     catch     {         return View();     } } //Delete a Expense Transaction public ActionResult Delete(string id) {     expenseRepository.Delete(id);     return RedirectToAction("Index");     }     }     Download the Source - You can download the source code from http://ravenmvc.codeplex.com

    Read the article

  • SQL Server Scripts I Use

    - by Bill Graziano
    When I get to a new client I usually find myself using the same set of scripts for maintenance and troubleshooting.  These are all drop in solutions for various maintenance issues. Reindexing.  I use Michelle Ufford’s (SQLFool) re-indexing script.  I like that it has a throttle and only re-indexes when needed.  She also has a variety of other interesting scripts on her blog too. Server Activity.  Adam Machanic is up to version 10 of sp_WhoIsActive.  It’s a great replacement for the sp_who* stored procedures and does so much more.  If a server is acting funny this is one of the first tools I use. Backups.  Tara Kizer has a great little T-SQL script for SQL Server backups.  Wait Stats.  Paul Randal has a great script to display wait stats.  The biggest benefit for me is that his script filters out at least three dozen wait stats that I just don’t care about (for example LAZYWRITER_SLEEP). Update Statistics.  I didn’t find anything I liked so I wrote a simple script to update stats myself.  The big need for me was that it had to run inside a time window and update the oldest statistics first.  Is there a better one? Diagnostic Queries.  Glenn Berry has a huge collection of DMV queries available.  He also just highlighted five of them including two I really like dealing with unused indexes and suggested indexes. Single Use Query Plans.  Kim Tripp has a script that counts the number of single-use query plans.  This should guide you in whether to enable the Optimize for Adhoc Workloads option in SQL Server 2008. Granting Permissions to Developers.  This is one of those scripts I didn’t even know I needed until I needed it.  Kendra Little wrote it to grant a login read-only permission to all the databases.  It also grants view server state and a few other handy permissions.   What else do you use?  What should I add to my list?
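
    The update-statistics script mentioned above isn't reproduced here, but the core of the oldest-first idea can be sketched (this is an assumption about the approach, not the author's actual script):

        -- List user statistics oldest-first so a job can work through them
        -- until its maintenance window expires
        SELECT TOP (50)
               OBJECT_NAME(s.object_id)            AS table_name,
               s.name                              AS stats_name,
               STATS_DATE(s.object_id, s.stats_id) AS last_updated
        FROM   sys.stats AS s
        JOIN   sys.objects AS o ON o.object_id = s.object_id
        WHERE  o.is_ms_shipped = 0
        ORDER BY STATS_DATE(s.object_id, s.stats_id) ASC;

        -- Each one is then refreshed with something like:
        -- UPDATE STATISTICS dbo.SomeTable SomeStatistic WITH FULLSCAN;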

    Read the article

  • SQL SERVER – ColumnStore Index – Batch Mode vs Row Mode

    - by pinaldave
    What do you do when you are in a hurry and hear someone say something you do not agree with, or that is simply wrong? Well, let me tell you what I do, or rather what I recently did. I was walking by and heard someone mention, “Columnstore Indexes are really great as they are using Batch Mode which makes them seriously fast.” As I passed by and heard this statement, my first reaction was that Columnstore Indexes can use both – Batch Mode and Row Mode. I stopped, even though I was in a hurry, and asked the person if he meant that Columnstore Indexes are seriously fast because they use Batch Mode all the time, or that Batch Mode is one of the reasons for Columnstore Indexes to be faster. He responded that Columnstore Indexes can run only in Batch Mode. However, I do not like to confront anybody without hearing their complete story. Honestly, I like to share information and avoid confrontation as much as possible. There are always ways to communicate the same thing positively. Well, this is what I did: I quickly pulled up my earlier article on Columnstore Indexes and copied the script to SQL Server Management Studio. I created two versions of the script: 1) Very Large Table 2) Reasonably Small Table. I ran a query which uses the columnstore index on both of the versions. I found very interesting results in my tests. I saved my tests and sent them to the person who had said that Columnstore Indexes use Batch Mode only. He immediately acknowledged that indeed he was incorrect in saying that a Columnstore Index uses only Batch Mode. What really caught my attention is that he also thanked me for sending him a detailed email instead of just having an argument while he and I were both standing in the corridor with no way to prove any theory. Here are the screenshots of both scenarios. 1) Columnstore Index using Batch Mode 2) Columnstore Index using Row Mode Here is the logic behind when a Columnstore Index uses Batch Mode and when it uses Row Mode. A batch typically represents about 1000 rows of data. Batch mode processing also uses algorithms that are optimized for multicore CPUs and increased memory throughput.  Batch mode processing spreads metadata access costs and overhead over all the rows in a batch.  Batch mode processing operates on compressed data when possible, leading to superior performance. Here is one last point – a Columnstore Index can use Batch Mode or Row Mode, but Batch Mode processing is only available with a Columnstore Index. I hope this statement truly sums up the whole concept. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
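
    The original test scripts are in the linked article; a hedged sketch of the general shape of such a test (placeholder table and column names) is shown here – the mode actually used shows up as the "Actual Execution Mode" property of the Columnstore Index Scan operator in the actual execution plan.

        -- SQL Server 2012-style nonclustered columnstore index (names are illustrative)
        CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_CS
            ON dbo.FactSales (ProductKey, OrderQuantity);

        -- Run with "Include Actual Execution Plan" and inspect the scan operator's
        -- Actual Execution Mode property (Batch or Row)
        SELECT ProductKey, SUM(OrderQuantity) AS TotalQty
        FROM dbo.FactSales
        GROUP BY ProductKey;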

    Read the article

  • SQL SERVER – Guest Post by Sandip Pani – SQL Server Statistics Name and Index Creation

    - by pinaldave
    Sometimes something very small, or a common error which we observe in daily life, teaches us new things. SQL Server Expert Sandip Pani (winner of Joes 2 Pros Contests) has come across a similar experience. Sandip has written a guest post on an error he faced in his daily work. Sandip is working for QSI Healthcare as an Associate Technical Specialist and has more than 5 years of total experience. He blogs at SQLcommitted.com and contributes in various forums. His social media handles are LinkedIn, Facebook and Twitter. Once I faced the following error when I was working on a performance tuning project and attempted to create an index. Msg 1913, Level 16, State 1, Line 1 The operation failed because an index or statistics with name ‘Ix_Table1_1′ already exists on table ‘Table1′. My immediate reaction to the error was that I might have created that index earlier, and when I researched it further I found that the index had indeed been created two times. This totally makes sense. This can happen for many reasons, for example if the user is careless and executes the same code two times, or attempts to create an index without checking whether there was already an index on the object. However, when I paid attention to the details of the error, I realized that the error message also talks about statistics along with the index. I got curious whether the same would happen if I attempted to create an index with the same name as statistics that had already been created. A few other questions were also prompted in my mind. I decided to do a small demonstration of the subject and built the following demonstration script. The goal of my experiment is to find out the relationship between statistics and the index. Statistics are one of the important input parameters for the optimizer during the query optimization process. If the query is nontrivial, then the optimizer uses statistics to perform a cost based optimization to select a plan. For accuracy and further learning I suggest reading MSDN. Now let’s find out the relationship between index and statistics. We will do the experiment in two parts. i) Creating Index ii) Creating Statistics We will be using the following T-SQL script for our example. IF (OBJECT_ID('Table1') IS NOT NULL) DROP TABLE Table1 GO CREATE TABLE Table1 (Col1 INT NOT NULL, Col2 VARCHAR(20) NOT NULL) GO We will be using the following two queries to check if there are any indexes or statistics on our sample table Table1. -- Details of Index SELECT OBJECT_NAME(OBJECT_ID) AS TableName, Name AS IndexName, type_desc FROM sys.indexes WHERE OBJECT_NAME(OBJECT_ID) = 'table1' GO -- Details of Statistics SELECT OBJECT_NAME(OBJECT_ID) TableName, Name AS StatisticsName FROM sys.stats WHERE OBJECT_NAME(OBJECT_ID) = 'table1' GO When I ran the above two scripts on the table right after it was created, they did not give us any result, which was expected. Now let us begin our test. 1) Create an index on the table Create the following index on the table. CREATE NONCLUSTERED INDEX Ix_Table1_1 ON Table1(Col1) GO Now let us use the above two scripts and see their results. We can see that when we created the index, at the same time it created statistics also with the same name. Before continuing to the next set of demos – drop the table using the following script and re-create the table using the script provided at the beginning of the post. DROP TABLE table1 GO 2) Create a statistic on the table Create the following statistics on the table. CREATE STATISTICS Ix_table1_1 ON Table1 (Col1) GO Now let us use the above two scripts and see their results.
We can see that when we created statistics Index is not created. The behavior of this experiment is different from the earlier experiment. Clean up the table setup using the following script: DROP TABLE table1 GO Above two experiments teach us very valuable lesson that when we create indexes, SQL Server generates the index and statistics (with the same name as the index name) together. Now due to the reason if we have already had statistics with the same name but not the index, it is quite possible that we will face the error to create the index even though there is no index with the same name. A Quick Check To validate that if we create statistics first and then index after that with the same name, it will throw an error let us run following script in SSMS. Make sure to drop the table and clean up our sample table at the end of the experiment. -- Create sample table CREATE TABLE TestTable (Col1 INT NOT NULL, Col2 VARCHAR(20) NOT NULL) GO -- Create Statistics CREATE STATISTICS IX_TestTable_1 ON TestTable (Col1) GO -- Create Index CREATE NONCLUSTERED INDEX IX_TestTable_1 ON TestTable(Col1) GO -- Check error /*Msg 1913, Level 16, State 1, Line 2 The operation failed because an index or statistics with name 'IX_TestTable_1' already exists on table 'TestTable'. */ -- Clean up DROP TABLE TestTable GO While creating index it will throw the following error as statistics with the same name is already created. In simple words – when we create index the name of the index should be different from any of the existing indexes and statistics. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics
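
    A small defensive check follows directly from the article: because index creation also creates a statistics object of the same name, testing sys.stats for the intended name before CREATE INDEX covers both collision cases. A sketch using the article's sample objects:

        IF NOT EXISTS (SELECT 1 FROM sys.stats
                       WHERE object_id = OBJECT_ID('TestTable')
                         AND name = 'IX_TestTable_1')
        BEGIN
            CREATE NONCLUSTERED INDEX IX_TestTable_1 ON TestTable (Col1);
        END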

    Read the article

  • Configure clean URLs using Laravel using a rewrite rule to index.php

    - by yannis hristofakis
    Recently I've started learning Laravel; I have no previous experience with frameworks. I'm encountering the following problem. I'm trying to configure the .htaccess file so I can have clean URLs, but the only thing I get is 404 Not Found error pages. I have created a virtual host - you can see the configuration file below - and changed the .htaccess file in the public directory. /etc/apache2/sites-available <VirtualHost *:80> ServerAdmin [email protected] ServerName laravel.lar DocumentRoot "/home/giannis/Desktop/laravel/public" <Directory "/home/giannis/Desktop/laravel/public"> Options Indexes FollowSymLinks MultiViews AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> .htaccess file: laravel/public # Apache configuration file # http://httpd.apache.org/docs/current/mod/quickreference.html # Note: ".htaccess" files are an overhead for each request. This logic should # be placed in your Apache config whenever possible. # http://httpd.apache.org/docs/current/howto/htaccess.html # Turning on the rewrite engine is necessary for the following rules and # features. "+FollowSymLinks" must be enabled for this to work symbolically. <IfModule mod_rewrite.c> Options +FollowSymLinks RewriteEngine On </IfModule> # For all files not found in the file system, reroute the request to the # "index.php" front controller, keeping the query string intact <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?/$1 [L] </IfModule> In order to test it, I have created a view named about and set up the proper routing. If I go to http://laravel.lar/index.php/about/ I am routed to the about page, but if I go to http://laravel.lar/about/ I get a 404 Not Found error. I'm using a Debian-based system.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 21 (sys.dm_db_partition_stats)

    - by Tamarick Hill
    The sys.dm_db_partition_stats DMV returns page count and row count information for each table or index within your database. Lets have a quick look at this DMV so we can review some of the results. **NOTE: I am going to create an ‘ObjectName’ column in our result set so that we can more easily identify tables. SELECT object_name(object_id) ObjectName, * FROM sys.dm_db_partition_stats As stated above, the first column in our result set is an Object name based on the object_id column of this result set. The partition_id column refers to the partition_id of the index in question. Each index will have at least 1 unique partition_id and will have more depending on if the object has been partitioned. The index_id column relates back to the sys.indexes table and uniquely identifies an index on a given object. A value of 0 (zero) in this column would indicate the object is a HEAP and a value of 1 (one) would signify the Clustered Index. Next is the partition_number which would signify the number of the partition for a particular object_id. Since none of my tables in my result set have been partitioned, they all display 1 for the partition_number. Next we have the in_row_data_page_count which tells us the number of data pages used to store in-row data for a given index. The in_row_used_page_count is the number of pages used to store and manage the in-row data. If we look at the first row in the result set, we will see we have 700 for this column and 680 for the previous. This means that just to manage the data (not store it) is requiring 20 pages. The next column in_row_reserved_page_count is how many pages have been reserved, regardless if they are being used or not. The next 2 columns are used for storing LOB (Large Object) data which could be text, image, varchar(max), or varbinary(max) columns. The next two columns, row_overflow, represent pages used for data that exceed the 8,060 byte row size limit for the in-row data pages. The next columns used_page_count and reserved_page_count represent the sum of the in_row, lob, and row_overflow columns discussed earlier. Lastly is a row_count column which displays the number of rows that are in a particular index. This DMV is a very powerful resource for identifying page and row count information. By knowing the page counts for indexes within your database, you are able to easily calculate the size of indexes. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms187737.aspx Follow me on Twitter @PrimeTimeDBA
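
    Since every page is 8 KB, the size calculation mentioned above is a short query away; a sketch:

        -- Page counts converted to megabytes (8 KB per page)
        SELECT OBJECT_NAME(object_id)           AS ObjectName,
               index_id,
               row_count,
               used_page_count * 8 / 1024.0     AS used_space_mb,
               reserved_page_count * 8 / 1024.0 AS reserved_space_mb
        FROM sys.dm_db_partition_stats
        ORDER BY reserved_page_count DESC;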

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 24 (sys.dm_db_index_operational_stats)

    - by Tamarick Hill
    The sys.dm_db_index_operational_stats Dynamic Management Function returns information about the IO, locking, and access methods for the indexes that you currently have on your SQL Server Instance. This function takes four input parameters which are (1) database_id, (2) object_id, (3) index_id, and (4) partition_number. Let’s have a look at the results from this function against our AdventureWorks2012 database. This function returns a ton of columns, so not only will I not attempt to describe each of the columns, I wont even attempt to display all of them here. My query below will give you a subset of the columns returned from this function. SELECT database_id, object_id, index_id, partition_number, leaf_insert_count, leaf_delete_count, leaf_update_count, leaf_ghost_count, nonleaf_insert_count, nonleaf_delete_count, nonleaf_update_count, range_scan_count, forwarded_fetch_count, row_lock_count, row_lock_wait_count, page_lock_count, page_lock_wait_count, Index_lock_promotion_attempt_count, index_lock_promotion_count, page_compression_attempt_count, page_compression_success_count FROM sys.dm_db_index_operational_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL) The first four columns in the result set represent the values that we passed in as our input parameters. If you use NULL’s as I did, then you will see results for every index on your system. I specified a database_id so my result set only shows those records pertaining to my AdventureWorks2012 database. The next columns in the result set provide you with information on how may inserts, deletes, or updates that have taken place on your leaf and nonleaf index levels. The nonleaf levels would refer to the intermediate and root index levels. In the middle of these you see a leaf_ghost_count column, which represents the number of records that have been logically deleted and marked as “ghosted”  and are waiting on the background ghost cleanup process to physically remove them. The range_scan_count column represents the number of range or table scans that have been performed against an index. The forwarded_fetch_count column represents the number of rows that were returned from a forwarding row pointer. The row_lock_count and row_lock_wait_count represent the number of row locks that have been requested for an index and the number of times SQL has had to wait on a row lock respectively. The page_lock_count and page_lock_wait_count represent the number of page locks that have been requested for an index and the number of times SQL has had to wait on a page lock respectively. The index_lock_promotion_attempt_count represents the number of times the database engine has attempted to promote a lock to the index level. The index_lock_promotion_count column displays how many times that index lock promotion was successful. Lastly the page_compression_attempt_count and page_compression_success_count represents how many times a page was attempted to be compressed and how many times the attempt was successful. As you can see there is a ton of information returned from this DMV. The DMV we reviewed on yesterday (sys.dm_db_index_usage_stats) provided you with good information on when and how indexes have been used, but this DMF takes an even deeper dive into these statistics. If you are interested in performing a very detailed analysis on the operational stats of your indexes, this is not only a good place to start, but more than likely the best place. 
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms174281.aspx Follow me on Twitter @PrimeTimeDBA
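
    As a follow-on example, the same DMF can be narrowed to just the indexes where sessions actually waited on locks; a sketch against the same AdventureWorks2012 database:

        SELECT OBJECT_NAME(s.object_id) AS ObjectName,
               s.index_id,
               s.row_lock_wait_count,  s.row_lock_wait_in_ms,
               s.page_lock_wait_count, s.page_lock_wait_in_ms
        FROM sys.dm_db_index_operational_stats(DB_ID('AdventureWorks2012'), NULL, NULL, NULL) AS s
        WHERE s.row_lock_wait_count > 0 OR s.page_lock_wait_count > 0
        ORDER BY s.row_lock_wait_in_ms + s.page_lock_wait_in_ms DESC;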

    Read the article

  • The Legend of the Filtered Index

    - by Johnm
    Once upon a time there was a big and bulky twenty-nine million row table. He tempestuously hoarded data like a maddened shopper amid a clearance sale. Despite his leviathan nature and eager appetite he loved to share his treasures. Multitudes from all around would embark upon an epiphanous journey to sample the contents of his mythical purse of knowledge. After a long day of performing countless table scans the table was overcome with fatigue. After a short period of unavailability, he decided that he needed to consider a new way to share his prized possessions in a more efficient manner. Thus, a non-clustered index was born. She dutifully directed the pilgrims that sought the table's data - no longer would those despicable table scans darken the doorsteps of this quaint village. And yet, the table's voracious appetite did not wane. Any bit or byte that wandered near him was consumed with vigor. His columns and rows continued to expand beyond the expectations of even the most liberal estimation. As his rows grew grander they became more difficult to organize and maintain. The once bright and cheerful disposition of the non-clustered index began to dim. The wait time for those who sought the table's treasures began to increase. Some of those who came to nibble upon the banquet of knowledge even timed-out and never realized their aspired enlightenment. After a period of heart-wrenching introspection, the table decided to drop the index and attempt another solution. At the darkest hour of the table's desperation came a grand flash of light. As his eyes regained their vision there stood several creatures who looked very similar to his former, beloved, non-clustered index. They all spoke in unison as they introduced themselves: "Fear not, for we come to organize your data and direct those who seek to partake in it. We are the filtered index." Immediately, the filtered indexes began to scurry about. One took control of the past quarter's data. Another took control of the previous quarter's data. All of the remaining filtered indexes followed suit. As the nearly gluttonous habits of the table scaled forward, more filtered indexes appeared. Regardless of the table's size, all of the eagerly awaiting data seekers were delivered data as quickly as a Jimmy John's sandwich. The table was moved to tears. All in the land of data rejoiced and all lived happily ever after, at least until the next data challenge crept from the fearsome cave of the unknown. The End.
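
    For readers who prefer syntax to allegory, the creatures in the story correspond to SQL Server 2008+ filtered indexes – non-clustered indexes with a WHERE clause, so each one covers only a slice of the table. A hedged sketch with placeholder names:

        -- One filtered index per quarter instead of a single index over all
        -- twenty-nine million rows (table, columns and dates are illustrative)
        CREATE NONCLUSTERED INDEX IX_Orders_2010Q4
            ON dbo.Orders (OrderDate, CustomerId)
            WHERE OrderDate >= '20101001' AND OrderDate < '20110101';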

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let’s take a simple query “What is the most expensive air-mail order we have received to date?” SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; The LINEORDER table has been populated into the IM column store and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords “IN MEMORY” in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by “may use”? There are a small number of cases where we won’t use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used on Exadata environments. You can confirm that the IM column store was actually used by examining the session level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why it’s faster to access the data in the new column format, for analytical queries, rather than the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient. 1. Access only the column data needed The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice column. 2. Scan and filter data in its compressed format When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than the same query in the buffer cache where it will scan the data in its uncompressed form, which could be 20X larger. 3. Prune out any unnecessary data within each column The fastest read you can execute is the read you don’t do. In the IM column store a further reduction in the amount of data accessed is possible due to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU by comparing the value 5 to the minimum and maximum values maintained in the Storage Index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min, max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU.
Since we have an equality predicate we can easily determine if 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary based pruning enables us to only scan the necessary IMCUs. 4. Use SIMD to apply filter predicates For the IMCUs that need to be scanned Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core versus the millions of rows per second per core scan rate that can be achieved in the buffer cache. I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session level statistics. You can monitor the session level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory Column Store begin with IM. You can see the full list of these statistics by typing: column display_name format a30 SELECT display_name FROM v$statname WHERE display_name LIKE 'IM%'; If we check the session statistics after we execute our query the results would be as follows: SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; SELECT display_name FROM v$statname WHERE display_name IN ('IM scan CUs columns accessed', 'IM scan segments minmax eligible', 'IM scan CUs pruned'); As you can see, only 2 IMCUs were accessed during the scan as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
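
    The session-statistic check described above can be written as a single query joining v$mystat to v$statname; a sketch (run in the same session as the test query):

        SELECT n.display_name, s.value
        FROM   v$mystat  s
        JOIN   v$statname n ON n.statistic# = s.statistic#
        WHERE  n.display_name IN ('IM scan CUs columns accessed',
                                  'IM scan segments minmax eligible',
                                  'IM scan CUs pruned');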

    Read the article

  • Partitioning tutorial - new features in Oracle Database 12c

    - by KLaker
    For data warehousing projects Oracle Partitioning really is a must-have feature because it delivers so many important benefits such as: Dramatically improves query performance and speeds up database maintenance operations Lowers costs by enabling a tiered storage approach that allows data to be stored on the most cost-effective storage for better resource utilisation Combined with Oracle Advanced Compression, it provides an automated approach to information lifecycle management using a simple, efficient, yet powerful way to manage data growth and reduce complexity and costs To help you get the most from partitioning we have released a new tutorial that covers the 12c new features. Topics include how to: Use Interval Reference Partitioning Perform Cascading TRUNCATE and EXCHANGE Operations Move Partitions Online Maintain Multiple Partitions Maintain Global Indexes Asynchronously Use Partial Indexes For more information about this tutorial follow this link to the Oracle Learning Library: http://apex.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:8408,2 where you can begin your tutorial right now! For more information about Oracle Partitioning visit our home page on OTN: http://www.oracle.com/technetwork/database/bi-datawarehousing/dbbi-tech-info-part-100980.html
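
    Two of the listed features can be illustrated with short sketches (object names are placeholders, and exact clause support should be checked against the 12c documentation):

        -- Move a partition online, keeping DML running and maintaining indexes as it goes
        ALTER TABLE sales MOVE PARTITION sales_q1_2013
          TABLESPACE cold_data UPDATE INDEXES ONLINE;

        -- Partial index: only partitions flagged INDEXING ON are covered by the index
        CREATE INDEX sales_cust_ix ON sales (cust_id)
          LOCAL INDEXING PARTIAL;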

    Read the article

  • Why is Google still not indexing my #! website?

    - by Zubair
    I have been working on a website which uses #! (2minutecv.com), but even after 6 weeks of the site being up and running and conforming to the Google hash bang guidelines stated here, you can still see that Google hasn't indexed the site yet. For example, if you use Google to search for 2MinuteCV.com benefits it does not find this page, which is referenced from the homepage. Can anyone tell me why Google isn't indexing this website? Update: Thanks for all the help with this answer. So just to make sure I understand what is wrong: according to the answers, Google never actually indexes the pages after the JavaScript has run. I need to create a "shadow site" which Google indexes (which Google calls HTML snapshots). If I am right in thinking this then I can pick a winner for the bounty

    Read the article

  • Deploying an EAR to JBOSS times out (org.rhq.core.pc.inventory.TimeoutException:)

    - by rangalo
    Hi, I am trying to deploy an EAR file to JBoss AS (default server). The application is the mavenised version of the examples from the Seam in Action book. When I copy the file to $JBOSS_HOME/server/default/deploy, I don't get any exception, but the application doesn't respond; after some time, trying to access the application from the browser only produces the log output below. While deploying with the admin-console (http://localhost:8080/admin-console) I get the following error message. PS: After this JBoss gets into an unusable state; I cannot even access the admin-console and just have to kill it.
    ErrorMessage in admin-console: Failed to create Resource Open18.ear - cause: org.rhq.core.pc.inventory.TimeoutException: Call to [org.rhq.plugins.jbossas5.ApplicationServerComponent.createResource()] with args [[CreateResourceReport: ResourceType=[ResourceType[id=0, category=Service, name=Enterprise Application (EAR), plugin=JBossAS5]], ResourceKey=[null]]] timed out. Invocation thread will be interrupted
        at org.rhq.core.pc.inventory.ResourceContainer$ResourceComponentInvocationHandler.invokeInNewThreadWithLock(ResourceContainer.java:437)
        at org.rhq.core.pc.inventory.ResourceContainer$ResourceComponentInvocationHandler.invoke(ResourceContainer.java:406)
        at $Proxy266.createResource(Unknown Source)
        at org.rhq.core.pc.inventory.CreateResourceRunner.call(CreateResourceRunner.java:113)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
    Error Logs:
    4:08:58,555 INFO [TableMetadata] foreign keys: [fkaf42e01ba13c3380, fk_course_ref_facility]
    14:08:58,555 INFO [TableMetadata] indexes: [course_pkey]
    14:08:58,645 INFO [TableMetadata] table found: public.facility
    14:08:58,645 INFO [TableMetadata] columns: [zip, phone, state, type, uri, city, country, id, price_range, address, county, description, name]
    14:08:58,645 INFO [TableMetadata] foreign keys: []
    14:08:58,645 INFO [TableMetadata] indexes: [facility_pkey]
    14:08:58,705 INFO [TableMetadata] table found: public.hole
    14:08:58,705 INFO [TableMetadata] columns: [id, m_par, l_handicap, name, l_par, number, course_id, m_handicap]
    14:08:58,705 INFO [TableMetadata] foreign keys: [fk_hole_ref_course, fk30f4c09c3f1200]
    14:08:58,705 INFO [TableMetadata] indexes: [hole_pkey, uniq_hole_number]
    14:08:58,764 INFO [TableMetadata] table found: public.tee
    14:08:58,764 INFO [TableMetadata] columns: [hole_id, distance, tee_set_id]
    14:08:58,764 INFO [TableMetadata] foreign keys: [fk1c014f8de7677, fk_tee_ref_hole, fk1c014c69de560, fk_tee_ref_tee_set]
    14:08:58,764 INFO [TableMetadata] indexes: [tee_pkey]
    14:08:58,826 INFO [TableMetadata] table found: public.tee_set
    14:08:58,826 INFO [TableMetadata] columns: [id, color, m_slope_rating, l_slope_rating, name, course_id, m_course_rating, l_course_rating, pos]
    14:08:58,826 INFO [TableMetadata] foreign keys: [fk_tee_set_ref_course, fkaa6881b79c3f1200]
    14:08:58,826 INFO [TableMetadata] indexes: [tee_set_pkey, uniq_tee_set_pos, uniq_tee_set_color]
    14:08:58,827 INFO [SchemaUpdate] schema update complete
    14:08:58,829 INFO [NamingHelper] JNDI InitialContext properties:{java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory, java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces}
    14:08:58,850 INFO [TomcatDeployment] deploy, ctxPath=/Open18
    14:15:53,969 WARN [DiscoveryComponentProxyFactory] The discovery component for resource type [ResourceType[id=0, category=Service, name=Connector, plugin=JBossAS5]] has been blacklisted
    14:15:53,970 WARN [InventoryManager] Failure during discovery for [Connector] Resources - failed after 300002 ms. org.rhq.core.pc.inventory.TimeoutException: Call to [org.rhq.plugins.jbossas5.ConnectorDiscoveryComponent.discoverResources()] with args [[org.rhq.core.pluginapi.inventory.ResourceDiscoveryContext@96db1]] timed out. Invocation thread will be interrupted
        at org.rhq.core.pc.util.DiscoveryComponentProxyFactory$ResourceDiscoveryComponentInvocationHandler.invokeInNewThread(DiscoveryComponentProxyFactory.java:208)
        at org.rhq.core.pc.util.DiscoveryComponentProxyFactory$ResourceDiscoveryComponentInvocationHandler.invoke(DiscoveryComponentProxyFactory.java:181)
        at $Proxy249.discoverResources(Unknown Source)
        at org.rhq.core.pc.inventory.InventoryManager.invokeDiscoveryComponent(InventoryManager.java:272)
        at org.rhq.core.pc.inventory.InventoryManager.executeComponentDiscovery(InventoryManager.java:1697)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.discoverForResource(RuntimeDiscoveryExecutor.java:218)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.discoverForResource(RuntimeDiscoveryExecutor.java:234)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.runtimeDiscover(RuntimeDiscoveryExecutor.java:134)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.call(RuntimeDiscoveryExecutor.java:94)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.call(RuntimeDiscoveryExecutor.java:51)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:207)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
    14:15:53,981 WARN [NavigationContent] Unable to find node for deleted resource [Resource[id=-5, type=Connector, key=ajp://127.0.0.1:8009, name=ajp://127.0.0.1:8009, parent=JBoss Web]].

    Read the article

  • directory with 980MB meta data, millions of files, how to delete it? (ext3)

    - by Alexandre
    Hello, so I'm stuck with this directory: drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2. The directory's contents are small - just millions of tiny little files. I want to wipe it from the filesystem but have been unable to. My first tries were find sessions2 -type f -delete and find sessions2 -type f -print0 | xargs -0 rm -f, but I had to stop because both caused escalating memory usage; at one point it was using 65% of the system's memory. So I thought (no doubt incorrectly) that it had to do with dir_index being enabled on the filesystem - perhaps find was trying to read the entire index into memory? So I did this (foolishly): tune2fs -O^dir_index /dev/xxx. Alright, that should do it. I ran the find command above again and... same thing, crazy memory usage. I hurriedly ran tune2fs -Odir_index /dev/xxx to re-enable dir_index, and ran to Server Fault! Two questions: 1) How do I get rid of this directory on my live system? I don't care how long it takes, as long as it uses little memory and little CPU. By the way, using nice find ... I was able to reduce CPU usage, so my problem right now is only memory usage. 2) I disabled dir_index for about 20 minutes, and no doubt new files were written to the filesystem in the meantime before I re-enabled it. Does that mean the system will not find the files that were written while dir_index was off, since their filenames will be missing from the old indexes? If so, and I know these new files aren't important, can I keep the old indexes? If not, how do I rebuild the indexes? Can that be done on a live system? Thanks!
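    For the first question, one low-memory approach (a sketch only, with a hypothetical path, assuming the millions of files sit directly inside sessions2) is to stream directory entries and unlink them one at a time, rather than letting any tool accumulate the full name list in memory:

        # Sketch: stream-delete a huge flat directory without holding all
        # file names in memory. DIR_PATH is a hypothetical placeholder.
        import os

        DIR_PATH = "/path/to/sessions2"

        def streaming_delete(path):
            deleted = 0
            with os.scandir(path) as entries:   # lazy iterator, memory stays flat
                for entry in entries:
                    if entry.is_file(follow_symlinks=False):
                        try:
                            os.unlink(entry.path)
                            deleted += 1
                        except OSError:
                            pass                # skip entries that cannot be removed
            return deleted

        if __name__ == "__main__":
            print("removed %d files" % streaming_delete(DIR_PATH))

    Once the directory is empty, rmdir sessions2 removes it, and running the script under nice/ionice keeps the CPU and I/O impact down. For the second question, an offline e2fsck -D run is the usual way to re-optimize directory indexes (it cannot be done on the mounted filesystem), but that is worth verifying on a test system before relying on it.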

    Read the article

  • WAMP vhost issues with all vhosts pointing to the first vhost statement

    - by Rick
    I am trying to set up vhosts in WAMPSERVER and I am running into an issue where all sites point to the first vhost instead of being matched properly. Has anyone had this issue? Here is my setup.
    In the Windows hosts file:
    127.0.0.1 siteabc.local
    127.0.0.1 sitexyz.local
    In httpd-vhosts.conf:
    <VirtualHost 127.0.0.1>
        DocumentRoot "C:\Users\Rick\Documents\Projects\siteabc"
        ServerName siteabc.local
        ErrorLog "logs/siteabc-error.log"
        CustomLog "logs/siteabc-access.log" common
        <Directory "C:\Users\Rick\Documents\Projects\siteabc">
            Options Indexes FollowSymLinks
            AllowOverride all
            Order Deny,Allow
            Allow from all
        </Directory>
    </VirtualHost>
    <VirtualHost 127.0.0.1>
        DocumentRoot "C:\Users\Rick\Documents\Projects\sitexyz"
        ServerName sitexyz.local
        ErrorLog "logs/sitexyz-error.log"
        CustomLog "logs/sitexyz-access.log" common
        <Directory "C:\Users\Rick\Documents\Projects\sitexyz">
            Options Indexes FollowSymLinks
            AllowOverride all
            Order Deny,Allow
            Allow from all
        </Directory>
    </VirtualHost>
    <VirtualHost 127.0.0.1>
        DocumentRoot "C:\Users\Rick\Documents\Projects"
        ServerName localhost
        ErrorLog "logs/localhost-error.log"
        CustomLog "logs/localhost-access.log" common
        <Directory "C:\Users\Rick\Documents\Projects">
            Options Indexes FollowSymLinks
            AllowOverride all
            Order Deny,Allow
            Allow from all
        </Directory>
    </VirtualHost>
    OK, so with this setup, going to siteabc works... but going to sitexyz still serves siteabc. Not sure what I did wrong here. Thanks for looking.
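    One quick way to narrow this down (a rough diagnostic sketch, not a fix) is to request the local Apache directly with each hostname in the Host header and compare the responses; if both names come back with the siteabc content, the problem is in Apache's name-based vhost matching rather than in the browser or the hosts file. The hostnames below are the ones from the question.

        # Diagnostic sketch: fetch "/" from the same local Apache under two
        # different Host headers and report whether the responses match.
        import http.client

        def fetch(host):
            conn = http.client.HTTPConnection("127.0.0.1", 80, timeout=5)
            conn.request("GET", "/", headers={"Host": host})
            body = conn.getresponse().read()
            conn.close()
            return body

        if __name__ == "__main__":
            pages = {h: fetch(h) for h in ("siteabc.local", "sitexyz.local")}
            print("identical responses:",
                  pages["siteabc.local"] == pages["sitexyz.local"])

    If the responses are identical, the usual suspects on Apache 2.2-era WAMP installs are a missing NameVirtualHost 127.0.0.1 directive (without it, the first VirtualHost wins for every request) or httpd-vhosts.conf not being included from httpd.conf.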

    Read the article
