Search Results

Search found 25893 results on 1036 pages for 'boot order'.


  • Adding Unobtrusive Validation To MVCContrib Fluent Html

    - by srkirkland
    ASP.NET MVC 3 includes a new unobtrusive validation strategy that utilizes HTML5 data-* attributes to decorate form elements. Using a combination of jQuery validation and an unobtrusive validation adapter script that comes with MVC 3, those attributes are then turned into client-side validation rules. A Quick Introduction to Unobtrusive Validation To quickly show how this works in practice, assume you have the following Order.cs class (think Northwind) [If you are familiar with unobtrusive validation in MVC 3 you can skip to the next section]: public class Order : DomainObject { [DataType(DataType.Date)] public virtual DateTime OrderDate { get; set; } [Required] [StringLength(12)] public virtual string ShipAddress { get; set; } [Required] public virtual Customer OrderedBy { get; set; } } Note the System.ComponentModel.DataAnnotations attributes, which provide the validation and metadata information used by ASP.NET MVC 3 to determine how to render out these properties. Now let's assume we have a form which can edit this Order class; specifically, let's look at the ShipAddress property: @Html.LabelFor(x => x.Order.ShipAddress) @Html.EditorFor(x => x.Order.ShipAddress) @Html.ValidationMessageFor(x => x.Order.ShipAddress) Now the Html.EditorFor() method is smart enough to look at the ShipAddress attributes and write out the necessary unobtrusive validation HTML attributes. Note we could have used Html.TextBoxFor() or even Html.TextBox() and still retained the same results. If we view source on the input box generated by the Html.EditorFor() call, we get the following: <input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress" data-val-required="The ShipAddress field is required." data-val-length-max="12" data-val-length="The field ShipAddress must be a string with a maximum length of 12." data-val="true" class="text-box single-line input-validation-error"> As you can see, we have data-val-* attributes for both required and length, along with the proper error messages and additional data as necessary (in this case, we have length-max="12"). And of course, if we try to submit the form with an invalid value, we get an error on the client. Working with MvcContrib's Fluent Html The MvcContrib project offers a fluent interface for creating Html elements which I find very expressive and useful, especially when it comes to creating select lists. Let's look at a few quick examples: @this.TextBox(x => x.FirstName).Class("required").Label("First Name:") @this.MultiSelect(x => x.UserId).Options(ViewModel.Users) @this.CheckBox("enabled").LabelAfter("Enabled").Title("Click to enable.").Styles(vertical_align => "middle") @(this.Select("Order.OrderedBy").Options(Model.Customers, x => x.Id, x => x.CompanyName) .Selected(Model.Order.OrderedBy != null ? Model.Order.OrderedBy.Id : "") .FirstOption(null, "--Select A Company--") .HideFirstOptionWhen(Model.Order.OrderedBy != null) .Label("Ordered By:")) These fluent html helpers create the normal html you would expect, and I think they make life a lot easier and more readable when dealing with complex markup or select list data models (look ma: no anonymous objects for creating class names!). Of course, the problem we have now is that MvcContrib's fluent html helpers don't know about ASP.NET MVC 3's unobtrusive validation attributes and thus don't take part in client validation on your page. This is not ideal, so I wrote a quick helper method to extend fluent html with the knowledge of what unobtrusive validation attributes to include when they are rendered. Extending MvcContrib's Fluent Html Before posting the code, there are just a few things you need to know. The first is that all Fluent Html elements implement the IElement interface (MvcContrib.FluentHtml.Elements.IElement), and the second is that the base System.Web.Mvc.HtmlHelper has been extended with a method called GetUnobtrusiveValidationAttributes which we can use to determine the necessary attributes to include. With this knowledge we can make quick work of extending fluent html: public static class FluentHtmlExtensions { public static T IncludeUnobtrusiveValidationAttributes<T>(this T element, HtmlHelper htmlHelper) where T : MvcContrib.FluentHtml.Elements.IElement { IDictionary<string, object> validationAttributes = htmlHelper .GetUnobtrusiveValidationAttributes(element.GetAttr("name")); foreach (var validationAttribute in validationAttributes) { element.SetAttr(validationAttribute.Key, validationAttribute.Value); } return element; } } The code is pretty straightforward – basically we use the passed HtmlHelper to get the list of validation attributes for the current element and then add each of the returned attributes to the element to be rendered. The Extension In Action Now let's get back to the earlier ShipAddress example and see what we've accomplished. First we will use a fluent html helper to render out the ship address text input (this is the 'before' case): @this.TextBox("Order.ShipAddress").Label("Ship Address:").Class("class-name") And the resulting HTML: <label id="Order_ShipAddress_Label" for="Order_ShipAddress">Ship Address:</label> <input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress" class="class-name"> Now let's do the same thing, except here we'll use the newly written extension method: @this.TextBox("Order.ShipAddress").Label("Ship Address:") .Class("class-name").IncludeUnobtrusiveValidationAttributes(Html) And the resulting HTML: <label id="Order_ShipAddress_Label" for="Order_ShipAddress">Ship Address:</label> <input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress" data-val-required="The ShipAddress field is required." data-val-length-max="12" data-val-length="The field ShipAddress must be a string with a maximum length of 12." data-val="true" class="class-name"> Excellent! Now we can continue to use unobtrusive validation and have the flexibility to use ASP.NET MVC's Html helpers or MvcContrib's fluent html helpers interchangeably, and every element will participate in client-side validation. Wrap Up Overall I'm happy with this solution, although in the best case scenario MvcContrib would know about unobtrusive validation attributes and include them automatically (provided it is enabled in the web.config file). I know that MvcContrib allows you to author global behaviors, but that requires changing the base class of your views, which I am not willing to do. Enjoy!
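    For the data-val-* attributes emitted above to actually drive client-side checks, unobtrusive validation has to be switched on and the jQuery validation scripts referenced. As a minimal sketch (these are the standard appSettings switches and script names that ship with the ASP.NET MVC 3 project template; your paths and versions may differ):

        <!-- web.config: enable client-side and unobtrusive validation -->
        <appSettings>
          <add key="ClientValidationEnabled" value="true" />
          <add key="UnobtrusiveJavaScriptEnabled" value="true" />
        </appSettings>

        <!-- layout or view: jQuery itself is assumed to be referenced already -->
        <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>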

    Read the article

  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
    The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections. The Generic Collections: System.Collections.Generic namespace The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons because, depending on how you are using the collections, it's completely possible that synchronization may not be required or may be needed on a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn with its various pros and cons; at the end we'll summarize with a table to help make it easier to compare and contrast the different collections. The Associative Collection Classes Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order. The Dictionary<TKey,TValue> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should implement GetHashCode() and Equals() appropriately, or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order. The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted. The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order; thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time, you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key. The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however, is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up-front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than a SortedDictionary. Once again I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare. The Non-Associative Containers The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored or some other means (such as index) to manipulate the collection. The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grows once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However, inserting and removing at the beginning or middle of the List<T> is very costly because you must shift all the items up or down as you delete or insert respectively. However, adding and removing at the end of a List<T> is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed. The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there are no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T> unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense. The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check to make sure the vendor code belongs to the set of vendor codes you handle.
In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, like in Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction. The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>. Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T> on the other hand is a first-in-first-out container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines. So that's the basic collections. Let's summarize what we've learned in a quick reference table:
Collection | Ordered? | Contiguous Storage? | Direct Access? | Lookup Efficiency | Manipulate Efficiency | Notes
Dictionary | No | Yes | Via Key | Key: O(1) | O(1) | Best for high performance lookups.
SortedDictionary | Yes | No | Via Key | Key: O(log n) | O(log n) | Compromise of Dictionary speed and ordering, uses binary search tree.
SortedList | Yes | Yes | Via Key | Key: O(log n) | O(n) | Very similar to SortedDictionary, except tree is implemented in an array, so has faster lookup on preloaded data, but slower loads.
List | No | Yes | Via Index | Index: O(1), Value: O(n) | O(n) | Best for smaller lists where direct access required and no ordering.
LinkedList | No | No | No | Value: O(n) | O(1) | Best for lists where inserting/deleting in middle is common and no direct access required.
HashSet | No | Yes | Via Key | Key: O(1) | O(1) | Unique unordered collection, like a Dictionary except key and value are same object.
SortedSet | Yes | No | Via Key | Key: O(log n) | O(log n) | Unique ordered collection, like SortedDictionary except key and value are same object.
Stack | No | Yes | Only Top | Top: O(1) | O(1)* | Essentially same as List<T> except only processed as LIFO.
Queue | No | Yes | Only Front | Front: O(1) | O(1) | Essentially same as List<T> except only processed as FIFO.
The Original Collections: System.Collections namespace The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact, Microsoft indicates that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention the original collections and their generic equivalents: ArrayList A dynamic, contiguous collection of objects. Favor the generic collection List<T> instead. Hashtable Associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead. Queue First-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead. SortedList Associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead. Stack Last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead. In general, the older collections are non-type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is backward compatibility with legacy code and libraries. The Concurrent Collections: System.Collections.Concurrent namespace The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unique. Below is the summary of each with a link to a blog post I did on each of them. ConcurrentQueue Thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue ConcurrentStack Thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue ConcurrentBag Thread-safe unordered collection of objects. Optimized for situations where a thread may be both reader and writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection ConcurrentDictionary Thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under same lock). For more information see C#/.NET Little Wonders: The ConcurrentDictionary BlockingCollection Wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read. Writers can block until space is available to write (if bounded). For more information see C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection Summary The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower or even harder to maintain if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections. If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed-access if you are running on .NET 4.0 or higher. Technorati Tags: C#,.NET,Collections,Generic,Concurrent,Dictionary,List,Stack,Queue,SortedList,SortedDictionary,HashSet,SortedSet
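    As a minimal usage sketch (the order-processing names below are hypothetical, not from the article), the choice between these collections looks like this in practice:

        using System;
        using System.Collections.Generic;

        class CollectionChoices
        {
            static void Main()
            {
                // Fast unordered lookups by key: Dictionary is amortized O(1).
                var ordersById = new Dictionary<int, string> { { 42, "Contoso" }, { 7, "Fabrikam" } };
                Console.WriteLine(ordersById[42]);                    // Contoso

                // Keep keys sorted while still looking up by key: SortedDictionary is O(log n).
                var ordersSorted = new SortedDictionary<int, string>(ordersById);
                foreach (var pair in ordersSorted)
                    Console.WriteLine(pair.Key);                      // 7, then 42

                // Duplicate-free membership tests: HashSet.
                var vendorCodes = new HashSet<string> { "ACME", "NWIND" };
                Console.WriteLine(vendorCodes.Contains("ACME"));      // True

                // Process items in arrival order: Queue (FIFO).
                var printJobs = new Queue<string>();
                printJobs.Enqueue("report.pdf");
                Console.WriteLine(printJobs.Dequeue());               // report.pdf
            }
        }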

    Read the article

  • help: cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that has contained my 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. I also tried to install on two older 250GB seagate drives in the same computer. During every attempt I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). Installation takes about 30 minutes on the 250GB drives, and about 60 minutes on the 3TB drive - not sure why. When I install on the 250GB drives, the install process finishes, the computer reboots (after the install DVD is removed), but I get a grub error 15. It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this: 8GB swap 8GB /boot ext4 3TB / ext4 But I've also tried the following, just in case it matters: 100MB boot efi 8GB swap 8GB /boot ext4 3TB / ext4 Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen: Loading Operating System ... Boot from CD/DVD : Boot from CD/DVD : error: unknown filesystem grub rescue Note: I have two DVD burners in the system, hence the duplicate line above. The same install and reboot on the 250GB drives generates "grub error 15". Sigh. Any ideas? ========== motherboard == gigabyte 990FXA-UD7 CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz RAM == 8GB of DDR3 in 2 sticks (matched pair) HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04) HDD == seagate 1TB SATA3 @ 7200 rpm (current install 64-bit v10.04) GPU == nvidia GTX-285 ??? 
== no overclocking or other funky business USB == external seagate 2TB HDD for making backups DVD == one bluray burner (SATA) DVD == one DVD burner (SATA) The current ubuntu 64-bit v10.04 system boots and runs fine on a seagate 1TB.
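    As a sketch of where to look for that log (assuming the failed 12.04 install got far enough to write logs; the /dev/sdb3 device name for the 3TB root partition is only an example and must be adjusted to the actual layout): boot the working v10.04 drive with the 3TB drive attached as a secondary disk, then:

        # mount the 12.04 root partition read-only (device name is an assumption)
        sudo mkdir -p /mnt/precise
        sudo mount -o ro /dev/sdb3 /mnt/precise

        # the installer keeps its logs under /var/log/installer on the target system;
        # syslog and partman are usually the most useful for grub/partitioning failures
        less /mnt/precise/var/log/installer/syslog
        less /mnt/precise/var/log/installer/partman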

    Read the article

  • Add IPv6 support to DirectAdmin server

    - by George Boot
    I just set up a new DirectAdmin server, and I want to prepare it for IPv6 use. My ISP has given me a range of IPv6 addresses that I can use. Let's say that address is 2a01:7c8:**:1f::. My network adapter uses DHCP to resolve its IP addresses. When I type ifconfig eth0 I get the following result: eth0 Link encap:Ethernet HWaddr 52:**:**:**:ce:f3 inet addr:37.**.**.44 Bcast:37.**.**.255 Mask:255.255.255.0 inet6 addr: 2a01:7c8:****:1f::/64 Scope:Global inet6 addr: fe80::5054:ff:fe87:cef3/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:38941 errors:0 dropped:0 overruns:0 frame:0 TX packets:29439 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3779534 (3.6 MiB) TX bytes:5089379 (4.8 MiB) As you can see, I have an IPv6 address set, but I can't ping6 an IPv6 host. I get the error: connect: Network is unreachable. I decided that I needed a gateway, so I tried to add one: ip -6 route add default via 2a01:7c8:****::1 dev eth0 (2a01:7c8:**::1 is the gateway of my ISP). But it throws an error: RTNETLINK answers: No route to host. Does somebody know what to do, and how to solve this issue? Thanks a lot!
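    As a sketch of a static configuration for /etc/network/interfaces (the masked prefix and gateway come from the question; the ::2 host part is an assumption, and a gateway outside the /64 is the usual reason a plain "route add default via" is refused with "No route to host"):

        iface eth0 inet6 static
            address 2a01:7c8:xxxx:1f::2
            netmask 64
            # the ISP gateway lies outside the /64, so add an on-link host route first
            post-up ip -6 route add 2a01:7c8:xxxx::1 dev eth0
            post-up ip -6 route add default via 2a01:7c8:xxxx::1 dev eth0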

    Read the article

  • How do I use Java to sort surnames in alphabetical order from file to file?

    - by user577939
    I have written this code and don't know how to sort surnames in alphabetical order from my file to another file. import java.io.*; import java.util.*; class Asmuo { String pavarde; String vardas; long buvLaikas; int atv1; int atv2; int atv3; } class Irasas { Asmuo duom; Irasas kitas; } class Sarasas { private Irasas p; Sarasas() { p = null; } Irasas itrauktiElementa(String pv, String v, long laikas, int d0, int d1, int d2) { String pvrd, vrd; int data0; int data1; int data2; long lks; lks = laikas; pvrd = pv; vrd = v; data0 = d0; data1 = d1; data2 = d2; Irasas r = new Irasas(); r.duom = new Asmuo(); uzpildymasDuomenimis(r, pvrd, vrd, lks, d0, d1, d2); r.kitas = p; p = r; return r; } void uzpildymasDuomenimis(Irasas r, String pv, String v, long laik, int d0, int d1, int d2) { r.duom.pavarde = pv; r.duom.vardas = v; r.duom.atv1 = d0; r.duom.buvLaikas = laik; r.duom.atv2 = d1; r.duom.atv3 = d2; } void spausdinti() { Irasas d = p; int i = 0; try { FileWriter fstream = new FileWriter("rez.txt"); BufferedWriter rez = new BufferedWriter(fstream); while (d != null) { System.out.println(d.duom.pavarde + " " + d.duom.vardas + " " + d.duom.buvLaikas + " " + d.duom.atv1 + " " + d.duom.atv2 + " " + d.duom.atv3); rez.write(d.duom.pavarde + " " + d.duom.vardas + " " + d.duom.buvLaikas + " " + d.duom.atv1 + " " + d.duom.atv2 + " " + d.duom.atv3 + "\n"); d = d.kitas; i++; } rez.close(); } catch (Exception e) { System.err.println("Error: " + e.getMessage()); } } } public class Gyventojai { public static void main(String args[]) { Sarasas sar = new Sarasas(); Calendar atv = Calendar.getInstance(); Calendar isv = Calendar.getInstance(); try { FileInputStream fstream = new FileInputStream("duom.txt"); DataInputStream in = new DataInputStream(fstream); BufferedReader br = new BufferedReader(new InputStreamReader(in)); String eil; while ((eil = br.readLine()) != null) { String[] cells = eil.split(" "); String pvrd = cells[0]; String vrd = cells[1]; atv.set(Integer.parseInt(cells[2]), Integer.parseInt(cells[3]), Integer.parseInt(cells[4])); isv.set(Integer.parseInt(cells[5]), Integer.parseInt(cells[6]), Integer.parseInt(cells[7])); long laik = (isv.getTimeInMillis() - atv.getTimeInMillis()) / (24 * 60 * 60 * 1000); int d0 = Integer.parseInt(cells[2]); int d1 = Integer.parseInt(cells[3]); int d2 = Integer.parseInt(cells[4]); sar.itrauktiElementa(pvrd, vrd, laik, d0, d1, d2); } in.close(); } catch (Exception e) { System.err.println("Error: " + e.getMessage()); } sar.spausdinti(); } }
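    As a minimal sketch of just the sorting step (not tied to the custom Sarasas linked list above; it assumes the records are collected into a java.util.List first and compares the pavarde/surname field of the Asmuo class from the question):

        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;

        class SurnameSort {
            // Sorts person records alphabetically by surname before they are written to rez.txt.
            static void sortBySurname(List<Asmuo> people) {
                Collections.sort(people, new Comparator<Asmuo>() {
                    public int compare(Asmuo a, Asmuo b) {
                        return a.pavarde.compareToIgnoreCase(b.pavarde);
                    }
                });
            }
        }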

    Read the article

  • HTML, CSS: how can I merge these divs in order to use the float:left property on their children?

    - by Patrick
    hi, I've 2 sets of thumbnails and in each set I'm displaying them one nearby each other in 4 columns using float:left. I would like to "merge" the 2 sets (but I cannot change the html code) because I want the thumbnails of the second set floating right after the last thumbnail of the first set. In other terms, if in the last row there are only 2 thumbnails and the last 2 columns are empty, the thumbnails of the second set should fill the empty columns of the last row of the first set. This is the code... <div class="field field-type-filefield field-field-image"> <div class="field-items"> <div class="field-item odd"> <a rel="lightbox[field_image][First image&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;/lancelmaat/content/stalkshow&quot; id=&quot;node_link_text&quot; class=&quot;active&quot;&gt;View Image Details&lt;/a&gt;]" href="http://localhost/lancelmaat/sites/default/files/files/projects/Stalkshow/images/LPrisPetjong.jpeg" class="lightbox-processed"><img width="89" height="89" title="" alt="First image" src="http://localhost/lancelmaat/sites/default/files/imagecache/galleryImage/files/projects/Stalkshow/images/LPrisPetjong.jpeg"></a> </div> <div class="field-item even"> <a rel="lightbox[field_image][Second image&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;/lancelmaat/content/stalkshow&quot; id=&quot;node_link_text&quot; class=&quot;active&quot;&gt;View Image Details&lt;/a&gt;]" href="http://localhost/lancelmaat/sites/default/files/files/projects/Stalkshow/images/SeoulLEDScreen2a.jpeg" class="lightbox-processed"><img width="89" height="89" title="" alt="Second image" src="http://localhost/lancelmaat/sites/default/files/imagecache/galleryImage/files/projects/Stalkshow/images/SeoulLEDScreen2a.jpeg"></a> </div> <div class="field-item odd"> <a rel="lightbox[field_image][Third image&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;/lancelmaat/content/stalkshow&quot; id=&quot;node_link_text&quot; class=&quot;active&quot;&gt;View Image Details&lt;/a&gt;]" href="http://localhost/lancelmaat/sites/default/files/files/projects/Stalkshow/images/SeoulSKT6.jpeg" class="lightbox-processed"><img width="89" height="89" title="" alt="Third image" src="http://localhost/lancelmaat/sites/default/files/imagecache/galleryImage/files/projects/Stalkshow/images/SeoulSKT6.jpeg"></a> </div> </div> <!-- second set --> <div class="field field-type-filefield field-field-video"> <div class="field-items"> <div class="field-item odd"> <a rel="lightbox[field_video][Video Number 1&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;/lancelmaat/content/stalkshow&quot; id=&quot;node_link_text&quot; class=&quot;active&quot;&gt;View Image Details&lt;/a&gt;]" href="http://localhost/lancelmaat/sites/default/files/files/projects/Stalkshow/videos/StalkSeoul8d1Mbps.flv" class="lightbox-processed"><img title="" alt="Video Number 1" src="http://localhost/lancelmaat/sites/default/files/imagecache/galleryVideo/files/projects/Stalkshow/videos/StalkSeoul8d1Mbps.flv"></a> </div> <div class="field-item even"> <a rel="lightbox[field_video][Video Number 2&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;/lancelmaat/content/stalkshow&quot; id=&quot;node_link_text&quot; class=&quot;active&quot;&gt;View Image Details&lt;/a&gt;]" href="http://localhost/lancelmaat/sites/default/files/files/projects/Stalkshow/videos/stalkshowdvd21Mbps.flv" class="lightbox-processed"><img title="" alt="Video Number 2" src="http://localhost/lancelmaat/sites/default/files/imagecache/galleryVideo/files/projects/Stalkshow/videos/stalkshowdvd21Mbps.flv"></a> </div> <div class="field-item odd"> <a 
rel="lightbox[field_video][Video Number 3&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;/lancelmaat/content/stalkshow&quot; id=&quot;node_link_text&quot; class=&quot;active&quot;&gt;View Image Details&lt;/a&gt;]" href="http://localhost/lancelmaat/sites/default/files/files/projects/Stalkshow/videos/StalkShowMoscow1Mbps.flv" class="lightbox-processed"><img title="" alt="Video Number 3" src="http://localhost/lancelmaat/sites/default/files/imagecache/galleryVideo/files/projects/Stalkshow/videos/StalkShowMoscow1Mbps.flv"></a> </div> </div> </div> How can I merge these divs in order to use the float:left property on their children? Thanks.

    Read the article

  • PXE with WDS & Windows Server 2012 - no filename option in DHCP lease?

    - by user1799
    I'm trying to configure Windows Server 2012 (a VirtualBox VM) with WDS so I can PXE boot some Windows 7 VMs (also VirtualBox). All the machines involved are only attached to the "host only network", 192.168.56.0/24. The Server 2012 machine has been set up as an AD DS machine, has DNS installed and working along with DHCP with option 60 - PXEClient - set, and WDS is set to not listen on DHCP ports. I've followed http://technet.microsoft.com/en-us/library/jj648426.aspx very closely. I've used the boot.wim and install.wim files from the Win 7 installation DVD and they're configured as 'boot' and 'installation' images respectively. When I boot the target machine, it gets an IP address, but I simply get 'no filename' and the boot won't proceed any further. I've tried setting option 66 to 192.168.56.2 (the WDS server) and option 67 to both Boot\x64\wdsnbp.com and Boot\x64\pxeboot.n12, but all to no avail. I can't seem to see anything in the event log, either. Can anyone out there spot what I'm doing wrong? Or give tips to narrow down the diagnosis?

    Read the article

  • Preseeding Ubuntu partman recipe using LVM and RAID

    - by Swav
    I'm trying to preseed Ubuntu 12.04 server installation and created a recipe that would create RAID 1 on 2 drives and then partition that using LVM. Unfortunately partman complains when creating LVM volumes saying there no partitions in recipe that could be used with LVM (in console it complains about unusable recipe). The layout I'm after is RAID 1 on sdb and sdc (installing from USB stick so it takes sda) and then use LVM to create boot, root and swap. The odd thing is that if I change the mount point of boot_lv to home the recipe works fine (apart from mounting in wrong place), but when mounting at /boot it fails I know I could use separate /boot primary partition, but can anybody tell me why it fails. Recipe and relevant options below. ## Partitioning using RAID d-i partman-auto/disk string /dev/sdb /dev/sdc d-i partman-auto/method string raid d-i partman-lvm/device_remove_lvm boolean true d-i partman-md/device_remove_md boolean true #d-i partman-lvm/confirm boolean true d-i partman-auto-lvm/new_vg_name string main_vg d-i partman-auto/expert_recipe string \ multiraid :: \ 100 512 -1 raid \ $lvmignore{ } \ $primary{ } \ method{ raid } \ . \ 256 512 256 ext3 \ $defaultignore{ } \ $lvmok{ } \ method{ format } \ format{ } \ use_filesystem{ } \ filesystem{ ext3 } \ mountpoint{ /boot } \ lv_name{ boot_lv } \ . \ 2000 5000 -1 ext4 \ $defaultignore{ } \ $lvmok{ } \ method{ format } \ format{ } \ use_filesystem{ } \ filesystem{ ext4 } \ mountpoint{ / } \ lv_name{ root_lv } \ . \ 64 512 300% linux-swap \ $defaultignore{ } \ $lvmok{ } \ method{ swap } \ format{ } \ lv_name{ swap_lv } \ . d-i partman-auto-raid/recipe string \ 1 2 0 lvm - \ /dev/sdb1#/dev/sdc1 \ . d-i mdadm/boot_degraded boolean true #d-i partman-md/confirm boolean true #d-i partman-partitioning/confirm_write_new_label boolean true #d-i partman/choose_partition select Finish partitioning and write changes to disk #d-i partman/confirm boolean true #d-i partman-md/confirm_nooverwrite boolean true #d-i partman/confirm_nooverwrite boolean true EDIT: After a bit of googling I found below snippet of code from partman-auto-lvm, but I still don't understand why would they prevent that setup if it's possible to do manually and booting from boot partition on LVM is possible. # Make sure a boot partition isn't marked as lvmok if echo "$scheme" | grep lvmok | grep -q "[[:space:]]/boot[[:space:]]"; then bail_out unusable_recipe fi
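    For reference, an untested sketch of the workaround mentioned above (a plain /boot on its own RAID1 device, outside LVM); the sizes mirror the recipe in the question, and the exact syntax is an assumption to verify against the partman-auto-raid documentation:

        d-i partman-auto/expert_recipe string                       \
              multiraid ::                                          \
                      256 512 256 raid                              \
                              $lvmignore{ } $primary{ }             \
                              method{ raid }                        \
                      .                                             \
                      1000 5000 -1 raid                             \
                              $lvmignore{ } $primary{ }             \
                              method{ raid }                        \
                      .                                             \
                      2000 5000 -1 ext4                             \
                              $defaultignore{ } $lvmok{ }           \
                              method{ format } format{ }            \
                              use_filesystem{ } filesystem{ ext4 }  \
                              mountpoint{ / } lv_name{ root_lv }    \
                      .                                             \
                      64 512 300% linux-swap                        \
                              $defaultignore{ } $lvmok{ }           \
                              method{ swap } format{ }              \
                              lv_name{ swap_lv }                    \
                      .
        d-i partman-auto-raid/recipe string         \
              1 2 0 ext3 /boot /dev/sdb1#/dev/sdc1  \
              .                                     \
              1 2 0 lvm - /dev/sdb2#/dev/sdc2       \
              .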

    Read the article

  • Error 0x80300001 Installing Windows Server 2008 R2 64bit on FastTrak TX4660 RAID volume

    - by Konstantin Boyandin
    I am trying to install Windows Server 2008 R2 Enterprise 64bit on the following hardware: motherboard Intel DBS1200BTL; Promise FastTrak TX4660 RAID controller; 4 disks set up in two RAID1 arrays (handled by FastTrak). I am trying to install Windows so it would boot from a RAID1 volume created with the FastTrak controller. The installation goes as in the manual: I insert the disk with the driver, select 'Browse' and specify the correct driver; it finds all the RAID arrays but notifies me that error 0x80300001 happened and Windows can't be installed on the mentioned RAID volumes, since they may not be bootable (even though the target RAID volume is the first in the boot options list). If I proceed with the installation, Windows copies and unpacks itself and performs the other standard actions after that. After the computer is restarted, it won't boot (Windows Boot Manager appears in the boot devices list; however, neither it nor the RAID volume itself boots). Is this a known problem? I can attach the boot disks to the motherboard and use its RAID capabilities instead, but I'd prefer the FastTrak ones. Driver version is 1.3.0.4. Thanks.

    Read the article

  • Ghost Solution Suite: Booting over PXE to WinPE for re-imaging, then booting to installed OS

    - by uberdanzik
    I have 40 networked computers that need to be re-imaged each night over a network via an automatic and unattended process. This is to reset the computers to a default state, as well as update the computers to the latest software loads. I'm using Symantec Ghost Solution Suite 2.5. My process so far is the following: Client begins in a powered-down, WakeOnLan-accepting state. A Ghost Console task uses WakeOnLan and PXE to boot the client into the WinPE environment. The client connects to the Ghost Console and reimages itself successfully. The client closes WinPE and restarts. PROBLEM: The client boots into the WinPE environment again, instead of the newly installed OS (Win7). I need it to boot into Win7 once so that I can run a few configuration batch files, then shut down into the WakeOnLan state again. The Ghost Console even reports an error on the process, saying that it never rebooted into the OS. Right now it seems that there is no option to stop it from booting into the PXE server's WinPE image after re-imaging. Even if I set up a PXE boot menu with other boot options, it's pointless, because it will always boot the default option. I would expect the Ghost Console task to be able to influence the PXE boot choice somehow. What do they expect us to do, turn the PXE server on and off manually? How can I get the client to boot to the OS after re-imaging? Thank you.

    Read the article

  • Disabling CPU management

    - by Tiffany Walker
    If I add processor.max_cstate=0 to the kernel command line at boot, does that disable all CPU power management and throttling? I also found: http://www.experts-exchange.com/OS/Linux/Administration/A_3492-Avoiding-CPU-speed-scaling-in-modern-Linux-distributions-Running-CPU-at-full-speed-Tips.html The link talks about changing the CPU governor from 'ondemand' to 'performance' for all CPUs/cores and disabling kondemand in the kernel. The server is for web hosting. UPDATES: 2.6.32-379.1.1.lve1.1.7.6.el6.x86_64 #1 SMP Sat Aug 4 09:56:37 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux # dmidecode 2.11 SMBIOS 2.6 present. 74 structures occupying 2878 bytes. Table at 0x0009F000. Handle 0x0000, DMI type 0, 24 bytes BIOS Information Vendor: American Megatrends Inc. Version: 1.0c Release Date: 05/27/2010 Address: 0xF0000 Runtime Size: 64 kB ROM Size: 4096 kB Characteristics: ISA is supported PCI is supported PNP is supported BIOS is upgradeable BIOS shadowing is allowed ESCD support is available Boot from CD is supported Selectable boot is supported BIOS ROM is socketed EDD is supported 5.25"/1.2 MB floppy services are supported (int 13h) 3.5"/720 kB floppy services are supported (int 13h) 3.5"/2.88 MB floppy services are supported (int 13h) Print screen service is supported (int 5h) 8042 keyboard services are supported (int 9h) Serial services are supported (int 14h) Printer services are supported (int 17h) CGA/mono video services are supported (int 10h) ACPI is supported USB legacy is supported LS-120 boot is supported ATAPI Zip drive boot is supported BIOS boot specification is supported Targeted content distribution is supported BIOS Revision: 8.16 Handle 0x0001, DMI type 1, 27 bytes System Information Manufacturer: Supermicro Product Name: X8SIE Version: 0123456789 Serial Number: 0123456789 UUID: 49434D53-0200-9033-2500-33902500D52C Wake-up Type: Power Switch SKU Number: To Be Filled By O.E.M. Family: To Be Filled By O.E.M. Handle 0x0002, DMI type 2, 15 bytes Base Board Information Manufacturer: Supermicro Product Name: X8SIE Version: 0123456789 Serial Number: VM11S61561 Asset Tag: To Be Filled By O.E.M. Features: Board is a hosting board Board is replaceable Location In Chassis: To Be Filled By O.E.M. Chassis Handle: 0x0003 Type: Motherboard Contained Object Handles: 0 Handle 0x0003, DMI type 3, 21 bytes Chassis Information Manufacturer: Supermicro Type: Sealed-case PC Lock: Not Present Version: 0123456789 Serial Number: 0123456789 Asset Tag: To Be Filled By O.E.M. Boot-up State: Safe Power Supply State: Safe Thermal State: Safe Security Status: None OEM Information: 0x00000000 Height: Unspecified Number Of Power Cords: 1 Contained Elements: 0
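    As a minimal sketch of the governor change mentioned above (standard sysfs cpufreq paths; run as root, and note the setting does not persist across reboots by itself):

        # show which governor each core is currently using
        cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

        # switch every core to the 'performance' governor, effective immediately
        for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
            echo performance > "$g"
        done

        # confirm the cores are now running at full clock speed
        grep "cpu MHz" /proc/cpuinfo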

    Read the article

  • Re-installing Windows on an old laptop

    - by Khaled
    I have an old laptop and I want to re-install Windows XP on it. The problem is that this laptop does not have an optical drive. I checked the boot sequence in the BIOS. It does not show an option to boot from USB. It has only two options: Boot from HD. Boot using Realtek agent (network boot). I tried to copy the Windows CD to the second drive D:\ and run the installer from there. However, I could not format the C:\ drive. Windows complains that setup files would be removed, or something like that. I tried to boot the laptop using PXE, but I could not. It seems that the DHCP request did not get answered. I thought I could use a USB CD-ROM drive (I don't have one to try), but it might not work as there is no option to boot from USB. Do you think it will work? Do I have other options to try? Any recommendations?

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
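Before we do, a brief aside: forcing TEXT values the other way – in-row – uses the same system stored procedure, this time with the ‘text in row’ option.  The snippet below is purely an illustrative sketch and is not part of the test scripts that follow.

-- Sketch only: allow TEXT values of up to 1000 bytes to be stored in-row where there is room on the page
EXECUTE sys.sp_tableoption
    @TableNamePattern = N'dbo.TestTEXT',
    @OptionName = 'text in row',
    @OptionValue = '1000' ;

-- Confirm the setting: 0 means off-row only, otherwise the value is the in-row limit in bytes
SELECT text_in_row_limit
FROM sys.tables
WHERE [schema_id] = SCHEMA_ID(N'dbo')
AND name = N'TestTEXT' ;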
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; MAXOOR Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory. 
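If you want to check whether grant memory is already a point of contention on one of your own servers, one quick way (again, just a sketch rather than part of this post’s test rig) is to look at the query resource semaphores – a non-zero waiter_count means queries are queued on RESOURCE_SEMAPHORE:

-- Sketch only: waiter_count > 0 means queries are waiting on RESOURCE_SEMAPHORE for a memory grant
SELECT resource_semaphore_id,
       grantee_count,
       waiter_count,
       granted_memory_kb,
       available_memory_kb
FROM sys.dm_exec_query_resource_semaphores ;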
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • Python regex to parse text file, get the items in list and count the list

    - by Nemo
    I have a text file which contains some data. I m particularly interested in finding the count of the number of items in v_dims v_dims pattern in my text file looks like this : v_dims={ "Sales", "Product Family", "Sales Organization", "Region", "Sales Area", "Sales office", "Sales Division", "Sales Person", "Sales Channel", "Sales Order Type", "Sales Number", "Sales Person", "Sales Quantity", "Sales Amount" } So I m thinking of getting all the elements in v_dims and dumping them out in a Python list. Then compute the len(mylist) to get the count of the items. The challenge is in getting all the elements of v_dims from my text file and putting them in an empty list. I m particularly interested in items in v_dims in my text file. The text file has data in the form of v_dims pattern i showed in my original post. Some data has nested patterns of v_dims. Thanks. Here's what I have tried and failed. Any help is appreciated. TIA. import re fname = "C:\Users\XXXX\Test.mrk" with open(fname, "r") as fo: content_as_string = fo.read() match = re.findall(r'v_dims={\"(.+?)\"}',content_as_string) Though I have a big text file, Here's a snippet of what's the structure of my text file version "1"; // Computer generated object language file object 'MRKR' "Main" { Data_Type=2, HeaderBlock={ Version_String="6.3 (25)" }, Printer_Info={ Orientation=0, Page_Width=8.50000000, Page_Height=11.00000000, Page_Header="", Page_Footer="", Margin_type=0, Top_Margin=0.50000000, Left_Margin=0.50000000, Bottom_Margin=0.50000000, Right_Margin=0.50000000 }, Marker_Options={ Close_All="TRUE", Hide_Console="FALSE", Console_Left="FALSE", Console_Width=217, Main_Style="Maximized", MDI_Rect={ 0, 0, 892, 1063 } }, Dives={ { Dive="A", Windows={ { View_Index=0, Window_Info={ Window_Rect={ 0, -288, 400, 1008 }, Window_Style="Maximized Front", Window_Name="Theater [Previous Qtr Diveplan-Dive A]" }, Dependent_bool="FALSE", Colset={ Dive_Type="Normal", Dimension_Name="Theater", Action_List={ Actions={ { Action_Type="Select", select_type=5 }, { Action_Type="Select", select_type=0, Key_Names={ "Theater" }, Key_Indexes={ { "AMERICAS" } } }, { Action_Type="Focus", Focus_Rows="True" }, { Action_Type="Dimensions", v_dims={ "Theater", "Product Family", "Division", "Region", "Install at Country Name", "Connect Home Type", "Connect In Type", "SymmConnect Enabled", "Connect Home Refusal Reason", "Sales Order Channel Type", "Maintained By Group", "PS Flag", "Avalanche Flag", "Product Item Family" }, Xtab_Bool="False", Xtab_Flip="False" }, { Action_Type="Select", select_type=5 }, { Action_Type="Select", select_type=0, Key_Names={ "Theater", "Product Family", "Division", "Region", "Install at Country Name", "Connect Home Type", "Connect In Type", "SymmConnect Enabled", "Connect Home Refusal Reason", "Sales Order Channel Type", "Maintained By Group", "PS Flag", "Avalanche Flag" }, Key_Indexes={ { "AMERICAS", "ATMOS", "Latin America CS Division", "37000 CS Region", "Mexico", "", "", "", "", "DIRECT", "EMC", "N", "0" } } } } }, Num_Palette_cols=0, Num_Palette_rows=0 }, Format={ Window_Type="Tabular", Tabular={ Num_row_labels=8 } } } } } }, Widget_Set={ Widget_Layout="Vertical", Go_Button=1, Picklist_Width=0, Sort_Subset_Dimensions="TRUE", Order={ } }, Views={ { Data_Type=1, dbname="Previous Qtr Diveplan", diveline_dbname="Current Qtr Diveplan", logical_name="Current Qtr Diveplan", cols={ { name="Total TSS installs", column_type="Calc[Total TSS installs]", output_type="Number", format_string="." 
}, { name="TSS Valid Connectivity Records", column_type="Calc[TSS Valid Connectivity Records]", output_type="Number", format_string="." }, { name="% TSS Connectivity Record", column_type="Calc[% TSS Connectivity Record]", output_type="Number" }, { name="TSS Not Applicable", column_type="Calc[TSS Not Applicable]", output_type="Number", format_string="." }, { name="TSS Customer Refusals", column_type="Calc[TSS Customer Refusals]", output_type="Number", format_string="." }, { name="% TSS Refusals", column_type="Calc[% TSS Refusals]", output_type="Number" }, { name="TSS Eligible for Physical Connectivity", column_type="Calc[TSS Eligible for Physical Connectivity]", output_type="Number", format_string="." }, { name="TSS Boxes with Physical Connectivty", column_type="Calc[TSS Boxes with Physical Connectivty]", output_type="Number", format_string="." }, { name="% TSS Physical Connectivity", column_type="Calc[% TSS Physical Connectivity]", output_type="Number" } }, dim_cols={ { name="Model", column_type="Dimension[Model]", output_type="None" }, { name="Model", column_type="Dimension[Model]", output_type="None" }, { name="Connect In Type", column_type="Dimension[Connect In Type]", output_type="None" }, { name="Connect Home Type", column_type="Dimension[Connect Home Type]", output_type="None" }, { name="SymmConnect Enabled", column_type="Dimension[SymmConnect Enabled]", output_type="None" }, { name="Theater", column_type="Dimension[Theater]", output_type="None" }, { name="Division", column_type="Dimension[Division]", output_type="None" }, { name="Region", column_type="Dimension[Region]", output_type="None" }, { name="Sales Order Number", column_type="Dimension[Sales Order Number]", output_type="None" }, { name="Product Item Family", column_type="Dimension[Product Item Family]", output_type="None" }, { name="Item Serial Number", column_type="Dimension[Item Serial Number]", output_type="None" }, { name="Sales Order Deal Number", column_type="Dimension[Sales Order Deal Number]", output_type="None" }, { name="Item Install Date", column_type="Dimension[Item Install Date]", output_type="None" }, { name="SYR Last Dial Home Date", column_type="Dimension[SYR Last Dial Home Date]", output_type="None" }, { name="Maintained By Group", column_type="Dimension[Maintained By Group]", output_type="None" }, { name="PS Flag", column_type="Dimension[PS Flag]", output_type="None" }, { name="Connect Home Refusal Reason", column_type="Dimension[Connect Home Refusal Reason]", output_type="None", col_width=177 }, { name="Cust Name", column_type="Dimension[Cust Name]", output_type="None" }, { name="Sales Order Channel Type", column_type="Dimension[Sales Order Channel Type]", output_type="None" }, { name="Sales Order Type", column_type="Dimension[Sales Order Type]", output_type="None" }, { name="Part Model Key", column_type="Dimension[Part Model Key]", output_type="None" }, { name="Ship Date", column_type="Dimension[Ship Date]", output_type="None" }, { name="Model Number", column_type="Dimension[Model Number]", output_type="None" }, { name="Item Description", column_type="Dimension[Item Description]", output_type="None" }, { name="Customer Classification", column_type="Dimension[Customer Classification]", output_type="None" }, { name="CS Customer Name", column_type="Dimension[CS Customer Name]", output_type="None" }, { name="Install At Customer Number", column_type="Dimension[Install At Customer Number]", output_type="None" }, { name="Install at Country Name", column_type="Dimension[Install at Country Name]", 
output_type="None" }, { name="TLA Serial Number", column_type="Dimension[TLA Serial Number]", output_type="None" }, { name="Product Version", column_type="Dimension[Product Version]", output_type="None" }, { name="Avalanche Flag", column_type="Dimension[Avalanche Flag]", output_type="None" }, { name="Product Family", column_type="Dimension[Product Family]", output_type="None" }, { name="Project Number", column_type="Dimension[Project Number]", output_type="None" }, { name="PROJECT_STATUS", column_type="Dimension[PROJECT_STATUS]", output_type="None" } }, Available_Columns={ "Total TSS installs", "TSS Valid Connectivity Records", "% TSS Connectivity Record", "TSS Not Applicable", "TSS Customer Refusals", "% TSS Refusals", "TSS Eligible for Physical Connectivity", "TSS Boxes with Physical Connectivty", "% TSS Physical Connectivity", "Total Installs", "All Boxes with Valid Connectivty Record", "% All Connectivity Record", "Overall Refusals", "Overall Refusals %", "All Eligible for Physical Connectivty", "Boxes with Physical Connectivity", "% All with Physical Conectivity" }, Remaining_columns={ { name="Total Installs", column_type="Calc[Total Installs]", output_type="Number", format_string="." }, { name="All Boxes with Valid Connectivty Record", column_type="Calc[All Boxes with Valid Connectivty Record]", output_type="Number", format_string="." }, { name="% All Connectivity Record", column_type="Calc[% All Connectivity Record]", output_type="Number" }, { name="Overall Refusals", column_type="Calc[Overall Refusals]", output_type="Number", format_string="." }, { name="Overall Refusals %", column_type="Calc[Overall Refusals %]", output_type="Number" }, { name="All Eligible for Physical Connectivty", column_type="Calc[All Eligible for Physical Connectivty]", output_type="Number" }, { name="Boxes with Physical Connectivity", column_type="Calc[Boxes with Physical Connectivity]", output_type="Number" }, { name="% All with Physical Conectivity", column_type="Calc[% All with Physical Conectivity]", output_type="Number" } }, calcs={ { name="Total TSS installs", definition="Total[Total TSS installs]", ts_flag="Not TS Calc" }, { name="TSS Valid Connectivity Records", definition="Total[PS Boxes w/ valid connectivity record (1=yes)]", ts_flag="Not TS Calc" }, { name="% TSS Connectivity Record", definition="Total[PS Boxes w/ valid connectivity record (1=yes)] /Total[Total TSS installs]", ts_flag="Not TS Calc" }, { name="TSS Not Applicable", definition="Total[Bozes w/ valid connectivity record (1=yes)]-Total[Boxes Eligible (1=yes)]-Total[TSS Refusals]", ts_flag="Not TS Calc" }, { name="TSS Customer Refusals", definition="Total[TSS Refusals]", ts_flag="Not TS Calc" }, { name="% TSS Refusals", definition="Total[TSS Refusals]/Total[PS Boxes w/ valid connectivity record (1=yes)]", ts_flag="Not TS Calc" }, { name="TSS Eligible for Physical Connectivity", definition="Total[TSS Eligible]-Total[Exception]", ts_flag="Not TS Calc" }, { name="TSS Boxes with Physical Connectivty", definition="Total[PS Physical Connectivity] - Total[PS Physical Connectivity, SymmConnect Enabled=\"Capable not enabled\"]", ts_flag="Not TS Calc" }, { name="% TSS Physical Connectivity", definition="Total[Boxes w/ phys conn]/Total[Boxes Eligible (1=yes)]", ts_flag="Not TS Calc" }, { name="Total Installs", definition="Total[Total Installs]", ts_flag="Not TS Calc" }, { name="All Boxes with Valid Connectivty Record", definition="Total[Bozes w/ valid connectivity record (1=yes)]", ts_flag="Not TS Calc" }, { name="% All Connectivity Record", 
definition="Total[Bozes w/ valid connectivity record (1=yes)]/Total[Total Installs]", ts_flag="Not TS Calc" }, { name="Overall Refusals", definition="Total[Overall Refusals]", ts_flag="Not TS Calc" }, { name="Overall Refusals %", definition="Total[Overall Refusals]/Total[Bozes w/ valid connectivity record (1=yes)]", ts_flag="Not TS Calc" }, { name="All Eligible for Physical Connectivty", definition="Total[Boxes Eligible (1=yes)]-Total[Exception]", ts_flag="Not TS Calc" }, { name="Boxes with Physical Connectivity", definition="Total[Boxes w/ phys conn]-Total[Boxes w/ phys conn,SymmConnect Enabled=\"Capable not enabled\"]", ts_flag="Not TS Calc" }, { name="% All with Physical Conectivity", definition="Total[Boxes w/ phys conn]/Total[Boxes Eligible (1=yes)]", ts_flag="Not TS Calc" } }, merge_type="consolidate", merge_dbs={ { dbname="connectivityallproducts.mdl", diveline_dbname="/DI_PSREPORTING/connectivityallproducts.mdl" } }, skip_constant_columns="FALSE", categories={ { name="Geography", dimensions={ "Theater", "Division", "Region", "Install at Country Name" } }, { name="Mappings and Flags", dimensions={ "Connect Home Type", "Connect In Type", "SymmConnect Enabled", "Connect Home Refusal Reason", "Sales Order Channel Type", "Maintained By Group", "Customer Installable", "PS Flag", "Top Level Flag", "Avalanche Flag" } }, { name="Product Information", dimensions={ "Product Family", "Product Item Family", "Product Version", "Item Description" } }, { name="Sales Order Info", dimensions={ "Sales Order Deal Number", "Sales Order Number", "Sales Order Type" } }, { name="Dates", dimensions={ "Item Install Date", "Ship Date", "SYR Last Dial Home Date" } }, { name="Details", dimensions={ "Item Serial Number", "TLA Serial Number", "Part Model Key", "Model Number" } }, { name="Customer Infor", dimensions={ "CS Customer Name", "Install At Customer Number", "Customer Classification", "Cust Name" } }, { name="Other Dimensions", dimensions={ "Model" } } }, Maintain_Category_Order="FALSE", popup_info="false" } } };

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
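Incidentally, it is easy to see just how little the optimizer has to work with here by listing the statistics objects that exist on one of the temporary tables (remember that temporary tables live in tempdb).  This is just a quick sketch to illustrate the point – at most you will see the sampled, single-column statistics that were created automatically:

-- Sketch only: list the statistics objects on the #Custs temporary table
SELECT s.name,
       s.auto_created,
       s.user_created,
       s.no_recompute
FROM tempdb.sys.stats AS s
WHERE s.[object_id] = OBJECT_ID(N'tempdb..#Custs') ;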
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  Its presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • "domain crashed" when creating new Xen instance

    - by user47650
    I have downloaded a Xen virtual machine image from the appscale project, and I am trying to start it up. However once I run the command; xm create -c -f xen.conf The instance immediately crashes and provides no console output. however it produces logs that I have posted below. but this is the error; [2011-03-01 12:34:03 xend.XendDomainInfo 3580] WARNING (XendDomainInfo:1178) Domain has crashed: name=appscale-1.4b id=10. I have managed to mount the root.img file locally and verify that it is actually an ext3 file system. I am running Xen 3.0.3 that is a stock RPM from the CentOS 5 repos; # rpm -qa | grep -i xen xen-libs-3.0.3-105.el5_5.5 xen-3.0.3-105.el5_5.5 xen-libs-3.0.3-105.el5_5.5 kernel-xen-2.6.18-194.32.1.el5 any suggestions on how to proceed with troubleshooting? (i am a newbie to Xen) so far I have enabled console logging, but the log file is empty. ==> domain-builder-ng.log <== xc_dom_allocate: cmdline=" ip=:1.2.3.4::::eth0:dhcp root=/dev/sda1 ro xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console", features="" xc_dom_kernel_file: filename="/boot/vmlinuz-2.6.27-7-server" xc_dom_malloc_filemap : 2284 kB xc_dom_ramdisk_file: filename="/boot/initrd.img-2.6.27-7-server" xc_dom_malloc_filemap : 9005 kB xc_dom_boot_xen_init: ver 3.1, caps xen-3.0-x86_64 xen-3.0-x86_32p xc_dom_parse_image: called xc_dom_find_loader: trying ELF-generic loader ... failed xc_dom_find_loader: trying Linux bzImage loader ... xc_dom_malloc : 9875 kB xc_dom_do_gunzip: unzip ok, 0x234bb2 -> 0x9a4de0 OK elf_parse_binary: phdr: paddr=0x200000 memsz=0x447000 elf_parse_binary: phdr: paddr=0x647000 memsz=0xab888 elf_parse_binary: phdr: paddr=0x6f3000 memsz=0x908 elf_parse_binary: phdr: paddr=0x6f4000 memsz=0x1c2f9c elf_parse_binary: memory: 0x200000 -> 0x8b6f9c elf_xen_parse_note: GUEST_OS = "linux" elf_xen_parse_note: GUEST_VERSION = "2.6" elf_xen_parse_note: XEN_VERSION = "xen-3.0" elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000 elf_xen_parse_note: ENTRY = 0xffffffff8071e200 elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff80209000 elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb" elf_xen_parse_note: PAE_MODE = "yes" elf_xen_parse_note: LOADER = "generic" elf_xen_parse_note: unknown xen elf note (0xd) elf_xen_parse_note: SUSPEND_CANCEL = 0x1 elf_xen_parse_note: HV_START_LOW = 0xffff800000000000 elf_xen_parse_note: PADDR_OFFSET = 0x0 elf_xen_addr_calc_check: addresses: virt_base = 0xffffffff80000000 elf_paddr_offset = 0x0 virt_offset = 0xffffffff80000000 virt_kstart = 0xffffffff80200000 virt_kend = 0xffffffff808b6f9c virt_entry = 0xffffffff8071e200 xc_dom_parse_elf_kernel: xen-3.0-x86_64: 0xffffffff80200000 -> 0xffffffff808b6f9c xc_dom_mem_init: mem 1024 MB, pages 0x40000 pages, 4k each xc_dom_mem_init: 0x40000 pages xc_dom_boot_mem_init: called x86_compat: guest xen-3.0-x86_64, address size 64 xc_dom_malloc : 2048 kB ==> xend.log <== [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping... 
[2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping... [2011-03-01 12:34:02 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping... [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2114) UUID Created: True [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2115) Devices to release: [], domid = 9 [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2127) Releasing PVFB backend devices ... [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:207) XendDomainInfo.create(['domain', ['domid', 9], ['uuid', 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0'], ['vcpus', 1], ['vcpu_avail', 1], ['cpu_cap', 0], ['cpu_weight', 256], ['memory', 1024], ['shadow_memory', 0], ['maxmem', 1024], ['features', ''], ['name', 'appscale-1.4b'], ['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash', 'restart'], ['image', ['linux', ['kernel', '/boot/vmlinuz-2.6.27-7-server'], ['ramdisk', '/boot/initrd.img-2.6.27-7-server'], ['ip', ':1.2.3.4::::eth0:dhcp'], ['root', '/dev/sda1 ro'], ['args', 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console']]], ['cpus', []], ['device', ['vif', ['backend', 0], ['script', 'vif-bridge'], ['mac', '00:16:3B:72:10:E4']]], ['device', ['vbd', ['backend', 0], ['dev', 'sda1:disk'], ['uname', 'file:/local/xen/domains/appscale1.4/root.img'], ['mode', 'w']]], ['state', '----c-'], ['shutdown_reason', 'crash'], ['cpu_time', 0.000339131], ['online_vcpus', 1], ['up_time', '0.952092885971'], ['start_time', '1299011639.92'], ['store_mfn', 1169289], ['console_mfn', 1169288]]) [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:329) parseConfig: config is ['domain', ['domid', 9], ['uuid', 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0'], ['vcpus', 1], ['vcpu_avail', 1], ['cpu_cap', 0], ['cpu_weight', 256], ['memory', 1024], ['shadow_memory', 0], ['maxmem', 1024], ['features', ''], ['name', 'appscale-1.4b'], ['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash', 'restart'], ['image', ['linux', ['kernel', '/boot/vmlinuz-2.6.27-7-server'], ['ramdisk', '/boot/initrd.img-2.6.27-7-server'], ['ip', ':1.2.3.4::::eth0:dhcp'], ['root', '/dev/sda1 ro'], ['args', 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console']]], ['cpus', []], ['device', ['vif', ['backend', 0], ['script', 'vif-bridge'], ['mac', '00:16:3B:72:10:E4']]], ['device', ['vbd', ['backend', 0], ['dev', 'sda1:disk'], ['uname', 'file:/local/xen/domains/appscale1.4/root.img'], ['mode', 'w']]], ['state', '----c-'], ['shutdown_reason', 'crash'], ['cpu_time', 0.000339131], ['online_vcpus', 1], ['up_time', '0.952092885971'], ['start_time', '1299011639.92'], ['store_mfn', 1169289], ['console_mfn', 1169288]] [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:446) parseConfig: result is {'features': '', 'image': ['linux', ['kernel', '/boot/vmlinuz-2.6.27-7-server'], ['ramdisk', '/boot/initrd.img-2.6.27-7-server'], ['ip', ':1.2.3.4::::eth0:dhcp'], ['root', '/dev/sda1 ro'], ['args', 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console']], 'cpus': [], 'vcpu_avail': 1, 'backend': [], 'uuid': 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0', 'on_reboot': 'restart', 'cpu_weight': 256.0, 'memory': 1024, 'cpu_cap': 0, 'localtime': None, 'timer_mode': None, 'start_time': 1299011639.9200001, 'on_poweroff': 'destroy', 'on_crash': 'restart', 'device': [('vif', ['vif', ['backend', 0], ['script', 'vif-bridge'], ['mac', '00:16:3B:72:10:E4']]), 
('vbd', ['vbd', ['backend', 0], ['dev', 'sda1:disk'], ['uname', 'file:/local/xen/domains/appscale1.4/root.img'], ['mode', 'w']])], 'bootloader': None, 'maxmem': 1024, 'shadow_memory': 0, 'name': 'appscale-1.4b', 'bootloader_args': None, 'vcpus': 1, 'cpu': None} [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1784) XendDomainInfo.construct: None [2011-03-01 12:34:02 xend 3580] DEBUG (balloon:145) Balloon: 3034420 KiB free; need 4096; done. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1953) XendDomainInfo.initDomain: 10 256.0 [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1994) _initDomain:shadow_memory=0x0, maxmem=0x400, memory=0x400. [2011-03-01 12:34:02 xend 3580] DEBUG (balloon:145) Balloon: 3034412 KiB free; need 1048576; done. [2011-03-01 12:34:02 xend 3580] INFO (image:139) buildDomain os=linux dom=10 vcpus=1 [2011-03-01 12:34:02 xend 3580] DEBUG (image:208) domid = 10 [2011-03-01 12:34:02 xend 3580] DEBUG (image:209) memsize = 1024 [2011-03-01 12:34:02 xend 3580] DEBUG (image:210) image = /boot/vmlinuz-2.6.27-7-server [2011-03-01 12:34:02 xend 3580] DEBUG (image:211) store_evtchn = 1 [2011-03-01 12:34:02 xend 3580] DEBUG (image:212) console_evtchn = 2 [2011-03-01 12:34:02 xend 3580] DEBUG (image:213) cmdline = ip=:1.2.3.4::::eth0:dhcp root=/dev/sda1 ro xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console [2011-03-01 12:34:02 xend 3580] DEBUG (image:214) ramdisk = /boot/initrd.img-2.6.27-7-server [2011-03-01 12:34:02 xend 3580] DEBUG (image:215) vcpus = 1 [2011-03-01 12:34:02 xend 3580] DEBUG (image:216) features = ==> domain-builder-ng.log <== xc_dom_build_image: called xc_dom_alloc_segment: kernel : 0xffffffff80200000 -> 0xffffffff808b7000 (pfn 0x200 + 0x6b7 pages) xc_dom_pfn_to_ptr: domU mapping: pfn 0x200+0x6b7 at 0x2aaaab5f6000 elf_load_binary: phdr 0 at 0x0x2aaaab5f6000 -> 0x0x2aaaaba3d000 elf_load_binary: phdr 1 at 0x0x2aaaaba3d000 -> 0x0x2aaaabae8888 elf_load_binary: phdr 2 at 0x0x2aaaabae9000 -> 0x0x2aaaabae9908 elf_load_binary: phdr 3 at 0x0x2aaaabaea000 -> 0x0x2aaaabb9a004 xc_dom_alloc_segment: ramdisk : 0xffffffff808b7000 -> 0xffffffff82382000 (pfn 0x8b7 + 0x1acb pages) xc_dom_malloc : 160 kB xc_dom_pfn_to_ptr: domU mapping: pfn 0x8b7+0x1acb at 0x2aaab0000000 xc_dom_do_gunzip: unzip ok, 0x8cb5e7 -> 0x1aca210 xc_dom_alloc_segment: phys2mach : 0xffffffff82382000 -> 0xffffffff82582000 (pfn 0x2382 + 0x200 pages) xc_dom_pfn_to_ptr: domU mapping: pfn 0x2382+0x200 at 0x2aaab1acb000 xc_dom_alloc_page : start info : 0xffffffff82582000 (pfn 0x2582) xc_dom_alloc_page : xenstore : 0xffffffff82583000 (pfn 0x2583) xc_dom_alloc_page : console : 0xffffffff82584000 (pfn 0x2584) nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s) nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s) nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s) nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffff827fffff, 20 table(s) xc_dom_alloc_segment: page tables : 0xffffffff82585000 -> 0xffffffff8259c000 (pfn 0x2585 + 0x17 pages) xc_dom_pfn_to_ptr: domU mapping: pfn 0x2585+0x17 at 0x2aaab1ccb000 xc_dom_alloc_page : boot stack : 0xffffffff8259c000 (pfn 0x259c) xc_dom_build_image : virt_alloc_end : 0xffffffff8259d000 xc_dom_build_image : virt_pgtab_end : 0xffffffff82800000 xc_dom_boot_image: called arch_setup_bootearly: doing nothing xc_dom_compat_check: supported guest type: xen-3.0-x86_64 <= 
matches xc_dom_compat_check: supported guest type: xen-3.0-x86_32p xc_dom_update_guest_p2m: dst 64bit, pages 0x40000 clear_page: pfn 0x2584, mfn 0x11d788 clear_page: pfn 0x2583, mfn 0x11d789 xc_dom_pfn_to_ptr: domU mapping: pfn 0x2582+0x1 at 0x2aaab1ce2000 start_info_x86_64: called setup_hypercall_page: vaddr=0xffffffff80209000 pfn=0x209 domain builder memory footprint allocated malloc : 12139 kB anon mmap : 0 bytes mapped file mmap : 11289 kB domU mmap : 35 MB arch_setup_bootlate: shared_info: pfn 0x0, mfn 0xd6fe1 shared_info_x86_64: called vcpu_x86_64: called vcpu_x86_64: cr3: pfn 0x2585 mfn 0x11d787 launch_vm: called, ctxt=0x97b21f8 xc_dom_release: called ==> xend.log <== [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:114) DevController: writing {'mac': '00:16:3B:72:10:E4', 'handle': '0', 'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vif/10/0'} to /local/domain/10/device/vif/0. [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:116) DevController: writing {'domain': 'appscale-1.4b', 'handle': '0', 'script': '/etc/xen/scripts/vif-bridge', 'state': '1', 'frontend': '/local/domain/10/device/vif/0', 'mac': '00:16:3B:72:10:E4', 'online': '1', 'frontend-id': '10'} to /local/domain/0/backend/vif/10/0. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:634) Checking for duplicate for uname: /local/xen/domains/appscale1.4/root.img [file:/local/xen/domains/appscale1.4/root.img], dev: sda1:disk, mode: w [2011-03-01 12:34:02 xend 3580] DEBUG (blkif:27) exception looking up device number for sda1:disk: [Errno 2] No such file or directory: '/dev/sda1:disk' [2011-03-01 12:34:02 xend 3580] DEBUG (blkif:27) exception looking up device number for sda1: [Errno 2] No such file or directory: '/dev/sda1' [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:114) DevController: writing {'virtual-device': '2049', 'device-type': 'disk', 'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vbd/10/2049'} to /local/domain/10/device/vbd/2049. [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:116) DevController: writing {'domain': 'appscale-1.4b', 'frontend': '/local/domain/10/device/vbd/2049', 'format': 'raw', 'dev': 'sda1', 'state': '1', 'params': '/local/xen/domains/appscale1.4/root.img', 'mode': 'w', 'online': '1', 'frontend-id': '10', 'type': 'file'} to /local/domain/0/backend/vbd/10/2049. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:993) Storing VM details: {'shadow_memory': '0', 'uuid': 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0', 'on_reboot': 'restart', 'start_time': '1299011642.74', 'on_poweroff': 'destroy', 'name': 'appscale-1.4b', 'xend/restart_count': '0', 'vcpus': '1', 'vcpu_avail': '1', 'memory': '1024', 'on_crash': 'restart', 'image': "(linux (kernel /boot/vmlinuz-2.6.27-7-server) (ramdisk /boot/initrd.img-2.6.27-7-server) (ip :1.2.3.4::::eth0:dhcp) (root '/dev/sda1 ro') (args 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console'))", 'maxmem': '1024'} [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1028) Storing domain details: {'console/ring-ref': '1169288', 'console/port': '2', 'name': 'appscale-1.4b', 'console/limit': '1048576', 'vm': '/vm/d5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0', 'domid': '10', 'cpu/0/availability': 'online', 'memory/target': '1048576', 'store/ring-ref': '1169289', 'store/port': '1'} [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:158) Waiting for devices vif. 
[2011-03-01 12:34:02 xend 3580] DEBUG (DevController:164) Waiting for 0. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1250) XendDomainInfo.handleShutdownWatch [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vif/10/0/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vif/10/0/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:523) hotplugStatusCallback 1. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices usb. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vbd. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:164) Waiting for 2049. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vbd/10/2049/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vbd/10/2049/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:523) hotplugStatusCallback 1. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices irq. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vkbd. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vfb. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices pci. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices ioports. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices tap. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vtpm. [2011-03-01 12:34:03 xend.XendDomainInfo 3580] WARNING (XendDomainInfo:1178) Domain has crashed: name=appscale-1.4b id=10. [2011-03-01 12:34:03 xend.XendDomainInfo 3580] ERROR (XendDomainInfo:2654) VM appscale-1.4b restarting too fast (2.275545 seconds since the last restart). Refusing to restart to avoid loops. [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2189) XendDomainInfo.destroy: domid=10 ==> xen-hotplug.log <== Nothing to flush. ==> xend.log <== [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2114) UUID Created: True [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2115) Devices to release: [], domid = 10 [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2127) Releasing PVFB backend devices ... 
And this is the xen.conf file that I am using; # cat xen.conf # Configuration file for the Xen instance AppScale, created # bn VMBuilder kernel = '/boot/vmlinuz-2.6.27-7-server' ramdisk = '/boot/initrd.img-2.6.27-7-server' memory = 1024 vcpus = 1 root = '/dev/sda1 ro' disk = [ 'file:/local/xen/domains/appscale1.4/root.img,sda1,w', ] name = 'appscale-1.4b' dhcp = 'dhcp' vif = [ 'mac=00:16:3B:72:10:E4' ] on_poweroff = 'destroy' on_reboot = 'restart' on_crash = 'restart' extra = 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console'

    Read the article

  • Uninitialized constant Item::Types

    - by Rasmus
    Hi! First of, im a newbie ruby programmer so please bare with me if this is a very dumb question. I get this uninitialized constant error when i submit my nested forms. order.rb class Order < ActiveRecord::Base has_many :items, :dependent => :destroy has_many :types, :through => :items accepts_nested_attributes_for :items accepts_nested_attributes_for :types validates_associated :items validates_associated :types end item.rb class Item < ActiveRecord::Base has_one :types belongs_to :order accepts_nested_attributes_for :types validates_associated :types end type.rb class Type < ActiveRecord::Base belongs_to :items belongs_to :orders end new.erb.html <% form_for @order do |f| %> <%= f.error_messages %> <% f.fields_for :items do |builder| %> <table border="0"> <th>Type</th> <th>Amount</th> <th>Text</th> <th>Price</th> <tr> <% f.fields_for :type do |m| %> <td> <%= m.collection_select :type, Type.find(:all, :order => "created_at DESC"), :id, :name, {:prompt => "Select a Type" }, {:id => "selector", :onchange => "type_change(this)"} %> </td> <% end %> <td> <%= f.text_field :amount, :id => "amountField", :onchange => "change_total_price()" %> </td> <td> <%= f.text_field :text, :id => "textField" %> </td> <td> <%= f.text_field :price, :class => "priceField", :onChange => "change_total_price()" %> </td> <td> <%= link_to_remove_fields "Remove Item", f %> </td> </tr> </table> <% end %> <p><%= link_to_add_fields "Add Item", f, :items %></p> <p> <%= f.label :total_price %><br /> <%= f.text_field :total_price, :class => "priceField", :id => "totalPrice" %> </p> <p><%= f.submit "Create"%></p> <% end %> <%= link_to 'Back', orders_path %> create method in orders_controller.rb def create @order = Order.new(params[:order]) respond_to do |format| if @order.save flash[:notice] = 'Post was successfully created.' format.html { redirect_to(@order) } format.xml { render :xml => @order, :status => :created, :location => @order } else format.html { render :action => "new" } format.xml { render :xml => @order.errors, :status => :unprocessable_entity } end end end Hopefully you can see what i cant

    Read the article

  • What’s the password for my FileVault 2 boot volume?

    - by cbowns
    Apple’s support document for FileVault 2 (a.k.a. “full disk encryption” or “FDE”) has lots of information about enabling FDE and what it means for booting the machine. However, it doesn’t cover one very important thing I’m trying to do: mount the drive in the Recovery HD environment to reinstall OS X on it. The Recovery HD environment asks me for the volume passphrase so it can mount my drive and try to install OS X onto it. If this were an external drive that I’d manually enabled FDE on with diskutil, or an external Time Machine volume, I’d know the passphrase, because those make you pick one (just like a regular login password), but FileVault 2 never asked me for a volume passphrase (I assume it picks one behind the scenes). I’ve tried my main user’s password, but that doesn’t work, and neither does the recovery key set for the volume. Keychain Access doesn’t have anything that I could find. How do I unlock this volume?
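    For anyone poking at the same thing from Terminal in the Recovery HD environment, here's a rough sketch (the UUID and passphrase are placeholders) of how to inspect and try to unlock the CoreStorage logical volume directly rather than through the installer's dialog:

        # List CoreStorage volumes; note the Logical Volume UUID of the boot volume
        diskutil corestorage list

        # Attempt to unlock/mount it with a candidate passphrase
        diskutil corestorage unlockVolume 12345678-1234-1234-1234-1234567890AB -passphrase 'candidate-passphrase'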

    Read the article

  • Fedora 12: How to Force a Display to Load on Boot even when no Monitor is Attached

    - by key
    When booting Fedora 12 with no monitor connected, if I connect a monitor later it gives a "no signal" error. I think it auto-detects whether a monitor is connected and only loads a display based on that, so since it initially detects no monitor, no display is loaded. What I would like is for it to act as if a monitor were always connected and always provide a display signal. I don't know if this is really the issue, but if you know how I can fix it so that I can connect a monitor whenever I feel like it without having to restart, that would be best. Thanks for the help! :D
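    One approach that has worked on headless boxes is to stop relying on EDID detection and hard-code the display in /etc/X11/xorg.conf. A rough sketch, assuming the proprietary NVIDIA driver (the ConnectedMonitor option is NVIDIA-specific, other drivers need their own override, and the sync ranges and mode are placeholders for your monitor):

        Section "Monitor"
            Identifier   "Monitor0"
            HorizSync    28.0 - 80.0
            VertRefresh  48.0 - 75.0
        EndSection

        Section "Device"
            Identifier   "Videocard0"
            Driver       "nvidia"
            Option       "ConnectedMonitor" "CRT"
            Option       "UseEDID" "false"
        EndSection

        Section "Screen"
            Identifier   "Screen0"
            Device       "Videocard0"
            Monitor      "Monitor0"
            SubSection "Display"
                Modes    "1024x768"
            EndSubSection
        EndSection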

    Read the article

  • Find or restore wiki pages (computer will not boot anymore and I need the wiki pages)

    - by Nathan187
    A few years ago, where I work, I created a wiki for me and my co-workers. We work on a lot of old programs, and to help with cross-training we put a lot of our notes in the wiki. Sadly, the wiki was hosted on my machine and my machine has died. I can pull the drive out, hook it up to an enclosure, and still see the files, etc. What I want to know is: is there a way to get the files/pages from that wiki somehow? I think they are stored in a MySQL database somewhere. Yeah, it sucks and I had a lot of stuff on that drive, but the most important thing for me now is to get those notes (wiki pages). Any help would be appreciated.
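    A rough recovery sketch, assuming the wiki was something like MediaWiki backed by MySQL on a Linux install; the device node, paths, and the database name wikidb are all placeholders:

        # Mount the old drive (device node depends on your enclosure)
        sudo mkdir -p /mnt/olddisk
        sudo mount /dev/sdb1 /mnt/olddisk

        # Copy the old MySQL data directory somewhere scratch
        sudo cp -a /mnt/olddisk/var/lib/mysql /srv/oldmysql

        # Start a throwaway MySQL instance against the copied data
        sudo mysqld_safe --datadir=/srv/oldmysql --socket=/tmp/oldmysql.sock --port=3307 &

        # Dump the wiki database so it can be re-imported on a new wiki host
        mysqldump --socket=/tmp/oldmysql.sock -u root -p wikidb > wikidb.sql

    If it is MediaWiki, the page text lives in the text/revision tables, and once the database is back online the bundled maintenance/dumpBackup.php script can export the pages to XML.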

    Read the article

  • Ubuntu - why would /var/log/dmesg stop updating after boot? It does not show the panic/cpu_hung errors that the console shows

    - by Tom G
    So I have an Ubuntu 10.04 VM installed on a host, running the latest 2.6.38-15-server kernel. /var/log/dmesg displays only the boot-up messages and stops recording after that, so it never shows the trace/cpu_hung errors I am trying to troubleshoot. /var/log/dmesg.0 and dmesg.1 have nothing either - I did a string search for the text that displays on the console during the crash, and NOTHING gets logged anywhere in /var/log/*. I have to call the provider and ask them to take a screenshot of the console, since nothing shows up in dmesg. Why would /var/log/dmesg not record kernel panics and the like?
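    For context: /var/log/dmesg is written once at boot from the kernel ring buffer; later kernel messages go to /var/log/kern.log and /var/log/syslog via rsyslog, and a hard hang or panic may never make it to disk at all. One way to capture those anyway is to ship the console over the network with netconsole - a rough sketch, with placeholder addresses, MAC, and interface names:

        # Load netconsole: local-port@local-IP/dev, remote-port@remote-IP/remote-MAC
        sudo modprobe netconsole netconsole=6665@10.0.0.5/eth0,6666@10.0.0.9/00:16:3e:aa:bb:cc

        # On the receiving machine (10.0.0.9 here), collect the kernel output
        nc -u -l 6666 | tee crash-console.log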

    Read the article

  • Windows Vista Help, OS Installation

    - by Darknight1366
    I got a Vista Ultimate DVD from my friend as a .iso file. I extracted it and found two files (there were some boot files too): boot.wim and install.wim. I extracted boot.wim with WinMount, but I can't extract install.wim. Should I burn all the extracted files to a DVD, or should I keep trying to extract install.wim? I searched Google, and some forums and websites said that setup.exe can extract install.wim before starting the installation. (I found setup.exe after extracting boot.wim.)
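    For what it's worth, install.wim is not normally extracted by hand at all: Windows Setup applies it during installation, so the usual route is to burn the original .iso as a disc image (not as a folder of files) so the DVD stays bootable. If the files have already been extracted, a bootable ISO can be rebuilt with oscdimg from the Windows AIK - a rough sketch, with placeholder paths:

        REM Rebuild a bootable Vista ISO from an extracted DVD tree
        REM (-n allows long file names, -m ignores the size limit,
        REM  -b points at the El Torito boot sector shipped in \boot)
        oscdimg -n -m -bC:\vista\boot\etfsboot.com C:\vista C:\vista-rebuilt.iso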

    Read the article

  • See if a CD is marked as bootable

    - by Crash893
    I am trying to install XP on an old laptop (Compaq Presario 700) from a stored version of XP that I burned to an ISO. The copy of XP is legit, but the laptop won't boot to it. I am able to boot to the crappy recovery CD (which is basically just DOS) and to a copy of "Ultimate Boot CD v5" that I just downloaded and burned on the same computer and burner that the XP CD was made on. Everything boots except this XP CD. Is there a way to tell if it's bootable?
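    A bootable disc or image carries an El Torito boot record, which can be checked from any Linux machine or live CD - a rough sketch with a placeholder file name (the quoted output line is just what a typical bootable image reports):

        # 'file' flags a bootable image explicitly
        file xp.iso
        # xp.iso: ISO 9660 CD-ROM filesystem data 'WXPOEM_EN' (bootable)

        # isoinfo (genisoimage/cdrkit package) shows the El Torito boot catalog
        isoinfo -d -i xp.iso | grep -i torito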

    Read the article

  • How to address a recurring low temperature error seen at every boot-up?

    - by GregC
    After updating to the latest controller firmware, I started receiving the following error messages:

        LSI 2208 ROC: Temperature sensor below error threshold on enclosure 1 Sensors 5 thru 7

    Is this something I should worry about, or is it a red herring? Details: I have a Sans Digital NexentaSTOR 24-disk JBOD enclosure connected to an LSI 9286-8e RAID-on-Chip controller with two SAS cables. Seagate ES.2 3TB SAS hard drives populate every bay in the enclosure.
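    One way to cross-check what the controller itself thinks the enclosure sensors read is MegaCli - a rough sketch (the install path and the -aALL adapter selection are assumptions for a typical Linux install):

        # Enclosure details, including temperature sensor readings
        /opt/MegaRAID/MegaCli/MegaCli64 -EncInfo -aALL

        # Dump the controller event log to see when the threshold events fired
        /opt/MegaRAID/MegaCli/MegaCli64 -AdpEventLog -GetEvents -f lsi-events.log -aALL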

    Read the article
