Search Results


  • Calendar add() vs roll(): when do we use each?

    - by Pentium10
    I know add() adds the specified (signed) amount of time to the given time field, based on the calendar's rules. And roll() adds the specified (signed) single unit of time to the given time field without changing larger fields. I can't think of an everyday use for roll(); I would do everything with add(). Can you help me out with examples of when to use roll() and when to use add()?
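    For a concrete picture of the difference, here is a minimal sketch (my own illustration, not from the question): rolling the hour at the end of the year wraps within the day, while adding carries into the date. The classic use for roll() is a spinner-style date/time picker, where spinning one field should not disturb the others.

        import java.util.Calendar;
        import java.util.GregorianCalendar;

        public class AddVsRoll {
            public static void main(String[] args) {
                // 31 Dec 2009, 23:30
                Calendar cal = new GregorianCalendar(2009, Calendar.DECEMBER, 31, 23, 30);
                cal.add(Calendar.HOUR_OF_DAY, 1);   // carries: 1 Jan 2010, 00:30
                System.out.println(cal.getTime());

                cal = new GregorianCalendar(2009, Calendar.DECEMBER, 31, 23, 30);
                cal.roll(Calendar.HOUR_OF_DAY, 1);  // wraps the hour only: 31 Dec 2009, 00:30
                System.out.println(cal.getTime());
            }
        }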


  • How can I view the source code for a particular `predict` function?

    - by merlin2011
    Based on the documentation, predict is a polymorphic function in R and a different function is actually called depending on what is passed as the first argument. However, the documentation does not give any information about the names of the functions that predict actually invokes for any particular class. Normally, one could type the name of a function to get its source, but this does not work with predict. If I want to view the source code for the predict function when invoked on objects of the type glmnet, what is the easiest way?
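    For reference, a minimal sketch of the usual way to locate an S3 method's source (assuming the glmnet package is installed):

        library(glmnet)
        methods(predict)                  # lists available predict.* methods
        getS3method("predict", "glmnet")  # prints the source of the glmnet method
        getAnywhere(predict.glmnet)       # works even if the method is not exported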


  • Creating a good search solution

    - by Daniel
    I have an app where users have a role, a username, a faculty and so on. When I'm looking for a list of users by their role or faculty or anything else they have in common, I can call (among other possibilities) @users = User.find_by_role(params[:role]) #or @users = User.find_by_shift(params[:shift]) So it keeps to the pattern Class.find_by_property. So the question is: what if, at different points, user lists should be generated based on different properties? I mean: I'm passing params[:role] or params[:faculty] or params[:department] from different links to my list action in my users controller. As I see it, it all has to be in that action, but which parameter should the search be made by?
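    As a sketch of one way the action could branch on whichever parameter arrives (the whitelist and names here are my assumptions, not the poster's code):

        # app/controllers/users_controller.rb -- hypothetical sketch
        FILTERS = %w[role faculty department shift]

        def list
          # pick the first whitelisted filter actually present in params
          key = FILTERS.find { |f| params[f].present? }
          @users = key ? User.where(key => params[key]) : User.all
        end

    Whitelisting the attribute names keeps arbitrary user input out of the query.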


  • % operator for time calculation

    - by Chris
    I am trying to display minutes and seconds based on a number of seconds. I have:

        float seconds = 200;
        float mins = seconds / 60.0;
        float sec = mins % 60.0;
        [timeIndexLabel setText:[NSString stringWithFormat:@"%.2f , %.2f", mins, seconds]];

    But I get an error:

        invalid operands of types 'float' and 'double' to binary 'operator%'

    And I don't understand why... Can someone throw me a bone!?
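    The error comes up because C's built-in % is defined only for integer operands; the floating-point counterpart is fmod()/fmodf() from math.h. A sketch of how the calculation could look, keeping the question's variable names:

        #include <math.h>

        float seconds = 200;
        int mins = (int)(seconds / 60);     // whole minutes
        float sec = fmodf(seconds, 60.0f);  // leftover seconds
        [timeIndexLabel setText:[NSString stringWithFormat:@"%d:%05.2f", mins, sec]];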


  • Do spaces in your URL (%20) have a negative impact on SEO?

    - by Kevin
    All the articles I Googled on this subject date back to 2004-2005. Basically I am structuring precanned searches, based on categories the client will input. Example: content/(term name)/index.htm. Does it matter if I use the raw term with a space, which is converted to %20 in the URL, or should I convert the space to '-' and strip that before querying for results? I already have it working, but does anyone know if this definitely has a negative impact on SEO and ranking?


  • An Xml Serializable PropertyBag Dictionary Class for .NET

    - by Rick Strahl
    I don't know about you but I frequently need property bags in my applications to store and possibly cache arbitrary data. Dictionary<T,V> works well for this, although I always seem to be hunting for a more specific generic type that provides a string-keyed dictionary. There's StringDictionary, but it only works with strings. There's HashSet<T>, but it uses the actual values as keys. In most key/value situations a string key is what I want to work with. Dictionary<T,V> works well enough, but there are some issues with serialization of dictionaries in .NET. The .NET framework doesn't do well serializing IDictionary objects out of the box: the XmlSerializer doesn't support serialization of IDictionary via its default serialization, and while the DataContractSerializer does support IDictionary serialization, it produces some pretty atrocious XML.

    What doesn't work?

    First off, Dictionary serialization with the XmlSerializer doesn't work, so the following fails:

        [TestMethod]
        public void DictionaryXmlSerializerTest()
        {
            var bag = new Dictionary<string, object>();
            bag.Add("key", "Value");
            bag.Add("Key2", 100.10M);
            bag.Add("Key3", Guid.NewGuid());
            bag.Add("Key4", DateTime.Now);
            bag.Add("Key5", true);
            bag.Add("Key7", new byte[3] { 42, 45, 66 });

            TestContext.WriteLine(this.ToXml(bag));
        }

        public string ToXml(object obj)
        {
            if (obj == null)
                return null;

            StringWriter sw = new StringWriter();
            XmlSerializer ser = new XmlSerializer(obj.GetType());
            ser.Serialize(sw, obj);
            return sw.ToString();
        }

    The error you get with this is:

        System.NotSupportedException: The type System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not supported because it implements IDictionary.

    Got it! BTW, the same is true with binary serialization.

    Running the same code above against the DataContractSerializer does work:

        [TestMethod]
        public void DictionaryDataContextSerializerTest()
        {
            var bag = new Dictionary<string, object>();
            bag.Add("key", "Value");
            bag.Add("Key2", 100.10M);
            bag.Add("Key3", Guid.NewGuid());
            bag.Add("Key4", DateTime.Now);
            bag.Add("Key5", true);
            bag.Add("Key7", new byte[3] { 42, 45, 66 });

            TestContext.WriteLine(this.ToXmlDcs(bag));
        }

        public string ToXmlDcs(object value, bool throwExceptions = false)
        {
            var ser = new DataContractSerializer(value.GetType(), null, int.MaxValue, true, false, null);
            MemoryStream ms = new MemoryStream();
            ser.WriteObject(ms, value);
            return Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length);
        }

    This DOES work, but produces some pretty heinous XML (formatted with line breaks and indentation here):

        <ArrayOfKeyValueOfstringanyType xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays"
                                        xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
          <KeyValueOfstringanyType>
            <Key>key</Key>
            <Value i:type="a:string" xmlns:a="http://www.w3.org/2001/XMLSchema">Value</Value>
          </KeyValueOfstringanyType>
          <KeyValueOfstringanyType>
            <Key>Key2</Key>
            <Value i:type="a:decimal" xmlns:a="http://www.w3.org/2001/XMLSchema">100.10</Value>
          </KeyValueOfstringanyType>
          <KeyValueOfstringanyType>
            <Key>Key3</Key>
            <Value i:type="a:guid" xmlns:a="http://schemas.microsoft.com/2003/10/Serialization/">2cd46d2a-a636-4af4-979b-e834d39b6d37</Value>
          </KeyValueOfstringanyType>
          <KeyValueOfstringanyType>
            <Key>Key4</Key>
            <Value i:type="a:dateTime" xmlns:a="http://www.w3.org/2001/XMLSchema">2011-09-19T17:17:05.4406999-07:00</Value>
          </KeyValueOfstringanyType>
          <KeyValueOfstringanyType>
            <Key>Key5</Key>
            <Value i:type="a:boolean" xmlns:a="http://www.w3.org/2001/XMLSchema">true</Value>
          </KeyValueOfstringanyType>
          <KeyValueOfstringanyType>
            <Key>Key7</Key>
            <Value i:type="a:base64Binary" xmlns:a="http://www.w3.org/2001/XMLSchema">Ki1C</Value>
          </KeyValueOfstringanyType>
        </ArrayOfKeyValueOfstringanyType>

    Ouch! That seriously hurts the eye! :-) Worse, though, it's extremely verbose with all those repetitive namespace declarations. It's good to know that it works in a pinch, but for a human-readable/editable solution or something lightweight to store in a database it's not quite ideal.

    Why should I care?

    As a little background: in one of my applications I have a need for a flexible property bag that is used on a free-form database field on an otherwise static entity. Basically what I have is a standard database record to which arbitrary properties can be added in an XML-based string field. I intend to expose those arbitrary properties as a collection from the field data stored as XML. The concept is pretty simple: when the data is saved, serialize the collection into an XML string and store it in the database; when reading, pick up the XML and, if the collection on the entity is accessed, automatically deserialize the XML into the Dictionary. (I'll talk more about this in another post.) While the DataContractSerializer would work, its verbosity is problematic both for the size of the generated XML strings and because users can manually edit this XML-based property data in an advanced mode. A clean(er) layout certainly would be preferable and more user friendly.

    Custom XML serialization with a PropertyBag class

    So... after a bunch of experimentation with different serialization formats I decided to create a custom PropertyBag class that provides a serializable Dictionary. It's basically a custom Dictionary<string,TValue> implementation with the keys always set as string keys. The results are PropertyBag<TValue> and PropertyBag (which defaults to the object type for values). The PropertyBag<TValue> and PropertyBag classes provide these features:

    - Subclassed from Dictionary<T,V>
    - Implements IXmlSerializable with a clean(ish) XML format
    - ToXml() and FromXml() methods to export and import to and from XML strings
    - A static CreateFromXml() method to create an instance

    It's simple enough, as it's merely a Dictionary<string,object> subclass, but one that supports serialization to a - what I think at least - cleaner XML format. The class is super simple to use:

        [TestMethod]
        public void PropertyBagTwoWayObjectSerializationTest()
        {
            var bag = new PropertyBag();
            bag.Add("key", "Value");
            bag.Add("Key2", 100.10M);
            bag.Add("Key3", Guid.NewGuid());
            bag.Add("Key4", DateTime.Now);
            bag.Add("Key5", true);
            bag.Add("Key7", new byte[3] { 42, 45, 66 });
            bag.Add("Key8", null);
            bag.Add("Key9", new ComplexObject() {
                Name = "Rick",
                Entered = DateTime.Now,
                Count = 10
            });

            string xml = bag.ToXml();
            TestContext.WriteLine(bag.ToXml());

            bag.Clear();
            bag.FromXml(xml);

            Assert.IsTrue(bag["key"] as string == "Value");
            Assert.IsInstanceOfType(bag["Key3"], typeof(Guid));
            Assert.IsNull(bag["Key8"]);
            //Assert.IsNull(bag["Key10"]);
            Assert.IsInstanceOfType(bag["Key9"], typeof(ComplexObject));
        }

    This uses the PropertyBag class, which is a PropertyBag of object values - which means it returns untyped values of type object. I suspect for me this will be the most common scenario, as I'd want to store arbitrary values in the PropertyBag rather than one specific type. The same code with a strongly typed PropertyBag<decimal> looks like this:

        [TestMethod]
        public void PropertyBagTwoWayValueTypeSerializationTest()
        {
            var bag = new PropertyBag<decimal>();
            bag.Add("key", 10M);
            bag.Add("Key1", 100.10M);
            bag.Add("Key2", 200.10M);
            bag.Add("Key3", 300.10M);

            string xml = bag.ToXml();
            TestContext.WriteLine(bag.ToXml());

            bag.Clear();
            bag.FromXml(xml);

            Assert.IsTrue(bag.Get("Key1") == 100.10M);
            Assert.IsTrue(bag.Get("Key3") == 300.10M);
        }

    and produces typed results of type decimal. The types can be either value or reference types, the combination of which actually proved to be a little more tricky than anticipated due to the null and specific string value checks required - getting the generic typing right required use of default(T) and Convert.ChangeType() to trick the compiler into playing nice.

    Of course the whole raison d'etre for this class is the XML serialization. You can see in the code above that we're doing a .ToXml() and .FromXml() to serialize to and from string. The XML produced for the first example looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <properties>
          <item>
            <key>key</key>
            <value>Value</value>
          </item>
          <item>
            <key>Key2</key>
            <value type="decimal">100.10</value>
          </item>
          <item>
            <key>Key3</key>
            <value type="___System.Guid">
              <guid>f7a92032-0c6d-4e9d-9950-b15ff7cd207d</guid>
            </value>
          </item>
          <item>
            <key>Key4</key>
            <value type="datetime">2011-09-26T17:45:58.5789578-10:00</value>
          </item>
          <item>
            <key>Key5</key>
            <value type="boolean">true</value>
          </item>
          <item>
            <key>Key7</key>
            <value type="base64Binary">Ki1C</value>
          </item>
          <item>
            <key>Key8</key>
            <value type="nil" />
          </item>
          <item>
            <key>Key9</key>
            <value type="___Westwind.Tools.Tests.PropertyBagTest+ComplexObject">
              <ComplexObject>
                <Name>Rick</Name>
                <Entered>2011-09-26T17:45:58.5789578-10:00</Entered>
                <Count>10</Count>
              </ComplexObject>
            </value>
          </item>
        </properties>

    The format is a bit cleaner than the DataContractSerializer's. Each item is serialized into <key>/<value> pairs. If the value is a string, no type information is written; since string tends to be the most common type, this saves space and serialization processing. All other types are attributed: simple types are mapped to XML types, so things like decimal, datetime, boolean and base64Binary are encoded using their XML type values. All other types are embedded with a hokey format that describes the .NET type, preceded by three underscores, and are then encoded using the XmlSerializer. You can see this best above in the ComplexObject encoding. For custom types this isn't pretty either, but it's more concise than the DCS output, and it works at least as long as you're serializing back and forth between .NET clients.

    The XML generated from the second example that uses PropertyBag<decimal> looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <properties>
          <item>
            <key>key</key>
            <value type="decimal">10</value>
          </item>
          <item>
            <key>Key1</key>
            <value type="decimal">100.10</value>
          </item>
          <item>
            <key>Key2</key>
            <value type="decimal">200.10</value>
          </item>
          <item>
            <key>Key3</key>
            <value type="decimal">300.10</value>
          </item>
        </properties>

    How does it work?

    As I mentioned, there's nothing fancy about this solution - it's little more than a subclass of Dictionary<T,V> that implements custom XML serialization plus a couple of helper methods that facilitate getting the XML in and out of the class more easily. But it's proven very handy for a number of projects for me where dynamic data storage is required. Here's the code:

        /// <summary>
        /// Creates a serializable string/object dictionary that is XML serializable.
        /// Encodes keys as element names and values as simple values with a type
        /// attribute that contains an XML type name. Complex names encode the type
        /// name with type='___namespace.classname' format followed by a standard xml
        /// serialized format. The latter serialization can be slow so it's not recommended
        /// to pass complex types if performance is critical.
        /// </summary>
        [XmlRoot("properties")]
        public class PropertyBag : PropertyBag<object>
        {
            /// <summary>
            /// Creates an instance of a propertybag from an Xml string
            /// </summary>
            public static PropertyBag CreateFromXml(string xml)
            {
                var bag = new PropertyBag();
                bag.FromXml(xml);
                return bag;
            }
        }

        /// <summary>
        /// Creates a serializable dictionary for generic types that is XML serializable.
        ///
        /// Encodes keys as element names and values as simple values with a type
        /// attribute that contains an XML type name. Complex names encode the type
        /// name with type='___namespace.classname' format followed by a standard xml
        /// serialized format. The latter serialization can be slow so it's not recommended
        /// to pass complex types if performance is critical.
        /// </summary>
        /// <typeparam name="TValue">Must be a reference type. For value types use type object</typeparam>
        [XmlRoot("properties")]
        public class PropertyBag<TValue> : Dictionary<string, TValue>, IXmlSerializable
        {
            /// <summary>
            /// Not implemented - this means no schema information is passed
            /// so this won't work with ASMX/WCF services.
            /// </summary>
            public System.Xml.Schema.XmlSchema GetSchema()
            {
                return null;
            }

            /// <summary>
            /// Serializes the dictionary to XML. Keys are
            /// serialized to element names and values as
            /// element values. An xml type attribute is embedded
            /// for each serialized element - a .NET type
            /// element is embedded for each complex type and
            /// prefixed with three underscores.
            /// </summary>
            public void WriteXml(System.Xml.XmlWriter writer)
            {
                foreach (string key in this.Keys)
                {
                    TValue value = this[key];

                    Type type = null;
                    if (value != null)
                        type = value.GetType();

                    writer.WriteStartElement("item");

                    writer.WriteStartElement("key");
                    writer.WriteString(key as string);
                    writer.WriteEndElement();

                    writer.WriteStartElement("value");
                    string xmlType = XmlUtils.MapTypeToXmlType(type);
                    bool isCustom = false;

                    // Type information attribute if not string
                    if (value == null)
                    {
                        writer.WriteAttributeString("type", "nil");
                    }
                    else if (!string.IsNullOrEmpty(xmlType))
                    {
                        if (xmlType != "string")
                        {
                            writer.WriteStartAttribute("type");
                            writer.WriteString(xmlType);
                            writer.WriteEndAttribute();
                        }
                    }
                    else
                    {
                        isCustom = true;
                        xmlType = "___" + value.GetType().FullName;
                        writer.WriteStartAttribute("type");
                        writer.WriteString(xmlType);
                        writer.WriteEndAttribute();
                    }

                    // Actual value serialization
                    if (!isCustom)
                    {
                        if (value != null)
                            writer.WriteValue(value);
                    }
                    else
                    {
                        XmlSerializer ser = new XmlSerializer(value.GetType());
                        ser.Serialize(writer, value);
                    }

                    writer.WriteEndElement(); // value
                    writer.WriteEndElement(); // item
                }
            }

            /// <summary>
            /// Reads the custom serialized format
            /// </summary>
            public void ReadXml(System.Xml.XmlReader reader)
            {
                this.Clear();
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element && reader.Name == "key")
                    {
                        string xmlType = null;
                        string name = reader.ReadElementContentAsString();

                        // item element
                        reader.ReadToNextSibling("value");

                        if (reader.MoveToNextAttribute())
                            xmlType = reader.Value;

                        reader.MoveToContent();

                        TValue value;
                        if (xmlType == "nil")
                            value = default(TValue); // null
                        else if (string.IsNullOrEmpty(xmlType))
                        {
                            // value is a string or object and we can assign TValue to value
                            string strval = reader.ReadElementContentAsString();
                            value = (TValue)Convert.ChangeType(strval, typeof(TValue));
                        }
                        else if (xmlType.StartsWith("___"))
                        {
                            while (reader.Read() && reader.NodeType != XmlNodeType.Element)
                            { }

                            Type type = ReflectionUtils.GetTypeFromName(xmlType.Substring(3));
                            //value = reader.ReadElementContentAs(type, null);
                            XmlSerializer ser = new XmlSerializer(type);
                            value = (TValue)ser.Deserialize(reader);
                        }
                        else
                            value = (TValue)reader.ReadElementContentAs(XmlUtils.MapXmlTypeToType(xmlType), null);

                        this.Add(name, value);
                    }
                }
            }

            /// <summary>
            /// Serializes this dictionary to an XML string
            /// </summary>
            /// <returns>XML String or Null if it fails</returns>
            public string ToXml()
            {
                string xml = null;
                SerializationUtils.SerializeObject(this, out xml);
                return xml;
            }

            /// <summary>
            /// Deserializes from an XML string
            /// </summary>
            /// <returns>true or false</returns>
            public bool FromXml(string xml)
            {
                this.Clear();

                // if xml string is empty we return an empty dictionary
                if (string.IsNullOrEmpty(xml))
                    return true;

                var result = SerializationUtils.DeSerializeObject(xml, this.GetType()) as PropertyBag<TValue>;
                if (result != null)
                {
                    foreach (var item in result)
                    {
                        this.Add(item.Key, item.Value);
                    }
                }
                else
                    // null is a failure
                    return false;

                return true;
            }

            /// <summary>
            /// Creates an instance of a propertybag from an Xml string
            /// </summary>
            public static PropertyBag<TValue> CreateFromXml(string xml)
            {
                var bag = new PropertyBag<TValue>();
                bag.FromXml(xml);
                return bag;
            }
        }

    The code uses a couple of small helper classes, SerializationUtils and XmlUtils, for mapping XML types to and from .NET; both are from the Westwind.Utilities project (which is the same project where PropertyBag lives) from the West Wind Web Toolkit. The code implements ReadXml and WriteXml for the IXmlSerializable implementation using old-school XmlReaders and XmlWriters (because it's pretty simple stuff - no need for XLinq here). Then there are two helper methods, .ToXml() and .FromXml(), that basically allow your code to easily convert between XML and a PropertyBag object. In my code that's what I use to actually persist to and from the entity XML property during .Load() and .Save() operations. It's sweet to be able to have a string-keyed dictionary and then be able to turn around, with one line of code, and persist the whole thing to XML and back. Hopefully some of you will find this class as useful as I've found it. It's a simple solution to a common requirement in my applications and I've used the hell out of it in the short time since I created it.

    Resources

    You can find the complete code for the two classes plus the helpers in the Subversion repository for Westwind.Utilities. You can grab the source files from there or download the whole project. You can also grab the full Westwind.Utilities assembly from NuGet and add it to your project if that's easier for you.

    - PropertyBag Source Code
    - SerializationUtils and XmlUtils
    - Westwind.Utilities Assembly on NuGet (add from Visual Studio)

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp.


  • Gmail and Live are marking all messages from my server as spam.

    - by Ryan Kearney
    I'm getting very weird results here. When my server sends an email to my @hotmail or @gmail account, it's marked as spam. When I send email through my server from Outlook to @hotmail, it doesn't get marked as spam, but it still gets marked as spam in Gmail. They seem to get through fine on Yahoo, though. My server's hostname A record points to an IP address whose PTR record points back to the same domain name. The TXT record has an SPF record in it to allow email to be sent from that server's IP. I moved from a VPS to a dedicated server when this started to happen. From what I can see, the email headers are identical. Here's one of my email headers that Gmail marks as spam. Some fields were replaced:

    - MYGMAILACCOUNT is the email address of the account the email was addressed to.
    - USER is the name of the account on the system it was sent from.
    - HOSTNAME is the server's FQDN.
    - IPADDR is the IP address of the hostname.
    - MYDOMAIN is my domain name.

        Delivered-To: MYGMAILACCOUNT
        Received: by 10.220.77.82 with SMTP id f18cs263483vck;
                Sat, 27 Feb 2010 23:58:02 -0800 (PST)
        Received: by 10.150.16.4 with SMTP id 4mr3886702ybp.110.1267343881628;
                Sat, 27 Feb 2010 23:58:01 -0800 (PST)
        Return-Path: <USER@HOSTNAME>
        Received: from HOSTNAME (HOSTNAME [IPADDR])
                by mx.google.com with ESMTP id 17si4604419yxe.134.2010.02.27.23.58.01;
                Sat, 27 Feb 2010 23:58:01 -0800 (PST)
        Received-SPF: pass (google.com: best guess record for domain of USER@HOSTNAME designates IPADDR as permitted sender) client-ip=IPADDR;
        Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of USER@HOSTNAME designates IPADDR as permitted sender) smtp.mail=USER@HOSTNAME
        Received: from USER by HOSTNAME with local (Exim 4.69)
                (envelope-from <USER@HOSTNAME>)
                id 1Nle2K-0000t8-Bd
                for MYGMAILACCOUNT; Sun, 28 Feb 2010 02:57:36 -0500
        To: Ryan Kearney <MYGMAILACCOUNT>
        Subject: [Email Subject]
        MIME-Version: 1.0
        Content-type: text/plain; charset=UTF-8
        Content-Transfer-Encoding: 8bit
        From: webmaster@MYDOMAIN
        Message-Id: <E1Nle2K-0000t8-Bd@HOSTNAME>
        Sender: <USER@HOSTNAME>
        Date: Sun, 28 Feb 2010 02:57:36 -0500
        X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
        X-AntiAbuse: Primary Hostname - HOSTNAME
        X-AntiAbuse: Original Domain - gmail.com
        X-AntiAbuse: Originator/Caller UID/GID - [503 500] / [47 12]
        X-AntiAbuse: Sender Address Domain - HOSTNAME

    Anyone have any ideas as to why all mail leaving my server gets marked as spam?

    EDIT: I already used http://www.mxtoolbox.com/SuperTool.aspx to check if my server's IPs are blacklisted, and they are in fact not. That's what I thought at first, but it isn't the case.

    Update Mar 1, 2010: I received the following email from Microsoft:

        Thank you for writing to Windows Live Hotmail Domain Support. My name is * and I will be assisting you today.
        We have identified that messages from your IP are being filtered based on the recommendations of the SmartScreen filter. This is the spam filtering technology developed and operated by Microsoft and is built around the technology of machine learning. It learns to recognize what is and isn't spam. In short, we filter incoming emails that look like spam. I am not able to go into any specific details about what these filters specifically entail, as this would render them useless.
        E-mails from IPs are filtered based upon a combination of IP reputation and the content of individual emails. The reputation of an IP is influenced by a number of factors. Among these factors, which you as a sender can control, are:
        - The IP's Junk Mail Reporting complaint rate
        - The frequency and volume in which email is sent
        - The number of spam trap account hits
        - The RCPT success rate

    So I'm guessing it has to do with the fact that I got an IP address with little or no history of sending email. I've confirmed that I'm not on any blacklists. I'm guessing it's one of those things that will work itself out in a month or so. I'll post when I hear more.
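    For reference, a bare-bones SPF TXT record of the kind described above could look like this (using the post's placeholders, not real values):

        MYDOMAIN.    IN    TXT    "v=spf1 a mx ip4:IPADDR ~all"

    ip4: takes the sending server's address or CIDR range; ~all (softfail) is the cautious choice while an IP's reputation is still building, with -all as the strict version.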


  • How do I achieve lossless JPEG joining without truncation of partial MCUs?

    - by Karan
    I am working on a project for which I need to join thousands of JPEG images losslessly (I'm not talking about the Lossless JPEG/JPEG 2000/JPEG-LS formats here). The aforementioned images have varying levels of chroma subsampling (1x1, 1x2, 2x1, 2x2), resulting in varying MCU sizes (8x8, 8x16, 16x8, 16x16 px). However, in any given set of images to be joined together, each image has identical characteristics. For now, let's assume I only have 2 images. Image #1 (I1) is 256x256px in size and #2 (I2) is 239x256px in size. 2x2 subsampling is used, such that the MCU size is 16x16px. I2 thus obviously has partial MCUs at the right edge, since its width is not evenly divisible by 16. (I've read that so-called 'partial' MCUs actually contain the data for a complete MCU, but the image dimensions instruct the renderer to only display the relevant pixels and ignore/hide the extra ones.) Looking around for tools that could help me accomplish this, I came across a modified version of JpegTran that contains an experimental lossless crop 'n' drop (cut & paste) feature. All the other apps I encountered that support lossless JPEG editing seem to utilise IJG's (JpegTran) code, so this seemed to be the logical choice. Also, given the sheer number of images, I wanted something that could preferably be run from the command line so that I could automate the process with a script. Unfortunately, while everything else worked fine, it seems JpegTran truncates the partial MCUs instead of retaining them. Thus in the example above, the final joined image contains all of I1, but only 224x256px of I2. Why 224? Because 239 = 14x16+15, which means there are 14 full MCUs along the width and 1 partial MCU (just 1px short of the complete 16px). The last 15px is what is getting blanked, leading to a 495x256px image with 15px of blank (grey) pixels at the right edge. [Images: I1 (left) + I2 (right) = joined result; shame that imgur re-compresses them.] As you can clearly see, the red portion (15px) of I2 has been truncated by JpegTran. If the MCUs were 8px in width, the lost portion would have been the right-most 7px of I2. Similarly, joining I3 (256x239px) below I1 would cause the loss of 7 or 15px, depending on the MCU height of course. [Images: I1 (top) + I3 (bottom) = joined result.] If this is better suited to some other StackExchange (or even non-SE) site/forum where JPEG/image encoding experts hang out, do let me know. Can what I am attempting even be done, or is the so-called 'lossless' JPEG crop 'n' drop only valid for images with no partial MCUs? (Maybe that is why the feature is still in an "experimental state" more than a decade after being introduced...) Until I know for sure that it is impossible, I am not interested in suggestions for lossy joining. Avoiding any generational loss whatsoever is the sole reason why I'm breaking my head over this, else I'd have had this done and dusted ages ago. Also, I am not interested in suggestions related to switching image formats; I do not control the source of the images. If it can be done, how? Please keep in mind that any alternate apps suggested must ideally be capable of automation, given the requirements stated above. (But given how unlikely it is that I'll even receive a useful answer given the constraints, I would be happy with any app suggestion just as long as it actually works. I can always look into an AutoIt/AHK script or something later to automate it.)
I understand that an odd-sized final image might cause issues, so I am fully prepared to accept any solution, even if it results in blank (preferably black) padding pixels to the right/bottom. What I mean is, I don't care if I1 + I2 is 496x256px (1px padding) or even 512x256px (17px padding) in size, as long as the final image contains all the actual image data from both source images, and the entire process is lossless. Obviously the lesser the padding (if any), the better, but at this point any solution will do. A Windows-based solution would be perfect, but a Linux-based one would be entirely acceptable.
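    For anyone trying to reproduce this, the experimental crop 'n' drop in the modified jpegtran is driven roughly like this (a sketch with made-up file names; the exact switches vary between builds of the patch, so treat it as illustrative only):

        # extend I1's canvas to 496x256 first, then drop I2 at x=256,y=0
        jpegtran -copy all -crop 496x256+0+0 I1.jpg > canvas.jpg
        jpegtran -copy all -drop +256+0 I2.jpg canvas.jpg > joined.jpg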


  • How Do I Enable My Ubuntu Server To Host Various SSL-Enabled Websites?

    - by Andy Ibanez
    Actually, I have looked around for a few hours now, but I can't get this to work. The main problem I'm having is that only one out of two sites works. I have my website, which will mostly be used for an app. It's called atajosapp.com, and it will have three main sites:

        www.atajosapp.com   <- Homepage for the app
        auth.atajosapp.com  <- Login endpoint for my API (needs SSL)
        api.atajosapp.com   <- Main endpoint for my API (needs SSL)

    If you attempt to access api.atajosapp.com, it works. It will throw you a 403 error and a JSON output, but that's fully intentional. If you try to access auth.atajosapp.com, however, the site simply doesn't load. Chrome complains with:

        The webpage at https://auth.atajosapp.com/ might be temporarily down or it may have moved permanently to a new web address.
        Error code: ERR_TUNNEL_CONNECTION_FAILED

    But the website IS there. If you try to access www.atajosapp.com or any other HTTP site, it connects fine. It just doesn't like dealing with more than one HTTPS website, it seems. The VirtualHost for api.atajosapp.com looks like this:

        <VirtualHost *:443>
            DocumentRoot /var/www/api.atajosapp.com
            ServerName api.atajosapp.com
            SSLEngine on
            SSLCertificateFile /certificates/STAR_atajosapp_com.crt
            SSLCertificateKeyFile /certificates/star_atajosapp_com.key
            SSLCertificateChainFile /certificates/PositiveSSLCA2.crt
        </VirtualHost>

    auth.atajosapp.com looks very similar:

        <VirtualHost *:443>
            DocumentRoot /var/www/auth.atajosapp.com
            ServerName auth.atajosapp.com
            SSLEngine on
            SSLCertificateFile /certificates/STAR_atajosapp_com.crt
            SSLCertificateKeyFile /certificates/star_atajosapp_com.key
            SSLCertificateChainFile /certificates/PositiveSSLCA2.crt
        </VirtualHost>

    Now I have found many websites that talk about possible solutions. At first, I was getting a message like this:

        _default_ VirtualHost overlap on port 443, the first has precedence

    But after googling for hours, I managed to solve it by editing both apache2.conf and ports.conf. This is the last thing I added to ports.conf:

        <IfModule mod_ssl.c>
            NameVirtualHost *:443
            # SSL name based virtual hosts are not yet supported, therefore no
            # NameVirtualHost statement here
            NameVirtualHost *:443
            Listen 443
        </IfModule>

    Still, right now only api.atajosapp.com and www.atajosapp.com are working. I still can't access auth.atajosapp.com. When I check the error log, I see this:

        Init: Name-based SSL virtual hosts only work for clients with TLS server name indication support (RFC 4366)

    I don't know what else to do to make both sites work fine on this. I purchased a wildcard SSL certificate from Comodo that supposedly secures *.atajosapp.com, so after hours of trying and googling, I don't know what's wrong anymore. Any help will be really appreciated.

    EDIT: I just ran the apachectl -t -D DUMP_VHOSTS command and this is the output. Can't make much sense of it...:

        root@atajosapp:/# apachectl -t -D DUMP_VHOSTS
        apache2: Could not reliably determine the server's fully qualified domain name, using atajosapp.com for ServerName
        [Thu Nov 07 02:01:24 2013] [warn] NameVirtualHost *:443 has no VirtualHosts
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:443   is a NameVirtualHost
                default server api.atajosapp.com (/etc/apache2/sites-enabled/api.atajosapp.com:1)
                port 443 namevhost api.atajosapp.com (/etc/apache2/sites-enabled/api.atajosapp.com:1)
                port 443 namevhost auth.atajosapp.com (/etc/apache2/sites-enabled/auth.atajosapp.com:1)
        *:80    is a NameVirtualHost
                default server atajosapp.com (/etc/apache2/sites-enabled/000-default:1)
                port 80 namevhost atajosapp.com (/etc/apache2/sites-enabled/000-default:1)
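    One thing that might be worth ruling out (my assumption, not something stated in the post): ports.conf above declares NameVirtualHost *:443 twice, and a duplicate declaration is a classic cause of the "NameVirtualHost *:443 has no VirtualHosts" warning seen in the dump. A sketch of the minimal form:

        # ports.conf -- declare name-based SSL vhost support exactly once
        <IfModule mod_ssl.c>
            NameVirtualHost *:443
            Listen 443
        </IfModule>

    Since the error log already mentions SNI, checking which certificate each name actually serves can also help, e.g. openssl s_client -connect auth.atajosapp.com:443 -servername auth.atajosapp.com.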


  • AMD 24 core server memory bandwidth

    - by ntherning
    I need some help to determine whether the memory bandwidth I'm seeing under Linux on my server is normal or not. Here's the server spec:

    - HP ProLiant DL165 G7
    - 2x AMD Opteron 6164 HE 12-Core
    - 40 GB RAM (10 x 4GB DDR1333)
    - Debian 6.0

    Using mbw on this server I get the following numbers:

        foo1:~# mbw -n 3 1024
        Long uses 8 bytes. Allocating 2*134217728 elements = 2147483648 bytes of memory.
        Using 262144 bytes as blocks for memcpy block copy test.
        Getting down to business... Doing 3 runs per test.
        0   Method: MEMCPY  Elapsed: 0.58047  MiB: 1024.00000  Copy: 1764.082 MiB/s
        1   Method: MEMCPY  Elapsed: 0.58012  MiB: 1024.00000  Copy: 1765.152 MiB/s
        2   Method: MEMCPY  Elapsed: 0.58010  MiB: 1024.00000  Copy: 1765.201 MiB/s
        AVG Method: MEMCPY  Elapsed: 0.58023  MiB: 1024.00000  Copy: 1764.811 MiB/s
        0   Method: DUMB    Elapsed: 0.36174  MiB: 1024.00000  Copy: 2830.778 MiB/s
        1   Method: DUMB    Elapsed: 0.35869  MiB: 1024.00000  Copy: 2854.817 MiB/s
        2   Method: DUMB    Elapsed: 0.35848  MiB: 1024.00000  Copy: 2856.481 MiB/s
        AVG Method: DUMB    Elapsed: 0.35964  MiB: 1024.00000  Copy: 2847.310 MiB/s
        0   Method: MCBLOCK Elapsed: 0.23546  MiB: 1024.00000  Copy: 4348.860 MiB/s
        1   Method: MCBLOCK Elapsed: 0.23544  MiB: 1024.00000  Copy: 4349.230 MiB/s
        2   Method: MCBLOCK Elapsed: 0.23544  MiB: 1024.00000  Copy: 4349.359 MiB/s
        AVG Method: MCBLOCK Elapsed: 0.23545  MiB: 1024.00000  Copy: 4349.149 MiB/s

    On one of my other servers (based on an Intel Xeon E3-1270):

        foo2:~# mbw -n 3 1024
        Long uses 8 bytes. Allocating 2*134217728 elements = 2147483648 bytes of memory.
        Using 262144 bytes as blocks for memcpy block copy test.
        Getting down to business... Doing 3 runs per test.
        0   Method: MEMCPY  Elapsed: 0.18960  MiB: 1024.00000  Copy: 5400.901 MiB/s
        1   Method: MEMCPY  Elapsed: 0.18922  MiB: 1024.00000  Copy: 5411.690 MiB/s
        2   Method: MEMCPY  Elapsed: 0.18944  MiB: 1024.00000  Copy: 5405.491 MiB/s
        AVG Method: MEMCPY  Elapsed: 0.18942  MiB: 1024.00000  Copy: 5406.024 MiB/s
        0   Method: DUMB    Elapsed: 0.14838  MiB: 1024.00000  Copy: 6901.200 MiB/s
        1   Method: DUMB    Elapsed: 0.14818  MiB: 1024.00000  Copy: 6910.561 MiB/s
        2   Method: DUMB    Elapsed: 0.14820  MiB: 1024.00000  Copy: 6909.628 MiB/s
        AVG Method: DUMB    Elapsed: 0.14825  MiB: 1024.00000  Copy: 6907.127 MiB/s
        0   Method: MCBLOCK Elapsed: 0.04362  MiB: 1024.00000  Copy: 23477.623 MiB/s
        1   Method: MCBLOCK Elapsed: 0.04262  MiB: 1024.00000  Copy: 24025.151 MiB/s
        2   Method: MCBLOCK Elapsed: 0.04258  MiB: 1024.00000  Copy: 24048.849 MiB/s
        AVG Method: MCBLOCK Elapsed: 0.04294  MiB: 1024.00000  Copy: 23847.599 MiB/s

    For reference, here's what I get on my Intel-based laptop:

        laptop:~$ mbw -n 3 1024
        Long uses 8 bytes. Allocating 2*134217728 elements = 2147483648 bytes of memory.
        Using 262144 bytes as blocks for memcpy block copy test.
        Getting down to business... Doing 3 runs per test.
        0   Method: MEMCPY  Elapsed: 0.40566  MiB: 1024.00000  Copy: 2524.269 MiB/s
        1   Method: MEMCPY  Elapsed: 0.38458  MiB: 1024.00000  Copy: 2662.638 MiB/s
        2   Method: MEMCPY  Elapsed: 0.38876  MiB: 1024.00000  Copy: 2634.043 MiB/s
        AVG Method: MEMCPY  Elapsed: 0.39300  MiB: 1024.00000  Copy: 2605.600 MiB/s
        0   Method: DUMB    Elapsed: 0.30707  MiB: 1024.00000  Copy: 3334.745 MiB/s
        1   Method: DUMB    Elapsed: 0.30425  MiB: 1024.00000  Copy: 3365.653 MiB/s
        2   Method: DUMB    Elapsed: 0.30342  MiB: 1024.00000  Copy: 3374.849 MiB/s
        AVG Method: DUMB    Elapsed: 0.30491  MiB: 1024.00000  Copy: 3358.328 MiB/s
        0   Method: MCBLOCK Elapsed: 0.07875  MiB: 1024.00000  Copy: 13003.670 MiB/s
        1   Method: MCBLOCK Elapsed: 0.08374  MiB: 1024.00000  Copy: 12228.034 MiB/s
        2   Method: MCBLOCK Elapsed: 0.07635  MiB: 1024.00000  Copy: 13411.216 MiB/s
        AVG Method: MCBLOCK Elapsed: 0.07961  MiB: 1024.00000  Copy: 12862.006 MiB/s

    So according to mbw, my laptop is 3 times faster than the server!!! Please help me explain this. I've also tried to mount a RAM disk and use dd to benchmark it, and I get similar differences, so I don't think mbw is to blame. I've checked the BIOS settings and the memory seems to be running at full speed. According to the hosting company the modules are all OK. Could this have something to do with NUMA? It seems like Node Interleaving is disabled on this server. Will enabling it (thus turning off NUMA) make a difference?

        foo1:~# numactl --hardware
        available: 4 nodes (0-3)
        node 0 cpus: 0 1 2 3 4 5
        node 0 size: 8190 MB
        node 0 free: 7898 MB
        node 1 cpus: 6 7 8 9 10 11
        node 1 size: 12288 MB
        node 1 free: 12073 MB
        node 2 cpus: 18 19 20 21 22 23
        node 2 size: 12288 MB
        node 2 free: 12034 MB
        node 3 cpus: 12 13 14 15 16 17
        node 3 size: 8192 MB
        node 3 free: 8032 MB
        node distances:
        node   0   1   2   3
          0:  10  20  20  20
          1:  20  10  20  20
          2:  20  20  10  20
          3:  20  20  20  10
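    One way to test the NUMA theory (a sketch; the node numbers are illustrative) is to pin mbw to a single node and compare local against remote memory:

        # CPU and memory both on node 0 (local access)
        numactl --cpunodebind=0 --membind=0 mbw -n 3 1024
        # same CPUs, memory forced onto a remote node
        numactl --cpunodebind=0 --membind=2 mbw -n 3 1024

    A large gap between the two runs would point at allocation placement rather than the RAM itself.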


  • Scenarios for Bazaar and SVN interaction

    - by Adam Badura
    At our company we are using an SVN repository. I do programming from both work (my main place) and home (mostly experiments and refactoring). Those are two different machines, in different networks, and almost never turned on at the same time (after all, I'm either at work or at home...). I wanted to give a chance to some distributed version control system and solve some of the issues associated with the SVN-based process and having two machines. From git, Mercurial and Bazaar I chose to start with Bazaar, since it claims that it is designed to be used by human beings. It's my first time with a distributed system, and having a nice and easy user interface was important for me. Features I wanted to achieve were:

    - Being able to update from the SVN repository and commit to it.
    - Being able to commit locally the steps of my work on a task.
    - Being able to have a few separate tasks at the same time in their own local branches.
    - Being able to share those branches between my work and home computers.

    As a means of transport between the work and home computers I wanted to use a pen-drive. The company server will not work, since I may not install anything there. Neither will a web-service repository, as I may not upload source code to the web (especially if it would be public, which seems to be a common case in free web services). This transport should be Bazaar-based (or whatever else I end up with) so it can be done more or less automatically, but manually copying and pasting some folders or generating patch files (providing they would work - I have bad experience with patch files in SVN) would work as well if there is no better solution. Yet the pen-drive should only be used for transportation; I do not want to edit or build there. I tried following the Bazaar guidelines for integration with SVN, but I failed. I tried both bzr svn-import and bzr checkout, providing the URL of my repository as both https://... and svn+https://.... In some cases it had issues with certificates, but the output specified an argument to ignore them, so I did that. Sometimes it asked me to log in (in other cases maybe it remembered... I don't know). All runs were very slow (this could be our server's issue) and at some point were interrupted due to a connection interruption (this almost for sure is our server's issue: it truncates the connection after some time). But since (as opposed to SVN) restarting starts anew rather than from the point where it was interrupted, I was unable to reach all of the ~19000 revisions (ending usually somewhere around 150). What should I do with Bazaar, and how? Is it possible to somehow import an SVN repository from a local checkout (so that I do not suffer the connection truncation)? I was told that a colleague who used to work with us did something similar (importing an SVN repository with full history) with Mercurial in no time. So I'm seriously considering now trying Mercurial, even if only to see if that will work. But also, what are your general guidelines to achieve the listed features?
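    For the import problem specifically, a hedged sketch of one workaround: mirror the SVN repository locally with svnsync (which, unlike the one-shot import, resumes where it stopped after a dropped connection), then run the import against the local mirror. URLs and paths here are placeholders:

        svnadmin create /tmp/mirror
        # svnsync needs a pre-revprop-change hook that allows the sync
        printf '#!/bin/sh\nexit 0\n' > /tmp/mirror/hooks/pre-revprop-change
        chmod +x /tmp/mirror/hooks/pre-revprop-change
        svnsync init file:///tmp/mirror https://svn.example.com/repo
        svnsync sync file:///tmp/mirror      # restartable across interruptions
        bzr svn-import file:///tmp/mirror ~/bzr-repo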


  • Logging to MySQL without empty rows/skipped records?

    - by Lee Ward
    I'm trying to figure out how to make the Squid proxy log to MySQL. I know ACL order is pretty important, but I'm not sure I understand exactly what ACLs are or do; it's difficult to explain, but hopefully you'll see where I'm going with this as you read! I have created the lines to make Squid interact with a helper in squid.conf as follows:

        external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php
        acl ex_log external mysql_log
        http_access allow ex_log

    The external ACL helper (mysql_lg.php) is a PHP script and is as follows:

        error_reporting(0);

        if (!defined('STDIN')) {
            define("STDIN", fopen("php://stdin", "r"));
        }

        $res = mysql_connect('localhost', 'squid', 'testsquidpw');
        $dbres = mysql_select_db('squid', $res);

        while (!feof(STDIN)) {
            $line = trim(fgets(STDIN));
            $fields = explode(' ', $line);
            $user = rawurldecode($fields[0]);
            $cli_ip = rawurldecode($fields[1]);
            $protocol = rawurldecode($fields[2]);
            $uri = rawurldecode($fields[3]);

            $q = "INSERT INTO logs (id, user, cli_ip, protocol, url) VALUES ('', '".$user."', '".$cli_ip."', '".$protocol."', '".$uri."');";
            mysql_query($q) or die(mysql_error());

            if ($fault) {
                fwrite(STDOUT, "ERR\n");
            }
            fwrite(STDOUT, "OK\n");
        }

    The configuration I have right now looks like this:

        ## Authentication Handler
        auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
        auth_param ntlm children 30
        auth_param negotiate program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
        auth_param negotiate children 5

        # Allow squid to update log
        external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php
        acl ex_log external mysql_log
        http_access allow ex_log

        acl localnet src 172.16.45.0/24
        acl AuthorizedUsers proxy_auth REQUIRED
        acl SSL_ports port 443
        acl Safe_ports port 80      # http
        acl Safe_ports port 21      # ftp
        acl Safe_ports port 443     # https
        acl CONNECT method CONNECT
        acl blockeddomain url_regex "/etc/squid3/bl.acl"

        http_access deny blockeddomain
        deny_info ERR_BAD_GENERAL blockeddomain

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # Allow the internal network access to this proxy
        http_access allow localnet

        # Allow authorized users access to this proxy
        http_access allow AuthorizedUsers

        # FINAL RULE - Deny all other access to this proxy
        http_access deny all

    From testing, the closer to the bottom I place the logging lines, the less it logs. Oftentimes it even places empty rows into the MySQL table. The file-based logs in /var/log/squid3/access.log are correct, but many of the rows in the access logs are missing from the MySQL logs. I can't help but think it's down to the order I'm putting the lines in, because I want to log everything to MySQL: unauthenticated requests, blocked requests, and which category blocked a specific request. The reason I want this in MySQL is because I'm trying to have everything managed via a custom web-based frontend, and I want to avoid using any shell commands and access to system log files if I can help it. The end result is to make it as easy as possible to maintain without keeping staff waiting on the phone whilst I add a new rule and reload the server! Hopefully someone can help me out here, because this is very much a learning experience for me and I'm pretty stumped. Many thanks in advance for any help!
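    One detail that might explain the skipped records (my assumption, not something stated in the post): Squid caches external ACL results, so a second request with the same %LOGIN/%SRC/%PROTO/%URI key can be answered from the cache without ever reaching the helper. For testing, caching can be switched off in the helper definition, roughly:

        # sketch: force every lookup through the logging helper
        external_acl_type mysql_log ttl=0 negative_ttl=0 %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php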


  • Is the Cloud ready for an Enterprise Java web application? Seeking JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible or a good idea at all to deploy a Java enterprise web application to a cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle a few hundred users with long but neither CPU- nor memory-intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud, so people are working on enabling such a deployment; on the other hand, it doesn't seem to be mature yet, and I'm not sure that the cloud is ready for this kind of application, which differs from typical cloud-based applications like Twitter. Would you recommend deploying it to the cloud? What are the pros and cons? The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to an RDBMS, and it fetches information about Parts from the company's inventory system via a web service. Aside from external users, it's also used by a few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their products, and it should be reasonably scalable and available, though these qualities are only of medium importance at this stage. I've proposed an architecture consisting of a firewall (FW) and a load balancer supporting sticky sessions and https (in the cloud this would be replaced with EC2's Elastic Load Balancing service and a FW on the app servers; in a physical architecture the load balancer would be hardware), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't lose his/her long-built product), and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance, and it should provide good scalability as long as a single RDBMS can keep up with the load, which should be OK for quite a while because most of the operations are done in memory using a stateful bean and only occasionally stored to or retrieved from the DB, and the amount of data is low too. A problematic part could be the dependency on the remote inventory system web service, but with good caching of its outputs in the application it should be OK too. Unfortunately I have only a vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for a few hundred users needs. My rough and mostly unfounded estimate, based on actual Amazon offerings, is that 1.7GB and a single 2-core "modern CPU" with a speed of around 2.5GHz (the High-CPU Medium Instance) should be sufficient for either of the two application servers (since we can handle higher load by provisioning more of them). Alternatively I would consider using the Large instance (64b, 7.5GB RAM, 2 cores at 1GHz). So my question is whether such a deployment to the cloud is technically and financially feasible, or whether dedicated/VPS servers would be a better option, and whether there are some real-world experiences with something similar. Thank you very much!
/Jakub Holy PS: I've found the JBoss EAP in a Cloud Case Study that shows that it is possible to deploy a real-world Java EE application to the EC2 cloud but unfortunately there're no details regarding topology, instance types, or anything :-(


  • Active Directory Time Synchronisation - Time-Service Event ID 50

    - by George
    I have an Active Directory domain with two DCs. The first DC in the forest/domain is Server 2012, the second is 2008 R2. The first DC holds the PDC Emulator role. I sporadically receive a warning from the Time-Service source, event ID 50: The time service detected a time difference of greater than %1 milliseconds for %2 seconds. The time difference might be caused by synchronization with low-accuracy time sources or by suboptimal network conditions. The time service is no longer synchronized and cannot provide the time to other clients or update the system clock. When a valid time stamp is received from a time service provider, the time service will correct itself. Time sync in the domain is configured with the second DC to synchronise using the /syncfromflags:DOMHIER flag. The first DC is configured to sync time using a /syncfromflags:MANUAL /reliable:YES, from a peerlist consisting of a number of UK based stratum 2 servers, such as ntp2d.mcc.ac.uk. I'm confused why I receive this event warning. It implies that my PDC emulator cannot synchronise time with a supposedly reliable external time source, and it quotes a time difference of 5 seconds for 900 seconds. It's worth also mentioning that I used to use a UK pool from ntp.org but I would receive the warning much more often. Since updating to a number of UK based academic time servers, it seems to be more reliable. Can someone with more experience shed some light on this - perhaps it is purely transient? Should I disregard the warning? Is my configuration sound? EDIT: I should add that the DCs are virtual, and installed on two separate VMware ESXi/vSphere physical hosts. I can also confirm that as per MDMarra's comment and best practice, VMware timesync is disabled, since: c:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe timesync status returns Disabled. EDIT 2 Some strange new issue has cropped up. I've noticed a pattern. Originally, the event ID 50 warnings would occur at about 1230pm each day. This is interesting since our veeam backup happens at 12 midday. Since I made the changes discussed here, I now receive an event ID 51 instead of 50. The new warning says that: The time sample received from peer server.ac.uk differs from the local time by -40 seconds (Or approximately 40 seconds). This has happened two days in a row. Now I'm even more confused. Obviously the time never updates until I manually intervene. The issue seems to be related to virtualisation and veeam. Something may be occuring when veeam is backing up the PDCe. Any suggestions? UPDATE & SUMMARY msemack's excellent list of resources below (the accepted answer) provided enough information to correctly configure the time service in the domain. This should be the first port of call for any future people looking to verify their configuration. The final "40 second jump" issue I have resolved (there are no more warnings) through adjusting the VMware time sync settings as noted in the veeam knowledge base article here: http://www.veeam.com/kb1202 In any case, should any future reader use ESXi, veeam or not, the resources here are an excellent source of information on the time sync topic and msemack's answer is particularly invaluable.
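    For reference, the PDCe configuration described above maps onto w32tm roughly as follows (a sketch; the peer list shown is an example, not the poster's full list):

        w32tm /config /manualpeerlist:"ntp2d.mcc.ac.uk,0x8" /syncfromflags:MANUAL /reliable:YES /update
        w32tm /resync
        w32tm /query /status    # confirms the source and stratum after the change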


  • Auth-Type := Reject in RADIUS users file matches inner tunnel request but sends Access-Accept

    - by mgorven
    I have WPA2 802.1X EAP authentication set up using FreeRADIUS 2.1.8 on Ubuntu 10.04.4 talking to OpenLDAP, and I can successfully authenticate using PEAP/MSCHAPv2, TTLS/MSCHAPv2 and TTLS/PAP (both via the AP and using eapol_test). I am now trying to restrict access to specific SSIDs based on the LDAP groups which the user belongs to. I have configured group membership checking in /etc/freeradius/modules/ldap like so:

        groupname_attribute = cn
        groupmembership_filter = "(|(&(objectClass=posixGroup)(memberUid=%{User-Name}))(&(objectClass=posixGroup)(uniquemember=%{User-Name})))"

    and I have configured extraction of the SSID from Called-Station-Id into Called-Station-SSID based on the Mac Auth wiki page. In /etc/freeradius/eap.conf I have enabled copying attributes from the outer tunnel into the inner tunnel, and usage of the inner tunnel response in the outer tunnel (for both PEAP and TTLS). I had the same behaviour before changing these options, however.

        copy_request_to_tunnel = yes
        use_tunneled_reply = yes

    I'm running eapol_test like this to test the setup:

        eapol_test -c peap-mschapv2.conf -a 172.16.0.16 -s testing123 -N 30:s:01-23-45-67-89-01:Example-EAP

    with the following peap-mschapv2.conf file:

        network={
            ssid="Example-EAP"
            key_mgmt=WPA-EAP
            eap=PEAP
            identity="mgorven"
            anonymous_identity="anonymous"
            password="foobar"
            phase2="autheap=MSCHAPV2"
        }

    With the following in /etc/freeradius/users:

        DEFAULT Ldap-Group == "employees"

    and running freeradius -Xx, I can see that the LDAP group retrieval works and that the SSID is extracted:

        Debug: [ldap] performing search in dc=example,dc=com, with filter (&(cn=employees)(|(&(objectClass=posixGroup)(memberUid=mgorven))(&(objectClass=posixGroup)(uniquemember=mgorven))))
        Debug: rlm_ldap::ldap_groupcmp: User found in group employees
        ...
        Info: expand: %{7} -> Example-EAP

    Next I try to only allow access to users in the employees group (regardless of SSID), so I put the following in /etc/freeradius/users:

        DEFAULT Ldap-Group == "employees"
        DEFAULT Auth-Type := Reject

    But this immediately rejects the Access-Request in the outer tunnel, because the anonymous user is not in the employees group. So I modify it to only match inner tunnel requests, like so:

        DEFAULT Ldap-Group == "employees"
        DEFAULT FreeRADIUS-Proxied-To == "127.0.0.1"
                Auth-Type := Reject,
                Reply-Message = "User does not belong to any groups which may access this SSID."

    Now users which are in the employees group are authenticated, but so are users which are not in the employees group. I see the reject entry being matched, and the Reply-Message is set, but the client receives an Access-Accept:

        Debug: rlm_ldap::ldap_groupcmp: Group employees not found or user is not a member.
        Info: [files] users: Matched entry DEFAULT at line 209
        Info: ++[files] returns ok
        ...
        Auth: Login OK: [mgorven] (from client test port 0 cli 02-00-00-00-00-01 via TLS tunnel)
        Info: WARNING: Empty section. Using default return values.
        ...
        Info: [peap] Got tunneled reply code 2
            Auth-Type := Reject
            Reply-Message = "User does not belong to any groups which may access this SSID."
        ...
        Info: [peap] Got tunneled reply RADIUS code 2
            Auth-Type := Reject
            Reply-Message = "User does not belong to any groups which may access this SSID."
        ...
        Info: [peap] Tunneled authentication was successful.
        Info: [peap] SUCCESS
        Info: [peap] Saving tunneled attributes for later
        ...
        Sending Access-Accept of id 11 to 172.16.2.44 port 60746
            Reply-Message = "User does not belong to any groups which may access this SSID."
            User-Name = "mgorven"

    and eapol_test reports:

        RADIUS message: code=2 (Access-Accept) identifier=11 length=233
           Attribute 18 (Reply-Message) length=64
              Value: 'User does not belong to any groups which may access this SSID.'
           Attribute 1 (User-Name) length=9
              Value: 'mgorven'
        ...
        SUCCESS

    Why isn't the request being rejected, and is this the right way to implement this?


  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ THE UPDATES AT THE BOTTOM. THANKS! ;)

    Environment Info (all Windows):

    - 2 sites
    - 30 servers at site #1 (3TB of backup data)
    - 5 servers at site #2 (1TB of backup data)
    - MPLS backbone tunnel connecting site #1 and site #2

    Current Backup Process:

    - Online Backup (disk-to-disk): Site #1 has a server running Symantec Backup Exec 12.5 with four 1TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores it on said disks.
    - Off-site Backup (tape): Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week, which gets picked up by our off-site storage company. Obviously we rotate two tape libraries; one is always here and one is always there.

    Requirements:

    - Eliminate the need for tape and the off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa.
    - Software-based solution, as hardware options have been too pricey (e.g., SonicWall, Arkeia).
    - Agents for Exchange, SharePoint, and SQL.

    Some Ideas So Far:

    - Storage: DroboPro at each site with an initial 8TB of storage (these are expandable up to 16TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too.
    - Software: Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication.
    - Server: Because there is no more need for a SCSI adapter (for the tape drive), we are going to virtualize our backup server, as it is currently the only physical machine save for the SQL boxes.

    Problems: When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this, each of those huge files will go across the wire every week, because they change every day.

    And Finally, the Question: Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, better? Thanks. Sorry so long.

    UPDATE: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work, but it needs to be native to Windows and not a port involving shenanigans to get up and running. I'd prefer a GUI-based product and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists or if you think I'm being too restrictive, keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again everyone.

    UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out and whoever has the most votes will get the 100 rep points. Thanks again!

    Read the article

  • Supermicro IPMI on MBD-X8DAH+-F-O motherboard. Keyboard and mouse do not work after booting Windows Server 2008 R2

    - by LDelgado
Hello everyone. I built a server with the mentioned motherboard and installed Windows Server 2008 R2 Enterprise on it. IPMI is integrated on the motherboard with its own dedicated NIC, and I've got that NIC configured with its own IP address. I can remote into the server using IPMI, and I can remotely control the server settings before booting the OS (BIOS, RAID configuration, etc.). When the OS boots, I lose the mouse and keyboard. I cannot use the keyboard or mouse when installing the OS either. So the keyboard and mouse only work when no OS is loaded; once the OS loads I lose them - that is my problem. I've been doing some research and trying a few things, but I have not been successful in fixing this issue. I may be wrong, but based on the things I've found online, it seems that the problem could be caused by the way the OS handles USB. The server is headless - there is no keyboard, mouse, or monitor plugged into it. When I boot up the OS and remote into it, I cannot see a mouse or keyboard listed in Device Manager. Based on what I've read, the OS should detect a mouse and a keyboard when connecting remotely via IPMI. The following are the solutions I've tried; nothing has worked so far:
    - I've updated the firmware of the IPMI component to the latest firmware - 1.33.
    - I made sure that the mouse mode was set to Absolute (Windows OS).
    - I've loaded the factory defaults several times.
    - I've enabled Port64h/60h Emulation under the USB settings in the BIOS.
    - I've disabled USB legacy support in the BIOS.
    - I made sure the firewall wasn't blocking IPMI (disabled the firewall).
    And that's about it. I've found threads in some forums from people having the same issue as me, but they were not running the same OS - they were running either Linux or FreeBSD. Most of them fixed their problem by selecting the right mouse mode (Linux in their case). There was one other who solved the problem by disabling USB Mass Storage mode. He stated: "When I set it to disable USB Mass Storage when no image is loaded, the ukbd came alive, and I'm typing this on the IPMI Console." Source: http://freebsd.1045724.n5.nabble.com/IPMI-Console-No-luck-once-OS-is-booted-td3967868.html I suspect the solution described in the previous paragraph is somehow related to my problem. I've found several threads on the internet describing the same problem, but none of them involved Windows Server 2008 R2. Again, I may be wrong, but it seems like that could be the issue; I just don't know how to go about applying a solution in Windows Server 2008 R2. In any case, I could use your expertise. Maybe I am missing something, or maybe I'm on the right track. Your help is much appreciated. Thank you in advance.
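
    For anyone debugging the same symptom, one low-risk check (sketched here on the assumption that you can reach the box remotely, e.g. over RDP or psexec) is to ask Windows whether it ever enumerated the BMC's virtual keyboard and mouse at all:

        rem if these queries return no rows, the OS never saw the virtual HID
        rem devices and the problem is enumeration, not mouse mode
        wmic path Win32_Keyboard get Name,Status
        wmic path Win32_PointingDevice get Name,Status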

    Read the article

  • Squid - Logging to MySQL without empty rows/skipped records?

    - by Lee Ward
I'm trying to figure out how to make the Squid proxy log to MySQL. I know ACL order is pretty important, but I'm not sure I understand exactly what ACLs are or do; it's difficult to explain, but hopefully you'll see where I'm going with this as you read! I have created the lines to make Squid interact with a helper in squid.conf as follows:

        external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php
        acl ex_log external mysql_log
        http_access allow ex_log

    The external ACL helper (mysql_lg.php) is a PHP script and is as follows:

        error_reporting(0);
        if (!defined("STDIN")) {
            define("STDIN", fopen("php://stdin", "r"));
        }
        $res = mysql_connect('localhost', 'squid', 'testsquidpw');
        $dbres = mysql_select_db('squid', $res);
        while (!feof(STDIN)) {
            $line = trim(fgets(STDIN));
            $fields = explode(' ', $line);
            $user = rawurldecode($fields[0]);
            $cli_ip = rawurldecode($fields[1]);
            $protocol = rawurldecode($fields[2]);
            $uri = rawurldecode($fields[3]);
            $q = "INSERT INTO logs (id, user, cli_ip, protocol, url) VALUES ('', '".$user."', '".$cli_ip."', '".$protocol."', '".$uri."');";
            mysql_query($q) or die(mysql_error());
            if ($fault) {
                fwrite(STDOUT, "ERR\n");
            }
            fwrite(STDOUT, "OK\n");
        }

    The configuration I have right now looks like this:

        ## Authentication Handler
        auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
        auth_param ntlm children 30
        auth_param negotiate program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
        auth_param negotiate children 5
        # Allow squid to update log
        external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php
        acl ex_log external mysql_log
        http_access allow ex_log
        acl localnet src 172.16.45.0/24
        acl AuthorizedUsers proxy_auth REQUIRED
        acl SSL_ports port 443
        acl Safe_ports port 80    # http
        acl Safe_ports port 21    # ftp
        acl Safe_ports port 443   # https
        acl CONNECT method CONNECT
        acl blockeddomain url_regex "/etc/squid3/bl.acl"
        http_access deny blockeddomain
        deny_info ERR_BAD_GENERAL blockeddomain
        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports
        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports
        # Allow the internal network access to this proxy
        http_access allow localnet
        # Allow authorized users access to this proxy
        http_access allow AuthorizedUsers
        # FINAL RULE - Deny all other access to this proxy
        http_access deny all

    From testing, the closer to the bottom I place the logging lines, the less it logs. Oftentimes it even places empty rows into the MySQL table. The file-based logs in /var/log/squid3/access.log are correct, but many of the rows in the access logs are missing from the MySQL logs. I can't help but think it's down to the order I'm putting lines in, because I want to log everything to MySQL: unauthenticated requests, blocked requests, and which category blocked a specific request. The reason I want this in MySQL is that I'm trying to have everything managed via a custom web-based frontend, and I want to avoid using any shell commands and access to system log files if I can help it. The end result is to make it as easy as possible to maintain without keeping staff waiting on the phone whilst I add a new rule and reload the server! Hopefully someone can help me out here because this is very much a learning experience for me and I'm pretty stumped. Many thanks in advance for any help!
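
    A hedged observation on the missing rows: Squid caches external ACL verdicts, so once a given user/URL combination has been looked up, repeat requests are answered from the cache and never reach the helper, which would explain the gaps relative to access.log. One way to test that theory is to make cached results expire immediately on the external_acl_type line (at a real performance cost; option names per the squid.conf documentation):

        # force every request through the helper: cached verdicts expire at once
        external_acl_type mysql_log ttl=0 negative_ttl=0 %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php

    If that confirms it, the longer-term fix is arguably to log through Squid's access_log machinery (a logging daemon) rather than an ACL, since ACLs are consulted for authorization and are not guaranteed to fire once per request.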

    Read the article

  • Can't get IPv6 address using radvd on wireless interface on Windows, wired works fine.

    - by AndrejaKo
I set up my router with OpenWRT and configured IPv6 using a tunnel from SixXS. I'm having problems with stateless autoconfiguration using radvd. On my computer, the wired connection can get its IPv6 address fine, but wireless can't. After spending some time on the OpenWRT forums, I'm pretty sure now that the router is set up fine and that the problem is with my Windows settings. Also, I do not have any problems getting an IPv6 address on openSUSE 11.3. So what should I do to solve this, and what information should I post? Here's the radvdump output for the wired interface:

        interface br-lan
        {
            AdvSendAdvert on;
            # Note: {Min,Max}RtrAdvInterval cannot be obtained with radvdump
            AdvManagedFlag on;
            AdvOtherConfigFlag on;
            AdvReachableTime 0;
            AdvRetransTimer 0;
            AdvCurHopLimit 64;
            AdvDefaultLifetime 1800;
            AdvHomeAgentFlag off;
            AdvDefaultPreference medium;
            AdvSourceLLAddress on;
            prefix 2001:15c0:67d0::/64
            {
                AdvValidLifetime 86400;
                AdvPreferredLifetime 14400;
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr off;
            }; # End of prefix definition
        }; # End of interface definition

    Here's the radvdump output for the wireless interface:

        #
        # radvd configuration generated by radvdump 1.6
        # based on Router Advertisement from fe80::a0b7:deff:fef0:5b34
        # received by interface br-lan
        #
        interface br-lan
        {
            AdvSendAdvert on;
            # Note: {Min,Max}RtrAdvInterval cannot be obtained with radvdump
            AdvManagedFlag on;
            AdvOtherConfigFlag on;
            AdvReachableTime 0;
            AdvRetransTimer 0;
            AdvCurHopLimit 64;
            AdvDefaultLifetime 1800;
            AdvHomeAgentFlag off;
            AdvDefaultPreference medium;
            AdvSourceLLAddress on;
            prefix 2001:15c0:67d0::/64
            {
                AdvValidLifetime 86400;
                AdvPreferredLifetime 14400;
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr off;
            }; # End of prefix definition
        }; # End of interface definition

    (radvdump printed the same advertisement a second time; the second block is identical to the one above.)
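
    For anyone chasing the same symptom, a few things worth checking from an elevated prompt on the Windows side (the interface name below is an assumption; substitute the real one). Note that with AdvManagedFlag on, Windows is also told to try DHCPv6, but with AdvAutonomous on it should still derive a SLAAC address from the advertised prefix:

        netsh interface ipv6 show interfaces
        netsh interface ipv6 show addresses "Wireless Network Connection"
        rem make sure router discovery wasn't disabled on the wireless interface
        netsh interface ipv6 set interface "Wireless Network Connection" routerdiscovery=enabled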

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
Background: Our company hosts SaaS DSS applications, where clients provide us data daily and/or weekly, which we process and merge into their existing database. During business hours, load on the servers is pretty minimal, as it's mostly users running simple pre-defined queries via the website or running drill-through reports that mostly hit the SSAS OLAP cube.

    I manage the IT Operations team, and so far this has presented an interesting "scaling" issue for us. For our daily-refresh clients, the server is only "busy" for about 4-6 hours at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hours per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers, such that we're not trying to process daily clients back-to-back overnight. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant production-support overhead to basically "schedule" the ETL such that jobs don't overlap, and to move clients/schedules around if they outgrow the resources on a particular server or allocated time slot.

    As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3, 4, 5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling and sequential processing.

    One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows.

    Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if, instead of running DTExec, SQL, and SSAS on the same machine (with every machine running that configuration), we ran in sets of three machines, with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETLs we were able to process on one machine independently. Another option we've considered is completely re-architecting our SSIS packages to have one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/memory/disk utilization, but that would be a herculean effort, and it seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary: are we missing an obvious solution for this, and does anyone know of any tools (free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term.)
    Ultimately VMware's Distributed Resource Scheduler addresses this, as you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMware to move the VMs around to balance out hardware usage. I'm definitely not against using VMware to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer, by checking resource utilization at the OS, SQL, and SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMware is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff
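
    On the "master package that picks the least-busy server" idea, the resource probe itself is not herculean. Below is a rough C# sketch of that dispatcher pattern; the server names, share, and package path are invented, and a real version would also check memory and whether SSAS is mid-processing before committing:

        // rough sketch of a "queue & node" dispatcher: sample each node's CPU,
        // pick the least loaded, and hand it the package. Names are placeholders.
        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Threading;

        class EtlDispatcher
        {
            static void Main()
            {
                var servers = new[] { "ETL01", "ETL02", "ETL03" };

                var leastBusy = servers
                    .Select(s =>
                    {
                        using (var cpu = new PerformanceCounter(
                            "Processor", "% Processor Time", "_Total", s))
                        {
                            cpu.NextValue();    // first sample always reads 0
                            Thread.Sleep(1000); // give the counter a real interval
                            return new { Server = s, Load = cpu.NextValue() };
                        }
                    })
                    .OrderBy(x => x.Load)
                    .First();

                // launching dtexec against a package on the chosen node is a
                // stand-in; in practice you'd kick off a SQL Agent job there
                Process.Start("dtexec", string.Format(
                    "/F \"\\\\{0}\\etl\\NightlyLoad.dtsx\"", leastBusy.Server));
            }
        }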

    Read the article

  • Network config / gear question

    - by mcgee1234
I have been tasked with setting up a fairly straightforward rack in a data center (we do not even need a whole rack, but this is the smallest allotment available). In a nutshell, 4 to 6 servers need to be able to reach 2 (maybe 3) vendors, and the servers need to be reachable over the internet. A little more detail: the networks the servers need to reach are inside the data center and are "trusted". Connections to these networks will be achieved through intra-data-center cross connects. It is kind of like a manufacturing line where we receive data from one vendor (burstable up to 200 Mbits), churn through it on the servers, and then send out data to another vendor (bursts up to 20 Mbits). This series of events is very latency sensitive, so much so that it is common practice not to use NAT or a firewall on these segments (or so I hear). To reach the servers over the internet, I plan to use a site-to-site VPN. (This part is only relevant as far as hardware selection goes.)
    I have 2 configurations in mind:
    1. Cisco 2911 (2921) (with the additional WAN ports module) and a layer 2 switch - in this scenario, I would also use the router for VPN.
    2. Cisco 3560 layer 3 switch to interconnect the networks inside the data center, and an ASA 5510 (which is total overkill, but the 5505 is not rack mountable) as a firewall for the WAN side (internet) and VPN.
    I envision the setup to be as follows:
    Internet - ASA - 3560
    Vendors - 3560 - Servers
    The general idea is that the ASA acts as a firewall and VPN device and the 3560 does all the heavy lifting. The first is a fairly traditional setup, but my concern is performance. The second is somewhat unorthodox in that the vendors are directly connected to the layer 3 switch without passing through a firewall. Based on my understanding, however, a layer 3 switch will perform substantially better because it does hardware (ASIC) rather than software switching. (Note that number 2 is a little over the budget, but not unworkable (double negative, ugh).) Since this is my first time dealing with a data center, I am not sure what the IP space is going to look like. I suspect I will retain a block(s) of public IPs, VLAN them to individual interfaces for the vendor connections and the servers (which will not be reachable from the WAN side, of course) and set up routing on the switch.
    So here are my questions:
    1. Is there a substantial performance difference between 1 and 2, i.e. hardware-based switching on a layer 3 switch vs. software-based switching on the 2911? I have trawled the internet and found a lot of Cisco literature, but nothing that I could really use to get a good handle on it.
    2. The vendors we connect to are secure and trusted (famous last words) and, as I understand it, it is common practice not to NAT or firewall these connections (because of the aforementioned latency sensitivity). But what kind of latency are we really talking about if I push the data through a router (or even an ASA, for that matter)? For our purposes, 5 ms will not kill us; 20 or 30 can be very costly. Others measure in microseconds, but they are out of our league.
    3. Are there any issues with using public IPs on a layer 3 switch?
    I am certainly not married to either of these configs, and I am totally open to any ideas. My knowledge (and I use the term loosely) is largely from books, so I welcome any advice / insight. Thanks in advance.
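
    To make question 3 concrete: there is nothing special about public addresses on a layer 3 switch; an SVI routes them like any other prefix. A minimal 3560 sketch of the inter-VLAN idea (VLAN numbers and addresses are placeholders from documentation ranges, not a recommendation):

        ! enable inter-VLAN routing on the 3560
        ip routing
        !
        interface Vlan10
         description Cross-connect to vendor A (trusted)
         ip address 198.51.100.1 255.255.255.252
        !
        interface Vlan20
         description Server segment (public block)
         ip address 203.0.113.1 255.255.255.240
        !
        ! default route toward the ASA for internet/VPN traffic
        ip route 0.0.0.0 0.0.0.0 203.0.113.14

    Since the 3560 forwards these routes in hardware, this layout keeps the vendor-to-server path off the ASA entirely, which is the latency argument for option 2.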

    Read the article

  • The best DVD ripper software in 2014 review

    - by user328170
The Top 3 DVD Ripping Tools in 2014. Nowadays everyone may have several smart mobile devices, such as an iPhone, iPad Air, iPad mini, Samsung Galaxy or Sony Xperia. If you want to take your movies with you on your mobile devices, or sometimes just want to back up those classic physical discs on your notebook or workstation at high resolution, you need fast and stable software to rip them and convert them to the format you like. Fortunately, there are plenty of great software products designed to make the process easy and transform a DVD into files that are playable on any mobile device you choose. We have done a full review of dozens of products. Here are three of the best, based on our review. We tested the software for ripping speed, user-friendliness, reliability and ripping capability.
    The top one is still WinX DVD Ripper Platinum. We tested its 6.1 version two years ago for its ability to quickly and easily rip DVDs and Blu-ray discs to high-quality MKV files with a single click. It made a deep impression on us in that test. This time we tested its latest 7.3.5 version. Besides ease of use and speed, we tested its ability to decrypt all kinds of discs with different protection methods, for example Disney X-project DRM, Sony ARccOS, RCE and region codes. The results show that WinX DVD Ripper Platinum still holds its lead in all these areas. It is a focused DVD ripping tool whose basic duty is to rip and convert DVDs. The UI has a modern, technical look. All the main functions are shown up front, while specialist options are tucked away for advanced users, making the settings clearer and more convenient to choose. Two companies, Weisoft Limited and Digiarty, provide the software: Weisoft Limited focuses on the USA, UK and Australia markets, Digiarty on the others.
    ripping speed ★★★★★ user-friendliness ★★★★★ reliability ★★★★★ ripping capability ★★★★★
    The second one is DVDFab. DVDFab is also very robust at ripping discs and can likewise decrypt most of the discs on the market. Its weak points remain ease of use and speed. We'd note that the app is frequently updated to cut through the copy protection on even the latest DVDs and Blu-ray discs. The app is shareware, meaning most features are free, including decrypting and ripping to your hard drive. Many of you note that you use another app for compression and authoring, but many of you also say hey, storage is cheap, and the rips from DVDFab are easy, one-click, and they work.
    ripping speed ★★★ user-friendliness ★★★★ reliability ★★★★★ ripping capability ★★★★★
    The third one is HandBrake. HandBrake is our favorite video encoder for a reason: it's simple, easy to use, easy to install, and offers lots of options to get a high-quality file as a result. If you're scared by those options, you don't even have to use them—the app will compensate for you and pick settings it thinks you'll like based on your destination device. So many of you like HandBrake that many of you use it in conjunction with another app (like VLC, which makes ripping easy)—you'll let another app do the rip and crack the DRM on your discs, and then process the file through HandBrake for encoding. The app is fast, can make the most of multi-core processors to speed up the process, and is completely open source.
    ripping speed ★★★ user-friendliness ★★★★ reliability ★★★★ ripping capability ★★★★

    Read the article

  • ASP.NET MVC 3 Hosting :: New Features in ASP.NET MVC 3

    - by mbridge
Razor View Engine
    The Razor view engine is a new view-engine option for ASP.NET MVC that supports the Razor templating syntax. The Razor syntax is a streamlined approach to HTML templating, designed with the goal of being a code-driven, minimalist templating approach that builds on existing C#, VB.NET and HTML knowledge. The result of this approach is that Razor views are very lean and do not contain unnecessary constructs that get in the way of you and your code. ASP.NET MVC 3 Preview 1 only supports C# Razor views, which use the .cshtml file extension. VB.NET support will be enabled in later releases of ASP.NET MVC 3. For more information and examples, see Introducing “Razor” – a new view engine for ASP.NET on Scott Guthrie’s blog.
    Dynamic View and ViewModel Properties
    A new dynamic View property is available in views, which provides access to the ViewData object using a simpler syntax. For example, imagine two items are added to the ViewData dictionary in the Index controller action using code like the following:

        public ActionResult Index() {
            ViewData["Title"] = "The Title";
            ViewData["Message"] = "Hello World!";
            return View();
        }

    Those properties can be accessed in the Index view using code like this:

        <h2>@View.Title</h2>
        <p>@View.Message</p>

    There is also a new dynamic ViewModel property in the Controller class that lets you add items to the ViewData dictionary using a simpler syntax. Using the previous controller example, the two values added to the ViewData dictionary can be rewritten using the following code:

        public ActionResult Index() {
            ViewModel.Title = "The Title";
            ViewModel.Message = "Hello World!";
            return View();
        }

    “Add View” Dialog Box Supports Multiple View Engines
    The Add View dialog box in Visual Studio includes extensibility hooks that allow it to support multiple view engines.
    Service Location and Dependency Injection Support
    ASP.NET MVC 3 introduces improved support for applying Dependency Injection (DI) via Inversion of Control (IoC) containers. ASP.NET MVC 3 Preview 1 provides the following hooks for locating services and injecting dependencies:
    - Creating controller factories.
    - Creating controllers and setting dependencies.
    - Setting dependencies on view pages for both the Web Form view engine and the Razor view engine (for types that derive from ViewPage, ViewUserControl, ViewMasterPage, WebViewPage).
    - Setting dependencies on action filters.
    Using a Dependency Injection container is not required in order for ASP.NET MVC 3 to function properly.
    Global Filters
    ASP.NET MVC 3 allows you to register filters that apply globally to all controller action methods. Adding a filter to the global filters collection ensures that the filter runs for all controller requests. To register an action filter globally, you can make the following call in the Application_Start method in the Global.asax file:

        GlobalFilters.Filters.Add(new MyActionFilter());

    The source of global action filters is abstracted by the new IFilterProvider interface, which can be registered manually or by using Dependency Injection. This allows you to provide your own source of action filters and choose at run time whether to apply a filter to an action in a particular request.
    New JsonValueProviderFactory Class
    The new JsonValueProviderFactory class allows action methods to receive JSON-encoded data and model-bind it to an action-method parameter. This is useful in scenarios such as client templating.
Client templates enable you to format and display a single data item or set of data items by using a fragment of HTML. ASP.NET MVC 3 lets you connect client templates easily with an action method that both returns and receives JSON data.
    Support for .NET Framework 4 Validation Attributes and IValidatableObject
    The ValidationAttribute class was improved in the .NET Framework 4 to enable richer support for validation. When you write a custom validation attribute, you can use a new IsValid overload that provides a ValidationContext instance. This instance provides information about the current validation context, such as what object is being validated. This change enables scenarios such as validating the current value based on another property of the model. The following example shows a sample custom attribute that ensures that the value of PropertyOne is always larger than the value of PropertyTwo:

        public class CompareValidationAttribute : ValidationAttribute {
            protected override ValidationResult IsValid(object value,
                    ValidationContext validationContext) {
                var model = validationContext.ObjectInstance as SomeModel;
                if (model.PropertyOne > model.PropertyTwo) {
                    return ValidationResult.Success;
                }
                return new ValidationResult("PropertyOne must be larger than PropertyTwo");
            }
        }

    Validation in ASP.NET MVC also supports the .NET Framework 4 IValidatableObject interface. This interface allows your model to perform model-level validation, as in the following example:

        public class SomeModel : IValidatableObject {
            public int PropertyOne { get; set; }
            public int PropertyTwo { get; set; }
            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) {
                if (PropertyOne <= PropertyTwo) {
                    yield return new ValidationResult(
                        "PropertyOne must be larger than PropertyTwo");
                }
            }
        }

    New IClientValidatable Interface
    The new IClientValidatable interface allows the validation framework to discover at run time whether a validator has support for client validation. This interface is designed to be independent of the underlying implementation; therefore, where you implement the interface depends on the validation framework in use. For example, for the default data-annotations-based validator, the interface would be applied on the validation attribute.
    Support for .NET Framework 4 Metadata Attributes
    ASP.NET MVC 3 now supports .NET Framework 4 metadata attributes such as DisplayAttribute.
    New IMetadataAware Interface
    The new IMetadataAware interface allows you to write attributes that simplify how you can contribute to the ModelMetadata creation process. Before this interface was available, you needed to write a custom metadata provider in order to have an attribute provide extra metadata. This interface is consumed by the AssociatedMetadataProvider class, so support for the IMetadataAware interface is automatically inherited by all classes that derive from that class (notably, the DataAnnotationsModelMetadataProvider class).
    New Action Result Types
    In ASP.NET MVC 3, the Controller class includes two new action result types and corresponding helper methods.
    HttpNotFoundResult Action
    The new HttpNotFoundResult action result is used to indicate that a resource requested by the current URL was not found. The status code is 404. This class derives from HttpStatusCodeResult.
The Controller class includes an HttpNotFound method that returns an instance of this action result type, as shown in the following example:

        public ActionResult List(int id) {
            if (id < 0) {
                return HttpNotFound();
            }
            return View();
        }

    HttpStatusCodeResult Action
    The new HttpStatusCodeResult action result is used to set the response status code and description.
    Permanent Redirect
    The HttpRedirectResult class has a new Boolean Permanent property that is used to indicate whether a permanent redirect should occur. A permanent redirect uses the HTTP 301 status code. Corresponding to this change, the Controller class now has several methods for performing permanent redirects:
    - RedirectPermanent
    - RedirectToRoutePermanent
    - RedirectToActionPermanent
    These methods return an instance of HttpRedirectResult with the Permanent property set to true.
    Breaking Changes
    The order of execution for exception filters has changed for exception filters that have the same Order value. In ASP.NET MVC 2 and earlier, exception filters on the controller with the same Order as those on an action method were executed before the exception filters on the action method. This would typically be the case when exception filters were applied without a specified Order value. In MVC 3, this order has been reversed so that the most specific exception handler executes first. As in earlier versions, if the Order property is explicitly specified, the filters are run in the specified order.
    Known Issues
    When you are editing a Razor view (CSHTML file), the Go To Controller menu item in Visual Studio will not be available, and there are no code snippets.
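
    As a quick illustration of the permanent-redirect helpers (the action names here are invented), a sketch like the following returns an HTTP 301 rather than the usual 302:

        // hypothetical action: the old URL permanently forwards to the new one
        public ActionResult OldProduct(int id) {
            return RedirectToActionPermanent("NewProduct", new { id = id });
        }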

    Read the article

< Previous Page | 429 430 431 432 433 434 435 436 437 438 439 440  | Next Page >