Search Results

Search found 31586 results on 1264 pages for 'custom result'.

Page 279/1264

  • singledispatch in class, how to dispatch self type

    - by yanxinyou
    Using Python 3.4. I want to use singledispatch to dispatch on different types in the __mul__ method. The code looks like this:

        class Vector(object):
            # some code not pasted
            @functools.singledispatch
            def __mul__(self, other):
                raise NotImplementedError("can't mul these types")

            @__mul__.register(int)
            @__mul__.register(object)   # Because I can't use Vector, I have to use object
            def _(self, other):
                result = Vector(len(self))  # start with vector of zeros
                for j in range(len(self)):
                    result[j] = self[j] * other
                return result

            @__mul__.register(Vector)   # how can I use the class's own type here?
            @__mul__.register(object)
            def _(self, other):
                pass  # needs implementation

    As you can see, I want to support Vector * Vector, but this raises a NameError:

        Traceback (most recent call last):
          File "p_algorithms\vector.py", line 6, in <module>
            class Vector(object):
          File "p_algorithms\vector.py", line 84, in Vector
            @__mul__.register(Vector)   # how can I use the class's own type here?
        NameError: name 'Vector' is not defined

    The question really is: how can I use the class name as a type inside the class's own methods? I know C++ has forward class declarations. How does Python solve this? And it is strange that result = Vector(len(self)) is fine, so Vector can be used inside a method body.
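
    A minimal sketch of one common workaround (not the poster's full class; Vector is modeled here as a simple list subclass): the class name only exists after the class statement has finished executing, so any registration that names Vector has to happen outside the class body. Note too that functools.singledispatch keys on the first argument, which inside a method is self, so dispatching on the right-hand operand needs a module-level helper:

        import functools

        class Vector(list):
            def __mul__(self, other):
                return _vec_mul(other, self)    # dispatch on the type of `other`

        @functools.singledispatch
        def _vec_mul(other, vec):
            raise NotImplementedError("can't mul these types")

        @_vec_mul.register(int)
        def _(other, vec):
            return Vector(x * other for x in vec)

        # Vector exists at this point, so it can be used as a dispatch type.
        @_vec_mul.register(Vector)
        def _(other, vec):
            return Vector(a * b for a, b in zip(vec, other))   # element-wise

        print(Vector([1, 2, 3]) * 2)                  # [2, 4, 6]
        print(Vector([1, 2, 3]) * Vector([4, 5, 6]))  # [4, 10, 18]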

    Read the article

  • Special folders & functionality in Windows [closed]

    - by lloydsparkes
    I am using the new Virtual PC Beta on Windows 7 and noticed that the virtual machines live in a special folder: once you open it, the VMs are shown with custom column headings ("Machine status", "Memory", etc.) and custom toolbar buttons ("Create virtual machine"). How can I do the same thing for a folder of my own? I can't seem to find any documentation for this.

    Read the article

  • Binding one dependency property to another

    - by Gregory Dodd
    I have a custom Tab Control that I have created, but I am having an issue. I have an Editable TextBox as part of the custom TabControl View. <Controls:EditableTextControl x:Name="PageTypeName" Style="{StaticResource ResourceKey={x:Type Controls:EditableTextControl}}" Grid.Row="0" TabIndex="0" Uid="0" AutomationProperties.AutomationId="PageTypeNameTextBox" AutomationProperties.Name="PageTypeName" Visibility="{Binding ElementName=PageTabControl,Path=ShowPageType}"> <Controls:EditableTextControl.ContextMenu> <ContextMenu x:Name="TabContextMenu"> <MenuItem Header="Rename Page Type" Command="{Binding Path=PlacementTarget.EnterEditMode, RelativeSource={RelativeSource AncestorType=ContextMenu}}" AutomationProperties.AutomationId="RenamePageTypeMenuItem" AutomationProperties.Name="RenamePageType"/> <MenuItem Header="Delete Page Type" Command="{Binding Path=PageTypeDeletedCommand}" AutomationProperties.AutomationId="DeletePageTypeMenuItem" AutomationProperties.Name="DeletePageType"/> </ContextMenu> </Controls:EditableTextControl.ContextMenu> <Controls:EditableTextControl.Content> <!--<Binding Path="CurrentPageTypeViewModel.Name" Mode="TwoWay"/>--> <Binding ElementName="PageTabControl" Path="CurrentPageTypeName" Mode ="TwoWay"/> </Controls:EditableTextControl.Content> </Controls:EditableTextControl> In the Content section I am binding to a Dependency Prop called CurrentPageTypeName. This Depedency prop is part of this custom Tab Control. public static DependencyProperty CurrentPageTypeNameProperty = DependencyProperty.Register("CurrentPageTypeName", typeof(object), typeof(TabControlView)); public object CurrentPageTypeName { get { return GetValue(CurrentPageTypeNameProperty) as object; } set { SetValue(CurrentPageTypeNameProperty, value); } } In another view, where I am using the custom TabControl I then bind my property, with the actual name value, to CurrentPageTypeName property as seen below: <Views:TabControlView Grid.Row="0" Name="RunPageTabControl" TabItemsSource="{Binding RunPageTypeViewModels}" SelectedTab="{Binding Converter={StaticResource debugConverter}}" CurrentPageTypeName="{Binding Path=RunPageName, Mode=TwoWay}" TabContentTemplateSelector="{StaticResource tabItemTemplateSelector}" SelectedIndex="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type UserControl}}, Path=DataContext.SelectedTabIndex}" ShowPageType="Hidden" > <!--<Views:TabControlView.TabContentTemplate> <DataTemplate DataType="{x:Type ViewModels:RunPageTypeViewModel}"> <RunViews:RunPageTypeView/> </DataTemplate> </Views:TabControlView.TabContentTemplate>--> </Views:TabControlView> My problem is that nothing seems to be happening. It is grabbing its Content from the Itemsource, and not from my chained Dependency props. Is what I am trying even possible? If so, what have I done wrong. Thanks for looking.
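
    One thing worth checking, sketched below under the assumption that the value should flow both ways by default: register the dependency property with FrameworkPropertyMetadata and a change callback, which both makes bindings two-way without specifying Mode everywhere and confirms whether the outer binding ever pushes a value into the control (the callback is mine, not from the original code):

        public static readonly DependencyProperty CurrentPageTypeNameProperty =
            DependencyProperty.Register(
                "CurrentPageTypeName",
                typeof(object),
                typeof(TabControlView),
                new FrameworkPropertyMetadata(
                    null,
                    FrameworkPropertyMetadataOptions.BindsTwoWayByDefault,
                    OnCurrentPageTypeNameChanged));

        public object CurrentPageTypeName
        {
            get { return GetValue(CurrentPageTypeNameProperty); }
            set { SetValue(CurrentPageTypeNameProperty, value); }
        }

        private static void OnCurrentPageTypeNameChanged(
            DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            // Breakpoint/trace here to see whether RunPageName ever reaches the control.
            System.Diagnostics.Debug.WriteLine("CurrentPageTypeName: " + e.NewValue);
        }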

    Read the article

  • remove field name from object validation message

    - by Colin G
    I've got a simple Active Record validation on an object, using this within a form: form.error_messages({:message => '', :header_message => ''}). This in turn outputs something like "FieldName My custom message". What I need to do is remove the field name from the error message but leave my custom message. Can anyone point me in the right direction?
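
    A small sketch (the model and validation are placeholders, not from the question): error messages attached to :base are rendered without the attribute name prepended, which keeps the custom message but drops the field name:

        class Order < ActiveRecord::Base
          validate :quantity_must_be_positive   # hypothetical custom validation

          def quantity_must_be_positive
            if quantity.to_i <= 0
              errors.add(:base, "Please enter a quantity greater than zero")
              # Rails 2.x spelling: errors.add_to_base("...")
            end
          end
        end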

    Read the article

  • Error while saving csv file generated by PHP

    - by user1667374
    I have created below script which is saving the csv file in the same path of this script but when I am saving it to different directory, it gives me error. Can somebody please help me on this? This is the error: Warning: fopen(/store/secure/mas/) [function.fopen]: failed to open stream: No such file or directory in /getdetails.php on line 15 Warning: fputs(): supplied argument is not a valid stream resource in /getdetails.php on line 36 Warning: fclose(): supplied argument is not a valid stream resource in /getdetails.php on line 39 This is the script: <?php $MYSQL_HOST="localhost"; $MYSQL_USERNAME="***"; $MYSQL_PASSWORD="***"; $MYSQL_DATABASE="shop"; $MYSQL_TABLE="ds_orders"; mysql_connect( "$MYSQL_HOST", "$MYSQL_USERNAME", "$MYSQL_PASSWORD" ) or die( mysql_error( ) ); mysql_select_db( "$MYSQL_DATABASE") or die( mysql_error( $conn ) ); $filename="Order_Details"; $directory = "/store/secure/mas/"; $csv_filename = $filename.".csv"; $fd = fopen ( "$directory" . $cvs_filename, "w"); $today="2009-03-17"; $disp_date="05122008"; $sql = "SELECT * FROM $MYSQL_TABLE where cDate='$today'"; $result=mysql_query($sql); $rows=mysql_num_rows($result); echo $rows; if(mysql_num_rows($result)>0){ $fileContent="Record ID,Discount Percent\n"; while($data=mysql_fetch_array($result)) { $fileContent.= "".$data['oID'].",".$data['discount']."\n"; } $fileContent=str_replace("\n\n","\n",$fileContent); fputs($fd, $fileContent); } fclose($fd); ?>
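
    A sketch of the two things to check here: the variable-name typo ($cvs_filename vs. $csv_filename means fopen() receives only the directory path, which matches the warning), and whether the target directory actually exists and is writable:

        $directory    = "/store/secure/mas/";
        $csv_filename = $filename . ".csv";

        if (!is_dir($directory)) {
            mkdir($directory, 0755, true);   // or correct the path if it is wrong
        }

        $fd = fopen($directory . $csv_filename, "w");   // note: $csv_filename, not $cvs_filename
        if ($fd === false) {
            die("Could not open " . $directory . $csv_filename . " for writing");
        }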

    Read the article

  • How to use another type of EditingControl in a single .NET 3.5 DataGridView column ?

    - by too
    Is it possible to have two (or more) different kinds of cells displayed interchangeably in a single column of a C# .NET 3.5 WinForms DataGridView? I know each column specifies a single EditingControl type, yet I think the grid is flexible enough for some tricks. The only approaches I can think of are: adding as many invisible columns to the grid as there are required cell types and, on CellBeginEdit, somehow exchanging the current cell with the other column's cell; or creating a custom column and a custom cell that allow changing the EditingControl for a single cell. Which approach is better, is there any other solution, and are there any examples?
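
    A sketch of the per-cell approach (column names are placeholders): individual cells in a DataGridView can be replaced with a different DataGridViewCell subclass, so one column can present different editing controls row by row without extra hidden columns:

        private void ConfigureMixedColumn(DataGridView grid)
        {
            foreach (DataGridViewRow row in grid.Rows)
            {
                if (row.IsNewRow) continue;

                // Decide per row which editor this cell should use.
                bool useDropDown = Equals(row.Cells["Kind"].Value, "choice");
                if (useDropDown)
                {
                    var combo = new DataGridViewComboBoxCell();
                    combo.Items.AddRange("Yes", "No");
                    row.Cells["Value"] = combo;                    // swap the cell in place
                }
                else
                {
                    row.Cells["Value"] = new DataGridViewTextBoxCell();
                }
            }
        }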

    Read the article

  • Why doesn't my pagination link appear?

    - by udaya
    This is the script I have used to paginate. The data is limited to 4 rows per page, but the pagination links don't appear. <? require_once ('Pager/Pager.php'); $connection = mysql_connect( "localhost" , "root" , "" ); mysql_select_db( "ssit",$connection); $result=mysql_query("SELECT dFrindName FROM tbl_friendslist", $connection); $row = mysql_fetch_array($result); $totalItems = $row['total']; $pager_options = array( 'mode' => 'Sliding', // Sliding or Jumping mode. See below. 'perPage' => 4, // Total rows to show per page 'delta' => 4, // See below 'totalItems' => $totalItems, ); $pager = Pager::factory($pager_options); echo $pager->links; list($from, $to) = $pager->getOffsetByPageId(); $from = $from - 1; $perPage = $pager_options['perPage']; $result = mysql_query("SELECT * FROM tbl_friendslist LIMIT 5 , $perPage",$connection); while($row = mysql_fetch_array($result)) { echo $row['dFrindName'].'</br>'; } ?>
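
    A sketch of the two likely fixes: totalItems must come from a COUNT(*) query (the original query selects dFrindName, so $row['total'] is never set and the pager thinks there is nothing to page), and the data query should use the offset the pager returns instead of a hard-coded 5:

        $result = mysql_query("SELECT COUNT(*) AS total FROM tbl_friendslist", $connection);
        $row = mysql_fetch_array($result);
        $totalItems = $row['total'];

        $pager_options = array(
            'mode'       => 'Sliding',
            'perPage'    => 4,
            'delta'      => 4,
            'totalItems' => $totalItems,
        );
        $pager = Pager::factory($pager_options);
        echo $pager->links;

        list($from, $to) = $pager->getOffsetByPageId();
        $from = $from - 1;
        $perPage = $pager_options['perPage'];
        $result = mysql_query("SELECT * FROM tbl_friendslist LIMIT $from, $perPage", $connection);
        while ($row = mysql_fetch_array($result)) {
            echo $row['dFrindName'] . '<br/>';
        }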

    Read the article

  • PHP/MySQL Problem

    - by Scott
    Why does this only print the sites specific content under the first site, and doesn't do it for the other 2? <?php echo 'NPSIN Data will be here soon!'; // connect to DB $dbhost = 'localhost'; $dbuser = 'root'; $dbpass = 'root'; $conn = mysql_connect($dbhost, $dbuser, $dbpass) or die ('Error connecting to DB'); $dbname = 'npsin'; mysql_select_db($dbname); // get number of sites $query = 'select count(*) from sites'; $result = mysql_query($query) or die ('Query failed: ' . mysql_error()); $resultArray = mysql_fetch_array($result); $numSites = $resultArray[0]; echo "<br><br>"; // get all sites $query = 'select site_name from sites'; $result = mysql_query($query); // get site content $query2 = 'select content_name, site_id from content'; $result2 = mysql_query($query2); // get site files // print info $count = 1; while ($row = mysql_fetch_array($result, MYSQL_NUM)) { echo "Site $count: "; echo "$row[0]"; echo "<br>"; $contentCount = 1; while ($row2 = mysql_fetch_array($result2, MYSQL_NUM)) { $id = $row2[1]; if ($id == ($count - 1)) { echo "Content $contentCount: "; echo "$row2[0]"; echo "<br>"; } $contentCount++; } $count++; } ?>
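
    A sketch of one fix: the inner while loop exhausts $result2 while printing the first site, so every later site sees an empty result set. Rewinding it with mysql_data_seek() before each pass (or fetching it into an array once up front) fixes that:

        $count = 1;
        while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
            echo "Site $count: $row[0]<br>";

            if (mysql_num_rows($result2) > 0) {
                mysql_data_seek($result2, 0);   // rewind the content result set for this site
            }
            $contentCount = 1;
            while ($row2 = mysql_fetch_array($result2, MYSQL_NUM)) {
                if ($row2[1] == ($count - 1)) {
                    echo "Content $contentCount: $row2[0]<br>";
                }
                $contentCount++;
            }
            $count++;
        }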

    Read the article

  • C# Strange Behavior

    - by Betamoo
    I have a custom struct: struct A { public int y; } and a custom class with an empty constructor: class B { public A a; public B() { } } and here is the Main method: static void Main(string[] args) { B b = new B(); b.a.y = 5; // No runtime errors! Console.WriteLine(b.a.y); } When I run the above program it does not give me any errors, although I did not initialize struct A in class B's constructor with 'a = new A();'.
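
    A small illustration of what is going on: a struct is a value type, so the field a is embedded in every B and is automatically zero-initialized when the object is allocated; there is no null reference to trip over, and b.a.y = 5 works because a is a field (it would not compile if a were a property):

        struct A { public int y; }

        class B { public A a; }

        class Program
        {
            static void Main()
            {
                B b = new B();
                System.Console.WriteLine(b.a.y);   // prints 0: the struct field was zero-initialized
                b.a.y = 5;                         // legal: a is a field, so this mutates it in place
                System.Console.WriteLine(b.a.y);   // prints 5
            }
        }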

    Read the article

  • login script in php using session variables

    - by kracekumar
    ? username: '; } else { $user=$_POST['user']; $query="select name from login where name='$user'"; $result=mysql_query($query) or die(mysql_error()); $rows=mysql_num_rows($result) or die(mysql_error()); session_start(); //echo "results"; $username=mysql_fetch_array($result) or die(mysql_error()); $fields=mysql_num_fields($result) or die(mysql_error()); echo "username:$username[0]"; $_SESSION['username']=$username[0]; $sesion_username=$_SESSION['username']; //echo "rows:$rows"." "."fields:$fields"; echo ""."username:$sesion_username"; } ? I wanted to create a site where users can log in and upload their details under their user name; the details are stored in a database. The problem I am facing is that when two people, 'a' and 'b', log in one after the other, the session variable $_SESSION['username'] ends up holding 'b's username, so I can't display 'a's details any more. What I want to achieve is this: when a user logs in, store their name and other details in session variables, navigate them accordingly, and display their details.
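
    For reference, a sketch of the usual pattern (the user_details table is a placeholder): $_SESSION data is stored separately per browser session, keyed by the session cookie, so 'a' and 'b' do not overwrite each other as long as session_start() runs on every page before any output and the name is written once at login:

        <?php
        // Login page: runs before any HTML output.
        session_start();

        if (isset($_POST['user'])) {
            $user = mysql_real_escape_string($_POST['user']);
            $result = mysql_query("SELECT name FROM login WHERE name = '$user'") or die(mysql_error());
            if (mysql_num_rows($result) == 1) {
                $row = mysql_fetch_array($result);
                $_SESSION['username'] = $row['name'];   // stored per visitor, not globally
            }
        }

        // Any later page: after session_start(), read the visitor's own name back.
        if (isset($_SESSION['username'])) {
            $name = mysql_real_escape_string($_SESSION['username']);
            $details = mysql_query("SELECT * FROM user_details WHERE name = '$name'");
            // ... display only this visitor's details ...
        }
        ?>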

    Read the article

  • Rotating bits of any integer in C

    - by Tim
    Passing the integer 2 to this function should return the integer 4: x = 2; x = rotateInt('L', x, 1); (left-rotate the bits by 1). Example: 00000010 rotated left by 1 becomes 00000100. But if I pass x = rotateInt('R', x, 3); it should return 64, i.e. 01000000. Here is the code; can someone correct the error? Thanks. int rotateInt(char direction, unsigned int x, int y) { unsigned int mask = 0; int num = 0, result = 0; int i; for(i = 0; i < y; i++) { if(direction == 'R') { if((x & 1) == 1) x = (x ^ 129); else x = x >> 1; } else if(direction == 'L') { if((x & 128) == 1) x = (x ^ 129); else x = x << 1; } } result = (result ^ x); return result; }
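
    A sketch of a corrected version for 8-bit values (the examples in the question are 8-bit patterns): a rotate is a shift combined with the bits that fall off the other end, and the comparison (x & 128) == 1 in the original can never be true because the masked value is either 0 or 128:

        #include <stdio.h>

        /* Rotate the low 8 bits of x left ('L') or right ('R') by y positions. */
        unsigned int rotateInt(char direction, unsigned int x, int y)
        {
            int i;
            for (i = 0; i < y; i++) {
                if (direction == 'R')
                    x = ((x >> 1) | (x << 7)) & 0xFF;
                else if (direction == 'L')
                    x = ((x << 1) | (x >> 7)) & 0xFF;
            }
            return x;
        }

        int main(void)
        {
            printf("%u\n", rotateInt('L', 2, 1));  /* 4  */
            printf("%u\n", rotateInt('R', 2, 3));  /* 64 */
            return 0;
        }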

    Read the article

  • jQuery validation plugin for two fields

    - by jonathan p
    I am using the jQuery Validation plug-in; however, I need to add a "custom rule". I have two date fields and I need to ensure that the end date is not earlier than the start date. My problem is how to pass the two fields in as elements. As I understand it, you set up a custom function something like this: function customValidationMethod(value, element, params){ } But I can't see how I could use it with two fields; if anyone has any ideas it would be greatly appreciated.
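
    A sketch using $.validator.addMethod (field names and selectors are placeholders): the rule's parameter carries a selector for the start-date field, so the handler can read and compare both values:

        $.validator.addMethod("greaterThanStart", function (value, element, params) {
            var start = new Date($(params).val());
            var end = new Date(value);
            return this.optional(element) || end >= start;
        }, "The end date must not be earlier than the start date.");

        $("#myForm").validate({
            rules: {
                endDate: { greaterThanStart: "#startDate" }
            }
        });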

    Read the article

  • Sending a file to an API - C#

    - by alex
    I'm trying to use an API which sends a fax. I have a PHP example below: (I will be using C# however) <?php //This is example code to send a FAX from the command line using the Simwood API //It is illustrative only and should not be used without the addition of error checking etc. $ch = curl_init("http://url-to-api-endpoint"); $fax_variables=array( 'user'=> 'test', 'password'=> 'test', 'sendat' => '2050-01-01 01:00', 'priority'=> 10, 'output'=> 'json', 'to[0]' => '44123456789', 'to[1]' => '44123456780', 'file[0]'=>'@/tmp/myfirstfile.pdf', 'file[1]' => '@/tmp/mysecondfile.pdf' ); print_r($fax_variables); curl_setopt ($ch, CURLOPT_POST, 1); curl_setopt ($ch, CURLOPT_POSTFIELDS, $fax_variables); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); $result=curl_exec ($ch); $info = curl_getinfo($ch); $result['http_code']; curl_close ($ch); print_r($result); ?> My question is - in the C# world, how would I achieve the same result? Do i need to build a post request? Ideally, i was trying to do this using REST - and constructing a URL, and using HttpWebRequest (GET) to call the API
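
    A sketch of the equivalent POST in C# (endpoint, credentials and file paths are the placeholders from the PHP example): on .NET 4.5+ HttpClient with MultipartFormDataContent builds the same kind of request; on older frameworks the multipart body has to be written by hand over HttpWebRequest. Note that a GET cannot carry file uploads, so this needs to stay a POST:

        using System;
        using System.IO;
        using System.Net.Http;
        using System.Threading.Tasks;

        class FaxClient
        {
            static async Task SendFaxAsync()
            {
                using (var client = new HttpClient())
                using (var form = new MultipartFormDataContent())
                {
                    form.Add(new StringContent("test"), "user");
                    form.Add(new StringContent("test"), "password");
                    form.Add(new StringContent("2050-01-01 01:00"), "sendat");
                    form.Add(new StringContent("json"), "output");
                    form.Add(new StringContent("44123456789"), "to[0]");
                    form.Add(new ByteArrayContent(File.ReadAllBytes(@"C:\tmp\myfirstfile.pdf")),
                             "file[0]", "myfirstfile.pdf");

                    var response = await client.PostAsync("http://url-to-api-endpoint", form);
                    Console.WriteLine((int)response.StatusCode);
                    Console.WriteLine(await response.Content.ReadAsStringAsync());
                }
            }
        }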

    Read the article

  • Vertical (rotated) label in Android

    - by DroidIn.net
    I need two ways of showing a vertical label in Android: (1) a horizontal label turned 90 degrees counterclockwise (letters on their side), and (2) a horizontal label with the letters stacked one under the other (like a store sign). Do I need to develop custom widgets for both cases (or just one)? Can I make TextView render that way, and what would be a good way to do something like this if I need to go completely custom?
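
    For case 1, the usual approach is a small TextView subclass that swaps its measured dimensions and rotates the canvas before drawing; a sketch is below (treat it as a starting point; padding and gravity handling are simplified). Case 2 can often be faked without a custom widget by inserting a newline between each character of the text:

        import android.content.Context;
        import android.graphics.Canvas;
        import android.text.TextPaint;
        import android.util.AttributeSet;
        import android.widget.TextView;

        public class VerticalLabelView extends TextView {
            public VerticalLabelView(Context context, AttributeSet attrs) {
                super(context, attrs);
            }

            @Override
            protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
                // Measure as if horizontal, then swap width and height.
                super.onMeasure(heightMeasureSpec, widthMeasureSpec);
                setMeasuredDimension(getMeasuredHeight(), getMeasuredWidth());
            }

            @Override
            protected void onDraw(Canvas canvas) {
                TextPaint paint = getPaint();
                paint.setColor(getCurrentTextColor());
                canvas.save();
                // Rotate 90 degrees counterclockwise around the bottom-left corner.
                canvas.translate(0, getHeight());
                canvas.rotate(-90);
                canvas.drawText(getText().toString(), getPaddingLeft(),
                        getPaddingTop() - paint.ascent(), paint);
                canvas.restore();
            }
        }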

    Read the article

  • [Scala] Using overloaded, typed methods on a collection

    - by stephanos
    I'm quite new to Scala and struggling with the following: I have database objects (type of BaseDoc) and value objects (type of BaseVO). Now there are multiple convert methods (all called 'convert') that take an instance of an object and convert it to the other type accordingly. For example: def convert(doc: ClickDoc): ClickVO = doc match { case null => null case _ => val result = new ClickVO result.x = doc.x result.y = doc.y result } Now I sometimes need to convert a list of objects. How would I do this - I tried: def convert[D <: MyBaseDoc, V <: BaseVO](docs: List[D]):List[V] = docs match { case List() => List() case xs => xs.map(doc => convert(doc)) } Which results in 'overloaded method value convert with alternatives ...'. I tried to add manifest information to it, but couldn't make it work. I couldn't even create one method for each because it'd say that they have the same parameter type after type erasure (List). Ideas welcome!
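
    One sketch of a way around the overload-resolution/erasure problem (ClickDoc, ClickVO and the existing convert are assumed from the question): a small converter type class picks the right single-object convert implicitly, so the list version no longer relies on overloading at all:

        trait Converts[D, V] { def apply(doc: D): V }

        object Converts {
          implicit val clickConverts: Converts[ClickDoc, ClickVO] =
            new Converts[ClickDoc, ClickVO] {
              def apply(doc: ClickDoc): ClickVO = convert(doc)  // reuse the existing convert
            }
          // one implicit per Doc/VO pair
        }

        def convertAll[D, V](docs: List[D])(implicit c: Converts[D, V]): List[V] =
          docs.map(d => c(d))

        // usage: val vos: List[ClickVO] = convertAll(clickDocs)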

    Read the article

  • What is the best way to structure those jquery call backs?

    - by user518138
    I am new to jquery and recently have been amazed how powerful those call backs are. But here I got some logic and I am not sure what the best way is. Basically, I got a bunch of tables in web sql database in chrome. I will need to go through say table A, table B and table C. I need to submit each row of the table to server. Each row represents a bunch of complicated logic and data url and they have to be submitted in the order of A - B - C. The regular java way would be: TableA.SubmitToServer() { query table A; foreach(row in tableA.rows) { int nRetID = submitToServer(row); do other updates.... } } Similar for tableB and table C. then just call: TableA.SubmitToServer(); TableB.SubmitToServer(); TableC.SubmitToServer(); //that is very clear and easy. But in JQuery, it probably will be: db.Transaction(function (tx){ var strQuery = "select * from TableA"; tx.executeSql(strQuery, [], function (tx, result){ for(i = 0 ; i < result.rows.length; i++) { submitTableARowToServer(tx, result.rows.getItem(i), function (tx, result) { //do some other related updates based on this row from tableA //also need to upload some related files to server... }); } }, function errorcallback....) }); As you can see, there are already enough nested callbacks. Now, where should I put the process for TableB and tableC? They all need to have similar logic and they can only be called after everything is done from TableA. So, What is the best way to do this in jquery? Thanks
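
    One way to structure this without deep nesting is to make each table's submit return a jQuery promise and chain them, so B only starts after A has finished. A sketch (table and function names are placeholders, and the per-row submits inside the success callback would themselves need to be counted or chained before resolving):

        function submitTable(tableName) {
            var dfd = $.Deferred();
            db.transaction(function (tx) {
                tx.executeSql("SELECT * FROM " + tableName, [], function (tx, result) {
                    // ... submit each row to the server here; resolve once all rows are done ...
                    dfd.resolve();
                }, function (tx, err) {
                    dfd.reject(err);
                });
            });
            return dfd.promise();
        }

        submitTable("TableA")
            .then(function () { return submitTable("TableB"); })   // use .pipe() on jQuery < 1.8
            .then(function () { return submitTable("TableC"); })
            .fail(function (err) { alert("Submit failed: " + err.message); });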

    Read the article

  • How to add Eclipse Task Tags programmatically (Eclipse Plugin development)?

    - by sebnem
    Hi, I am developing an Eclipse plugin. I want to add my own custom task tag programmatically from within the plugin (let's say DOTHIS). Later, I want to list the lines marked with the DOTHIS tag in my custom task view. I know it can be done through the Eclipse UI via Project Properties > Java Compiler > Task Tags > New, and then in the task view via Configure Contents, but how can I make these arrangements from within the plugin? Thanks in advance.
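
    A sketch using the JDT compiler options (shown workspace-wide; a single project would use IJavaProject.setOption instead). The custom tag then produces the usual task markers, which a custom view can filter on:

        import java.util.Hashtable;
        import org.eclipse.jdt.core.JavaCore;

        public class TaskTagSetup {
            // Call once from the plugin's startup code.
            public static void registerCustomTag() {
                Hashtable<String, String> options = JavaCore.getOptions();
                options.put(JavaCore.COMPILER_TASK_TAGS, "TODO,FIXME,XXX,DOTHIS");
                options.put(JavaCore.COMPILER_TASK_PRIORITIES, "NORMAL,HIGH,NORMAL,HIGH");
                JavaCore.setOptions(options);
            }
        }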

    Read the article

  • Help With Generics? How to Define Generic Method?

    - by DaveDev
    Is it possible to create a generic method with a definition similar to: public static string GenerateWidget<TypeOfHtmlGen, WidgetType>(this HtmlHelper htmlHelper , object modelData) // TypeOfHtmlGenerator is a type that creates custom Html tags. // GenerateWidget creates custom Html tags which contains Html representing the Widget. I can use this method to create any kind of widget contained within any kind of Html tag. Thanks
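
    It is possible, as long as the type parameters are constrained to something the method can actually construct and call; a sketch with hypothetical IHtmlGenerator/IWidget interfaces standing in for the types not shown in the question:

        public static class WidgetExtensions
        {
            public static string GenerateWidget<TGenerator, TWidget>(
                this HtmlHelper htmlHelper, object modelData)
                where TGenerator : IHtmlGenerator, new()
                where TWidget : IWidget, new()
            {
                var generator = new TGenerator();
                var widget = new TWidget();
                // Wrap the widget's markup in the generator's custom HTML tag.
                return generator.Wrap(widget.Render(modelData));
            }
        }

        // usage: html.GenerateWidget<DivGenerator, ChartWidget>(Model);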

    Read the article

  • An Xml Serializable PropertyBag Dictionary Class for .NET

    - by Rick Strahl
    I don't know about you but I frequently need property bags in my applications to store and possibly cache arbitrary data. Dictionary<T,V> works well for this although I always seem to be hunting for a more specific generic type that provides a string key based dictionary. There's string dictionary, but it only works with strings. There's Hashset<T> but it uses the actual values as keys. In most key value pair situations for me string is key value to work off. Dictionary<T,V> works well enough, but there are some issues with serialization of dictionaries in .NET. The .NET framework doesn't do well serializing IDictionary objects out of the box. The XmlSerializer doesn't support serialization of IDictionary via it's default serialization, and while the DataContractSerializer does support IDictionary serialization it produces some pretty atrocious XML. What doesn't work? First off Dictionary serialization with the Xml Serializer doesn't work so the following fails: [TestMethod] public void DictionaryXmlSerializerTest() { var bag = new Dictionary<string, object>(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42, 45, 66 }); TestContext.WriteLine(this.ToXml(bag)); } public string ToXml(object obj) { if (obj == null) return null; StringWriter sw = new StringWriter(); XmlSerializer ser = new XmlSerializer(obj.GetType()); ser.Serialize(sw, obj); return sw.ToString(); } The error you get with this is: System.NotSupportedException: The type System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not supported because it implements IDictionary. Got it! BTW, the same is true with binary serialization. 
Running the same code above against the DataContractSerializer does work: [TestMethod] public void DictionaryDataContextSerializerTest() { var bag = new Dictionary<string, object>(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42, 45, 66 }); TestContext.WriteLine(this.ToXmlDcs(bag)); } public string ToXmlDcs(object value, bool throwExceptions = false) { var ser = new DataContractSerializer(value.GetType(), null, int.MaxValue, true, false, null); MemoryStream ms = new MemoryStream(); ser.WriteObject(ms, value); return Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length); } This DOES work but produces some pretty heinous XML (formatted with line breaks and indentation here): <ArrayOfKeyValueOfstringanyType xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <KeyValueOfstringanyType> <Key>key</Key> <Value i:type="a:string" xmlns:a="http://www.w3.org/2001/XMLSchema">Value</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key2</Key> <Value i:type="a:decimal" xmlns:a="http://www.w3.org/2001/XMLSchema">100.10</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key3</Key> <Value i:type="a:guid" xmlns:a="http://schemas.microsoft.com/2003/10/Serialization/">2cd46d2a-a636-4af4-979b-e834d39b6d37</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key4</Key> <Value i:type="a:dateTime" xmlns:a="http://www.w3.org/2001/XMLSchema">2011-09-19T17:17:05.4406999-07:00</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key5</Key> <Value i:type="a:boolean" xmlns:a="http://www.w3.org/2001/XMLSchema">true</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key7</Key> <Value i:type="a:base64Binary" xmlns:a="http://www.w3.org/2001/XMLSchema">Ki1C</Value> </KeyValueOfstringanyType> </ArrayOfKeyValueOfstringanyType> Ouch! That seriously hurts the eye! :-) Worse though it's extremely verbose with all those repetitive namespace declarations. It's good to know that it works in a pinch, but for a human readable/editable solution or something lightweight to store in a database it's not quite ideal. Why should I care? As a little background, in one of my applications I have a need for a flexible property bag that is used on a free form database field on an otherwise static entity. Basically what I have is a standard database record to which arbitrary properties can be added in an XML based string field. I intend to expose those arbitrary properties as a collection from field data stored in XML. The concept is pretty simple: When loading write the data to the collection, when the data is saved serialize the data into an XML string and store it into the database. When reading the data pick up the XML and if the collection on the entity is accessed automatically deserialize the XML into the Dictionary. (I'll talk more about this in another post). While the DataContext Serializer would work, it's verbosity is problematic both for size of the generated XML strings and the fact that users can manually edit this XML based property data in an advanced mode. A clean(er) layout certainly would be preferable and more user friendly. Custom XMLSerialization with a PropertyBag Class So… after a bunch of experimentation with different serialization formats I decided to create a custom PropertyBag class that provides for a serializable Dictionary. 
It's basically a custom Dictionary<TType,TValue> implementation with the keys always set as string keys. The result are PropertyBag<TValue> and PropertyBag (which defaults to the object type for values). The PropertyBag<TType> and PropertyBag classes provide these features: Subclassed from Dictionary<T,V> Implements IXmlSerializable with a cleanish XML format ToXml() and FromXml() methods to export and import to and from XML strings Static CreateFromXml() method to create an instance It's simple enough as it's merely a Dictionary<string,object> subclass but that supports serialization to a - what I think at least - cleaner XML format. The class is super simple to use: [TestMethod] public void PropertyBagTwoWayObjectSerializationTest() { var bag = new PropertyBag(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42,45,66 } ); bag.Add("Key8", null); bag.Add("Key9", new ComplexObject() { Name = "Rick", Entered = DateTime.Now, Count = 10 }); string xml = bag.ToXml(); TestContext.WriteLine(bag.ToXml()); bag.Clear(); bag.FromXml(xml); Assert.IsTrue(bag["key"] as string == "Value"); Assert.IsInstanceOfType( bag["Key3"], typeof(Guid)); Assert.IsNull(bag["Key8"]); //Assert.IsNull(bag["Key10"]); Assert.IsInstanceOfType(bag["Key9"], typeof(ComplexObject)); } This uses the PropertyBag class which uses a PropertyBag<string,object> - which means it returns untyped values of type object. I suspect for me this will be the most common scenario as I'd want to store arbitrary values in the PropertyBag rather than one specific type. The same code with a strongly typed PropertyBag<decimal> looks like this: [TestMethod] public void PropertyBagTwoWayValueTypeSerializationTest() { var bag = new PropertyBag<decimal>(); bag.Add("key", 10M); bag.Add("Key1", 100.10M); bag.Add("Key2", 200.10M); bag.Add("Key3", 300.10M); string xml = bag.ToXml(); TestContext.WriteLine(bag.ToXml()); bag.Clear(); bag.FromXml(xml); Assert.IsTrue(bag.Get("Key1") == 100.10M); Assert.IsTrue(bag.Get("Key3") == 300.10M); } and produces typed results of type decimal. The types can be either value or reference types the combination of which actually proved to be a little more tricky than anticipated due to null and specific string value checks required - getting the generic typing right required use of default(T) and Convert.ChangeType() to trick the compiler into playing nice. Of course the whole raison d'etre for this class is the XML serialization. You can see in the code above that we're doing a .ToXml() and .FromXml() to serialize to and from string. 
The XML produced for the first example looks like this: <?xml version="1.0" encoding="utf-8"?> <properties> <item> <key>key</key> <value>Value</value> </item> <item> <key>Key2</key> <value type="decimal">100.10</value> </item> <item> <key>Key3</key> <value type="___System.Guid"> <guid>f7a92032-0c6d-4e9d-9950-b15ff7cd207d</guid> </value> </item> <item> <key>Key4</key> <value type="datetime">2011-09-26T17:45:58.5789578-10:00</value> </item> <item> <key>Key5</key> <value type="boolean">true</value> </item> <item> <key>Key7</key> <value type="base64Binary">Ki1C</value> </item> <item> <key>Key8</key> <value type="nil" /> </item> <item> <key>Key9</key> <value type="___Westwind.Tools.Tests.PropertyBagTest+ComplexObject"> <ComplexObject> <Name>Rick</Name> <Entered>2011-09-26T17:45:58.5789578-10:00</Entered> <Count>10</Count> </ComplexObject> </value> </item> </properties>   The format is a bit cleaner than the DataContractSerializer. Each item is serialized into <key> <value> pairs. If the value is a string no type information is written. Since string tends to be the most common type this saves space and serialization processing. All other types are attributed. Simple types are mapped to XML types so things like decimal, datetime, boolean and base64Binary are encoded using their Xml type values. All other types are embedded with a hokey format that describes the .NET type preceded by a three underscores and then are encoded using the XmlSerializer. You can see this best above in the ComplexObject encoding. For custom types this isn't pretty either, but it's more concise than the DCS and it works as long as you're serializing back and forth between .NET clients at least. The XML generated from the second example that uses PropertyBag<decimal> looks like this: <?xml version="1.0" encoding="utf-8"?> <properties> <item> <key>key</key> <value type="decimal">10</value> </item> <item> <key>Key1</key> <value type="decimal">100.10</value> </item> <item> <key>Key2</key> <value type="decimal">200.10</value> </item> <item> <key>Key3</key> <value type="decimal">300.10</value> </item> </properties>   How does it work As I mentioned there's nothing fancy about this solution - it's little more than a subclass of Dictionary<T,V> that implements custom Xml Serialization and a couple of helper methods that facilitate getting the XML in and out of the class more easily. But it's proven very handy for a number of projects for me where dynamic data storage is required. Here's the code: /// <summary> /// Creates a serializable string/object dictionary that is XML serializable /// Encodes keys as element names and values as simple values with a type /// attribute that contains an XML type name. Complex names encode the type /// name with type='___namespace.classname' format followed by a standard xml /// serialized format. The latter serialization can be slow so it's not recommended /// to pass complex types if performance is critical. /// </summary> [XmlRoot("properties")] public class PropertyBag : PropertyBag<object> { /// <summary> /// Creates an instance of a propertybag from an Xml string /// </summary> /// <param name="xml">Serialize</param> /// <returns></returns> public static PropertyBag CreateFromXml(string xml) { var bag = new PropertyBag(); bag.FromXml(xml); return bag; } } /// <summary> /// Creates a serializable string for generic types that is XML serializable. /// /// Encodes keys as element names and values as simple values with a type /// attribute that contains an XML type name. 
Complex names encode the type /// name with type='___namespace.classname' format followed by a standard xml /// serialized format. The latter serialization can be slow so it's not recommended /// to pass complex types if performance is critical. /// </summary> /// <typeparam name="TValue">Must be a reference type. For value types use type object</typeparam> [XmlRoot("properties")] public class PropertyBag<TValue> : Dictionary<string, TValue>, IXmlSerializable { /// <summary> /// Not implemented - this means no schema information is passed /// so this won't work with ASMX/WCF services. /// </summary> /// <returns></returns> public System.Xml.Schema.XmlSchema GetSchema() { return null; } /// <summary> /// Serializes the dictionary to XML. Keys are /// serialized to element names and values as /// element values. An xml type attribute is embedded /// for each serialized element - a .NET type /// element is embedded for each complex type and /// prefixed with three underscores. /// </summary> /// <param name="writer"></param> public void WriteXml(System.Xml.XmlWriter writer) { foreach (string key in this.Keys) { TValue value = this[key]; Type type = null; if (value != null) type = value.GetType(); writer.WriteStartElement("item"); writer.WriteStartElement("key"); writer.WriteString(key as string); writer.WriteEndElement(); writer.WriteStartElement("value"); string xmlType = XmlUtils.MapTypeToXmlType(type); bool isCustom = false; // Type information attribute if not string if (value == null) { writer.WriteAttributeString("type", "nil"); } else if (!string.IsNullOrEmpty(xmlType)) { if (xmlType != "string") { writer.WriteStartAttribute("type"); writer.WriteString(xmlType); writer.WriteEndAttribute(); } } else { isCustom = true; xmlType = "___" + value.GetType().FullName; writer.WriteStartAttribute("type"); writer.WriteString(xmlType); writer.WriteEndAttribute(); } // Actual deserialization if (!isCustom) { if (value != null) writer.WriteValue(value); } else { XmlSerializer ser = new XmlSerializer(value.GetType()); ser.Serialize(writer, value); } writer.WriteEndElement(); // value writer.WriteEndElement(); // item } } /// <summary> /// Reads the custom serialized format /// </summary> /// <param name="reader"></param> public void ReadXml(System.Xml.XmlReader reader) { this.Clear(); while (reader.Read()) { if (reader.NodeType == XmlNodeType.Element && reader.Name == "key") { string xmlType = null; string name = reader.ReadElementContentAsString(); // item element reader.ReadToNextSibling("value"); if (reader.MoveToNextAttribute()) xmlType = reader.Value; reader.MoveToContent(); TValue value; if (xmlType == "nil") value = default(TValue); // null else if (string.IsNullOrEmpty(xmlType)) { // value is a string or object and we can assign TValue to value string strval = reader.ReadElementContentAsString(); value = (TValue) Convert.ChangeType(strval, typeof(TValue)); } else if (xmlType.StartsWith("___")) { while (reader.Read() && reader.NodeType != XmlNodeType.Element) { } Type type = ReflectionUtils.GetTypeFromName(xmlType.Substring(3)); //value = reader.ReadElementContentAs(type,null); XmlSerializer ser = new XmlSerializer(type); value = (TValue)ser.Deserialize(reader); } else value = (TValue)reader.ReadElementContentAs(XmlUtils.MapXmlTypeToType(xmlType), null); this.Add(name, value); } } } /// <summary> /// Serializes this dictionary to an XML string /// </summary> /// <returns>XML String or Null if it fails</returns> public string ToXml() { string xml = null; SerializationUtils.SerializeObject(this, 
out xml); return xml; } /// <summary> /// Deserializes from an XML string /// </summary> /// <param name="xml"></param> /// <returns>true or false</returns> public bool FromXml(string xml) { this.Clear(); // if xml string is empty we return an empty dictionary if (string.IsNullOrEmpty(xml)) return true; var result = SerializationUtils.DeSerializeObject(xml, this.GetType()) as PropertyBag<TValue>; if (result != null) { foreach (var item in result) { this.Add(item.Key, item.Value); } } else // null is a failure return false; return true; } /// <summary> /// Creates an instance of a propertybag from an Xml string /// </summary> /// <param name="xml"></param> /// <returns></returns> public static PropertyBag<TValue> CreateFromXml(string xml) { var bag = new PropertyBag<TValue>(); bag.FromXml(xml); return bag; } } } The code uses a couple of small helper classes SerializationUtils and XmlUtils for mapping Xml types to and from .NET, both of which are from the Westwind.Utilities project (which is the same project where PropertyBag lives) from the West Wind Web Toolkit. The code implements ReadXml and WriteXml for the IXmlSerializable implementation using old school XmlReaders and XmlWriters (because it's pretty simple stuff - no need for XLinq here). Then there are two helper methods .ToXml() and .FromXml() that basically allow your code to easily convert between XML and a PropertyBag object. In my code that's what I use to persist to and from the entity XML property during .Load() and .Save() operations. It's sweet to be able to have a string key dictionary and then be able to turn around with 1 line of code to persist the whole thing to XML and back. Hopefully some of you will find this class as useful as I've found it. It's a simple solution to a common requirement in my applications and I've used the hell out of it in the short time since I created it. Resources You can find the complete code for the two classes plus the helpers in the Subversion repository for Westwind.Utilities. You can grab the source files from there or download the whole project. You can also grab the full Westwind.Utilities assembly from NuGet and add it to your project if that's easier for you. PropertyBag Source Code SerializationUtils and XmlUtils Westwind.Utilities Assembly on NuGet (add from Visual Studio) © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • Combination of Operating Mode and Commit Strategy

    - by Kevin Yang
    If you want to populate a source into multiple targets, you may also want to ensure that every row from the source affects all targets uniformly (or separately). Let’s consider the Example Mapping below. If a row from SOURCE causes different changes in multiple targets (TARGET_1, TARGET_2 and TARGET_3), for example, it can be successfully inserted into TARGET_1 and TARGET_3, but failed to be inserted into TARGET_2, and the current Mapping Property TLO (target load order) is “TARGET_1 -> TARGET_2 -> TARGET_3”. What should Oracle Warehouse Builder do, in order to commit the appropriate data to all affected targets at the same time? If it doesn’t behave as you intended, the data could become inaccurate and possibly unusable.                                               Example Mapping In OWB, we can use Mapping Configuration Commit Strategies and Operating Modes together to achieve this kind of requirements. Below we will explore the combination of these two features and how they affect the results in the target tables Before going to the example, let’s review some of the terms we will be using (Details can be found in white paper Oracle® Warehouse Builder Data Modeling, ETL, and Data Quality Guide11g Release 2): Operating Modes: Set-Based Mode: Warehouse Builder generates a single SQL statement that processes all data and performs all operations. Row-Based Mode: Warehouse Builder generates statements that process data row by row. The select statement is in a SQL cursor. All subsequent statements are PL/SQL. Row-Based (Target Only) Mode: Warehouse Builder generates a cursor select statement and attempts to include as many operations as possible in the cursor. For each target, Warehouse Builder inserts each row into the target separately. Commit Strategies: Automatic: Warehouse Builder loads and then automatically commits data based on the mapping design. If the mapping has multiple targets, Warehouse Builder commits and rolls back each target separately and independently of other targets. Use the automatic commit when the consequences of multiple targets being loaded unequally are not great or are irrelevant. Automatic correlated: It is a specialized type of automatic commit that applies to PL/SQL mappings with multiple targets only. Warehouse Builder considers all targets collectively and commits or rolls back data uniformly across all targets. Use the correlated commit when it is important to ensure that every row in the source affects all affected targets uniformly. Manual: select manual commit control for PL/SQL mappings when you want to interject complex business logic, perform validations, or run other mappings before committing data. Combination of the commit strategy and operating mode To understand the effects of each combination of operating mode and commit strategy, I’ll illustrate using the following example Mapping. Firstly we insert 100 rows into the SOURCE table and make sure that the 99th row and 100th row have the same ID value. And then we create a unique key constraint on ID column for TARGET_2 table. So while running the example mapping, OWB tries to load all 100 rows to each of the targets. But the mapping should fail to load the 100th row to TARGET_2, because it will violate the unique key constraint of table TARGET_2. 
With different combinations of Commit Strategy and Operating Mode, here are the results ¦ Set-based/ Correlated Commit: Configuration of Example mapping:                                                     Result:                                                      What’s happening: A single error anywhere in the mapping triggers the rollback of all data. OWB encounters the error inserting into Target_2, it reports an error for the table and does not load the row. OWB rolls back all the rows inserted into Target_1 and does not attempt to load rows to Target_3. No rows are added to any of the target tables. ¦ Row-based/ Correlated Commit: Configuration of Example mapping:                                                   Result:                                                  What’s happening: OWB evaluates each row separately and loads it to all three targets. Loading continues in this way until OWB encounters an error loading row 100th to Target_2. OWB reports the error and does not load the row. It rolls back the row 100th previously inserted into Target_1 and does not attempt to load row 100 to Target_3. Then, if there are remaining rows, OWB will continue loading them, resuming with loading rows to Target_1. The mapping completes with 99 rows inserted into each target. ¦ Set-based/ Automatic Commit: Configuration of Example mapping: Result: What’s happening: When OWB encounters the error inserting into Target_2, it does not load any rows and reports an error for the table. It does, however, continue to insert rows into Target_3 and does not roll back the rows previously inserted into Target_1. The mapping completes with one error message for Target_2, no rows inserted into Target_2, and 100 rows inserted into Target_1 and Target_3 separately. ¦ Row-based/Automatic Commit: Configuration of Example mapping: Result: What’s happening: OWB evaluates each row separately for loading into the targets. Loading continues in this way until OWB encounters an error loading row 100 to Target_2 and reports the error. OWB does not roll back row 100th from Target_1, does insert it into Target_3. If there are remaining rows, it will continue to load them. The mapping completes with 99 rows inserted into Target_2 and 100 rows inserted into each of the other targets. Note: Automatic Correlated commit is not applicable for row-based (target only). If you design a mapping with the row-based (target only) and correlated commit combination, OWB runs the mapping but does not perform the correlated commit. In set-based mode, correlated commit may impact the size of your rollback segments. Space for rollback segments may be a concern when you merge data (insert/update or update/insert). Correlated commit operates transparently with PL/SQL bulk processing code. The correlated commit strategy is not available for mappings run in any mode that are configured for Partition Exchange Loading or that include a Queue, Match Merge, or Table Function operator. If you want to practice in your own environment, you can follow the steps: 1. Import the MDL file: commit_operating_mode.mdl 2. Fix the location for oracle module ORCL and deploy all tables under it. 3. Insert sample records into SOURCE table, using below plsql code: begin     for i in 1..99     loop         insert into source values(i, 'col_'||i);     end loop;     insert into source values(99, 'col_99'); end; 4. Configure MAPPING_1 to any combinations of operating mode and commit strategy you want to test. And make sure feature TLO of mapping is open. 5. 
Deploy Mapping “MAPPING_1”. 6. Run the mapping and check the result.

    Read the article

  • Optimizing AES modes on Solaris for Intel Westmere

    - by danx
    Optimizing AES modes on Solaris for Intel Westmere Review AES is a strong method of symmetric (secret-key) encryption. It is a U.S. FIPS-approved cryptographic algorithm (FIPS 197) that operates on 16-byte blocks. AES has been available since 2001 and is widely used. However, AES by itself has a weakness. AES encryption isn't usually used by itself because identical blocks of plaintext are always encrypted into identical blocks of ciphertext. This encryption can be easily attacked with "dictionaries" of common blocks of text and allows one to more-easily discern the content of the unknown cryptotext. This mode of encryption is called "Electronic Code Book" (ECB), because one in theory can keep a "code book" of all known cryptotext and plaintext results to cipher and decipher AES. In practice, a complete "code book" is not practical, even in electronic form, but large dictionaries of common plaintext blocks is still possible. Here's a diagram of encrypting input data using AES ECB mode: Block 1 Block 2 PlainTextInput PlainTextInput | | | | \/ \/ AESKey-->(AES Encryption) AESKey-->(AES Encryption) | | | | \/ \/ CipherTextOutput CipherTextOutput Block 1 Block 2 What's the solution to the same cleartext input producing the same ciphertext output? The solution is to further process the encrypted or decrypted text in such a way that the same text produces different output. This usually involves an Initialization Vector (IV) and XORing the decrypted or encrypted text. As an example, I'll illustrate CBC mode encryption: Block 1 Block 2 PlainTextInput PlainTextInput | | | | \/ \/ IV >----->(XOR) +------------->(XOR) +---> . . . . | | | | | | | | \/ | \/ | AESKey-->(AES Encryption) | AESKey-->(AES Encryption) | | | | | | | | | \/ | \/ | CipherTextOutput ------+ CipherTextOutput -------+ Block 1 Block 2 The steps for CBC encryption are: Start with a 16-byte Initialization Vector (IV), choosen randomly. XOR the IV with the first block of input plaintext Encrypt the result with AES using a user-provided key. The result is the first 16-bytes of output cryptotext. Use the cryptotext (instead of the IV) of the previous block to XOR with the next input block of plaintext Another mode besides CBC is Counter Mode (CTR). As with CBC mode, it also starts with a 16-byte IV. However, for subsequent blocks, the IV is just incremented by one. Also, the IV ix XORed with the AES encryption result (not the plain text input). Here's an illustration: Block 1 Block 2 PlainTextInput PlainTextInput | | | | \/ \/ AESKey-->(AES Encryption) AESKey-->(AES Encryption) | | | | \/ \/ IV >----->(XOR) IV + 1 >---->(XOR) IV + 2 ---> . . . . | | | | \/ \/ CipherTextOutput CipherTextOutput Block 1 Block 2 Optimization Which of these modes can be parallelized? ECB encryption/decryption can be parallelized because it does more than plain AES encryption and decryption, as mentioned above. CBC encryption can't be parallelized because it depends on the output of the previous block. However, CBC decryption can be parallelized because all the encrypted blocks are known at the beginning. CTR encryption and decryption can be parallelized because the input to each block is known--it's just the IV incremented by one for each subsequent block. So, in summary, for ECB, CBC, and CTR modes, encryption and decryption can be parallelized with the exception of CBC encryption. How do we parallelize encryption? By interleaving. 
Usually when reading and writing data there are pipeline "stalls" (idle processor cycles) that result from waiting for memory to be loaded or stored to or from CPU registers. Since the software is written to encrypt/decrypt the next data block where pipeline stalls usually occurs, we can avoid stalls and crypt with fewer cycles. This software processes 4 blocks at a time, which ensures virtually no waiting ("stalling") for reading or writing data in memory. Other Optimizations Besides interleaving, other optimizations performed are Loading the entire key schedule into the 128-bit %xmm registers. This is done once for per 4-block of data (since 4 blocks of data is processed, when present). The following is loaded: the entire "key schedule" (user input key preprocessed for encryption and decryption). This takes 11, 13, or 15 registers, for AES-128, AES-192, and AES-256, respectively The input data is loaded into another %xmm register The same register contains the output result after encrypting/decrypting Using SSSE 4 instructions (AESNI). Besides the aesenc, aesenclast, aesdec, aesdeclast, aeskeygenassist, and aesimc AESNI instructions, Intel has several other instructions that operate on the 128-bit %xmm registers. Some common instructions for encryption are: pxor exclusive or (very useful), movdqu load/store a %xmm register from/to memory, pshufb shuffle bytes for byte swapping, pclmulqdq carry-less multiply for GCM mode Combining AES encryption/decryption with CBC or CTR modes processing. Instead of loading input data twice (once for AES encryption/decryption, and again for modes (CTR or CBC, for example) processing, the input data is loaded once as both AES and modes operations occur at in the same function Performance Everyone likes pretty color charts, so here they are. I ran these on Solaris 11 running on a Piketon Platform system with a 4-core Intel Clarkdale processor @3.20GHz. Clarkdale which is part of the Westmere processor architecture family. The "before" case is Solaris 11, unmodified. Keep in mind that the "before" case already has been optimized with hand-coded Intel AESNI assembly. The "after" case has combined AES-NI and mode instructions, interleaved 4 blocks at-a-time. « For the first table, lower is better (milliseconds). The first table shows the performance improvement using the Solaris encrypt(1) and decrypt(1) CLI commands. I encrypted and decrypted a 1/2 GByte file on /tmp (swap tmpfs). Encryption improved by about 40% and decryption improved by about 80%. AES-128 is slighty faster than AES-256, as expected. The second table shows more detail timings for CBC, CTR, and ECB modes for the 3 AES key sizes and different data lengths. » The results shown are the percentage improvement as shown by an internal PKCS#11 microbenchmark. And keep in mind the previous baseline code already had optimized AESNI assembly! The keysize (AES-128, 192, or 256) makes little difference in relative percentage improvement (although, of course, AES-128 is faster than AES-256). Larger data sizes show better improvement than 128-byte data. Availability This software is in Solaris 11 FCS. It is available in the 64-bit libcrypto library and the "aes" Solaris kernel module. You must be running hardware that supports AESNI (for example, Intel Westmere and Sandy Bridge, microprocessor architectures). The easiest way to determine if AES-NI is available is with the isainfo(1) command. 
For example:

$ isainfo -v
64-bit amd64 applications
        pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
32-bit i386 applications
        pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

No special configuration or setup is needed to take advantage of this software. The Solaris libraries and kernel automatically determine whether they are running on an AESNI-capable machine and execute the correctly-tuned software for the current microprocessor.

Summary

Maximum throughput of AES cipher modes can be achieved by combining AES encryption with mode processing, interleaving the encryption of 4 blocks at a time, and using Intel's wide 128-bit %xmm registers and instructions.

References

- "Block cipher modes of operation", Wikipedia -- a good overview of AES modes (ECB, CBC, CTR, etc.)
- "Advanced Encryption Standard", Wikipedia
- "Current Modes" -- describes the NIST-approved block cipher modes (ECB, CBC, CFB, OFB, CCM, GCM)

    Read the article

  • ASP.NET WebAPI Security 5: JavaScript Clients

    - by Your DisplayName here!
    All samples I showed in my last post were in C#. Christian contributed another client sample in some strange language that is supposed to work well in browsers ;)

JavaScript client scenarios

There are two fundamental scenarios when it comes to JavaScript clients. The most common is probably that the JS code originates from the same web application that also contains the web APIs. Think of a web page that does some AJAX-style callbacks to an API that belongs to that web app – validation, data access, etc. come to mind. Single page apps often fall into that category. The good news here is that this scenario just works. The typical course of events is that the user first logs on to the web application – which will result in an authentication cookie of some sort. That cookie will get round-tripped with your AJAX calls and ASP.NET does its magic to establish a client identity context. Since WebAPI inherits the security context from its (web) host, the client identity is also available here.

The other fundamental scenario is JavaScript code *not* running in the context of the WebAPI hosting application. This is more or less just like a normal desktop client – either running in the browser, or natively if you think of Windows 8 Metro style apps as “real” desktop apps. In that scenario we do exactly the same as the samples did in my last post – obtain a token, then use it to call the service. Obtaining a token from IdentityServer’s resource owner credential OAuth2 endpoint could look like this:

thinktectureIdentityModel.BrokeredAuthentication = function (stsEndpointAddress, scope) {
    this.stsEndpointAddress = stsEndpointAddress;
    this.scope = scope;
};

thinktectureIdentityModel.BrokeredAuthentication.prototype = function () {
    getIdpToken = function (un, pw, callback) {
        $.ajax({
            type: 'POST',
            cache: false,
            url: this.stsEndpointAddress,
            data: { grant_type: "password", username: un, password: pw, scope: this.scope },
            success: function (result) {
                callback(result.access_token);
            },
            error: function (error) {
                if (error.status == 401) {
                    alert('Unauthorized');
                }
                else {
                    alert('Error calling STS: ' + error.responseText);
                }
            }
        });
    };

    createAuthenticationHeader = function (token) {
        var tok = 'IdSrv ' + token;
        return tok;
    };

    return {
        getIdpToken: getIdpToken,
        createAuthenticationHeader: createAuthenticationHeader
    };
} ();

Calling the service with the requested token could look like this:

function getIdentityClaimsFromService() {
    authHeader = authN.createAuthenticationHeader(token);

    $.ajax({
        type: 'GET',
        cache: false,
        url: serviceEndpoint,
        beforeSend: function (req) {
            req.setRequestHeader('Authorization', authHeader);
        },
        success: function (result) {
            $.each(result.Claims, function (key, val) {
                $('#claims').append($('<li>' + val.Value + '</li>'));
            });
        },
        error: function (error) {
            alert('Error: ' + error.responseText);
        }
    });
}

I updated the github repository, you can play around with the code yourself.
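For completeness, here is a hedged sketch of how the two pieces above might be wired together on a page. The endpoint URLs, scope, and element usage below are made-up placeholders for illustration, not values from the original sample:

// Hypothetical wiring of the snippets above; URLs and scope are placeholders.
var serviceEndpoint = 'https://myserver/api/identity';                // assumed service URL
var authN = new thinktectureIdentityModel.BrokeredAuthentication(
    'https://myserver/idsrv/issue/oauth2/token',                      // assumed STS endpoint
    'https://myserver/api/');                                         // assumed scope

var token;
var authHeader;

// Obtain a token first, then call the claims service with it.
function login(userName, password) {
    authN.getIdpToken(userName, password, function (accessToken) {
        token = accessToken;
        getIdentityClaimsFromService();
    });
}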

    Read the article

  • RiverTrail - JavaScript GPGPU Data Parallelism

    - by JoshReuben
    Where is WebCL? The Khronos WebCL working group is working on a JavaScript binding to the OpenCL standard so that HTML 5 compliant browsers can host GPGPU web apps – e.g. for image processing or physics for WebGL games - http://www.khronos.org/webcl/ . While Nokia & Samsung have some prototype WebCL APIs, Intel has one-upped them with a higher level of abstraction: RiverTrail.

Intro to RiverTrail

Intel Labs' JavaScript RiverTrail provides GPU-accelerated SIMD data-parallelism in web applications via a familiar JavaScript programming paradigm. It extends JavaScript with simple deterministic data-parallel constructs that are translated at runtime into a low-level hardware abstraction layer. With its high-level JS API, programmers do not have to learn a new language or explicitly manage threads, orchestrate shared data synchronization or scheduling. It has been proposed as a draft specification to ECMA (known as an ECMA strawman). RiverTrail runs in all popular browsers (except I.E. of course). To get started, download a prebuilt version https://github.com/downloads/RiverTrail/RiverTrail/rivertrail-0.17.xpi , install Intel's OpenCL SDK http://www.intel.com/go/opencl and try out the interactive River Trail shell http://rivertrail.github.com/interactive . For a video overview, see http://www.youtube.com/watch?v=jueg6zB5XaM .

ParallelArray

The ParallelArray type is the central component of this API and is a JS object that contains ordered collections of scalars – i.e. multidimensional uniform arrays. A shape property describes the dimensionality and size – e.g. a 2D RGBA image will have shape [height, width, 4]. ParallelArrays are immutable and fluent – they are manipulated by invoking methods on them which produce new ParallelArray objects. ParallelArray supports several constructors over arrays, functions and even the canvas.

// Create an empty ParallelArray
var pa = new ParallelArray();
// pa0 = <>

// Create a ParallelArray out of a nested JS array.
// Note that the inner arrays are also ParallelArrays
var pa = new ParallelArray([ [0,1], [2,3], [4,5] ]);
// pa1 = <<0,1>, <2,3>, <4,5>>

// Create a two-dimensional ParallelArray with shape [3, 2] using the comprehension constructor
var pa = new ParallelArray([3, 2], function(iv){ return iv[0] * iv[1]; });
// pa7 = <<0,0>, <0,1>, <0,2>>

// Create a ParallelArray from canvas. This creates a PA with shape [w, h, 4].
var pa = new ParallelArray(canvas);
// pa8 = CanvasPixelArray

ParallelArray exposes fluent API functions that take an elemental JS function for data manipulation: map, combine, scan, filter, and scatter all return a new ParallelArray. Other functions are scalar – reduce returns a scalar value and get returns the value located at a given index. The onus is on the developer to ensure that the elemental function does not defeat the data-parallelization optimization (avoid global variable manipulation and recursion). For reduce and scan, order is not guaranteed – the onus is on the developer to provide an elemental function that is commutative and associative so that the result will be deterministic – e.g. Sum is associative, but Avg is not.

map

Applies a provided elemental function to each element of the source array and stores the result in the corresponding position in the result array. The map method is shape preserving and index free – it cannot inspect neighboring values.

// Adding one to each element.
var source = new ParallelArray([1,2,3,4,5]);
var plusOne = source.map(function inc(v) {
    return v+1;
}); // <2,3,4,5,6>

combine

Combine is similar to map, except an index is provided. This allows elemental functions to access elements from the source array relative to the one at the current index position. While the map method operates on the outermost dimension only, combine can choose how deep to traverse – it provides a depth argument to specify the number of dimensions it iterates over. The elemental function of combine accesses the source array and the current index within it – the element is computed by calling the get method of the source ParallelArray object with index i as argument. It requires more code but is more expressive.

var source = new ParallelArray([1,2,3,4,5]);
var plusOne = source.combine(function inc(i) { return this.get(i)+1; });

reduce

Reduces the elements of an array to a single scalar result – e.g. Sum.

// Calculate the sum of the elements
var source = new ParallelArray([1,2,3,4,5]);
var sum = source.reduce(function plus(a,b) { return a+b; });

scan

Like reduce, but stores the intermediate results – returns a ParallelArray whose ith element is the result of using the elemental function to reduce the elements between 0 and i in the original ParallelArray.

// do a partial sum
var source = new ParallelArray([1,2,3,4,5]);
var psum = source.scan(function plus(a,b) { return a+b; }); // <1, 3, 6, 10, 15>

scatter

A reordering function – for each source index, specify where its value should be stored in the result array. An optional conflict function can prevent an exception if two source values are assigned the same position of the result:

var source = new ParallelArray([1,2,3,4,5]);
var reorder = source.scatter([4,0,3,1,2]); // <2, 4, 5, 3, 1>

// if there is a conflict use the max; use 33 as a default value.
var reorder = source.scatter([4,0,3,4,2], 33, function max(a, b) { return a>b?a:b; }); // <2, 33, 5, 3, 4>

filter

// filter out values that are not even
var source = new ParallelArray([1,2,3,4,5]);
var even = source.filter(function even(iv) { return (this.get(iv) % 2) == 0; }); // <2,4>

flatten

Used to collapse the outer dimensions of an array into a single dimension.

var pa = new ParallelArray([ [1,2], [3,4] ]); // <<1,2>,<3,4>>
pa.flatten(); // <1,2,3,4>

partition

Used to restore the original shape of the array.

var pa = new ParallelArray([1,2,3,4]); // <1,2,3,4>
pa.partition(2); // <<1,2>,<3,4>>

get

Returns the value found at the given indices, or undefined if no such value exists.

var pa = new ParallelArray([0,1,2,3,4], [10,11,12,13,14], [20,21,22,23,24]);
pa.get([1,1]); // 11
pa.get([1]); // <10,11,12,13,14>
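As a small worked example (mine, not from the article) that combines several of the methods above, here is a dot product built from combine and reduce, assuming the ParallelArray semantics described earlier:

// Dot product of two vectors using the ParallelArray API sketched above.
var a = new ParallelArray([1, 2, 3, 4]);
var b = new ParallelArray([10, 20, 30, 40]);

// Element-wise multiply: combine passes the index, so we can also read from b.
var products = a.combine(function (i) { return this.get(i) * b.get(i); });

// Sum is associative, so reduce is deterministic here despite the unspecified order.
var dot = products.reduce(function (x, y) { return x + y; }); // 300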

    Read the article

< Previous Page | 275 276 277 278 279 280 281 282 283 284 285 286  | Next Page >