Search Results

Search found 61651 results on 2467 pages for 'function object'.


  • Any other ideas for prototyping?

    - by davehamptonusa
    I've used Douglas Crockford's Object.beget, but augmented it slightly to: Object.spawn = function (o, spec) { var F = function () {}, that = {}, node = {}; F.prototype = o; that = new F(); for (node in spec) { if (spec.hasOwnProperty(node)) { that[node] = spec[node]; } } return that; }; This way you can "beget" and augment in one fell swoop. var fop = Object.spawn(bar, { a: 'fast', b: 'prototyping' }); In English that means, "Make me a new object called 'fop' with 'bar' as its prototype, but change or add the members 'a' and 'b'." You can even nest the spec to prototype deeper elements, should you choose. var fop = Object.spawn(bar, { a: 'fast', b: Object.spawn(quux,{ farple: 'deep' }), c: 'prototyping' }); This can help avoid hopping into an object's prototype unintentionally in a long object name like: foo.bar.quux.peanut = 'farple'; If quux is part of the prototype and not foo's own object, your change to 'peanut' will actually change the prototype, affecting all objects prototyped by foo's prototype object. But I digress... My question is this. Because your spec can itself be another object, and that object could itself have properties from its prototype in your new object - and you may want those properties... (at least you should be aware of them before you decide to use it as a spec)... I want to be able to grab all of the elements from all of the spec's prototype chain, except for the prototype object itself... This would flatten them into the new object. Should I use: Object.spawn = function (o, spec) { var F = function () {}, that = {}, node = {}; F.prototype = o; that = new F(); for (node in spec) { that[node] = spec[node]; } that.prototype = o; return that; }; I would love thoughts and suggestions...


  • Writing Unit Tests for an ASP.NET MVC Action Method that handles Ajax Request and Normal Request

    - by shiju
    In this blog post, I will demonstrate how to write unit tests for an ASP.NET MVC action method, which handles both Ajax request and normal HTTP Request. I will write a unit test for specifying the behavior of an Ajax request and will write another unit test for specifying the behavior of a normal HTTP request. Both Ajax request and normal request will be handled by a single action method. So the ASP.NET MVC action method will be execute HTTP Request object’s IsAjaxRequest method for identifying whether it is an Ajax request or not. So we have to create mock object for Request object and also have to make as a Ajax request from the unit test for verifying the behavior of an Ajax request. I have used NUnit and Moq for writing unit tests. Let me write a unit test for a Ajax request Code Snippet [Test] public void Index_AjaxRequest_Returns_Partial_With_Expense_List() {     // Arrange       Mock<HttpRequestBase> request = new Mock<HttpRequestBase>();     Mock<HttpResponseBase> response = new Mock<HttpResponseBase>();     Mock<HttpContextBase> context = new Mock<HttpContextBase>();       context.Setup(c => c.Request).Returns(request.Object);     context.Setup(c => c.Response).Returns(response.Object);     //Add XMLHttpRequest request header     request.Setup(req => req["X-Requested-With"]).         Returns("XMLHttpRequest");       IEnumerable<Expense> fakeExpenses = GetMockExpenses();     expenseRepository.Setup(x => x.GetMany(It.         IsAny<Expression<Func<Expense, bool>>>())).         Returns(fakeExpenses);     ExpenseController controller = new ExpenseController(         commandBus.Object, categoryRepository.Object,         expenseRepository.Object);     controller.ControllerContext = new ControllerContext(         context.Object, new RouteData(), controller);     // Act     var result = controller.Index(null, null) as PartialViewResult;     // Assert     Assert.AreEqual("_ExpenseList", result.ViewName);     Assert.IsNotNull(result, "View Result is null");     Assert.IsInstanceOf(typeof(IEnumerable<Expense>),             result.ViewData.Model, "Wrong View Model");     var expenses = result.ViewData.Model as IEnumerable<Expense>;     Assert.AreEqual(3, expenses.Count(),         "Got wrong number of Categories");         }   In the above unit test, we are calling Index action method of a controller named ExpenseController, which will returns a PartialView named _ExpenseList, if it is an Ajax request. We have created mock object for HTTPContextBase and setup XMLHttpRequest request header for Request object’s X-Requested-With for making it as a Ajax request. We have specified the ControllerContext property of the controller with mocked object HTTPContextBase. Code Snippet controller.ControllerContext = new ControllerContext(         context.Object, new RouteData(), controller); Let me write a unit test for a normal HTTP method Code Snippet [Test] public void Index_NormalRequest_Returns_Index_With_Expense_List() {     // Arrange               Mock<HttpRequestBase> request = new Mock<HttpRequestBase>();     Mock<HttpResponseBase> response = new Mock<HttpResponseBase>();     Mock<HttpContextBase> context = new Mock<HttpContextBase>();       context.Setup(c => c.Request).Returns(request.Object);     context.Setup(c => c.Response).Returns(response.Object);       IEnumerable<Expense> fakeExpenses = GetMockExpenses();       expenseRepository.Setup(x => x.GetMany(It.         IsAny<Expression<Func<Expense, bool>>>())).         
Returns(fakeExpenses);     ExpenseController controller = new ExpenseController(         commandBus.Object, categoryRepository.Object,         expenseRepository.Object);     controller.ControllerContext = new ControllerContext(         context.Object, new RouteData(), controller);     // Act     var result = controller.Index(null, null) as ViewResult;     // Assert     Assert.AreEqual("Index", result.ViewName);     Assert.IsNotNull(result, "View Result is null");     Assert.IsInstanceOf(typeof(IEnumerable<Expense>),             result.ViewData.Model, "Wrong View Model");     var expenses = result.ViewData.Model         as IEnumerable<Expense>;     Assert.AreEqual(3, expenses.Count(),         "Got wrong number of Categories"); }   In the above unit test, we are not specifying the XMLHttpRequest request header for Request object’s X-Requested-With, so that it will be normal HTTP Request. If this is a normal request, the action method will return a ViewResult with a view template named Index. The below is the implementation of Index action method Code Snippet public ActionResult Index(DateTime? startDate, DateTime? endDate) {     //If date is not passed, take current month's first and last date     DateTime dtNow;     dtNow = DateTime.Today;     if (!startDate.HasValue)     {         startDate = new DateTime(dtNow.Year, dtNow.Month, 1);         endDate = startDate.Value.AddMonths(1).AddDays(-1);     }     //take last date of start date's month, if end date is not passed     if (startDate.HasValue && !endDate.HasValue)     {         endDate = (new DateTime(startDate.Value.Year,             startDate.Value.Month, 1)).AddMonths(1).AddDays(-1);     }     var expenses = expenseRepository.GetMany(         exp => exp.Date >= startDate && exp.Date <= endDate);     //if request is Ajax will return partial view     if (Request.IsAjaxRequest())     {         return PartialView("_ExpenseList", expenses);     }     //set start date and end date to ViewBag dictionary     ViewBag.StartDate = startDate.Value.ToShortDateString();     ViewBag.EndDate = endDate.Value.ToShortDateString();     //if request is not ajax     return View("Index",expenses); }   The index action method will returns a PartialView named _ExpenseList, if it is an Ajax request and will returns a View named Index if it is a normal request. Source Code The source code has been taken from my EFMVC app which can download from here
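
    The tests above reference a few pieces the post does not show: the mocked dependencies (commandBus, categoryRepository, expenseRepository) and the GetMockExpenses() helper. A minimal sketch of that fixture setup follows; the interface names and the stand-in Expense type are assumptions made for illustration and will differ from the real EFMVC code (only the GetMany signature is taken from the tests above).

    using System;
    using System.Collections.Generic;
    using System.Linq.Expressions;
    using Moq;
    using NUnit.Framework;

    // Stand-ins for types that live in the application under test (names assumed).
    public interface ICommandBus { }
    public interface ICategoryRepository { }
    public interface IExpenseRepository
    {
        IEnumerable<Expense> GetMany(Expression<Func<Expense, bool>> predicate);
    }
    public class Expense { public DateTime Date { get; set; } }

    [TestFixture]
    public class ExpenseControllerTests
    {
        private Mock<ICommandBus> commandBus;
        private Mock<ICategoryRepository> categoryRepository;
        private Mock<IExpenseRepository> expenseRepository;

        [SetUp]
        public void SetUp()
        {
            commandBus = new Mock<ICommandBus>();
            categoryRepository = new Mock<ICategoryRepository>();
            expenseRepository = new Mock<IExpenseRepository>();
        }

        // Stand-in for GetMockExpenses(): three expenses dated inside the current
        // month, so the controller's default date filter returns all of them and
        // the Assert.AreEqual(3, expenses.Count()) checks pass.
        private static IEnumerable<Expense> GetMockExpenses()
        {
            DateTime firstOfMonth = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 1);
            return new List<Expense>
            {
                new Expense { Date = firstOfMonth.AddDays(1) },
                new Expense { Date = firstOfMonth.AddDays(2) },
                new Expense { Date = firstOfMonth.AddDays(3) }
            };
        }
    }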


  • Anatomy of a .NET Assembly - Custom attribute encoding

    - by Simon Cooper
    In my previous post, I covered how field, method, and other types of signatures are encoded in a .NET assembly. Custom attribute signatures differ quite a bit from these, which consequently affects attribute specifications in C#. Custom attribute specifications In C#, you can apply a custom attribute to a type or type member, specifying a constructor as well as the values of fields or properties on the attribute type: public class ExampleAttribute : Attribute { public ExampleAttribute(int ctorArg1, string ctorArg2) { ... } public Type ExampleType { get; set; } } [Example(5, "6", ExampleType = typeof(string))] public class C { ... } How does this specification actually get encoded and stored in an assembly? Specification blob values Custom attribute specification signatures use the same building blocks as other types of signatures; the ELEMENT_TYPE structure. However, they significantly differ from other types of signatures, in that the actual parameter values need to be stored along with type information. There are two types of specification arguments in a signature blob; fixed args and named args. Fixed args are the arguments to the attribute type constructor, named arguments are specified after the constructor arguments to provide a value to a field or property on the constructed attribute type (PropertyName = propValue) Values in an attribute blob are limited to one of the basic types (one of the number types, character, or boolean), a reference to a type, an enum (which, in .NET, has to use one of the integer types as a base representation), or arrays of any of those. Enums and the basic types are easy to store in a blob - you simply store the binary representation. Strings are stored starting with a compressed integer indicating the length of the string, followed by the UTF8 characters. Array values start with an integer indicating the number of elements in the array, then the item values concatentated together. Rather than using a coded token, Type values are stored using a string representing the type name and fully qualified assembly name (for example, MyNs.MyType, MyAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef). If the type is in the current assembly or mscorlib then just the type name can be used. This is probably done to prevent direct references between assemblies solely because of attribute specification arguments; assemblies can be loaded in the reflection-only context and attribute arguments still processed, without loading the entire assembly. Fixed and named arguments Each entry in the CustomAttribute metadata table contains a reference to the object the attribute is applied to, the attribute constructor, and the specification blob. The number and type of arguments to the constructor (the fixed args) can be worked out by the method signature referenced by the attribute constructor, and so the fixed args can simply be concatenated together in the blob without any extra type information. Named args are different. These specify the value to assign to a field or property once the attribute type has been constructed. In the CLR, fields and properties can be overloaded just on their type; different fields and properties can have the same name. Therefore, to uniquely identify a field or property you need: Whether it's a field or property (indicated using byte values 0x53 and 0x54, respectively) The field or property type The field or property name After the fixed arg values is a 2-byte number specifying the number of named args in the blob. 
Each named argument has the above information concatenated together, mostly using the basic ELEMENT_TYPE values, in the same way as a method or field signature. A Type argument is represented using the byte 0x50, and an enum argument is represented using the byte 0x55 followed by a string specifying the name and assembly of the enum type. The named argument property information is followed by the argument value, using the same encoding as fixed args. Boxed objects This would be all very well, were it not for object and object[]. Arguments and properties of type object allow a value of any allowed argument type to be specified. As a result, more information needs to be specified in the blob to interpret the argument bytes as the correct type. So, the argument value is simple prepended with the type of the value by specifying the ELEMENT_TYPE or name of the enum the value represents. For named arguments, a field or property of type object is represented using the byte 0x51, with the actual type specified in the argument value. Some examples... All property signatures start with the 2-byte value 0x0001. Similar to my previous post in the series, names in capitals correspond to a particular byte value in the ELEMENT_TYPE structure. For strings, I'll simply give the string value, rather than the length and UTF8 encoding in the actual blob. I'll be using the following enum and attribute types to demonstrate specification encodings: class AttrAttribute : Attribute { public AttrAttribute() {} public AttrAttribute(Type[] tArray) {} public AttrAttribute(object o) {} public AttrAttribute(MyEnum e) {} public AttrAttribute(ushort x, int y) {} public AttrAttribute(string str, Type type1, Type type2) {} public int Prop1 { get; set; } public object Prop2 { get; set; } public object[] ObjectArray; } enum MyEnum : int { Val1 = 1, Val2 = 2 } Now, some examples: Here, the the specification binds to the (ushort, int) attribute constructor, with fixed args only. The specification blob starts off with a prolog, followed by the two constructor arguments, then the number of named arguments (zero): [Attr(42, 84)] 0x0001 0x002a 0x00000054 0x0000 An example of string and type encoding: [Attr("MyString", typeof(Array), typeof(System.Windows.Forms.Form))] 0x0001 "MyString" "System.Array" "System.Windows.Forms.Form, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" 0x0000 As you can see, the full assembly specification of a type is only needed if the type isn't in the current assembly or mscorlib. Note, however, that the C# compiler currently chooses to fully-qualify mscorlib types anyway. An object argument (this binds to the object attribute constructor), and two named arguments (a null string is represented by 0xff and the empty string by 0x00) [Attr((ushort)40, Prop1 = 12, Prop2 = "")] 0x0001 U2 0x0028 0x0002 0x54 I4 "Prop1" 0x0000000c 0x54 0x51 "Prop2" STRING 0x00 Right, more complicated now. A type array as a fixed argument: [Attr(new[] { typeof(string), typeof(object) })] 0x0001 0x00000002 // the number of elements "System.String" "System.Object" 0x0000 An enum value, which is simply represented using the underlying value. 
The CLR works out that it's an enum using information in the attribute constructor signature: [Attr(MyEnum.Val1)] 0x0001 0x00000001 0x0000 And finally, a null array, and an object array as a named argument: [Attr((Type[])null, ObjectArray = new object[] { (byte)2, typeof(decimal), null, MyEnum.Val2 })] 0x0001 0xffffffff 0x0001 0x53 SZARRAY 0x51 "ObjectArray" 0x00000004 U1 0x02 0x50 "System.Decimal" STRING 0xff 0x55 "MyEnum" 0x00000002 As you'll notice, a null object is encoded as a null string value, and a null array is represented using a length of -1 (0xffffffff). How does this affect C#? So, we can now explain why the limits on attribute arguments are so strict in C#. Attribute specification blobs are limited to basic numbers, enums, types, and arrays. As you can see, this is because the raw CLR encoding can only accommodate those types. Special byte patterns have to be used to indicate object, string, Type, or enum values in named arguments; you can't specify an arbitary object type, as there isn't a generalised way of encoding the resulting value in the specification blob. In particular, decimal values can't be encoded, as it isn't a 'built-in' CLR type that has a native representation (you'll notice that decimal constants in C# programs are compiled as several integer arguments to DecimalConstantAttribute). Jagged arrays also aren't natively supported, although you can get around it by using an array as a value to an object argument: [Attr(new object[] { new object[] { new Type[] { typeof(string) } }, 42 })] Finally... Phew! That was a bit longer than I thought it would be. Custom attribute encodings are complicated! Hopefully this series has been an informative look at what exactly goes on inside a .NET assembly. In the next blog posts, I'll be carrying on with the 'Inside Red Gate' series.
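
    As a companion to the walkthrough above (not part of the original post): System.Reflection.CustomAttributeData exposes exactly the fixed/named argument split described here, and it works without constructing the attribute instance because it reads the stored specification rather than running the constructor. A small example using the ExampleAttribute from the start of the post:

    using System;
    using System.Reflection;

    public class ExampleAttribute : Attribute
    {
        public ExampleAttribute(int ctorArg1, string ctorArg2) { }
        public Type ExampleType { get; set; }
    }

    [Example(5, "6", ExampleType = typeof(string))]
    public class C { }

    internal static class Program
    {
        private static void Main()
        {
            foreach (CustomAttributeData cad in CustomAttributeData.GetCustomAttributes(typeof(C)))
            {
                // Fixed args: the constructor arguments, in blob order.
                foreach (CustomAttributeTypedArgument fixedArg in cad.ConstructorArguments)
                {
                    Console.WriteLine("fixed: {0} ({1})", fixedArg.Value, fixedArg.ArgumentType);
                }

                // Named args: the field/property assignments stored after the fixed args.
                foreach (CustomAttributeNamedArgument namedArg in cad.NamedArguments)
                {
                    Console.WriteLine("named: {0} = {1}", namedArg.MemberInfo.Name, namedArg.TypedValue.Value);
                }
            }
        }
    }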


  • Urgent: Haskell mini interpreter

    - by mohamed elshikh
    I'm asked to implement this project and I have problems in part b, which is the eval function. This is the full description of the project: You are required to implement an interpreter for the mini-Haskell language. An interpreter is defined in Wikipedia as a computer program that executes, i.e. performs, instructions written in a programming language. The interpreter should be able to evaluate functions written in a special notation, which you will define. A function is defined by: Function name Input Parameters: defined as a list of variables. The body of the function. The body of the function can be any of the following statements: a) Variable: The function may return any of the input variables. b) Arithmetic Expressions: The arithmetic expressions include input variables and addition, subtraction, multiplication, division and modulus operations on arithmetic expressions. c) Boolean Expressions: The Boolean expressions include the ordering of arithmetic expressions (applying the relationships: <, =<, >, >= or =) and the anding, oring and negation of Boolean expressions. d) If-then-else statements: where the if keyword is followed by a Boolean expression. The then and else parts may be followed by any of the statements described here. e) Guarded expressions: where each case consists of a Boolean expression and any of the statements described here. The expression consists of any number of cases. The first case whose condition is true has its body evaluated. The guarded expression has to terminate with an otherwise case. f) Function calls: the body of the function may have a call to another function. Note that all inputs passed to the function will be of type Int. The output of the function can be of type Int or Bool. To implement the interpreter, you are required to implement the following: a) Define a datatype for the following expressions: Variables Arithmetic expressions Boolean expressions If-then-else statements Guarded expressions Functions b) Implement the function eval which evaluates a function. It takes 3 inputs: The name of a function to be evaluated, represented as a string. A list of inputs to that function. The arguments will always be of datatype Int. A list of functions. Each function is represented as an instance of the datatype that you have created for functions. c) Implement the function get_type that returns the type of the function (as a string). The input to this function is the same as in part b. Here is what I've done: data Variable = v(char) data Arth= va Variable | Add Arth Arth | Sub Arth Arth | Times Arth Arth | Divide Arth Arth data Bol= Great Arth Arth | Small Arth Arth | Geq Arth Arth | Seq Arth Arth | And Bol Bol | Or Bol Bol | Neg Bol data Cond = data Guard = data Fun =cons String [Variable] Body data Body= bodycons(String) |Bol |Cond |Guard |Arth


  • How does formatting work with a PowerShell function that returns a set of elements?

    - by Steve B
    If I write this small function : function Foo { Get-Process | % { $_ } } And if I run Foo It displays only a small subset of properties: PS C:\Users\Administrator> foo Handles NPM(K) PM(K) WS(K) VM(M) CPU(s) Id ProcessName ------- ------ ----- ----- ----- ------ -- ----------- 86 10 1680 412 31 0,02 5916 alg 136 10 2772 2356 78 0,06 3684 atieclxx 123 7 1780 1040 33 0,03 668 atiesrxx ... ... But even if only 8 columns are shown, there are plenty of other properties (as foo | gm is showing). What is causing this function to show only this 8 properties? I'm actually trying to build a similar function that is returning complex objects from a 3rd party .Net library. The library is flatting a 2 level hierarchy of objects : function Actual { $someDotnetObject.ACollectionProperty.ASecondLevelCollection | % { $_ } } This method is dumping the objects in a list form (one line per property). How can I control what is displayed, keeping the actual object available? I have tried this : function Actual { $someDotnetObject.ACollectionProperty.ASecondLevelCollection | % { $_ } | format-table Property1, Property2 } It shows in a console the expected table : Property1 Property2 --------- --------- ValA ValD ValB ValE ValC ValF But I lost my objects. Running Get-Member on the result shows : TypeName: Microsoft.PowerShell.Commands.Internal.Format.FormatStartData Name MemberType Definition ---- ---------- ---------- Equals Method bool Equals(System.Object obj) GetHashCode Method int GetHashCode() GetType Method type GetType() ToString Method string ToString() autosizeInfo Property Microsoft.PowerShell.Commands.Internal.Format.AutosizeInfo autosizeInfo {get;set;} ClassId2e4f51ef21dd47e99d3c952918aff9cd Property System.String ClassId2e4f51ef21dd47e99d3c952918aff9cd {get;} groupingEntry Property Microsoft.PowerShell.Commands.Internal.Format.GroupingEntry groupingEntry {get;set;} pageFooterEntry Property Microsoft.PowerShell.Commands.Internal.Format.PageFooterEntry pageFooterEntry {get;set;} pageHeaderEntry Property Microsoft.PowerShell.Commands.Internal.Format.PageHeaderEntry pageHeaderEntry {get;set;} shapeInfo Property Microsoft.PowerShell.Commands.Internal.Format.ShapeInfo shapeInfo {get;set;} TypeName: Microsoft.PowerShell.Commands.Internal.Format.GroupStartData Name MemberType Definition ---- ---------- ---------- Equals Method bool Equals(System.Object obj) GetHashCode Method int GetHashCode() GetType Method type GetType() ToString Method string ToString() ClassId2e4f51ef21dd47e99d3c952918aff9cd Property System.String ClassId2e4f51ef21dd47e99d3c952918aff9cd {get;} groupingEntry Property Microsoft.PowerShell.Commands.Internal.Format.GroupingEntry groupingEntry {get;set;} shapeInfo Property Microsoft.PowerShell.Commands.Internal.Format.ShapeInfo shapeInfo {get;set;} Instead of showing the 2nd level child object members. In this case, I can't pipe the result to functions waiting for this type of argument. How does Powershell is supposed to handle such scenario?


  • How do I use a period in a Quicksilver object (search for a file with a period)?

    - by studgeek
    How do I use a period in a Quicksilver object to do things like search for a file with a period? By default pressing period anywhere in an object causes Quicksilver to switch to text mode. Optimally I would like period to only enter text mode when its at the start of the object. Or perhaps there is a wildcard I can use (* doesn't seem to work and . obviously doesn't :). Or perhaps there is an escape sequence for period?


  • Fatal error: Call to a member function getAttribute() on a non-object in C:\xampp\htdocs\giftshoes\s

    - by Sadiqur Rahman
    I am getting the following error message when using Doctrine ORM in CodeIgniter. Please help me... ------------------Doctrine Table Definition------------- abstract class BaseShoes extends Doctrine_Record { public function setTableDefinition() { $this->setTableName('shoes'); $this->hasColumn('sku', 'integer', 11, array('primary' => true, 'autoincrement' => false)); $this->hasColumn('name', 'string', 255); $this->hasColumn('keywords', 'string', 255); $this->hasColumn('description', 'string'); $this->hasColumn('manufacturer', 'string', 20); $this->hasColumn('sale_price', 'double'); $this->hasColumn('price', 'double'); $this->hasColumn('url', 'string'); $this->hasColumn('image', 'string'); $this->hasColumn('category', 'string', 50); } public function setUp() { } } ------------------------Doctrine Table Code ------------------- class ShoesTable extends Doctrine_Table { function getAllShoes($from = 0, $total = 15) { $q = Doctrine_Query::create() ->from('Shoes s') ->limit($total) ->offset($from); return $q->execute(array(), Doctrine::HYDRATE_ARRAY); } } -----------------Model Code----------------- class Shoes extends BaseShoes { function __construct() { $this->table = Doctrine::getTable('shoes'); } public function getAllShoes() { $this->table->getAllShoes(); } } -------------------ERROR I am getting-------------------- ( ! ) Fatal error: Call to a member function getAttribute() on a non-object in C:\xampp\htdocs\giftshoes\system\database\doctrine\Doctrine\Record.php on line 1424 Call Stack Time Memory Function Location 1 0.0011 327560 {main}( ) ..\index.php:0 2 0.0363 3210720 require_once( 'C:\xampp\htdocs\giftshoes\system\codeigniter\CodeIgniter.php' ) ..\index.php:116 3 0.0492 3922368 Welcome->Welcome( ) ..\CodeIgniter.php:201 4 0.0817 6234096 CI_Loader->model( ) ..\welcome.php:14 5 0.0824 6248376 Shoes->__construct( ) ..\Loader.php:184 6 0.0824 6248424 Doctrine_Core::getTable( ) ..\Shoes.php:5 7 0.0824 6248424 Doctrine_Connection->getTable( ) ..\Core.php:1080 8 0.0824 6254304 Doctrine_Table->__construct( ) ..\Connection.php:1123 9 0.0841 6396128 Doctrine_Table->initDefinition( ) ..\Table.php:249 10 0.0841 6397472 Shoes->__construct( ) ..\Table.php:301 11 0.0841 6397680 Doctrine_Access->__set( ) ..\Access.php:0 12 0.0841 6397680 Doctrine_Record->set( ) ..\Access.php:60


  • How to clean-up an Entity Framework object context?

    - by Daniel Brückner
    I am adding several entities to an object context. try { foreach (var document in documents) { this.Validate(document); // May throw a ValidationException. this.objectContext.AddToDocuments(document); } this.objectContext.SaveChanges(); } catch { // How to clean-up the object context here? throw; } If some of the documents pass the validation and one fails, all documents that passed the validation remain added to the object context. I have to clean up the object context because it may be reused and the following can happen. var documentA = new Document { Id = 1, Data = "ValidData" }; var documentB = new Document { Id = 2, Data = "InvalidData" }; var documentC = new Document { Id = 3, Data = "ValidData" }; try { // Adding document B will cause a ValidationException but only // after document A is added to the object context. this.DocumentStore.AddDocuments(new[] { documentA, documentB, documentC }); } catch (ValidationException) { } // Try again without the invalid document B. this.DocumentStore.AddDocuments(new[] { documentA, documentC }); This will again add document A to the object context and in consequence SaveChanges() will throw an exception because of a duplicate primary key. So I have to remove all already added documents in the case of a validation error. I could of course perform the validation first and only add the documents after they have all been successfully validated. But sadly this does not solve the whole problem - if SaveChanges() fails, all documents still remain added but unsaved. I tried to detach all objects returned by this.objectContext.ObjectStateManager.GetObjectStateEntries(EntityState.Added) but I am getting an exception stating that the object is not attached. So how do I get rid of all added but unsaved objects?
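
    One possible clean-up, sketched below on the assumption that this is Entity Framework 4's ObjectContext API: enumerate the Added entries, skip relationship entries (they expose no Entity and cannot be detached directly), and detach the remaining entities. Calling something like this in the catch block before rethrowing should leave no unsaved additions behind; whether it also explains the "object is not attached" error above is only a guess.

    using System.Data;          // EntityState
    using System.Data.Objects;  // ObjectContext, ObjectStateEntry
    using System.Linq;

    public static class ObjectContextExtensions
    {
        // Detaches every entity that has been added to the context but not yet saved.
        public static void DetachAddedEntities(this ObjectContext context)
        {
            var addedEntities = context.ObjectStateManager
                .GetObjectStateEntries(EntityState.Added)
                .Where(entry => !entry.IsRelationship && entry.Entity != null)
                .Select(entry => entry.Entity)
                .ToList(); // materialize first; detaching mutates the state manager

            foreach (var entity in addedEntities)
            {
                context.Detach(entity);
            }
        }
    }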


  • How do I make a jQuery POST function open the new page?

    - by ciclistadan
    I know that a submit button in HTML can submit a form which opens the target page, but how do I cause a jQuery ajax call POST information to a new page and display the new page. I am submitting information that is gathered by clicking elements (which toggle a new class) and then all items with this new class are added to an array and POSTed to a new page. I can get it to POST the data but it seems to be working functioning in an ajax non-refreshing manner, not submitting the page and redirecting to the new page. how might I go about doing this? here's the script section: //onload function $(function() { //toggles items to mark them for purchase //add event handler for selecting items $(".line").click(function() { //get the lines item number var item = $(this).toggleClass("select").attr("name"); }); $('#process').click(function() { var items = []; //place selected numbers in a string $('.line.select').each(function(index){ items.push($(this).attr('name')); }); $.ajax({ type: 'POST', url: 'additem.php', data: 'items='+items, success: function(){ $('#menu').hide(function(){ $('#success').fadeIn(); }); } }); }); return false; }); any pointers would be great!! thanks


  • jQuery, unable to store data returned by $.get function.

    - by Deepak Prasanna
    I am trying to turn div#sidebar into a sidebar in my app. My code looks like the one below. $('#sidebar').userProfile(); jQuery.fn.userProfile = function() { $.get('/users/profile', function(data){ $(this).html(data); }); }; It didn't work because, I found, this (inside the $.get callback) refers to the GET request and not $('#sidebar'). Then I tried something like below. $('#sidebar').userProfile(); // This does not work jQuery.fn.userProfile = function() { var side_bar = null; $.get('/users/profile', function(data){ side_bar = data; }); $(this).html(side_bar); console.log(side_bar); }; This doesn't work either. In the Firebug console I see the null which I set at the top when declaring the variable. At last I made it work by changing my code to something like below, by hardcoding the selector. // This works, but I cannot turn any element into a sidebar, which is sick. jQuery.fn.userProfile = function() { $.get('/users/profile', function(data){ $('#sidebar').html(data); }); }; But this is not what I wanted, because I wanted to be able to turn any element into a sidebar. Where am I going wrong, or what is the correct way of doing it?


  • Move <option> to top of list with JavaScript

    - by Adam
    I'm trying to create a button that will move the currently selected OPTION in a SELECT MULTIPLE list to the top of that list. I currently have OptionTransfer.js implemented, which is allowing me to move items up and down the list. I want to add a new function function MoveOptionTop(obj) { ... } Here is the source of OptionTransfer.js // =================================================================== // Author: Matt Kruse // WWW: http://www.mattkruse.com/ // // NOTICE: You may use this code for any purpose, commercial or // private, without any further permission from the author. You may // remove this notice from your final code if you wish, however it is // appreciated by the author if at least my web site address is kept. // // You may *NOT* re-distribute this code in any way except through its // use. That means, you can include it in your product, or your web // site, or any other form where the code is actually being used. You // may not put the plain javascript up on your site for download or // include it in your javascript libraries for download. // If you wish to share this code with others, please just point them // to the URL instead. // Please DO NOT link directly to my .js files from your site. Copy // the files to your server and use them there. Thank you. // =================================================================== /* SOURCE FILE: selectbox.js */ function hasOptions(obj){if(obj!=null && obj.options!=null){return true;}return false;} function selectUnselectMatchingOptions(obj,regex,which,only){if(window.RegExp){if(which == "select"){var selected1=true;var selected2=false;}else if(which == "unselect"){var selected1=false;var selected2=true;}else{return;}var re = new RegExp(regex);if(!hasOptions(obj)){return;}for(var i=0;i(b.text+"")){return 1;}return 0;});for(var i=0;i3){var regex = arguments[3];if(regex != ""){unSelectMatchingOptions(from,regex);}}if(!hasOptions(from)){return;}for(var i=0;i=0;i--){var o = from.options[i];if(o.selected){from.options[i] = null;}}if((arguments.length=0;i--){if(obj.options[i].selected){if(i !=(obj.options.length-1) && ! 
obj.options[i+1].selected){swapOptions(obj,i,i+1);obj.options[i+1].selected = true;}}}} function removeSelectedOptions(from){if(!hasOptions(from)){return;}for(var i=(from.options.length-1);i=0;i--){var o=from.options[i];if(o.selected){from.options[i] = null;}}from.selectedIndex = -1;} function removeAllOptions(from){if(!hasOptions(from)){return;}for(var i=(from.options.length-1);i=0;i--){from.options[i] = null;}from.selectedIndex = -1;} function addOption(obj,text,value,selected){if(obj!=null && obj.options!=null){obj.options[obj.options.length] = new Option(text, value, false, selected);}} /* SOURCE FILE: OptionTransfer.js */ function OT_transferLeft(){moveSelectedOptions(this.right,this.left,this.autoSort,this.staticOptionRegex);this.update();} function OT_transferRight(){moveSelectedOptions(this.left,this.right,this.autoSort,this.staticOptionRegex);this.update();} function OT_transferAllLeft(){moveAllOptions(this.right,this.left,this.autoSort,this.staticOptionRegex);this.update();} function OT_transferAllRight(){moveAllOptions(this.left,this.right,this.autoSort,this.staticOptionRegex);this.update();} function OT_saveRemovedLeftOptions(f){this.removedLeftField = f;} function OT_saveRemovedRightOptions(f){this.removedRightField = f;} function OT_saveAddedLeftOptions(f){this.addedLeftField = f;} function OT_saveAddedRightOptions(f){this.addedRightField = f;} function OT_saveNewLeftOptions(f){this.newLeftField = f;} function OT_saveNewRightOptions(f){this.newRightField = f;} function OT_update(){var removedLeft = new Object();var removedRight = new Object();var addedLeft = new Object();var addedRight = new Object();var newLeft = new Object();var newRight = new Object();for(var i=0;i0){str=str+delimiter;}str=str+val;}return str;} function OT_setDelimiter(val){this.delimiter=val;} function OT_setAutoSort(val){this.autoSort=val;} function OT_setStaticOptionRegex(val){this.staticOptionRegex=val;} function OT_init(theform){this.form = theform;if(!theform[this.left]){alert("OptionTransfer init(): Left select list does not exist in form!");return false;}if(!theform[this.right]){alert("OptionTransfer init(): Right select list does not exist in form!");return false;}this.left=theform[this.left];this.right=theform[this.right];for(var i=0;i


  • How to create a function using jQuery live? [Solved]

    - by Mahmoud
    Hey all i am trying to create a function that well keep the user in lightbox images while he adds to cart, for a demo you can visit secure.sabayafrah.com username: mahmud password: mahmud when you click at any image it well enlarge using lightbox v2, so when the user clicks at the image add, it well refresh the page, when i asked about it at jcart support form they informed me to use jquery live, but i dont know how to do it but as far as i tried this code which i used but still nothing is happening jQuery(function($) { $('#button') .livequery(eventType, function(event) { alert('clicked'); // to check if it works or not return false; }); }); i also used jQuery(function($) { $('input=[name=addto') .livequery(eventType, function(event) { alert('clicked'); // to check if it works or not return false; }); }); yet nothing worked for code to create those images http://pasite.org/code/572 Update 1: i have done this function adding(form){ $( "form.jcart" ).livequery('submit', function() {var b=$(this).find('input[name=<?php echo $jcart['item_id']?>]').val();var c=$(this).find('input[name=<?php echo $jcart['item_price']?>]').val();var d=$(this).find('input[name=<?php echo $jcart['item_name']?>]').val();var e=$(this).find('input[name=<?php echo $jcart['item_qty']?>]').val();var f=$(this).find('input[name=<?php echo $jcart['item_add']?>]').val();$.post('<?php echo $jcart['path'];?>jcart-relay.php',{"<?php echo $jcart['item_id']?>":b,"<?php echo $jcart['item_price']?>":c,"<?php echo $jcart['item_name']?>":d,"<?php echo $jcart['item_qty']?>":e,"<?php echo $jcart['item_add']?>":f} }); return false; } and it seems to add to jcart but yet it still refreshes


  • Does it exist: a smart pointer owned by one object but allowing access to others?

    - by Noah Roberts
    I'm wondering if anyone's run across anything that exists which would fill this need. Object A contains an object B. It wants to provide access to that B to clients through a pointer (maybe there's the option it could be 0, or maybe the clients need to be copiable and yet hold references...whatever). Clients, lets call them object C, would normally, if we're perfect developers, be written carefully so as to not violate the lifetime semantics of any pointer to B they might have...but we're not perfect, in fact we're pretty dumb half the time. So what we want is for object C to have a pointer to object B that is not "shared" ownership but that is smart enough to recognize a situation in which the pointer is no longer valid, such as when object A is destroyed or it destroys object B. Accessing this pointer when it's no longer valid would cause an assertion/exception/whatever. In other words, I wish to share access to data in a safe, clear way but retain the original ownership semantics. Currently, because I've not been able to find any shared pointer in which one of the objects owns it, I've been using shared_ptr in place of having such a thing. But I want clear owneship and shared/weak pointer doesn't really provide that. Would be nice further if this smart pointer could be attached to member variables and not just hold pointers to dynamically allocated memory regions. If it doesn't exist I'm going to make it, so I first want to know if someone's already released something out there that does it. And, BTW, I do realize that things like references and pointers do provide this sort of thing...I'm looking for something smarter.


  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros v. inline functions. I found the following discussion: Pros and Cons of Different macro function / inline methods in C ...but it didn't answer my primary burning question. Namely, what is the overhead in c of using a macro function (with variables, possibly other function calls) v. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. My code snippet I'm modularizing is: double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3); double RepulsiveTerm = AttractiveTerm * AttractiveTerm; EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm); My reason for turning it into an inline function/macro is so I can drop it into a c file and then conditionally compile other similar, but slightly different functions/macros. e.g.: double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3); double RepulsiveTerm = pow(SigmaSquared/RadialDistanceSquared,9); EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm); (note the difference in the second line...) This function is a central one to my code and gets called thousands of times per step in my program and my program performs millions of steps. Thus I want to have the LEAST overhead possible, hence why I'm wasting time worrying about the overhead of inlining v. transforming the code into a macro. Based on the prior discussion I already realize other pros/cons (type independence and resulting errors from that) of macros... but what I want to know most, and don't currently know is the PERFORMANCE. I know some of you C veterans will have some great insight for me!!


  • Alter a function as a parameter before evaluating it in R?

    - by Shane
    Is there any way, given a function passed as a parameter, to alter its input parameter string before evaluating it? Here's pseudo-code for what I'm hoping to achieve: test.func <- function(a, b) { # here I want to alter the b expression before evaluating it: b(..., val1=a) } Given the function call passed to b, I want to add in a as another parameter without needing to always specify ... in the b call. So the output from this test.func call should be: test.func(a="a", b=paste(1, 2)) "1" "2" "a" Edit: Another way I could see doing something like this would be if I could assign the additional parameter within the scope of the parent function (again, as pseudo-code); in this case a would be within the scope of t1 and hence t2, but not globally assigned: t2 <- function(...) { paste(a=a, ...) } t1 <- function(a, b) { local( { a <<- a; b } ) } t1(a="a", b=t2(1, 2)) This is somewhat akin to currying in that I'm nesting the parameter within the function itself. Edit 2: Just to add one more comment to this: I realize that one related approach could be to use "prototype-based programming" such that things would be inherited (which could be achieved with the proto package). But I was hoping for a easier way to simply alter the input parameters before evaluating in R.


  • About web services: how to use Ajax to call a specific member function of a class?

    - by Liu chwen
    I'm trying to build a web service by PHP. In my case, I called the getINFO(), but the return value on client side always null. Have no idea to solve this problem.. Here's the SOAPserver code(WS.WEB_s.php): require("WEB_s.php"); ini_set("soap.wsdl_cache_enabled", 0); $server = new SoapServer('wsdl/WEB_s.wsdl'); $server->setClass("WEB_s"); $server->handle(); Where the main Class is(WEB_s.php): final class WEB_s { public function getINFO(){ $JsonOutput = '{"key":"value",...}'; return $JsonOutput; } public function setWAN($setCommand,$newConfigfilePath){ $bOutput; return $bOutput; } } And Client side: $(document).ready(function(){ $('#qqq').button().click(function(){ var soapMessage = LoginSoap($('#uid').val(),$('#pwd').val()); alert('soapMessage'); $.ajax({ //url: 'libraries/WS.WEB_s.php/WEB_s/getINFO',//success , return null //url: 'libraries/WS.WEB_s.php/', //success , return null url: 'libraries/WS.WEB_s.php/getINFO',//success , return null type: 'GET', timeout: (10* 1000), contentType: "text/xml", dataType: "xml", success: function( data,textStatus,jqXHR){ alert('Server success(' + data+')('+ textStatus + ')(' + jqXHR + ')'); }, error: function (request, status, error) { alert('Server Error(' + status+')->'+error); }, complete: function (jqXHR, textStatus) { alert('Server success(' + jqXHR+')('+ textStatus + ')'); } }); }); }); The following is the corresponding WSDL file : http://codepaste.net/95wq9b


  • Detecting what the target object is when NullReferenceException is thrown.

    - by StingyJack
    I'm sure we all have received the wonderfully vague "Object reference not set to instance of an Object" exception at some time or another. Identifying the object that is the problem is often a tedious task of setting breakpoints and inspecting all members in each statement. Does anyone have any tricks to easily and efficiently identify the object that causes the exception, either via programmatical means or otherwise? --edit It seems I was vague like the exception =). The point is to _not have to debug the app to find the errant object. The compiler/runtime does know that the object has been allocated, and that the object has not yet been instantiated. Is there a way to extract / identify those details in a caught exception @ W. Craig Trader Your explanation that it is a result of a design problem is probably the best answer I could get. I am fairly compulsive with defensive coding and have managed to get rid of most of these errors after fixing my habits over time. The remaining ones just tweak me to no end, and lead me to posting this question to the community. Thanks for everyone's suggestions.
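
    A NullReferenceException carries no information about which reference was null, so one low-tech way to avoid the breakpoint hunt is to make the failure name the reference itself. The OrFail helper below is a made-up illustration, not a framework API, and the order/Customer/Address chain in the comment is purely hypothetical:

    using System;

    public static class NullGuard
    {
        // Pass-through check that fails with the name of the offending reference
        // instead of the anonymous "Object reference not set..." message.
        public static T OrFail<T>(this T value, string name) where T : class
        {
            if (value == null)
            {
                throw new NullReferenceException(name + " is null");
            }
            return value;
        }
    }

    // Usage: split a chained access so the exception pinpoints the culprit.
    // var street = order.OrFail("order")
    //                   .Customer.OrFail("order.Customer")
    //                   .Address.OrFail("order.Customer.Address")
    //                   .Street;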


  • How to use a variable in a function expression which is injected in a page?

    - by anonymous
    I'm trying to inject a function into a webpage via Chrome extension content script by: function inject(code) { var actualCode = '(' + code + ')();'; var script = document.createElement('script'); script.textContent = actualCode; (document.head||document.documentElement).appendChild(script); script.parentNode.removeChild(script); } var myObj = person; // myObj/person is passed in from elsewhere var fn = function() { alert(myObj.name); }; inject(fn); // myObj undefined My issue is, since fn is a function expression, I can't pass in myObj.personName. So my question is, how can I construct a function expression that includes a variable? Do I do some sort of string concatenation instead? I also tried to pass the object to the function, as follows: function inject(code, myObj) { var actualCode = '(' + code + ')(' + myObj +');'; ... But this did not work, and caused a "Uncaught SyntaxError: Unexpected identifier" error. Related: Building a Chrome Extension - Inject code in a page using a Content script


  • Java: over-typed structures? How to have many types in Object[]?

    - by HH
    Term over-type structure = a data structure that accepts different types, can be primitive or user-defined. I think ruby supports many types in structures such as tables. I tried a table with types 'String', 'char' and 'File' in Java but errs. How can I have over-typed structure in Java? How to show types in declaration? What about in initilization? Suppose a structure: INDEX VAR FILETYPE //0 -> file FILE //1 -> lineMap SizeSequence //2 -> type char //3 -> binary boolean //4 -> name String //5 -> path String Code import java.io.*; import java.util.*; public class Object { public static void print(char a) { System.out.println(a); } public static void print(String s) { System.out.println(s); } public static void main(String[] args) { Object[] d = new Object[6]; d[0] = new File("."); d[2] = 'T'; d[4] = "."; print(d[2]); print(d[4]); } } Errors Object.java:18: incompatible types found : java.io.File required: Object d[0] = new File("."); ^ Object.java:19: incompatible types found : char required: Object d[2] = 'T'; ^


  • Why is a newly created entity object with a navigation property automatically added to the ObjectContext?

    - by Levelbit
    I have two entities: Company and Location (one to many). When I create a new Location entity object and assign its navigation property (Company) from the navigation property of an already existing Location object (Location _new = new Location(); _new.Company = _old.Company), it seems that at that point the newly created object is added to the ObjectContext automatically, because when I call the SaveChanges method that object is inserted into the database although I didn't call ObjectContext.AddObject(_new). I'm new to EF, so there is probably a reason why I get this result? Is there a need to also assign the CompanyReference field, and how do I do it? IDaoFactory daoFactory = new DaoFactory(); ILocationDao locaitonDao = daoFactory.GetLocationDao(); IEnumerable<Location> locations = locaitonDao.GetLocations(); Location _old = locations.First(); Location _new = new Location(); _new.LocationName = _old.LocationName; _new.Company = _old.Company;// 1 _new.Address = _old.Address; //... ContactEntities.SaveChanges();//2 If I execute line (1), the _new object is instantly added to the object context, and I can see an additional data row in my datagrid after line (2) is executed.
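
    What seems to be happening: _old.Company is already attached to the context, so assigning it to _new.Company hooks _new into an attached object graph and the context starts tracking it as Added. If the intent is to relate the new Location to the same Company without implicitly adding it, the generated reference property can be set by key instead. The sketch below reuses the variables from the snippet above and assumes EF 4's default code generation (a CompanyReference property of type EntityReference<Company>) and an AddToLocations method on the context; both names depend on the actual model.

    Location _new = new Location();
    _new.LocationName = _old.LocationName;
    _new.Address = _old.Address;

    // Relate by key only: this does not pull _new into _old.Company's attached graph,
    // so nothing is tracked as Added until you ask for it explicitly.
    _new.CompanyReference.EntityKey = _old.CompanyReference.EntityKey;

    // Add (and therefore save) it only when that is really the intent.
    ContactEntities.AddToLocations(_new);
    ContactEntities.SaveChanges();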


  • How can I communicate with an Object created in another JFrame?

    - by user3093422
    so my program basically consists of two frames. As I click a button on Frame1, Frame2 pops up, and when I click a button on Frame2, and Object is created and the window closes. Now, I need to be able to use the methods of Object in my Frame1, how can this be achieved? I am kind of new to Object-Oriented Programming, sorry, but it's hard to me to explain the situation. Thanks! I will try to put a random code for pure example below. JFrame 1: public class JFrame1 extends JFrame{ variables.. public JFrame1(){ GUIcomponents.... } public static void main(String[] args) { JFrame1 aplicacion = new JFrame1(); aplicacion.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } private class ActList implements ActionListener { public void actionPerformed(ActionEvent event) { new JFrame2(); } } } JFrame 2: public class JFrame2 extends JFrame{ variables.. public JFrame2(){ GUIcomponents.... } private class ActList implements ActionListener { public void actionPerformed(ActionEvent event) { Object object = new Object(); setVisible(false); } } } Sorry if it's messy, I made it in the moment. So yeah, basically I want to JFrame1 to be able to use the getters and settes from Object, which was created in JFrame2. What should I do? Once again, thanks!


  • Collision issues with quadtree [on hold]

    - by QuantumGamer
    So i implemented a Quad tree in Java for my 2D game and everything works fine except for when i run my collision detection algorithm, which checks if a object has hit another object and which side it hit.My problem is 80% of the time the collision algorithm works but sometimes the objects just go through each other. Here is my method: private void checkBulletCollision(ArrayList object) { quad.clear(); // quad is the quadtree object for(int i=0; i < object.size();i++){ if(object.get(i).getId() == ObjectId.Bullet) // inserts the object into quadtree quad.insert((Bullet)object.get(i)); } ArrayList<GameObject> returnObjects = new ArrayList<>(); // Uses Quadtree to determine to calculate how many // other bullets it can collide with for(int i=0; i < object.size(); i++){ returnObjects.clear(); if(object.get(i).getId() == ObjectId.Bullet){ quad.retrieve(returnObjects, object.get(i).getBoundsAll()); for(int k=0; k < returnObjects.size(); k++){ Bullet bullet = (Bullet) returnObjects.get(k); if(getBoundsTop().intersects(bullet.getBoundsBottom())){ vy = speed; bullet.vy = -speed; } if(getBoundsBottom().intersects(bullet.getBoundsTop())){ vy = -speed; bullet.vy = speed; } if(getBoundsLeft().intersects(bullet.getBoundsRight())){ vx =speed; bullet.vx = -speed; } if(getBoundsRight().intersects(bullet.getBoundsLeft())){ vx = -speed; bullet.vx = speed; } } } } } Any help would be appreciated. Thanks in advance.


  • Solution to Jira web service getWorklogs method error: Object of type System.Xml.XmlNode[] cannot be stored in an array of this type

    - by DigiMortal
    When using Jira web service methods that operate on work logs you may get the following error when running your .NET application: Object of type System.Xml.XmlNode[] cannot be stored in an array of this type. In this posting I will show you solution to this problem. I don’t want to go to deep in details about this problem. I think it’s enough for this posting to mention that this problem is related to one small conflict between .NET web service support and Axis. Of course, Jira team is trying to solve it but until this problem is solved you can use solution provided here. There is good solution to this problem given by Jira forum user Kostadin. You can find it from Jira forum thread RemoteWorkLog serialization from Soap Service in C#. Solution is simple – you have to use SOAP extension class to replace new class names with old ones that .NET found from WSDL. Here is the code by Kostadin. public class JiraSoapExtensions : SoapExtension {     private Stream _streamIn;     private Stream _streamOut;       public override void ProcessMessage(SoapMessage message)     {         string messageAsString;         StreamReader reader;         StreamWriter writer;           switch (message.Stage)         {             case SoapMessageStage.BeforeSerialize:                 break;             case SoapMessageStage.AfterDeserialize:                 break;             case SoapMessageStage.BeforeDeserialize:                 reader = new StreamReader(_streamOut);                 writer = new StreamWriter(_streamIn);                 messageAsString = reader.ReadToEnd();                 switch (message.MethodInfo.Name)                 {                     case "getWorklogs":                     case "addWorklogWithNewRemainingEstimate":                     case "addWorklogAndAutoAdjustRemainingEstimate":                     case "addWorklogAndRetainRemainingEstimate":                         messageAsString = messageAsString.                             .Replace("RemoteWorklogImpl", "RemoteWorklog")                             .Replace("service", "beans");                         break;                 }                 writer.Write(messageAsString);                 writer.Flush();                 _streamIn.Position = 0;                 break;             case SoapMessageStage.AfterSerialize:                 _streamIn.Position = 0;                 reader = new StreamReader(_streamIn);                 writer = new StreamWriter(_streamOut);                 messageAsString = reader.ReadToEnd();                 writer.Write(messageAsString);                 writer.Flush(); break;         }     }       public override Stream ChainStream(Stream stream)     {         _streamOut = stream;         _streamIn = new MemoryStream();         return _streamIn;     }       public override object GetInitializer(Type type)     {         return GetType();     }       public override object GetInitializer(LogicalMethodInfo info,         SoapExtensionAttribute attribute)     {         return null;     }       public override void Initialize(object initializer)     {     } } To get this extension work with Jira web service you have to add the following block to your application configuration file (under system.web section). <webServices>   <soapExtensionTypes>    <add type="JiraStudioExperiments.JiraSoapExtensions,JiraStudioExperiments"           priority="1"/>   </soapExtensionTypes> </webServices> Weird thing is that after successfully using this extension and disabling it everything still works.
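
    The web.config registration above applies the extension to every SOAP call in the application. If you would rather scope it to just the affected proxy methods, the standard ASMX alternative is a SoapExtensionAttribute applied to those methods in the generated proxy class (with the caveat that edits to Reference.cs are lost when the proxy is regenerated). A minimal sketch reusing the JiraSoapExtensions class from the post; the getWorklogs signature in the comment is only indicative and should be checked against your generated proxy:

    using System;
    using System.Web.Services.Protocols;

    [AttributeUsage(AttributeTargets.Method)]
    public class JiraSoapExtensionAttribute : SoapExtensionAttribute
    {
        private int priority = 1;

        // Tells ASMX which SoapExtension to run for the decorated method.
        public override Type ExtensionType
        {
            get { return typeof(JiraSoapExtensions); }
        }

        public override int Priority
        {
            get { return priority; }
            set { priority = value; }
        }
    }

    // Applied to the relevant method(s) in the generated proxy, for example:
    // [JiraSoapExtension]
    // public RemoteWorklog[] getWorklogs(...) { ... }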


  • Understanding C# async / await (2) Awaitable / Awaiter Pattern

    - by Dixin
    What is awaitable Part 1 shows that any Task is awaitable. Actually there are other awaitable types. Here is an example: Task<int> task = new Task<int>(() => 0); int result = await task.ConfigureAwait(false); // Returns a ConfiguredTaskAwaitable<TResult>. The returned ConfiguredTaskAwaitable<TResult> struct is awaitable. And it is not Task at all: public struct ConfiguredTaskAwaitable<TResult> { private readonly ConfiguredTaskAwaiter m_configuredTaskAwaiter; internal ConfiguredTaskAwaitable(Task<TResult> task, bool continueOnCapturedContext) { this.m_configuredTaskAwaiter = new ConfiguredTaskAwaiter(task, continueOnCapturedContext); } public ConfiguredTaskAwaiter GetAwaiter() { return this.m_configuredTaskAwaiter; } } It has one GetAwaiter() method. Actually in part 1 we have seen that Task has GetAwaiter() method too: public class Task { public TaskAwaiter GetAwaiter() { return new TaskAwaiter(this); } } public class Task<TResult> : Task { public new TaskAwaiter<TResult> GetAwaiter() { return new TaskAwaiter<TResult>(this); } } Task.Yield() is a another example: await Task.Yield(); // Returns a YieldAwaitable. The returned YieldAwaitable is not Task either: public struct YieldAwaitable { public YieldAwaiter GetAwaiter() { return default(YieldAwaiter); } } Again, it just has one GetAwaiter() method. In this article, we will look at what is awaitable. The awaitable / awaiter pattern By observing different awaitable / awaiter types, we can tell that an object is awaitable if It has a GetAwaiter() method (instance method or extension method); Its GetAwaiter() method returns an awaiter. An object is an awaiter if: It implements INotifyCompletion or ICriticalNotifyCompletion interface; It has an IsCompleted, which has a getter and returns a Boolean; it has a GetResult() method, which returns void, or a result. This awaitable / awaiter pattern is very similar to the iteratable / iterator pattern. Here is the interface definitions of iteratable / iterator: public interface IEnumerable { IEnumerator GetEnumerator(); } public interface IEnumerator { object Current { get; } bool MoveNext(); void Reset(); } public interface IEnumerable<out T> : IEnumerable { IEnumerator<T> GetEnumerator(); } public interface IEnumerator<out T> : IDisposable, IEnumerator { T Current { get; } } In case you are not familiar with the out keyword, please find out the explanation in Understanding C# Covariance And Contravariance (2) Interfaces. The “missing” IAwaitable / IAwaiter interfaces Similar to IEnumerable and IEnumerator interfaces, awaitable / awaiter can be visualized by IAwaitable / IAwaiter interfaces too. This is the non-generic version: public interface IAwaitable { IAwaiter GetAwaiter(); } public interface IAwaiter : INotifyCompletion // or ICriticalNotifyCompletion { // INotifyCompletion has one method: void OnCompleted(Action continuation); // ICriticalNotifyCompletion implements INotifyCompletion, // also has this method: void UnsafeOnCompleted(Action continuation); bool IsCompleted { get; } void GetResult(); } Please notice GetResult() returns void here. Task.GetAwaiter() / TaskAwaiter.GetResult() is of such case. And this is the generic version: public interface IAwaitable<out TResult> { IAwaiter<TResult> GetAwaiter(); } public interface IAwaiter<out TResult> : INotifyCompletion // or ICriticalNotifyCompletion { bool IsCompleted { get; } TResult GetResult(); } Here the only difference is, GetResult() return a result. Task<TResult>.GetAwaiter() / TaskAwaiter<TResult>.GetResult() is of this case. 
Please notice that .NET does not define these IAwaitable / IAwaiter interfaces at all. I guess the reason is that an IAwaitable interface would constrain GetAwaiter() to be an instance method, while C# actually supports both a GetAwaiter() instance method and a GetAwaiter() extension method. Here I use these interfaces only to better visualize what awaitable / awaiter is. Now, looking at the above ConfiguredTaskAwaitable / ConfiguredTaskAwaiter, YieldAwaitable / YieldAwaiter, and Task / TaskAwaiter pairs again, they all “implicitly” implement these “missing” IAwaitable / IAwaiter interfaces. Next, we will see how to implement an awaitable / awaiter.

Await any function / action
In C#, await cannot be applied to a lambda expression. This code:

int result = await (() => 0);

causes a compiler error:

Cannot await 'lambda expression'

This is easy to understand, because the lambda expression (() => 0) could be either a function or an expression tree. Obviously we mean a function here, and we can tell the compiler so in this way:

int result = await new Func<int>(() => 0);

That causes a different error:

Cannot await 'System.Func<int>'

Now the compiler is complaining about the type instead of the syntax. With an understanding of the awaitable / awaiter pattern, the Func<TResult> type can easily be made awaitable.

GetAwaiter() instance method, using IAwaitable / IAwaiter interfaces
First, similar to the ConfiguredTaskAwaitable<TResult> above, a FuncAwaitable<TResult> can be implemented to wrap Func<TResult>:

internal struct FuncAwaitable<TResult> : IAwaitable<TResult>
{
    private readonly Func<TResult> function;

    public FuncAwaitable(Func<TResult> function)
    {
        this.function = function;
    }

    public IAwaiter<TResult> GetAwaiter()
    {
        return new FuncAwaiter<TResult>(this.function);
    }
}

The FuncAwaitable<TResult> wrapper implements IAwaitable<TResult>, so it has one instance method, GetAwaiter(), which returns an IAwaiter<TResult> that wraps the Func<TResult> too. FuncAwaiter<TResult> implements IAwaiter<TResult>:

public struct FuncAwaiter<TResult> : IAwaiter<TResult>
{
    private readonly Task<TResult> task;

    public FuncAwaiter(Func<TResult> function)
    {
        this.task = new Task<TResult>(function);
        this.task.Start();
    }

    bool IAwaiter<TResult>.IsCompleted
    {
        get { return this.task.IsCompleted; }
    }

    TResult IAwaiter<TResult>.GetResult()
    {
        return this.task.Result;
    }

    void INotifyCompletion.OnCompleted(Action continuation)
    {
        new Task(continuation).Start();
    }
}

Now a function can be awaited in this way:

int result = await new FuncAwaitable<int>(() => 0);

GetAwaiter() extension method
As IAwaitable shows, all an awaitable needs is a GetAwaiter() method. In the code above, FuncAwaitable<TResult> is created as a wrapper of Func<TResult> and implements IAwaitable<TResult>, so that there is a GetAwaiter() instance method. If a GetAwaiter() extension method can be defined for Func<TResult>, then FuncAwaitable<TResult> is no longer needed:

public static class FuncExtensions
{
    public static IAwaiter<TResult> GetAwaiter<TResult>(this Func<TResult> function)
    {
        return new FuncAwaiter<TResult>(function);
    }
}

So a Func<TResult> function can be awaited directly:

int result = await new Func<int>(() => 0);

Using the existing awaitable / awaiter - Task / TaskAwaiter
Remember the most frequently used awaitable / awaiter pair: Task / TaskAwaiter.
With Task / TaskAwaiter, FuncAwaitable / FuncAwaiter are no longer needed:

public static class FuncExtensions
{
    public static TaskAwaiter<TResult> GetAwaiter<TResult>(this Func<TResult> function)
    {
        Task<TResult> task = new Task<TResult>(function);
        task.Start();
        return task.GetAwaiter(); // Returns a TaskAwaiter<TResult>.
    }
}

Similarly, with this extension method:

public static class ActionExtensions
{
    public static TaskAwaiter GetAwaiter(this Action action)
    {
        Task task = new Task(action);
        task.Start();
        return task.GetAwaiter(); // Returns a TaskAwaiter.
    }
}

an action can be awaited as well:

await new Action(() => { });

Now any function / action can be awaited:

await new Action(() => HelperMethods.IO());
// or:
await new Action(HelperMethods.IO);

If the function / action has parameters, a closure can be used:

int arg0 = 0;
int arg1 = 1;
int result = await new Func<int>(() => HelperMethods.IO(arg0, arg1));

Using Task.Run()
The code above demonstrates how awaitable / awaiter can be implemented. Because awaiting a function / action is such a common scenario, .NET provides a built-in API for it: Task.Run():

public class Task2
{
    public static Task Run(Action action)
    {
        // The implementation is similar to:
        Task task = new Task(action);
        task.Start();
        return task;
    }

    public static Task<TResult> Run<TResult>(Func<TResult> function)
    {
        // The implementation is similar to:
        Task<TResult> task = new Task<TResult>(function);
        task.Start();
        return task;
    }
}

In reality, this is how we await a function:

int result = await Task.Run(() => HelperMethods.IO(arg0, arg1));

and await an action:

await Task.Run(() => HelperMethods.IO());
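To tie the pieces together, here is a minimal, self-contained sketch of how Task.Run() is typically used to await a blocking call from an async method. HelperMethods.IO is the same placeholder used in the snippets above; its body, the Program class, and CallIOAsync are assumptions added only to make the sketch runnable.

using System;
using System.Threading;
using System.Threading.Tasks;

internal static class HelperMethods
{
    // Stand-in for a synchronous, blocking call (assumed implementation for this sketch).
    internal static int IO(int arg0, int arg1)
    {
        Thread.Sleep(100); // Simulate blocking I/O.
        return arg0 + arg1;
    }
}

internal static class Program
{
    private static async Task<int> CallIOAsync(int arg0, int arg1)
    {
        // Task.Run() queues the blocking call to the thread pool and returns a Task<int>,
        // which is awaitable because Task<TResult> has a GetAwaiter() instance method.
        return await Task.Run(() => HelperMethods.IO(arg0, arg1));
    }

    private static void Main()
    {
        int result = CallIOAsync(0, 1).GetAwaiter().GetResult(); // Block at the entry point only.
        Console.WriteLine(result); // 1
    }
}

Blocking with GetAwaiter().GetResult() at the entry point is just for the demo; inside real async code the result would simply be awaited.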

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The results are gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply.

To get started, we'll create a demo data set and load the plyr package.

set.seed(1)
d <- data.frame(year = rep(2000:2014, each = 3),
                count = round(runif(45, 0, 20)))
dim(d)
library(plyr)

This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

# Example 1
res <- ddply(d, "year", function(x) {
  mean.count <- mean(x$count)
  sd.count <- sd(x$count)
  cv <- sd.count/mean.count
  data.frame(cv.count = cv)
  })

To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

D <- ore.push(d)
res <- ore.groupApply (D, D$year, function(x) {
  mean.count <- mean(x$count)
  sd.count <- sd(x$count)
  cv <- sd.count/mean.count
  data.frame(year=x$year[1], cv.count = cv)
  }, FUN.VALUE=data.frame(year=1, cv.count=1))

You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that, by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it is actually required, which can result in significant memory and time savings when the data is large.

R> class(res)
[1] "ore.frame"
attr(,"package")
[1] "OREbase"
R> head(res)
  year  cv.count
1 2000 0.3984848
2 2001 0.6062178
3 2002 0.2309401
4 2003 0.5773503
5 2004 0.3069680
6 2005 0.3431743

To make ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used.

The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.
# Example 2
ddply(d, "year", summarise, mean.count = mean(count))

res <- ore.groupApply (D, D$year, function(x) {
  mean.count <- mean(x$count)
  data.frame(year=x$year[1], mean.count = mean.count)
  }, FUN.VALUE=data.frame(year=1, mean.count=1))

R> head(res)
  year mean.count
1 2000   7.666667
2 2001  13.333333
3 2002  15.000000
4 2003   3.000000
5 2004  12.333333
6 2005  14.666667

Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, and it is returned as an ore.frame.

# Example 3
ddply(d, "year", transform, total.count = sum(count))

res <- ore.groupApply (D, D$year, function(x) {
  total.count <- sum(x$count)
  data.frame(year=x$year[1], count=x$count, total.count = total.count)
  }, FUN.VALUE=data.frame(year=1, count=1, total.count=1))

R> head(res)
  year count total.count
1 2000     5          23
2 2000     7          23
3 2000    11          23
4 2001    18          40
5 2001     4          40
6 2001    18          40

In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

# Example 4
ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
      cv = sigma/mu)

res <- ore.groupApply (D, D$year, function(x) {
  mu <- mean(x$count)
  sigma <- sd(x$count)
  cv <- sigma/mu
  data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)
  }, FUN.VALUE=data.frame(year=1, count=1, mu=1, sigma=1, cv=1))

R> head(res)
  year count        mu    sigma        cv
1 2000     5  7.666667 3.055050 0.3984848
2 2000     7  7.666667 3.055050 0.3984848
3 2000    11  7.666667 3.055050 0.3984848
4 2001    18 13.333333 8.082904 0.6062178
5 2001     4 13.333333 8.082904 0.6062178
6 2001    18 13.333333 8.082904 0.6062178

In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

# Example 5
baseball.dat <- subset(baseball, year > 2000) # data from the plyr package
x <- ddply(baseball.dat, c("year", "team"), summarize,
           homeruns = sum(hr))

We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we explicitly sort these results and view the first 6 rows.

BB.DAT <- ore.push(baseball)
BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+"))
BB.DAT2 <- subset(BB.DAT, year > 2000)
X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {
  data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))
  }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE)
res <- ore.sort(X, by=c("year","team"))

R> head(res)
  year team homeruns
1 2001  ANA        4
2 2001  ARI      155
3 2001  ATL       63
4 2001  BAL       58
5 2001  BOS       77
6 2001  CHA       63

Our next example is derived from the ggplot function documentation and illustrates the use of ddply in conjunction with the ggplot2 package. We first create a data.frame with demo data and use ddply to compute some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke it on the database server; the graph is returned to the client window.
# Example 6 with ggplot2
library(ggplot2)
df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                 y = rnorm(30))
# Compute sample mean and standard deviation in each group
library(plyr)
ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
# Set up a skeleton ggplot object and add layers:
ggplot() +
  geom_point(data = df, aes(x = gp, y = y)) +
  geom_point(data = ds, aes(x = gp, y = mean),
             colour = 'red', size = 3) +
  geom_errorbar(data = ds, aes(x = gp, y = mean,
                               ymin = mean - sd, ymax = mean + sd),
                colour = 'red', width = 0.4)

DF <- ore.push(df)
ore.tableApply(DF, function(df) {
  library(ggplot2)
  library(plyr)
  ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
  ggplot() +
    geom_point(data = df, aes(x = gp, y = y)) +
    geom_point(data = ds, aes(x = gp, y = mean),
               colour = 'red', size = 3) +
    geom_errorbar(data = ds, aes(x = gp, y = mean,
                                 ymin = mean - sd, ymax = mean + sd),
                  colour = 'red', width = 0.4)
})

But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the corresponding list element.

df2 <- rbind(df, df, df)
df2$y <- df2$y + rnorm(nrow(df2))
df2$index <- c(rep(1, nrow(df)), rep(2, nrow(df)), rep(3, nrow(df)))
DF2 <- ore.push(df2)
res <- ore.groupApply(DF2, DF2$index, function(df) {
  df <- df[,1:2]
  library(ggplot2)
  library(plyr)
  ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
  ggplot() +
    geom_point(data = df, aes(x = gp, y = y)) +
    geom_point(data = ds, aes(x = gp, y = mean),
               colour = 'red', size = 3) +
    geom_errorbar(data = ds, aes(x = gp, y = mean,
                                 ymin = mean - sd, ymax = mean + sd),
                  colour = 'red', width = 0.4)
  }, parallel=TRUE)
res[[1]]
res[[2]]
res[[3]]

To recap, we've illustrated how various uses of ddply from the plyr package can be realized with ore.groupApply, which gives the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

< Previous Page | 348 349 350 351 352 353 354 355 356 357 358 359  | Next Page >