Search Results

Search found 62355 results on 2495 pages for 'data collection'.

Page 125/2495 | < Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >

  • A programming language for teaching data structures and algorithms with? [closed]

    - by Andreas Grech
    Possible Duplicate: Choice of programming language for learning data structures and algorithms Teachers have different opinions on what programming language they would choose to teach data structures and algorithms with. Some would prefer a lower level language such as C because it allows the student to learn more about what goes on beyond the abstractions in terms of memory allocation and deallocation and pointers and pointer arithmetic. On the other hand, others would say that they would prefer a higher level language like Java because it allows the student to learn more about the concepts of the structures and the algorithm design rather than 'waste time' and fiddle around with memory segmentation faults and all the blunders that come with languages where memory management is manual. What is your take on this issue? And also, please post any references you may know of that also discuss this argument.

    Read the article

  • Freeing Java memory at a specific point in time

    - by Marcus
    Given this code, where we load a lot of data, write it to a file, and then run an exe:

        void myMethod() {
            Map stuff = createMap();            // Consumes 250 MB memory
            File file = createFileInput(stuff); // Create input for exe
            runExecutable(file);                // Run Windows exe
        }

    What is the best way to release the memory consumed by stuff prior to running the exe? We don't need it in memory any more, as we have dumped the data to a file for input to the exe... Is the best method to just set stuff = null prior to runExecutable(file)?
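
    One option (a sketch only, reusing the hypothetical createMap/createFileInput/runExecutable helpers from the question) is to drop the only reference to the map before launching the exe, so the garbage collector is free to reclaim it if the JVM comes under memory pressure:

        void myMethod() {
            Map stuff = createMap();              // consumes ~250 MB
            File file = createFileInput(stuff);   // create input for the exe
            stuff = null;                         // no further use: make the map unreachable
            runExecutable(file);                  // run Windows exe
        }

    Setting the reference to null does not force an immediate release; it only makes the memory eligible for collection, which the JVM performs when it actually needs the space.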

    Read the article

  • Saving Core Data in a thread, how to ensure it's done writing before quitting?

    - by Shizam
    So I'm saving small images to Core Data, which take a really short amount of time to save, like 0.2 seconds, but I'm doing it while the user is flipping through a scroll view, so in order to improve responsiveness I'm moving the saving to a thread. This works great, everything gets saved and the app is responsive. However, there is one thing in the Core Data + multithreading documentation that worries me: "In Cocoa, only the main thread is not-detached. If you need to save on other threads, you must write additional code such that the main thread prevents the application from quitting until all the save operation is complete." Ok, how do you do that? It only needs to last ~0.2 seconds, and it's rarely going to happen since the chance of the app quitting as something is saving is very low. How do I run something on the main thread that'll prevent the app from quitting AND not block the GUI? Thanks

    Read the article

  • iPhone application development -- passing data to and from the server

    - by SAPNA
    I have to develop an iPhone application. The user logs in through the iPhone and gets data stored in the database. Our database is created in MySQL, and the website is developed in (classic) ASP. The interface is created with the iPhone SDK; the connection between them is what remains. What should I use for transferring data to and from the server, JSON or SOAP? Is XML parsing necessary? Actually, I am very new to this field, so a bit confused. We have little time left for completing our application, so I'm in urgent need of help. Thank you in advance.

    Read the article

  • iOS Development: Can I store an array of integers in a Core Data object without creating a new table to represent the array?

    - by BeachRunnerJoe
    Hello. I'm using Core Data and I'm trying to figure out the simplest way to store an array of integers in one of my Core Data entities. Currently, my entities contain various arrays of objects that are more complex than a single number, so it makes sense to represent those arrays as tables in my DB and attach them using relationships. If I want to store a simple array of integers, do I need to create a new table with a single column and attach it using a one-to-many relationship? Or is there a more simple way? Thanks in advance for your wisdom!

    Read the article

  • Can I add a custom method to Core Data-generated classes?

    - by Andy
    I've got a couple of Core Data-generated class files that I'd like to add custom methods to. I don't need to add any instance variables. How can I do this? I tried adding a category of methods:

        // ContactMethods.h (my category on Core Data-generated "Contact" class)
        #import "Contact.h"

        @interface Contact (ContactMethods)
        - (NSString*)displayName;
        @end

        ...

        // ContactMethods.m
        #import "ContactMethods.h"

        @implementation Contact (ContactMethods)
        - (NSString*)displayName {
            return @"Some Name"; // this is test code
        }
        @end

    This doesn't work, though. I get a compiler message that "-NSManagedObject may not respond to 'displayName'" and sure enough, when I run the app, I don't get "Some Name" where I should be seeing it.

    Read the article

  • How to use Joomla to allow users to create/update data on my site?

    - by gromo
    Right now I'm using an extension called ChronoForms, but I'm open to anything that works. I can make forms just fine, and it saves the submitted data to a table. Where I am stuck is: how do I then allow my front-end users who filled out and submitted the form to go back, view their old answers, change them, and resubmit or resave them? To do this I think I would need to retrieve their last answers from the table in which they are stored, plug that user's most recent record back into the original form on the front end, and let the user change and resubmit it. Unless there is a better way to allow the user to change their answers to a form and resave them? If anyone knows how to do this, or if there is a better way I should be going about it (perhaps there is another extension that handles this kind of thing), please let me know. Thanks!

    Read the article

  • RiverTrail - JavaScript GPGPU Data Parallelism

    - by JoshReuben
    Where is WebCL?
    The Khronos WebCL working group is working on a JavaScript binding to the OpenCL standard so that HTML 5-compliant browsers can host GPGPU web apps – e.g. for image processing or physics for WebGL games - http://www.khronos.org/webcl/ . While Nokia & Samsung have some prototype WebCL APIs, Intel has one-upped them with a higher level of abstraction: RiverTrail.

    Intro to RiverTrail
    Intel Labs' JavaScript RiverTrail provides GPU-accelerated SIMD data-parallelism in web applications via a familiar JavaScript programming paradigm. It extends JavaScript with simple deterministic data-parallel constructs that are translated at runtime into a low-level hardware abstraction layer. With its high-level JS API, programmers do not have to learn a new language or explicitly manage threads, orchestrate shared data synchronization or scheduling. It has been proposed as a draft specification to ECMA (known as an ECMA strawman). RiverTrail runs in all popular browsers (except I.E. of course). To get started, download a prebuilt version https://github.com/downloads/RiverTrail/RiverTrail/rivertrail-0.17.xpi , install Intel's OpenCL SDK http://www.intel.com/go/opencl and try out the interactive River Trail shell http://rivertrail.github.com/interactive . For a video overview, see http://www.youtube.com/watch?v=jueg6zB5XaM .

    ParallelArray
    The ParallelArray type is the central component of this API. It is a JS object that contains ordered collections of scalars – i.e. multidimensional uniform arrays. A shape property describes the dimensionality and size – e.g. a 2D RGBA image will have shape [height, width, 4]. ParallelArrays are immutable & fluent – they are manipulated by invoking methods on them which produce new ParallelArray objects. ParallelArray supports several constructors over arrays, functions & even the canvas.

        // Create an empty ParallelArray
        var pa = new ParallelArray(); // pa0 = <>

        // Create a ParallelArray out of a nested JS array.
        // Note that the inner arrays are also ParallelArrays
        var pa = new ParallelArray([ [0,1], [2,3], [4,5] ]); // pa1 = <<0,1>, <2,3>, <4,5>>

        // Create a two-dimensional ParallelArray with shape [3, 2] using the comprehension constructor
        var pa = new ParallelArray([3, 2], function(iv){ return iv[0] * iv[1]; }); // pa7 = <<0,0>, <0,1>, <0,2>>

        // Create a ParallelArray from canvas. This creates a PA with shape [w, h, 4]
        var pa = new ParallelArray(canvas); // pa8 = CanvasPixelArray

    ParallelArray exposes fluent API functions that take an elemental JS function for data manipulation: map, combine, scan, filter, and scatter, which return a new ParallelArray. Other functions are scalar - reduce returns a scalar value & get returns the value located at a given index. The onus is on the developer to ensure that the elemental function does not defeat data-parallelization optimization (avoid global var manipulation, recursion). For reduce & scan, order is not guaranteed - the onus is on the dev to provide an elemental function that is commutative and associative so that scan will be deterministic – e.g. Sum is associative, but Avg is not.

    map
    Applies a provided elemental function to each element of the source array and stores the result in the corresponding position in the result array. The map method is shape-preserving & index-free - it cannot inspect neighboring values.

        // Adding one to each element.
        var source = new ParallelArray([1,2,3,4,5]);
        var plusOne = source.map(function inc(v) {
            return v+1;
        }); // <2,3,4,5,6>

    combine
    Combine is similar to map, except an index is provided. This allows elemental functions to access elements from the source array relative to the one at the current index position. While the map method operates on the outermost dimension only, combine can choose how deep to traverse - it provides a depth argument to specify the number of dimensions it iterates over. The elemental function of combine accesses the source array & the current index within it - the element is computed by calling the get method of the source ParallelArray object with index i as argument. It requires more code but is more expressive.

        var source = new ParallelArray([1,2,3,4,5]);
        var plusOne = source.combine(function inc(i) {
            return this.get(i)+1;
        });

    reduce
    Reduces the elements from an array to a single scalar result – e.g. Sum.

        // Calculate the sum of the elements
        var source = new ParallelArray([1,2,3,4,5]);
        var sum = source.reduce(function plus(a,b) {
            return a+b;
        });

    scan
    Like reduce, but stores the intermediate results – returns a ParallelArray whose ith element is the result of using the elemental function to reduce the elements between 0 and i in the original ParallelArray.

        // do a partial sum
        var source = new ParallelArray([1,2,3,4,5]);
        var psum = source.scan(function plus(a,b) {
            return a+b;
        }); // <1, 3, 6, 10, 15>

    scatter
    A reordering function - specify for a certain source index where it should be stored in the result array. An optional conflict function can prevent an exception if two source values are assigned the same position of the result:

        var source = new ParallelArray([1,2,3,4,5]);
        var reorder = source.scatter([4,0,3,1,2]); // <2, 4, 5, 3, 1>

        // if there is a conflict use the max. use 33 as a default value.
        var reorder = source.scatter([4,0,3,4,2], 33, function max(a, b) {
            return a>b?a:b;
        }); // <2, 33, 5, 3, 4>

    filter

        // filter out values that are not even
        var source = new ParallelArray([1,2,3,4,5]);
        var even = source.filter(function even(iv) {
            return (this.get(iv) % 2) == 0;
        }); // <2,4>

    flatten
    Used to collapse the outer dimensions of an array into a single dimension.

        pa = new ParallelArray([ [1,2], [3,4] ]); // <<1,2>,<3,4>>
        pa.flatten(); // <1,2,3,4>

    partition
    Used to restore the original shape of the array.

        var pa = new ParallelArray([1,2,3,4]); // <1,2,3,4>
        pa.partition(2); // <<1,2>,<3,4>>

    get
    Returns the value found at the indices, or undefined if no such value exists.

        var pa = new ParallelArray([0,1,2,3,4], [10,11,12,13,14], [20,21,22,23,24]);
        pa.get([1,1]); // 11
        pa.get([1]); // <10,11,12,13,14>

    Read the article

  • Run vintage applications in your browser with the Historical Software Collection, an Internet Archive project

    Run vintage applications in your browser with the Historical Software Collection, an Internet Archive project. The Historical Software Collection is a nod to the applications of the 1980s: an invitation from the Internet Archive to dive back into your childhood, for some, for the length of a video game session. Others will see it as a chance to take stock of the technological evolution of the last 30 years. Thanks to JSMESS, a port of the MESS emulator (Multi Emulator Super System)...

    Read the article

  • Looking for app to work fluidly with CSV data in graph form

    - by Aszurom
    It often occurs to me that if I had a good tool for viewing CSV data in graphical format, and comparing two sets of numbers to each other, I could do a great deal of meaningful trend watching and data interpretation. For example, perfmon can output quite a lot of data about a server into a CSV file, but there's no good way to view it. A lot of scripts could/have been written that would populate CSV files. I could write these all day long. My problem is that I need a great viewer. I've seen quite a few things that will take a CSV file and after a lot of tweaking and user adjustment produce a static gif/png image. A static image doesn't do me a lot of good, because I have to look at it, then re-calibrate the parameters of the program, regenerate the image, repeat. That sucks. I could do this in Excel. Ideally, I would want a FLUID graph viewer. On the fly, I can adjust how much of my timeline I'm viewing. I could adjust the scaling so that one big spike doesn't make 99.9% of the data an unreadable line across the bottom of the X axis. Stuff like that. I should be able to say "show me CSV column 3 and column 5 as graphs. Show me the data scaled for 20 or 150 entries, and let me slide that window up and down the column of data. Auto scale to fit 95% of data within the Y axis and let crazy spikes go off the screen." Maybe I'm terribly spoiled by how you can drag, zoom, and slide data around on my iPad. I want to be able to view a spreadsheet of data with that fluidity and not have to guess at what sort of static snapshot I want to create from it. I don't want to have to make a study of how to tweak some data plotting program to let me import my file and do what I could just do in Excel. I want to scale, zoom, and transform my graph on the fly and then export a snapshot of it once I have it the way I want it. Is there anything out there that fills this need? I'll take linux, osx, win32 or even iOS suggestions.

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list:
    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run Disk Warrior on the copy, repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for the Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own, no trouble. It appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files
    Between each of the above steps I stopped Spotlight (searching for anything beginning with md in Activity Monitor - All Processes and quitting it), deleted the .Spotlight-V100 directory from the affected drive, and restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it. In each case the same issue occurs - Spotlight begins indexing normally (or so it seems), then the estimated index time increases, usually to 4 hours remaining. This is where it gets stuck and continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? Thanks, M

    Read the article

  • Change the last number in each row of comma-separated data in Notepad++

    - by user329311
    I have rows of data, all separated with commas. How can I replace the last number after the last comma in each row with the number 5 in Notepad++? For example, how do I replace 9, 17 and 124 with 5 in the data below? I have millions of rows of data, though, and Excel doesn't have enough rows for all of it. Sample data:

        2009.10.21,05:31,1.49312,1.49312,1.49306,1.49306,9
        2009.10.21,05:32,1.49306,1.49308,1.49303,1.49305,17
        2009.10.21,05:33,1.49305,1.4931,1.49305,1.49309,124

    Thank you for your help.
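
    For what it's worth, one approach that should work in Notepad++'s Replace dialog with the "Regular expression" search mode selected (assuming the value to change is always the last comma-separated field on each line): search for ,[0-9]+$ and replace with ,5. The $ anchors the match to the end of the line, so only the final field is rewritten.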

    Read the article

  • Accessing temperature data on ESXi5.1?

    - by Ovesh
    For ESXi 5.1 (VMware vSphere) images, I can see temperature data in the vSphere user interface (under Monitor / Hardware Status). I tried scouring the available SNMP data using snmpwalk, but can't find that data anywhere in there. Maybe I'm missing something. Does anybody know the right MIB for temperature data? Otherwise, how can that data be accessed? By the way, this is a machine installed from an image provided by HP.

    Read the article

  • Filter a WPF CollectionViewSource in VB?

    - by Johnny Westlake
    Hi, I want to filter a CollectionViewSource using a filter I've written, but I'm not sure how I can apply the filter to it. Here is my collection view source:

        <Grid.Resources>
            <CollectionViewSource x:Key="myCollectionView"
                                  Source="{Binding Path=Query4, Source={x:Static Application.Current}}">
                <CollectionViewSource.SortDescriptions>
                    <scm:SortDescription PropertyName="ContactID" Direction="Descending"/>
                </CollectionViewSource.SortDescriptions>
            </CollectionViewSource>
        </Grid.Resources>

    I have implemented a filter as such:

        Private Sub WorkerFilter(ByVal sender As Object, ByVal e As FilterEventArgs)
            Dim value As Object = CType(e.Item, System.Data.DataRow)("StaffSection")
            If (Not value Is Nothing) And (Not value Is DBNull.Value) Then
                If (value = "Builder") Or (value = "Office Staff") Then
                    e.Accepted = True
                Else
                    e.Accepted = False
                End If
            End If
        End Sub

    So how can I get the CollectionViewSource filtered by the filter on load? Could you please give all the code I need (only a few lines, I figure) as I'm quite new to coding. Thanks guys.

    EDIT: For the record, <CollectionViewSource x:Key="myCollectionView" Filter="WorkerFilter" ... /> gives me the error: Failed object initialization (ISupportInitialize.EndInit). 'System.Windows.Data.BindingListCollectionView' view does not support filtering. Error at object 'myCollectionView'

    Read the article

  • How do I debug a crash when I run my garbage-collected app in Rosetta?

    - by Rob Keniger
    I have a Universal app which is targeting 10.5 and which uses garbage collection. I am building for ppc, i386 and x86_64. I don't have access to a physical PowerPC machine so I am trying to use Rosetta to confirm that the PowerPC portion of the app works correctly. However, as soon as the app is launched in Rosetta it immediately crashes with the following crash log:

        Process:         FooApp [91567]
        Path:            /Users/rob/Development/src/FooApp/build/Release 64-bit/FooApp.app/Contents/MacOS/FooApp
        Identifier:      com.companyX.FooApp
        Version:         0.9 (build d540e05) (2)
        Code Type:       PPC (Translated)
        Parent Process:  launchd [708]
        Date/Time:       2010-04-09 18:32:23.962 +1000
        OS Version:      Mac OS X 10.6.3 (10D573)
        Report Version:  6

        Exception Type:  EXC_CRASH (SIGTRAP)
        Exception Codes: 0x0000000000000000, 0x0000000000000000
        Crashed Thread:  5

        ...snip non-relevant threads...

        Thread 5 Crashed:
        0   libSystem.B.dylib     0x8023656a __pthread_kill + 10
        1   libSystem.B.dylib     0x80235e17 pthread_kill + 95
        2   com.companyX.FooApp   0xb80bfb30 0xb8000000 + 785200
        3   com.companyX.FooApp   0xb80c0037 0xb8000000 + 786487
        4   com.companyX.FooApp   0xb80dd8e8 0xb8000000 + 907496
        5   com.companyX.FooApp   0xb8145397 spin_lock_wrapper + 1791
        6   com.companyX.FooApp   0xb801ceb7 0xb8000000 + 118455

    I have used the Apple docs on debugging translated apps and the information on this page to attach gdb to the app when it's running in Rosetta. The app immediately breaks into the debugger upon launch:

        Program received signal SIGTRAP, Trace/breakpoint trap.
        [Switching to thread 15107]
        0x9151fdd4 in auto_fatal ()
        (gdb) bt
        #0  0x9151fdd4 in auto_fatal ()
        #1  0x91536d84 in Auto::Thread::get_register_state ()
        #2  0x915372f8 in Auto::Thread::scan_other_thread ()
        #3  0x91529be4 in Auto::Zone::scan_registered_threads ()
        #4  0x91539114 in Auto::MemoryScanner::scan_thread_ranges ()
        #5  0x9153b000 in Auto::MemoryScanner::scan ()
        #6  0x9153049c in Auto::Zone::collect ()
        #7  0x915198f4 in auto_collect_internal ()
        #8  0x9151a094 in auto_collection_work ()
        #9  0x96687434 in _dispatch_call_block_and_release ()
        #10 0x9668912c in _dispatch_queue_drain ()
        #11 0x96689350 in _dispatch_queue_invoke ()
        #12 0x966895c0 in _dispatch_worker_thread2 ()
        #13 0x966896fc in _dispatch_worker_thread ()
        #14 0x965a97e8 in _pthread_body ()
        (gdb)

    I have no idea where to start with this. It looks like the Garbage Collector is failing very badly. Are garbage-collected PowerPC apps not supported in Rosetta? I can't see any mention of this limitation in the docs if so. Does anyone have any ideas?

    Read the article

  • Marshalling a big-endian byte collection into a struct in order to pull out values

    - by Pat
    There is an insightful question about reading a C/C++ data structure in C# from a byte array, but I cannot get the code to work for my collection of big-endian (network byte order) bytes. (EDIT: Note that my real struct has more than just one field.) Is there a way to marshal the bytes into a big-endian version of the structure and then pull out the values in the endianness of the framework (that of the host, which is usually little-endian)? This should summarize what I'm looking for (LE=LittleEndian, BE=BigEndian):

        void Main()
        {
            var leBytes = new byte[] {1, 0};
            var beBytes = new byte[] {0, 1};
            Foo fooLe = ByteArrayToStructure<Foo>(leBytes);
            Foo fooBe = ByteArrayToStructureBigEndian<Foo>(beBytes);
            Assert.AreEqual(fooLe, fooBe);
        }

        [StructLayout(LayoutKind.Explicit, Size=2)]
        public struct Foo
        {
            [FieldOffset(0)]
            public ushort firstUshort;
        }

        T ByteArrayToStructure<T>(byte[] bytes) where T : struct
        {
            GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
            T stuff = (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
            handle.Free();
            return stuff;
        }

        T ByteArrayToStructureBigEndian<T>(byte[] bytes) where T : struct
        {
            ???
        }

    Read the article

  • How do I expose the columns collection of a GridView control that is inside a user control?

    - by Christopher Edwards
    See edit. I want to be able to do this in the aspx that consumes the user control:

        <uc:MyControl ID="MyGrid" runat="server">
            <asp:BoundField DataField="FirstColumn" HeaderText="FirstColumn" />
            <asp:BoundField DataField="SecondColumn" HeaderText="SecondColumn" />
        </uc>

    I have this code (which doesn't work). Any ideas what I am doing wrong?

    VB

        Partial Public Class MyControl
            Inherits UserControl

            <System.Web.UI.IDReferenceProperty(GetType(DataControlFieldCollection))> _
            Public Property Columns() As DataControlFieldCollection
                Get
                    Return MyGridView.Columns
                End Get
                Set(ByVal value As DataControlFieldCollection)
                    ' The Columns collection of the GridView is ReadOnly, so I rebuild it
                    MyGridView.Columns.Clear()
                    For Each c As DataControlField In value
                        MyGridView.Columns.Add(c)
                    Next
                End Set
            End Property
            ...
        End Class

    C#

        public partial class MyControl : UserControl
        {
            [System.Web.UI.IDReferenceProperty(typeof(DataControlFieldCollection))]
            public DataControlFieldCollection Columns
            {
                get { return MyGridView.Columns; }
                set
                {
                    MyGridView.Columns.Clear();
                    foreach (DataControlField c in value)
                    {
                        MyGridView.Columns.Add(c);
                    }
                }
            }
            ...
        }

    EDIT: Actually it does work, but autocomplete does not work between the uc:MyControl opening and closing tags, and I get compiler warnings:

        Content is not allowed between the opening and closing tags for element 'MyControl'.
        Validation (XHTML 1.0 Transitional): Element 'columns' is not supported.
        Element 'BoundField' is not a known element. This can occur if there is a compilation error in the Web site, or the web.config file is missing.

    So I guess I need to use some sort of directive to tell the compiler to expect content between the tags. Any ideas?

    Read the article

  • Java Collections and Garbage Collector

    - by Anth0
    A little question regarding performance in a Java web app. Let's assume I have a List<Rubrique> listRubriques with ten Rubrique objects. A Rubrique contains one list of products (List<Product> listProducts) and one list of clients (List<Client> listClients). What exactly happens in memory if I do this: listRubriques.clear(); listRubriques = null;? My point of view would be that, since listRubriques is empty, all the objects previously referenced by this list (including listProducts and listClients) will be garbage collected pretty soon. But since Collections in Java are a little bit tricky, and since I have quite a few performance issues with my app, I'm asking the question :)

    Edit: let's assume now that my Client object contains a List<Client>. Therefore, I have kind of a circular reference between my objects. What would happen then if my listRubriques is set to null? This time, my point of view would be that my Client objects will become "unreachable" and might create a memory leak?
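
    As a point of reference, Java's collectors determine liveness by reachability from GC roots, not by reference counting, so a cycle between objects does not by itself keep them alive; once nothing reachable refers to the cycle, the whole group becomes collectable. A minimal sketch (hypothetical names, using a WeakReference as a probe) illustrating that nulling the list reference is enough, with or without clear():

        import java.lang.ref.WeakReference;
        import java.util.ArrayList;
        import java.util.List;

        public class ReachabilityDemo {
            public static void main(String[] args) {
                List<Object> rubriques = new ArrayList<>();
                Object first = new Object();
                rubriques.add(first);

                WeakReference<Object> probe = new WeakReference<>(first);
                first = null;       // drop the direct reference
                rubriques = null;   // drop the list; clear() beforehand is not required

                System.gc();        // only a hint; the JVM decides when to actually collect
                System.out.println(probe.get()); // typically prints null once the object is reclaimed
            }
        }

    Whether the objects are reclaimed "pretty soon" depends on heap pressure; if there is a performance issue, profiling for references that are still reachable (listeners, caches, static fields) is usually more productive than clearing collections manually.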

    Read the article

  • mocking collection behavior with Moq

    - by Stephen Patten
    Hello, I've read through some of the discussions on the Moq user group, have failed to find an example, and have so far been unable to find the scenario that I have. Here is my question and code:

        // 6 periods
        var schedule = new List<PaymentPlanPeriod>()
        {
            new PaymentPlanPeriod(1000m, args.MinDate.ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(1).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(2).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(3).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(4).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(5).ToString())
        };

        // Now the proxy is correct with the schedule
        helper.Setup(h => h.GetPlanPeriods(It.IsAny<String>(), schedule));

    Then in my tests I use Periods, but the mocked _PaymentPlanHelper never populates the collection; see below for usage:

        public IEnumerable<PaymentPlanPeriod> Periods
        {
            get
            {
                if (CanCalculateExpression())
                    _PaymentPlanHelper.GetPlanPeriods(this.ToString(), _PaymentSchedule);
                return _PaymentSchedule;
            }
        }

    Now if I change the mocked object to use another overloaded method of GetPlanPeriods that returns a List, like so:

        var schedule = new List<PaymentPlanPeriod>()
        {
            new PaymentPlanPeriod(1000m, args.MinDate.ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(1).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(2).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(3).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(4).ToString()),
            new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(5).ToString())
        };

        helper.Setup(h => h.GetPlanPeriods(It.IsAny<String>())).Returns(new List<PaymentPlanPeriod>(schedule));
        List<PaymentPlanPeriod> result = _PaymentPlanHelper.GetPlanPeriods(this.ToString());

    This works as expected. Any pointers would be awesome, as long as you don't bash my architecture... :) Thank you, Stephen

    Read the article

  • Grouping Collection separating numeric 5 from String "5"

    - by invertedSpear
    Background: I have an Advanced Data Grid. The data provider for this ADG is an ArrayCollection. There is a grouping collection on an ID field of this AC. Example of a couple of items within this AC (the AC var name is "arcTemplates"):

        (mx.collections::ArrayCollection)#0
          filterFunction = (null)
          length = 69
          list = (mx.collections::ArrayList)#1
            length = 69
            source = (Array)#2
              [0] (Object)#3
                abbreviation = "sore-throat"
                insertDate = "11/16/2009"
                name = "sore throat"
                templateID = 234
                templateType = "New Problem"
                templateTypeID = 1
              [32] (Object)#35
                abbreviation = 123
                insertDate = "03/08/2010"
                name = 123
                templateID = 297
                templateType = "New Problem"
                templateTypeID = 1
              [55] (Object)#58
                abbreviation = 1234
                insertDate = "11/16/2009"
                name = 1234
                templateID = 227
                templateType = "Exam"
                templateTypeID = 5
              [56] (Object)#59
                abbreviation = "breast only"
                insertDate = "03/15/2005"
                name = "breast exam"
                templateID = 195
                templateType = "Exam"
                templateTypeID = 5

    Example of the Flex code leading to the grouping:

        <mx:AdvancedDataGrid displayItemsExpanded="true" id="gridTemplates">
            <mx:dataProvider>
                <mx:GroupingCollection id="gc" source="{arcTemplates}">
                    <mx:Grouping>
                        <mx:GroupingField name="templateTypeID" compareFunction="gcSort">

    GC sort function:

        public function gcSort(a:Object, b:Object):int {
            return ObjectUtil.stringCompare(String(a.templateTypeID + a.name).toLowerCase(),
                                            String(b.templateTypeID + b.name).toLowerCase());
        }

    Problem: In my AC example there are a few items; items 0, 32 and 56 properly sort and group by their templateTypeID, but item 55 does something weird. It seems to sort/group on the numeric 5 instead of the string "5". It gets stranger: if I change the name property to contain text (so 1234x) it then correctly sorts/groups to the string "5".

    Question: What is going on here and how do I fix it?

    Read the article

  • Can memory be cleaned up?

    - by Tom
    I am working in Delphi 5 (with FastMM installed) on a Win32 project, and have recently been trying to drastically reduce the memory usage in this application. So far, I have cut the usage nearly in half, but noticed something when working on a separate task. When I minimized the application, the memory usage shrunk from 45 megs down to 1 meg, which I attributed to it paging out to disk. When I restored it and restarted working, the memory went up only to 15 megs. As I continued working, the memory usage slowly went up again, and a minimize and restore flushed it back down to 15 megs. So to my thinking, when my code tells the system to release the memory, it is still being held on to according to Windows, and the actual garbage collection doesn't kick in until a lot later. Can anyone confirm/deny this sort of behavior? Is it possible to get the memory cleaned up programatically? If I keep using the program without doing this manual flush, I get an out of memory error after a while, and would like to eliminate that. Thanks.

    Read the article

  • WCF Callback Contract InvalidOperationException: Collection has been modified

    - by mrlane
    We are using a WCF service with a callback contract. Communication is asynchronous. The service contract is defined as:

        [ServiceContract(Namespace = "Silverlight", CallbackContract = typeof(ISessionClient), SessionMode = SessionMode.Allowed)]
        public interface ISessionService

    with a method:

        [OperationContract(IsOneWay = true)]
        void Send(Message message);

    The callback contract is defined as:

        [ServiceContract]
        public interface ISessionClient

    with methods:

        [OperationContract(IsOneWay = true, AsyncPattern = true)]
        IAsyncResult BeginSend(Message message, AsyncCallback callback, object state);
        void EndSend(IAsyncResult result);

    The implementation of BeginSend and EndSend in the callback channel is as follows:

        public void Send(ActionMessage actionMessage)
        {
            Message message = Message.CreateMessage(_messageVersion, CommsSettings.SOAPActionReceive, actionMessage, _messageSerializer);
            lock (LO_backChannel)
            {
                try
                {
                    _backChannel.BeginSend(message, OnSendComplete, null);
                }
                catch (Exception ex)
                {
                    _hasFaulted = true;
                }
            }
        }

        private void OnSendComplete(IAsyncResult asyncResult)
        {
            lock (LO_backChannel)
            {
                try
                {
                    _backChannel.EndSend(asyncResult);
                }
                catch (Exception ex)
                {
                    _hasFaulted = true;
                }
            }
        }

    We are getting an InvalidOperationException: "Collection has been modified" on _backChannel.EndSend(asyncResult), seemingly randomly, and we are really out of ideas about what is causing this. I understand what the exception means, and that concurrency issues are a common cause of such exceptions (hence the locks), but it really doesn't make any sense to me in this situation. The clients of our service are Silverlight 3.0 clients using PollingDuplexHttpBinding, which is the only binding available for Silverlight. We have been running fine for ages, but recently we have been doing a lot of data binding, and this is when the issues started. Any help with this is appreciated as I am personally stumped at this time.

    Read the article

  • Generics vs inheritance (when no collection classes are involved)

    - by Ram
    This is an extension of this question and probably might even be a duplicate of some other question (if so, please forgive me). I see from MSDN that generics are usually used with collections:

        The most common use for generic classes is with collections like linked lists, hash tables, stacks, queues, trees and so on where operations such as adding and removing items from the collection are performed in much the same way regardless of the type of data being stored.

    The examples I have seen also validate the above statement. Can someone give a valid use of generics in a real-life scenario which does not involve any collections? Pedantically, I was thinking about making an example which does not involve collections:

        public class Animal<T>
        {
            public void Speak()
            {
                Console.WriteLine("I am an Animal and my type is " + typeof(T).ToString());
            }

            public void Eat()
            {
                // Eat food
            }
        }

        public class Dog
        {
            public void WhoAmI()
            {
                Console.WriteLine(this.GetType().ToString());
            }
        }

    and "an Animal of type Dog" would be:

        Animal<Dog> magic = new Animal<Dog>();

    It is entirely possible to have Dog inherit from Animal (assuming a non-generic version of Animal): Dog : Animal. Therefore Dog is an Animal. Another example I was thinking of was a BankAccount. It can be BankAccount<Checking> or BankAccount<Savings>. This can very well be Checking : BankAccount and Savings : BankAccount. Are there any best practices to determine if we should go with generics or with inheritance?
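
    For what it's worth, a common non-collection use of generics is a type-parameterized holder such as ThreadLocal<T> or AtomicReference<T>. A small sketch, written in Java purely for illustration (the shape is the same in C#):

        import java.text.SimpleDateFormat;
        import java.util.Date;

        public class Formats {
            // One SimpleDateFormat per thread: generic, type-safe, and no collection in sight
            private static final ThreadLocal<SimpleDateFormat> ISO_DATE =
                ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

            public static String today() {
                return ISO_DATE.get().format(new Date());
            }
        }

    The type parameter buys compile-time safety (no casts when calling get()) without any is-a relationship, which is the usual hint that generics rather than inheritance is the right tool.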

    Read the article

  • Please help us non-C++ developers understand what RAII is

    - by Charlie Flowers
    Another question I thought for sure would have been asked before, but I don't see it in the "Related Questions" list. Could you C++ developers please give us a good description of what RAII is, why it is important, and whether or not it might have any relevance to other languages? I do know a little bit. I believe it stands for "Resource Acquisition is Initialization". However, that name doesn't jive with my (possibly incorrect) understanding of what RAII is: I get the impression that RAII is a way of initializing objects on the stack such that, when those variables go out of scope, the destructors will automatically be called causing the resources to be cleaned up. So why isn't that called "using the stack to trigger cleanup" (UTSTTC:)? How do you get from there to "RAII"? And how can you make something on the stack that will cause the cleanup of something that lives on the heap? Also, are there cases where you can't use RAII? Do you ever find yourself wishing for garbage collection? At least a garbage collector you could use for some objects while letting others be managed? Thanks.
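
    By way of comparison (a simplified sketch, not from the original question): garbage-collected languages usually replace RAII's deterministic destructors with an explicit scope construct. In Java, for example, try-with-resources calls close() on anything implementing AutoCloseable when the block exits, which covers the "cleanup on scope exit" half of RAII even though memory itself is still reclaimed by the collector (the file name below is just a placeholder):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class ReadFirstLine {
            public static void main(String[] args) throws IOException {
                // The reader is closed automatically when the try block exits,
                // whether normally or via an exception - analogous to a C++ destructor
                // running when a stack object goes out of scope.
                try (BufferedReader reader = new BufferedReader(new FileReader("example.txt"))) {
                    System.out.println(reader.readLine());
                }
            }
        }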

    Read the article
