Search Results

Search found 12107 results on 485 pages for 'pinned objects'.


  • Virtual properties duplicated during serialization when XmlElement attribute used

    - by Laramie
    The Goal: XML serialize an object that contains a list of objects of that and its derived types. The resulting XML should not use the xsi:type attribute to describe the type; to wit, the names of the serialized XML elements would be an assigned name specific to the derived type, not always that of the base class, which is the default behavior.

    The Attempt: After exploring IXmlSerializable and IXmlSerializable with eerie XmlSchemaProvider methods and voodoo reflection to return specialized schemas and an XmlQualifiedName over the course of days, I found I was able to use the simple [XmlElement] attribute to accomplish the goal... almost.

    The Problem: Overridden properties appear twice when serializing. The exception reads "The XML element 'overriddenProperty' from namespace '' is already present in the current scope. Use XML attributes to specify another XML name or namespace for the element." I attempted using a *Specified property (see code), but it didn't work.

    Sample Code: Class declaration:

        using System;
        using System.Collections.Generic;
        using System.Xml.Serialization;

        [XmlInclude(typeof(DerivedClass))]
        public class BaseClass
        {
            public BaseClass() { }

            [XmlAttribute("virt")]
            public virtual string Virtual { get; set; }

            [XmlIgnore]
            public bool VirtualSpecified
            {
                get { return (this is BaseClass); }
                set { }
            }

            [XmlElement(ElementName = "B", Type = typeof(BaseClass), IsNullable = false)]
            [XmlElement(ElementName = "D", Type = typeof(DerivedClass), IsNullable = false)]
            public List<BaseClass> Children { get; set; }
        }

        public class DerivedClass : BaseClass
        {
            public DerivedClass() { }

            [XmlAttribute("virt")]
            public override string Virtual
            {
                get { return "always return spackle"; }
                set { }
            }
        }

    Driver:

        BaseClass baseClass = new BaseClass() { Children = new List<BaseClass>() };
        BaseClass baseClass2 = new BaseClass() { };
        DerivedClass derivedClass1 = new DerivedClass() { Children = new List<BaseClass>() };
        DerivedClass derivedClass2 = new DerivedClass() { Children = new List<BaseClass>() };
        baseClass.Children.Add(derivedClass1);
        baseClass.Children.Add(derivedClass2);
        derivedClass1.Children.Add(baseClass2);

    I've been wrestling with this on and off for weeks and can't find the answer anywhere.
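
    For reference, a minimal self-contained sketch (not from the original question; the Node/DerivedNode names are illustrative) of the multiple-[XmlElement] technique the question relies on, without the overridden attributed property that triggers the duplicate-element error:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        public class Node
        {
            public Node() { Children = new List<Node>(); }

            // One [XmlElement] per concrete type gives each item its own element name,
            // so the serializer never needs to emit an xsi:type attribute.
            [XmlElement(ElementName = "B", Type = typeof(Node), IsNullable = false)]
            [XmlElement(ElementName = "D", Type = typeof(DerivedNode), IsNullable = false)]
            public List<Node> Children { get; set; }
        }

        public class DerivedNode : Node
        {
            [XmlAttribute("virt")]
            public string Virt { get; set; }
        }

        public static class Demo
        {
            public static void Main()
            {
                var root = new Node();
                root.Children.Add(new DerivedNode { Virt = "spackle" });
                root.Children.Add(new Node());

                var serializer = new XmlSerializer(typeof(Node));
                using (var writer = new StringWriter())
                {
                    serializer.Serialize(writer, root);
                    // The children come out as <D virt="spackle" /> and <B />.
                    Console.WriteLine(writer.ToString());
                }
            }
        }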

    Read the article

  • Divide a path into N sections using Java or PostgreSQL/PostGIS

    - by Guido
    Imagine a GPS tracking system that is following the position of several objects. The points are stored in a database (PostgreSQL + PostGIS). Each path is composed of a different number of points. That is the reason why, in order to compare a pair of paths, I need to divide every path into a set of 100 points. Do you know any PostGIS function that already implements this algorithm? I've not been able to find it. If not, I'd like to solve it using Java. In this case I'd like to know an efficient and easy to implement algorithm to divide a path into N points. The simplest example could be to divide this path into three points:

        position 1 : x=1, y=2
        position 2 : x=9, y=3

    And the result should be:

        position 1 : x=1, y=2 (starting point)
        position 2 : x=5, y=2.5
        position 3 : x=9, y=3 (end point)

    Edit: By 'compare a pair of paths' I mean to calculate the distance between two paths. I plan to divide each path into 100 points, and sum the Euclidean distance between each one of these points as the distance between the two paths.

    Read the article

  • What's the order of execution when using IDataErrorInfo?

    - by Benny Jobigan
    Many times with WPF, we use INotifyPropertyChanged and IDataErrorInfo to enable binding and validation on our data objects. I've got a lot of properties that look like this:

        public SomeObject SomeData
        {
            get { return _SomeData; }
            set
            {
                _SomeData = value;
                OnPropertyChanged("SomeData");
            }
        }

    Of course, I have an appropriate overridden IDataErrorInfo.this[] in my class to do validation.

    Question 1) In a binding situation, what happens? For example:

        1. User enters new data.
        2. Binding writes data to property.
        3. Property set method is executed.
        4. Binding checks this[] for validation.
        5. If the data is invalid, the binding sets the property back to the old value.
        6. Property set method is executed again.

    This is important if you are adding "hooks" into the set method, like:

        public string PathToFile
        {
            get { return _PathToFile; }
            set
            {
                if (_PathToFile != value &&      // prevent unnecessary actions
                    OnPathToFileChanging(value)) // allow subclasses to do something or stop the setter
                {
                    _PathToFile = value;
                    OnPathToFileChanged();       // allow subclasses to do something afterwards
                    OnPropertyChanged("PathToFile");
                }
            }
        }
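
    For context, a minimal sketch (not from the original question; the property and validation rule are illustrative, and ValidatesOnDataErrors=True on the binding is assumed) of the INotifyPropertyChanged + IDataErrorInfo pairing under discussion:

        using System.ComponentModel;

        public class PersonViewModel : INotifyPropertyChanged, IDataErrorInfo
        {
            private string _name;

            public string Name
            {
                get { return _name; }
                set
                {
                    _name = value;
                    OnPropertyChanged("Name"); // raise change notification for the binding
                }
            }

            // IDataErrorInfo: the binding engine queries this indexer per bound property
            // when ValidatesOnDataErrors=True.
            public string this[string columnName]
            {
                get
                {
                    if (columnName == "Name" && string.IsNullOrEmpty(_name))
                        return "Name is required.";
                    return null;
                }
            }

            public string Error { get { return null; } }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }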

    Read the article

  • CStdioFile Undeclared Identifier

    - by Eric Regnier
    I am unable to compile my code when using a CStdioFile type. I have included the afx.h header file but I am still receiving errors. What I am trying to do is write a CString that contains an entire XML document to file. This CString contains comments inside of it, but when I use other objects such as wofstream to write it to file, these comments are stripped out for some reason. So that is why I am trying to write this to file using CStdioFile now. If anyone can help me out as to why I cannot compile my code using CStdioFile, or any other way to write a CString that contains XML comments to file, please let me know! Below is my code using CStdioFile:

        CStdioFile xmlFile;
        xmlFile.Open( "c:\\test.txt", CFile::modeCreate | CFile::modeWrite | CFile::typeText );
        xmlFile.WriteString( m_sData );
        xmlFile.Close();

    And my errors:

        error C2065: 'CStdioFile' : undeclared identifier
        error C2065: 'modeCreate' : undeclared identifier
        error C2065: 'modeWrite' : undeclared identifier
        error C2065: 'typeText' : undeclared identifier
        error C2065: 'xmlFile' : undeclared identifier
        error C2146: syntax error : missing ';' before identifier 'xmlFile'
        error C2228: left of '.Close' must have class/struct/union type type is ''unknown-type''
        error C2228: left of '.Open' must have class/struct/union type type is ''unknown-type''
        error C2228: left of '.WriteString' must have class/struct/union type type is ''unknown-type''
        error C2653: 'CFile' : is not a class or namespace name
        error C2653: 'CFile' : is not a class or namespace name
        error C2653: 'CFile' : is not a class or namespace name
        error C3861: 'xmlFile': identifier not found, even with argument-dependent lookup
        error C3861: 'xmlFile': identifier not found, even with argument-dependent lookup
        error C3861: 'xmlFile': identifier not found, even with argument-dependent lookup

    Read the article

  • Rails preventing duplicates in polymorphic has_many :through associations

    - by seaneshbaugh
    Is there an easy or at least elegant way to prevent duplicate entries in polymorphic has_many :through associations? I've got two models, stories and links, that can be tagged. I'm making a conscious decision to not use a plugin here. I want to actually understand everything that's going on and not be dependent on someone else's code that I don't fully grasp. To see what my question is getting at, if I run the following in the console (assuming the story and tag objects exist in the database already):

        s = Story.find_by_id(1)
        t = Tag.find_by_id(1)
        s.tags << t
        s.tags << t

    My taggings join table will have two entries added to it, each with the same exact data (tag_id = 1, taggable_id = 1, taggable_type = "Story"). That just doesn't seem very proper to me. So in an attempt to prevent this from happening I added the following to my Tagging model:

        before_validation :validate_uniqueness

        def validate_uniqueness
          taggings = Tagging.find(:all, :conditions => {
            :tag_id => self.tag_id,
            :taggable_id => self.taggable_id,
            :taggable_type => self.taggable_type })
          if !taggings.empty?
            return false
          end
          return true
        end

    And it works almost as intended, but if I attempt to add a duplicate tag to a story or link I get an ActiveRecord::RecordInvalid: Validation failed exception. It seems that when you add an association to a list it calls the save! (rather than save sans !) method, which raises exceptions if something goes wrong rather than just returning false. That isn't quite what I want to happen. I suppose I can surround any attempts to add new tags with a try/catch but that goes against the idea that you shouldn't expect your exceptions, and this is something I fully expect to happen. Is there a better way of doing this that won't raise exceptions when all I want to do is just silently not save the object to the database because a duplicate exists?

    Read the article

  • Django: Overriding the save() method: how do I call the delete() method of a child class

    - by Patti
    The setup: I have this class, Transcript:

        class Transcript(models.Model):
            body = models.TextField('Body')
            doPagination = models.BooleanField('Paginate')
            numPages = models.PositiveIntegerField('Number of Pages')

    and this class, TranscriptPages:

        class TranscriptPages(models.Model):
            transcript = models.ForeignKey(Transcript)
            order = models.PositiveIntegerField('Order')
            content = models.TextField('Page Content', null=True, blank=True)

    The Admin behavior I'm trying to create is to let a user populate Transcript.body with the entire contents of a long document and, if they set Transcript.doPagination = True and save the Transcript admin, I will automatically split the body into n TranscriptPages. In the admin, TranscriptPages is a StackedInline of the Transcript admin. To do this I'm overriding Transcript's save method:

        def save(self):
            if self.doPagination:
                # do stuff
                super(Transcript, self).save()
            else:
                super(Transcript, self).save()

    The problem: When Transcript.doPagination is True, I want to manually delete all of the TranscriptPages that reference this Transcript so I can then create them again from scratch. So, I thought this would work:

        # do stuff
        TranscriptPages.objects.filter(transcript__id=self.id).delete()
        super(Transcript, self).save()

    but when I try I get this error:

        Exception Type: ValidationError
        Exception Value: [u'Select a valid choice. That choice is not one of the available choices.']

    ... and this is the last thing in the stack trace before the exception is raised:

        .../django/forms/models.py in save_existing_objects
            pk_value = form.fields[pk_name].clean(raw_pk_value)

    Other attempts to fix:

        - t = self.transcriptpages_set.all().delete() (where self = Transcript from the save() method)
        - looping over t (above) and deleting each item individually
        - making a post_save signal on TranscriptPages that calls the delete method

    Any ideas? How does the Admin do it?

    UPDATE: Every once in a while as I'm playing around with the code I can get a different error (below), but then it just goes away and I can't replicate it again... until the next random time.

        Exception Type: MultiValueDictKeyError
        Exception Value: "Key 'transcriptpages_set-0-id' not found in "
        Exception Location: .../django/utils/datastructures.py in __getitem__, line 203

    and the last lines from the trace:

        .../django/forms/models.py in _construct_form
            form = super(BaseInlineFormSet, self)._construct_form(i, **kwargs)
        .../django/utils/datastructures.py in __getitem__
            pk = self.data[pk_key]

    Read the article

  • Cross-platform and language (de)serialization

    - by fwgx
    I'm looking for a way to serialize a bunch of C++ structs in the most convenient way, so that the serialization is portable across C++ and Java (at a minimum) and across 32-bit/64-bit, big/little-endian platforms. The structures to be serialized just contain data, i.e. they're pure data objects with no state or behavior.

    The idea is that we serialize the structs into an octet blob that we can store in a database "generically" and read back out later on. This avoids changing the database whenever a struct changes and also avoids assigning each data member to a field - i.e. we only want one table to hold everything "generically" as a binary blob. This should mean less work for developers and require fewer changes when structures change.

    I've looked at boost.serialization but don't think there's a way to enable compatibility with Java. And likewise for inheriting Serializable in Java. If there is a way to do it by starting with an IDL file that would be best, as we already have IDL files that describe the structures. Cheers in advance!

    Read the article

  • Silverlight Toolkit ListBoxDragDropTarget Only Works One Way

    - by Tom Allen
    I've got a Silverlight 3 app that has two ListBoxes that I need to be able to drag items between. I've got the toolkit control working so that I can drag from ListBox lbA to ListBox lbB, but I then can't drag the item from lbB back to lbA.

        <toolkit:ListBoxDragDropTarget mswindows:DragDrop.AllowDrop="True">
            <ListBox x:Name="lbA" Style="{StaticResource ListBoxStyle}">
                <ListBox.ItemTemplate>
                    <DataTemplate>
                        <TextBlock Text="{Binding DisplayMember}"/>
                    </DataTemplate>
                </ListBox.ItemTemplate>
            </ListBox>
        </toolkit:ListBoxDragDropTarget>

        <toolkit:ListBoxDragDropTarget mswindows:DragDrop.AllowDrop="True">
            <ListBox x:Name="lbB" Style="{StaticResource ListBoxStyle}" Width="350">
                <ListBox.ItemTemplate>
                    <DataTemplate>
                        <TextBlock Text="{Binding DisplayMember}"/>
                    </DataTemplate>
                </ListBox.ItemTemplate>
            </ListBox>
        </toolkit:ListBoxDragDropTarget>

    I'm binding lbA to an ObservableCollection of MyObject items, which is a parent class for objects MyObjectA and MyObjectB (and there is a mix of ChildObjectA and ChildObjectB items). Is the one-way behaviour I'm seeing due to binding to a collection of MyObjects, or something else I'm missing?

    Read the article

  • What's better practice: objc_msgSendv or NSInvocation?

    - by Jared P
    So I'm working on something in obj-c (I'd rather not say what) where I need to be able to call arbitrary methods on arbitrary objects with arbitrary arguments. The first two are easy enough to do, but I am unsure how to do the variable arguments. To be clear, this is not about a function/method receiving variable arguments, but about sending them.

    I have found two ways to do this: objc_msgSendv (and its variants) in the objective-c runtime, and NSInvocation. NSInvocation seems easier and more like it's the 'best practice', but objc_msgSendv sounds like it should be faster, and I need to do this many, many times over, with completely different messages each time. Which one should I choose? Is objc_msgSendv taboo for a good reason? (The docs say not to call the objc_msgSend functions.)

    P.S. I know the types of all the arguments, and not all of them are of type id. Also (not part of the main question), there doesn't appear to be a way to message super from objc_msgSendv, but there doesn't seem to be a way to do that in NSInvocation either, so any help on that would be great too.

    Read the article

  • NSFetchedResultsController: changing predicate not working?

    - by icerelic
    Hi, I'm writing an app with two tables on one screen. The left table is a list of folders and the right table shows a list of files. When a row is tapped on the left, the right table will display the files belonging to that folder. I'm using Core Data for storage.

    When the selection of folder changes, the fetch predicate of the right table's NSFetchedResultsController will change and perform a new fetch, then reload the table data. I used the following code snippet:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"list = %@", self.list];
        [fetchedResultsController.fetchRequest setPredicate:predicate];

        NSError *error = nil;
        if (![[self fetchedResultsController] performFetch:&error]) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        [table reloadData];

    However the fetch results are still the same. I've NSLog'ed "predicate" before and after the fetch, and it was correct with updated information. The fetch results stay the same as the initial fetch (when the view is loaded). I'm not very familiar with the way Core Data fetches objects (is there a caching system?), but I've done similar things before (changing predicates, re-fetching data, and refreshing the table) with single table views and everything went well. If someone could give me a hint I would really appreciate it. Thanks in advance.

    Read the article

  • Where is my python script spending time? Is there "missing time" in my cprofile / pstats trace?

    - by fmark
    I am attempting to profile a long running python script. The script does some spatial analysis on a raster GIS data set using the gdal module. The script currently uses three files: the main script which loops over the raster pixels called find_pixel_pairs.py, a simple cache in lrucache.py and some misc classes in utils.py. I have profiled the code on a moderate sized dataset. pstats returns:

        p.sort_stats('cumulative').print_stats(20)
        Thu May  6 19:16:50 2010    phes.profile

                 355483738 function calls in 11644.421 CPU seconds

           Ordered by: cumulative time
           List reduced from 86 to 20 due to restriction <20>

           ncalls      tottime    percall    cumtime    percall    filename:lineno(function)
                 1       0.008      0.008  11644.421  11644.421    <string>:1(<module>)
                 1   11064.926  11064.926  11644.413  11644.413    find_pixel_pairs.py:49(phes)
         340135349     544.143      0.000    572.481      0.000    utils.py:173(extent_iterator)
           8831020      18.492      0.000     18.492      0.000    {range}
            231922       3.414      0.000      8.128      0.000    utils.py:152(get_block_in_bands)
            142739       1.303      0.000      4.173      0.000    utils.py:97(search_extent_rect)
            745181       1.936      0.000      2.500      0.000    find_pixel_pairs.py:40(is_no_data)
            285478       1.801      0.000      2.271      0.000    utils.py:98(intify)
            231922       1.198      0.000      2.013      0.000    utils.py:116(block_to_pixel_extent)
            695766       1.990      0.000      1.990      0.000    lrucache.py:42(get)
           1213166       1.265      0.000      1.265      0.000    {min}
           1031737       1.034      0.000      1.034      0.000    {isinstance}
            142740       0.563      0.000      0.909      0.000    utils.py:122(find_block_extent)
            463844       0.611      0.000      0.611      0.000    utils.py:112(block_to_pixel_coord)
            745274       0.565      0.000      0.565      0.000    {method 'append' of 'list' objects}
            285478       0.346      0.000      0.346      0.000    {max}
            285480       0.346      0.000      0.346      0.000    utils.py:109(pixel_coord_to_block_coord)
               324       0.002      0.000      0.188      0.001    utils.py:27(__init__)
               324       0.016      0.000      0.186      0.001    gdal.py:848(ReadAsArray)
                 1       0.000      0.000      0.160      0.160    utils.py:50(__init__)

    The top two calls contain the main loop - the entire analysis. The remaining calls sum to less than 625 of the 11644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?

    Read the article

  • Hazelcast Distributed Executor Service KeyOwner

    - by János Veres
    I have a problem understanding the concept of Hazelcast distributed execution. It is said to be able to perform the execution on the owner instance of a specific key. From the documentation:

        <T> Future<T> submitToKeyOwner(Callable<T> task, Object key)

        Submits task to owner of the specified key and returns a Future representing that task.
        Parameters: task - task, key - key
        Returns: a Future representing pending completion of the task

    I believe that I'm not alone in having built a cluster with multiple maps which might actually use the same key for different purposes, holding different objects (e.g. something along the lines of the following setup):

        IMap<String, ObjectTypeA> firstMap = HazelcastInstance.getMap("firstMap");
        IMap<String, ObjectTypeA_AppendixClass> secondMap = HazelcastInstance.getMap("secondMap");

    To me it seems quite confusing what the documentation says about the owner of a key. My real frustration is that I don't know WHICH key - in which map - it refers to. The documentation also gives a "demo" of this approach:

        import com.hazelcast.core.Member;
        import com.hazelcast.core.Hazelcast;
        import com.hazelcast.core.IExecutorService;
        import java.util.concurrent.Callable;
        import java.util.concurrent.Future;
        import java.util.Set;
        import com.hazelcast.config.Config;

        public void echoOnTheMemberOwningTheKey(String input, Object key) throws Exception {
            Callable<String> task = new Echo(input);
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            IExecutorService executorService = hz.getExecutorService("default");
            Future<String> future = executorService.submitToKeyOwner(task, key);
            String echoResult = future.get();
        }

    Here's a link to the documentation site: Hazelcast MultiHTML Documentation 3.0 - Distributed Execution. Did any of you guys figure out in the past what key it wants?

    Read the article

  • Strange EListError occurrence (when accessing variable-defined index)

    - by michal
    Hi, I have a TList which stores some objects. Now I have a function which does some operations on that list:

        function SomeFunct(const AIndex: integer): IInterface;
        begin
          if (AIndex > -1) and (AIndex < fMgr.Windows.Count) then
          begin
            if (fMgr.Windows[AIndex] <> nil) then
            begin
              if not Supports(TForm(fMgr.Windows[AIndex]), IMyFormInterface, result) then
                result := nil;
            end;
          end
          else
            result := nil;
        end;

    Now, what is really strange is that accessing fMgr.Windows with any proper index causes an EListError... However if I hard-code it (for example, replace AIndex with the value 0 or 1) it works fine. I tried debugging it; the function gets called twice, with arguments 0 and 1 (as expected). While AIndex = 0, evaluating fMgr.Windows[AIndex] results in an EListError at $someAddress, while evaluating fMgr.Windows[0] instead returns proper results... What is even more strange, even though there is an EListError, the function returns proper data... and doesn't show anything. Just info on two EListError memory leaks on shutdown (using FastMM). Any ideas what could be wrong?! Thanks in advance, michal

    Read the article

  • Database access through collections

    - by Mike
    Hi All, I have a 3-tiered application where I need to get database results and populate the UI. I have a MessagesCollection class that deals with messages. I load my user from the database. On the instantiation of a user (i.e. new User()), a MessageCollection Messages = new MessageCollection(this) is performed. MessageCollection accepts a user as a parameter.

        User user = user.LoadUser("bob");

    I want to get the messages for Bob:

        user.Messages.GetUnreadMessages();

    GetUnreadMessages calls my business data provider, which in turn calls the data access layer. The business data provider returns List<Message>.

    My question is - I am not sure what the best practice is here. If I have a collection of messages in an array inside the MessagesCollection class, I could implement ICollection to provide GetEnumerator() and the ability to traverse the messages. But what happens if the messages change and the user has old messages loaded? What about big message collections? What if my user had 10,000 unread messages? I don't think accessing the database and returning 10,000 Message objects would be efficient.
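
    For illustration, one possible shape for this - a sketch under the assumption that paging is acceptable; the types and the page-based GetUnreadMessages signature are hypothetical, not from the original question:

        using System.Collections.Generic;
        using System.Linq;

        public class Message
        {
            public int Id { get; set; }
            public string Subject { get; set; }
            public bool IsRead { get; set; }
        }

        public class User
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class MessageCollection
        {
            private readonly User _user;
            // Stand-in for the data access layer; in the real app this would run a paged query.
            private readonly IList<Message> _store;

            public MessageCollection(User user, IList<Message> store)
            {
                _user = user;
                _store = store;
            }

            // Return one page of unread messages instead of all 10,000 at once.
            public IList<Message> GetUnreadMessages(int pageIndex, int pageSize)
            {
                return _store.Where(m => !m.IsRead)
                             .Skip(pageIndex * pageSize)
                             .Take(pageSize)
                             .ToList();
            }
        }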

    Read the article

  • PHP - "Fat Free Framework" Find Methods and Showing Results in Template

    - by user1672808
    Just started trying the "Fat Free Framework". I'm building a site using a MySQL DB with 265 fields and 5000+ rows in the DB. I can load() a specific record easily, no problems. When using find(), afind(), and even select(), the template will show blank lines or lines with "filler" text, with the correct number of rows for the query results, but no text/data from the DB itself. The same problem occurs whether using objects or simply arrays from the result (afind() and find()). I've copied/pasted the code verbatim from examples and from the documentation, with only the DB-specific items changed. Still, no luck.

    Code in PHP file (function from class):

        static function home() {
            $featured=new Axon('boats');
            $F3::set('boatlist',$featured->afind('D_CustomerID=173'));
            F3::set('content',TEMPLATE_DIR .'/home.html');
            echo Template::serve(TEMPLATE_DIR .'/layout.html');
        }

    Template home.html:

        <div class="span8">
          <h3> Featured Boats </h3>
          <F3:repeat group="{{@boatlist}}" value="{{@boat}}">
            <div style="margin-left: 2em" class="thumbnails">
              <p>
                <a href="boat/{{@boat['D_BoatNum']}}">{{trim(@boat['D_Description'])}}</a> by {{@boat['D_CustomerID']}}
              </p>
              <p> {{@boat['D_Price']}} </p>
            </div>
          </F3:repeat>
        </div>

    The number of rows this produces coincides with the correct number of rows in the DB. However, the actual data from each field does not show. Any ideas?

    Read the article

  • Getting an "i" from GEvent

    - by Cosizzle
    Hello, I'm trying to add an event listener to each icon on the map when it's pressed. I'm storing the information in the database and the value that I'm wanting to retrieve is "i"; however when I output "i", I get its last value, which is 5 (there are 6 objects being drawn onto the map). Below is the code. What would be the best way to get the value of i, and not the object itself?

        var drawLotLoc = function(id) {
            var lotLoc = new GIcon(G_DEFAULT_ICON);                  // create icon object
            lotLoc.image = url+"images/markers/lotLocation.gif";     // set the icon image
            lotLoc.shadow = "";                                      // no shadow
            lotLoc.iconSize = new GSize(24, 24);                     // set the size
            var markerOptions = { icon: lotLoc };

            $.post(opts.postScript, {action: 'drawlotLoc', id: id}, function(data) {
                var markers = new Array();
                // lotLoc[x].description
                // lotLoc[x].lat
                // lotLoc[x].lng
                // lotLoc[x].nighbourhood
                // lotLoc[x].lot
                var lotLoc = $.evalJSON(data);
                for(var i=0; i<lotLoc.length; i++) {
                    var spLat = parseFloat(lotLoc[i].lat);
                    var spLng = parseFloat(lotLoc[i].lng);
                    var latlng = new GLatLng(spLat, spLng);
                    markers[i] = new GMarker(latlng, markerOptions);
                    myMap.addOverlay(markers[i]);
                    GEvent.addListener(markers[i], "click", function() {
                        console.log(i);   // returning 5 in all cases.
                                          // I _need_ this to be unique to the object being clicked.
                        console.log(this);
                    });
                }
            });

    Read the article

  • Caching sitemaps in django

    - by michuk
    I implemented a simple sitemap class using django's default sitemap app. As it was taking a long time to execute, I added manual caching:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7

            def items(self):
                # try to retrieve from cache
                result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews")
                if result != None:
                    return result

                result = ShortReview.objects.all().order_by("-created_at")

                # store in cache
                set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    The problem is that memcache allows only a max 1MB object. This one was bigger than 1MB, so storing it in the cache failed:

        >7 SERVER_ERROR object too large for cache

    The problem is that django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/):

        You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that.

    What do you think would be the best way to enable caching of sitemaps?

        - Hacking into the django sitemaps framework to restrict a single sitemap size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google advice? A random number?
        - Or maybe there is a way to allow memcached to store bigger files?
        - Or perhaps once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached I'd have to manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job).

    All those seem very low level and I'm wondering if an obvious solution exists...

    Read the article

  • using the ASP.NET Caching API via method annotations in C#

    - by craigmoliver
    In C#, is it possible to decorate a method with an annotation to populate the cache object with the return value of the method? Currently I'm using the following class to cache data objects:

        public class SiteCache
        {
            // 7 days + 6 hours (offset to avoid repeats peak time)
            private const int KeepForHours = 174;

            public static void Set(string cacheKey, Object o)
            {
                if (o != null)
                    HttpContext.Current.Cache.Insert(cacheKey, o, null, DateTime.Now.AddHours(KeepForHours), TimeSpan.Zero);
            }

            public static object Get(string cacheKey)
            {
                return HttpContext.Current.Cache[cacheKey];
            }

            public static void Clear(string sKey)
            {
                HttpContext.Current.Cache.Remove(sKey);
            }

            public static void Clear()
            {
                foreach (DictionaryEntry item in HttpContext.Current.Cache)
                {
                    Clear(item.Key.ToString());
                }
            }
        }

    In methods I want to cache I do this:

        [DataObjectMethod(DataObjectMethodType.Select)]
        public static SiteSettingsInfo SiteSettings_SelectOne_Name(string Name)
        {
            var ck = string.Format("SiteSettings_SelectOne_Name-Name_{0}-", Name.ToLower());
            var dt = (DataTable)SiteCache.Get(ck);
            if (dt == null)
            {
                dt = new DataTable();
                dt.Load(ModelProvider.SiteSettings_SelectOne_Name(Name));
                SiteCache.Set(ck, dt);
            }
            var info = new SiteSettingsInfo();
            foreach (DataRowView dr in dt.DefaultView)
                info = SiteSettingsInfo_Load(dr);
            return info;
        }

    Is it possible to separate those concerns like so (notice the new annotation)?

        [CacheReturnValue]
        [DataObjectMethod(DataObjectMethodType.Select)]
        public static SiteSettingsInfo SiteSettings_SelectOne_Name(string Name)
        {
            var dt = new DataTable();
            dt.Load(ModelProvider.SiteSettings_SelectOne_Name(Name));
            var info = new SiteSettingsInfo();
            foreach (DataRowView dr in dt.DefaultView)
                info = SiteSettingsInfo_Load(dr);
            return info;
        }
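
    An attribute on its own is only metadata - without an interception/AOP layer (e.g. PostSharp or a proxying container) it will not run any caching code when the method is called. As a hedged alternative, a sketch of pulling the caching concern into a single generic helper; the SimpleCache/GetOrAdd names are illustrative stand-ins for the SiteCache above:

        using System;
        using System.Collections.Concurrent;

        // A minimal stand-in for the HttpContext-backed SiteCache in the question,
        // so the sketch is self-contained.
        public static class SimpleCache
        {
            private static readonly ConcurrentDictionary<string, object> Store =
                new ConcurrentDictionary<string, object>();

            // Return the cached value for key, or run loader, cache its result, and return it.
            public static T GetOrAdd<T>(string key, Func<T> loader)
            {
                return (T)Store.GetOrAdd(key, _ => (object)loader());
            }
        }

        public static class Example
        {
            public static string ExpensiveLookup(string name)
            {
                // The caching concern stays in one line; the rest of the method only loads data.
                return SimpleCache.GetOrAdd("ExpensiveLookup-" + name.ToLower(),
                                            () => LoadFromDatabase(name));
            }

            private static string LoadFromDatabase(string name)
            {
                // Placeholder for the real data access call.
                return "settings for " + name;
            }
        }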

    Read the article

  • NSManagedObject from existing custom class

    - by A.S.
    Hello there. I have an existing class that has methods to deserialise from XML in my code. Now I need to create a correct Core Data model from that class. Its objects will be created not only from Core Data storage but also by deserializing XML (something like the following) without saving to Core Data, and sometimes I need to save it.

        instance->title = [[NSString stringWithUTF8String:(const char *)subNode->children->content] retain];

    What are the correct steps to modify the existing class to do that, apart from adding the Core Data framework and making my class an NSManagedObject instead of NSObject?

    Class sample:

        @interface TSTSong : NSManagedObject<NTSerializableObject> {
            NSString *identifier;
            NSString *title;
            float length;
            NSURL *previewURL;
            NSString *author;
            NSURL *coverURL;
            NSString *appStoreId;
            BOOL isPurchased;
            NSURL *bannerURL;
            NSDecimalNumber *priceValue;
            NSLocale *priceLocale;
        }

    P.S. I'm a noob, so if I'm doing something wrong, please let me know. Sorry for my English.

    Read the article

  • RIA Services for transmitting non DB object-graph

    - by Mike Gates
    I have been getting into RIA Services because I thought it would simplify dealing with the services layer of web applications I wish to build. I see lots of examples out there showing how to create DomainService classes which expose and consume entities that have some kind of relational database backing, and therefore have foreign-key relationships. However, I would like to know how to expose and consume normal object graphs... objects that contain references to each other but don't have foreign keys.

    For example, say I want a service operation called GetFolderInformation(string pathToFolder). I want this to return a custom object called FolderInformation structured with:

        - string Name
        - IEnumerable<FileInformation> Files

    I cannot get this to work because it seems that RIA wants to deal with entities that have foreign key relationships. Why? Why can't the serializer just see my object references and recreate that in the proxy on the other side? Data exists behind service layers that doesn't necessarily have foreign key relationships... like folder/file, for example.

    EDIT: I realized I hadn't asked my question! My question is, is there a way to do what I am trying to do?
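
    For illustration, a sketch (an assumption about shape, not working RIA Services code; member names are hypothetical) of the FolderInformation graph described above as plain classes. WCF RIA Services generally expects entity types returned from query methods to carry a [Key] member, which non-database graphs like this lack by default:

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        public class FileInformation
        {
            [Key]   // RIA Services tracks entities by key, not by object reference
            public string FullPath { get; set; }
            public string Name { get; set; }
            public long SizeInBytes { get; set; }
        }

        public class FolderInformation
        {
            [Key]
            public string FullPath { get; set; }
            public string Name { get; set; }

            // In a DomainService this relationship would also need to be described
            // (e.g. via an association attribute) for the client proxy to rebuild the graph.
            public IEnumerable<FileInformation> Files { get; set; }
        }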

    Read the article

  • Email Collector / Implementation

    - by Tian
    I am implementing a simple RoR webpage that collects emails from visitors and stores them as objects. I'm using it as a mini-project to try RoR and BDD. I can think of 3 features for Cucumber:

        1. User submits a valid email address
        2. User submits an existing email address
        3. User submits an invalid email

    My question is, for scenarios 2 and 3, is it better to handle this via the controller? Or as methods in a class? Perhaps something that throws errors if an instance is instantiated in scenario 2 or 3? Implementation is below; I'd love to hear some code reviews in addition to answers to the questions above. Thanks!

    MODEL:

        class Contact < ActiveRecord::Base
          attr_accessor :email
        end

    VIEW:

        <h1>Welcome To My Experiment</h1>
        <p>Find me in app/views/welcome/index.html.erb</p>
        <%= flash[:notice] %>
        <% form_for @contact, :url => {:action => "index"} do |f| %>
          <%= f.label :email %><br />
          <%= f.text_field :email %>
          <%= submit_tag 'Submit' %>
        <% end %>

    CONTROLLER:

        class WelcomeController < ApplicationController
          def index
            @contact = Contact.new
            unless params[:contact].nil?
              @contact = Contact.create!(params[:contact])
              flash[:notice] = "Thank you for your interest, please check your mailbox for confirmation"
            end
          end
        end

    Read the article

  • Get notified when objective-c dom updates are ready initiated from webview

    - by Josh
    I am programmatically updating the DOM on a WebKit WebView via DOMElement objects - for complex UI, there is usually a javascript component to any element that I add. This begs the need to be notified when the updates complete -- is there any such event? I know regular divs don't fire an onload or onready event, correct? The other assumption I had, which may be totally wrong, is that DOMElement updates aren't synchronous, so I can't do something like this and be confident it will meet with the label actually in the DOM:

        DOMElement *displayNameLabel = [document createElement:@"div"];
        [displayNameLabel setAttribute:@"class" value:@"user-display-name"];
        [displayNameLabel setTextContent:currentAvatar.avatarData.displayName];
        [[document getElementById:@"user-card-content"] appendChild:displayNameLabel];

        id win = [webView windowScriptObject];
        [win evaluateWebScript:@"javascriptInstantiateLabel()"];

    Is this true? I am using jQuery within the body of the html already, so I don't mind subscribing to a particular class's set of events via "live." I thought I might be able to do something like:

        $(".some-class-to-be-added-later").live("ready", function() {
            // Instantiate some fantastic javascript here for .some-class
        });

    but have experienced no joy so far. Is there any way on either side (objective-c, since I don't programmatically fire javascript post load, or javascript) to be notified when the element is in the DOM?

    Thanks, Josh

    Read the article

  • Considerations when architecting an application using Dependency Injection

    - by Dan Bryant
    I've begun experimenting with dependency injection (in particular, MEF) for one of my projects, which has a number of different extensibility points. I'm starting to get a feel for what I can do with MEF, but I'd like to hear from others who have more experience with the technology. A few specific cases:

    My main use case at the moment is exposing various singleton-like services that my extensions make use of. My Framework assembly exposes service interfaces and my Engine assembly contains concrete implementations. This works well, but I may not want to allow all of my extensions to have access to all of my services. Is there a good way within MEF to limit which particular imports I allow a newly instantiated extension to resolve?

    This particular application has extension objects that I repeatedly instantiate. I can import multiple types of Controllers and Machines, which are instantiated in different combinations for a Project. I couldn't find a good way to do this with MEF, so I'm doing my own type discovery and instantiation. Is there a good way to do this within MEF or other DI frameworks?

    I welcome input on any other things to watch out for or surprising capabilities you've discovered that have changed the way you architect.
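
    For context, a minimal sketch (the type names are illustrative, not from the post) of the basic MEF wiring being described - a service interface exported by the host and imported by an extension. It does not answer the scoping question, but shows the pieces involved:

        using System;
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;
        using System.Reflection;

        // A service contract exposed by the host ("Framework") assembly.
        public interface ILogService
        {
            void Log(string message);
        }

        // A concrete implementation supplied by the host ("Engine") assembly.
        [Export(typeof(ILogService))]
        public class ConsoleLogService : ILogService
        {
            public void Log(string message) { Console.WriteLine(message); }
        }

        // An extension contract and one extension that imports the service.
        public interface IExtension
        {
            void Run();
        }

        [Export(typeof(IExtension))]
        public class SampleExtension : IExtension
        {
            [Import]
            public ILogService Log { get; set; }

            public void Run() { Log.Log("SampleExtension running"); }
        }

        public static class Host
        {
            public static void Main()
            {
                // Compose everything found in this assembly; a real host would add
                // DirectoryCatalogs for extension folders.
                var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
                using (var container = new CompositionContainer(catalog))
                {
                    foreach (var extension in container.GetExportedValues<IExtension>())
                        extension.Run();
                }
            }
        }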

    Read the article

  • SubSonic 3 screws up selecteditem?

    - by SteveCav
    If you have a moment, please try this:

        - Download SubSonic 3.
        - Start a new project and add SubSonic's ActiveRecord templates.
        - Point it to any SQL Server DB and generate the classes.
        - Add a WPF project.
        - Create a window and add a combobox or listbox.
        - Set the ItemsSource from the SubSonic DAL, and format it how you wish.
        - Add a button that will show you some value from the SelectedItem (using MessageBox, Console, whatever).
        - Run the project.
        - Click on the third item in the list.
        - Click on the second item in the list.
        - Click on the button.

    When I do that, the button gives me the value of the THIRD item, not the second. In other words, once the SelectedItem is set the first time it STAYS, no matter which item is subsequently highlighted on the screen. This is happening to me whatever control I use (combobox, listbox, even datagrid) and it ONLY happens with SubSonic ActiveRecord objects. If I write my own POCOs with identical properties and bind a list of them instead, the controls behave as expected. Does this happen to you? Any ideas?
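
    One mechanism that can produce this kind of symptom (an assumption to check against the generated SubSonic classes, not a confirmed diagnosis): WPF resolves SelectedItem against the items collection using Equals, so a type whose Equals treats distinct rows as equal confuses selection. A sketch of such a type for experimentation:

        using System.Collections.ObjectModel;

        // A POCO whose Equals treats every instance of the type as equal.
        // Binding a list of these to a ListBox can produce confusing SelectedItem
        // behaviour, because WPF locates items by Equals.
        public class BrokenEqualityItem
        {
            public string Name { get; set; }

            public override bool Equals(object obj)
            {
                return obj is BrokenEqualityItem; // every item "equals" every other item
            }

            public override int GetHashCode()
            {
                return 0;
            }
        }

        public static class Demo
        {
            public static ObservableCollection<BrokenEqualityItem> BuildItems()
            {
                return new ObservableCollection<BrokenEqualityItem>
                {
                    new BrokenEqualityItem { Name = "first" },
                    new BrokenEqualityItem { Name = "second" },
                    new BrokenEqualityItem { Name = "third" },
                };
            }
        }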

    Read the article

  • how can I speed up insertion of many rows to a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I've tried doing this random data insert via SQL Server and decided it was not a good solution -- SQL is not good at random on an each-row basis. Generating the random data -- 975k rows of it -- takes a minimal amount of time. It's in a List of custom objects.

    I need to take this random data and update many rows in the database with the new random data. I tried updating the rows one at a time; it was very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data. I.e.:

        UPDATE t
        SET t.Address1 = d.Address1
        FROM Table1 t
        INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID

    The database is very un-normalized, so this Acct data is sprinkled all over the place. I've got no control of the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts:

        USE TheDatabase

        Insert tmp_RandomizedData
        SELECT 1,'4392 EIGHTH AVE','','JENNIFER CARTER','BARBARA CARTER'
        UNION ALL
        SELECT 2,'2168 MAIN ST','HNGR F','DANIEL HERNANDEZ','SUSAN MARTIN'
        -- etc., another 98 times...
        -- FYI, this is not real data!

    I'm building this INSERT script in batches of 100. It's taking on average 175 ms to run each insert. Does this seem like a long time? It's going to take about 35 minutes to run the whole insert. The table doesn't have a primary key or any indexes. I was planning on adding those after all the data is inserted (thinking that that would be faster). Is there a better way to do this?
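
    For reference, a sketch (not from the original post; the row type is hypothetical and the connection string is a placeholder) of pushing the 975k in-memory rows into tmp_RandomizedData with SqlBulkCopy rather than batched INSERT ... UNION ALL statements, which is usually far faster at this volume:

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        public class RandomRow
        {
            public int AcctId { get; set; }
            public string Address1 { get; set; }
            public string Address2 { get; set; }
            public string Person1 { get; set; }
            public string Person2 { get; set; }
        }

        public static class BulkLoader
        {
            public static void Load(IEnumerable<RandomRow> rows, string connectionString)
            {
                // Copy the in-memory list into a DataTable shaped like tmp_RandomizedData.
                var table = new DataTable();
                table.Columns.Add("AcctId", typeof(int));
                table.Columns.Add("Address1", typeof(string));
                table.Columns.Add("Address2", typeof(string));
                table.Columns.Add("Person1", typeof(string));
                table.Columns.Add("Person2", typeof(string));

                foreach (var r in rows)
                    table.Rows.Add(r.AcctId, r.Address1, r.Address2, r.Person1, r.Person2);

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    using (var bulkCopy = new SqlBulkCopy(connection))
                    {
                        bulkCopy.DestinationTableName = "tmp_RandomizedData";
                        bulkCopy.BatchSize = 10000;       // stream in chunks
                        bulkCopy.WriteToServer(table);    // bulk load instead of row-by-row INSERTs
                    }
                }
            }
        }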

    Read the article
