Search Results

Search found 6255 results on 251 pages for 'wcf behaviour'.


  • PHP weirdness extending IMagick class

    - by Jamie Carl
    This is a really weird one. I have some code that works happily on version 2.1.1RC1 of the php5-imagick module. It's basically just a class I wrote that extends the Imagick class and manages images stored in a database. Since upgrading to version 3.0.0RC1 (thankfully only on my dev box) things have gone to hell. It seems that object members are writable but are NOT readable. Take the following sample code:

        class db_image extends IMagick
        {
            private $data;

            function __construct($id = null)
            {
                parent::__construct();
                $this->data = 'some plain text';
                echo $this->data;
            }
        }

    This will output absolutely NOTHING. My debugger indicates that the contents of $this->data are the correct string value, but I am unable to read the value back out of the member variable. Seriously. WTF? Does anyone know what is causing this, or has seen it before? I don't even know how to replicate this behaviour in my own classes.

  • Semantically correct XHTML markup

    - by Dori
    Hello all. I'm just trying to get the hang of using semantically correct XHTML markup, writing the code for a small navigation item where each button effectively has a title and a description. I thought a definition list would therefore be great, so I wrote the following:

        <dl>
            <dt>Import images</dt>
            <dd>Read in new image names to database</dd>
            <dt>Exhibition Management</dt>
            <dd>Create / Delete an exhibition</dd>
            <dt>Image Management</dt>
            <dd>Edit name, medium and exhibition data</dd>
        </dl>

    But... I want the above to be 3 buttons, each button containing the dt and dd text. How can I do this with the correct code? Normally I would make each button a div and use that for the visual button behaviour (onHover and current-page selection stuff). Any advice please. Thanks
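
    One pattern worth sketching (only a sketch; the id, class names and href are made up, and whether a list of links is "more semantic" than a dl here is debatable): because an XHTML <a> may not contain block elements but may contain inline ones, each button can be a single link holding the title and description as inline elements that CSS then styles as blocks:

        <ul id="nav">
            <li>
                <a href="import.php"><!-- hypothetical URL -->
                    <strong>Import images</strong>
                    <em>Read in new image names to database</em>
                </a>
            </li>
            <!-- repeat for the other two buttons -->
        </ul>

        /* CSS: make the whole link one clickable button */
        #nav li a { display: block; padding: 0.5em; }
        #nav li a strong,
        #nav li a em { display: block; font-style: normal; }
        #nav li a:hover { background: #eee; }

    This keeps the title/description pairing in the markup while giving each button one hoverable, clickable box.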

  • AppFabric caching's local cache isn't working for us... What are we doing wrong?

    - by Olly
    We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when admin updates something, the customer-facing site is updated. It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable local cache to get better performance. However, it doesn't seem to be working. We have enabled it like this:

        bool UseLocalCache = true; // value garbled in the original post; presumably true, since we want local cache on
        int LocalCacheObjectCount = int.MaxValue;
        TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3);
        DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy =
            DataCacheLocalCacheInvalidationPolicy.TimeoutBased;

        if (UseLocalCache)
        {
            configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
                LocalCacheObjectCount, LocalCacheDefaultTimeout, LocalCacheInvalidationPolicy);
            // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300));
        }

    Initially we tried using a timeout invalidation policy (3 mins) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but is always missing). cache.GetType().Name returns "LocalCache" - so the factory has made a local cache. Running "Get-Cache-Statistics MyCache" in PowerShell on my dev environment (ASP.NET app running locally from VS2008, cache cluster running on a separate W2K8 machine) shows a handful of request counts. However, on the production environment, the request count increases dramatically. We tried following the method here to see the cache client-server traffic: http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx but the log file had nothing but the initial header in it - i.e. no logging either. I can't find anything on SO or Google. Have we done something wrong? Have we got a screwy install of AppFabric? We installed it via Web Platform Installer, I think. (Note: the IIS box running ASP.NET isn't in the cluster - it is just the client.) Any insights gratefully received!
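
    One quick way to confirm whether the local cache is actually being consulted (a diagnostic sketch only - the cache name and key are placeholders): fetch the same key twice and compare timings. A second read served from local cache stays in-process and should be orders of magnitude faster than a network hop to the cluster.

        var factory = new DataCacheFactory(configuration);
        DataCache cache = factory.GetCache("myCache"); // placeholder cache name

        var sw = System.Diagnostics.Stopwatch.StartNew();
        object first = cache.Get("someKey");           // goes to the cluster
        long cold = sw.ElapsedTicks;

        sw = System.Diagnostics.Stopwatch.StartNew();
        object second = cache.Get("someKey");          // should be served locally
        long warm = sw.ElapsedTicks;

        Console.WriteLine("cold: {0} ticks, warm: {1} ticks", cold, warm);
        // If warm is not dramatically smaller than cold, every Get is going
        // over the wire and the local cache is effectively off.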

  • ScriptManager duplicates javascript

    - by Andreas
    Hi! I have a user control that can be used in, for example, a GridView ItemTemplate, which means the control might or might not be on the page at page load. In the case where the control is inside an ItemTemplate I will populate the GridView via asynchronous postbacks (via UpdatePanels). The control itself registers script blocks since it depends on JavaScript. First I used Page.ClientScript.RegisterClientScriptBlock, but this doesn't work on asynchronous postbacks (UpdatePanels), so I then tried the same using ScriptManager.RegisterClientScriptBlock, which allows me to register scripts on the page after async postbacks. Great! However, ScriptManager (as far as I know) has no functionality to check whether a script is already on the page, so on every postback I generate duplicates of the script blocks, which is of course unwanted behaviour. I did a run at Google and found that the Dispose() method of the PageRequestManager can be used; this does work, since it clears the scripts and then adds them again (it also solves my issue with removing unused script blocks from removed controls):

        Sys.WebForms.PageRequestManager.getInstance().Dispose()

    However, of course there is a downside, since I'm posting here :). The Dispose() method disposes the instance on the master page as well, which means scripts running there stop functioning after an async postback (UpdateProgress, for example). So, is there a way to check whether a script already exists on the page using ScriptManager or any other tool, so that I can avoid inserting duplicate scripts? Also, is there a way to remove certain script blocks (when I am removing an item in an ItemTemplate, for example)? Big thanks in advance.
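
    One workaround worth sketching (this is a convention, not built-in ScriptManager functionality; the key, type and sentinel names below are made up): make the script itself idempotent by guarding it with a sentinel variable, so emitting it twice is harmless.

        // Sketch: wrap the registered script in a sentinel check so that
        // even if it is rendered more than once, it only defines itself once.
        string script = @"
            if (typeof window.__myControlScriptLoaded === 'undefined') {
                window.__myControlScriptLoaded = true;
                // ... actual control script goes here ...
            }";

        ScriptManager.RegisterClientScriptBlock(
            this, typeof(MyControl), "MyControlScript", script, true);

    The same trick works for per-item scripts by folding the item's ID into the sentinel name.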

  • Forcing Kernel::method_name to be called in Ruby

    - by Peter
    I want to add a foo method to Ruby's Kernel module, so I can write foo(obj) anywhere and have it do something to obj. Sometimes I want a class to override foo, so I do this:

        module Kernel
          private # important; this is what Ruby does for commands like 'puts', etc.

          def foo(x)
            if x.respond_to? :foo
              x.foo # use the overriding method.
            else
              # do something to x.
            end
          end
        end

    This is good, and works. But what if I want to use the default Kernel::foo in some other object that overrides foo? Since I've got an instance method foo, I've lost the original binding to Kernel::foo.

        class Bar
          def foo # override behaviour of Kernel::foo for Bar objects.
            foo(3)         # calls Bar::foo, not the desired call of Kernel::foo.
            Kernel::foo(3) # can't call Kernel::foo because it's private.
            # question: how do I call Kernel::foo on 3?
          end
        end

    Is there any clean way to get around this? I'd rather not have two different names, and I definitely don't want to make Kernel::foo public.
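
    A sketch of one approach that may help (it relies on the fact that every object mixes in Kernel, so an unbound method taken from Kernel can be rebound to anything): capture Kernel's foo as an UnboundMethod before it gets shadowed, then bind and call it explicitly. Calling through a Method object also sidesteps the private-visibility check.

        class Bar
          KERNEL_FOO = Kernel.instance_method(:foo) # capture the original

          def foo
            # rebind Kernel's foo to self and invoke it with the argument 3
            KERNEL_FOO.bind(self).call(3)
          end
        end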

  • Cross-domain data access in JavaScript

    - by vit
    We have an ASP.NET application hosted on our network and exposed to a specific client. This client wants to be able to import data from their own server into our application. The data is retrieved with an HTTP request and is CSV formatted. The problem is that they do not want to expose their server to our network and are requesting the import be done on the client side (all clients are on the same network as their server). So, what needs to happen is:

        1. They request an import page from our server.
        2. The client script on the page issues a request to their server to get the CSV-formatted data.
        3. The data is sent back to our application.

    This is not a challenge when both servers are on the same domain: a simple hidden iframe or something similar will do the trick, but here what I'm getting is a cross-domain "access denied" error. They also refuse to change the data format to return JSON or XML formatted data. What I have tried and learned so far:

        - Hidden iframe: "access denied".
        - XMLHttpRequest: behaviour depends on the browser security settings; it may work, may work while nagging the user with security warnings, or may not work at all.
        - Dynamic script tags: would have worked if they could have returned data in JSON format.
        - IE client data binding: the same "access denied" error.

    Is there anything else I can try before giving up and saying that it will not be possible without exposing their server to our application, changing their data format or changing their browser security settings? (A DNS trick is not an option, by the way.)
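
    For what it's worth, a sketch of one more avenue (it does require one small change on their side - a single response header, not a format change or network exposure): IE8's XDomainRequest, and the CORS support in other browsers' XMLHttpRequest, will fetch plain CSV cross-domain if the remote server adds an Access-Control-Allow-Origin header. The URL below is a placeholder.

        // Sketch only: works if their server sends
        // "Access-Control-Allow-Origin: *" (or our page's origin).
        function fetchCsv(url, onDone) {
            if (window.XDomainRequest) {             // IE8
                var xdr = new XDomainRequest();
                xdr.onload = function () { onDone(xdr.responseText); };
                xdr.open("GET", url);
                xdr.send();
            } else {                                 // browsers with CORS-capable XHR
                var xhr = new XMLHttpRequest();
                xhr.onreadystatechange = function () {
                    if (xhr.readyState === 4 && xhr.status === 200) {
                        onDone(xhr.responseText);
                    }
                };
                xhr.open("GET", url, true);
                xhr.send();
            }
        }

        fetchCsv("http://their-server.example/export.csv", function (csv) {
            // post the CSV back to our application here
        });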

  • pure/const functions in C++

    - by Albert
    Hi, I'm thinking of using pure/const functions more heavily in my C++ code (the pure/const attributes in GCC). However, I am curious how strict I should be about it and what could possibly break. The most obvious case is debug output (in whatever form; it could be on cout, in some file or in some custom debug class). I will probably have a lot of functions which don't have any side effects apart from this sort of debug output. Whether or not the debug output is made has absolutely no effect on the rest of my application. Another case I'm thinking of is the use of my own SmartPointer class. In debug mode, my SmartPointer class has a global register where it does some extra checks. If I use such an object in a pure/const function, it does have some slight side effects (in the sense that some memory will probably be different) which should not have any real side effects though (in the sense that the behaviour is in no way different). Similarly for mutexes and other stuff. I can think of many complex cases where a function has some side effects (in the sense that some memory will be different, maybe some threads are even created, some filesystem manipulation is made, etc.) but makes no computational difference (all those side effects could very well be left out and I would even prefer that). How does it work out in practice? If I mark such functions as pure/const, could it break anything (assuming the code is otherwise correct)?
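
    A minimal sketch of the hazard being asked about (GCC-specific attributes; exactly when the optimisation fires is compiler- and version-dependent): GCC is allowed to merge, hoist or delete calls to functions declared pure/const, so any side effect inside them - including debug output - can silently fire fewer times than the source suggests.

        #include <cstdio>

        // 'const' promises: the result depends only on the arguments,
        // and the function reads/writes no global memory at all.
        int square(int x) __attribute__((const));

        int square(int x) {
            std::printf("square(%d)\n", x); // side effect the compiler may elide
            return x * x;
        }

        int main() {
            int a = square(3);
            int b = square(3); // GCC may reuse the first call's result here,
                               // so "square(3)" might be printed once, not twice
            return a + b;
        }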

  • Django IntegrityError: foreign key violation upon delete

    - by Lukasz Korzybski
    I have Order and Shipment models. Shipment has a foreign key to Order:

        class Order(...):
            ...

        class Shipment(...):
            order = m.ForeignKey('Order')
            ...

    Now in one of my views I want to delete an order object along with all related objects, so I invoke order.delete(). I have Django 1.0.4 and PostgreSQL 8.4, and I use the transaction middleware, so the whole request is enclosed in a single transaction. The problem is that upon order.delete() I get:

        ...
        File "/usr/local/lib/python2.6/dist-packages/django/db/backends/__init__.py", line 28, in _commit
          return self.connection.commit()
        IntegrityError: update or delete on table "main_order" violates foreign key constraint "main_shipment_order_id_fkey" on table "main_shipment"
        DETAIL: Key (id)=(45) is still referenced from table "main_shipment".

    I checked in connection.queries that the proper queries are executed in the proper order. First the shipment is deleted, after which Django executes the delete on the order row:

        {'time': '0.000', 'sql': 'DELETE FROM "main_shipment" WHERE "id" IN (17)'},
        {'time': '0.000', 'sql': 'DELETE FROM "main_order" WHERE "id" IN (45)'}

    The foreign key has ON DELETE NO ACTION (the default) and is initially deferred. I don't know why I get the foreign key constraint violation. I also tried to register a pre_delete signal and manually delete shipment objects before the delete on the order is called, but it resulted in the same error. I could change the ON DELETE behaviour for this key in Postgres, but that would just be a hack; I wonder if anyone has a better idea of what's going on here. There is also a small detail: my Order model inherits from a Cart model, so it actually doesn't have an id field but a cart_ptr_id, and after the DELETE on the order is executed there is also a DELETE on the cart. That seems unrelated to the shipment-order problem, though, so I simplified it in the example.
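
    A diagnostic sketch that may help narrow this down (plain psql; the constraint name is taken from the error message above): check what Postgres actually recorded for the constraint, since what the model declares and what the database ended up with can disagree after migrations.

        SELECT conname,
               condeferrable,   -- can the check be deferred at all?
               condeferred,     -- is it deferred by default?
               confdeltype      -- 'a' = NO ACTION, 'c' = CASCADE, ...
        FROM pg_constraint
        WHERE conname = 'main_shipment_order_id_fkey';

    If condeferred is false, the check runs per statement rather than at commit, and any ordering mistake inside the transaction becomes visible immediately.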

  • How to detect a .NET WPF memory leak or a long GC run?

    - by Néstor Sánchez A.
    I have the following very strange situation and problem: a .NET 4.0 application for diagram editing (WPF). It runs OK on my PC: 8GB RAM, 3.0GHz, i7 quad-core. While creating objects (mostly diagram nodes and connectors, plus all the undo/redo information) Task Manager shows, as expected, some memory-usage "jumps" (up and down). These mem-usage "jumps" also continue AFTER user interaction has ended. Maybe this is the GC cleaning/reorganizing memory? To see what is going on, I've used the ANTS memory profiler, but somehow it prevents those "jumps" from happening after user interaction. PROBLEM: the app freezes/hangs after seconds or minutes of usage on some slow/weak laptops/netbooks belonging to my beta testers (under 2GHz of speed and under 2GB of RAM). I was thinking of a memory leak, but... EDIT: there is also the case where the memory usage grows and grows until collapse (only on slow machines). In a Windows XP Mode machine (a VM in Win 7) with only 512MB of RAM assigned, it works fine, without mem-usage "jumps" after user interaction (no GC cleaning?!). So I really have big trouble here, because I cannot reproduce the error, I can only see this strange behaviour (mem jumps), and the tool that is supposed to show me what is happening hides the problem (like the "observer's paradox"). Any ideas on what's happening and how to solve it?
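
    One low-tech way to tell GC housekeeping apart from a real leak without attaching a profiler (a sketch; wire it to a timer or a hidden debug command in the app): log the managed heap size and the per-generation collection counts. A leak shows GC.GetTotalMemory(true) climbing even after a forced full collection; harmless GC churn shows collection counts rising while the post-collection heap size stays flat.

        using System;
        using System.Diagnostics;

        static void LogMemorySnapshot()
        {
            // Forces a full collect first, so the number reflects live objects only.
            long live = GC.GetTotalMemory(true);
            Debug.WriteLine(string.Format(
                "live managed bytes: {0:N0}  gen0: {1}  gen1: {2}  gen2: {3}",
                live,
                GC.CollectionCount(0),
                GC.CollectionCount(1),
                GC.CollectionCount(2)));
        }

    Comparing these numbers between the fast dev box and a slow netbook would also show whether the weak machines are simply collecting far more often.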

  • How do the major C# DI/IoC frameworks compare?

    - by Slomojo
    At the risk of stepping into holy war territory, what are the strengths and weaknesses of these popular DI/IoC frameworks, and could one easily be considered the best?

        - Ninject
        - Unity
        - Castle.Windsor
        - Autofac
        - StructureMap

    Are there any other DI/IoC frameworks for C# that I haven't listed here? In the context of my use case, I'm building a client WPF app and a WCF/SQL services infrastructure; ease of use (especially in terms of clear and concise syntax), consistent documentation, good community support and performance are all important factors in my choice. Update: the resources and duplicate questions cited appear to be out of date; can someone with knowledge of all these frameworks come forward and provide some real insight? I realise that most opinion on this subject is likely to be biased, but I am hoping that someone has taken the time to study all these frameworks and has at least a generally objective comparison. I am quite willing to make my own investigations if this hasn't been done before, but I assumed this was something at least a few people had done already. Second update: if you do have experience with more than one DI/IoC container, please rank and summarise the pros and cons of each, thank you. This isn't an exercise in discovering all the obscure little containers that people have made; I'm looking for comparisons between the popular (and active) frameworks.
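
    Since "clear and concise syntax" is one of the stated criteria, a side-by-side sketch of the most basic operation - mapping an interface to an implementation - in each listed container may be useful (API shapes as of the versions current around the time of this question; details vary between releases):

        // Ninject
        var kernel = new StandardKernel();
        kernel.Bind<IFoo>().To<Foo>();

        // Unity
        var unity = new UnityContainer();
        unity.RegisterType<IFoo, Foo>();

        // Castle Windsor
        var windsor = new WindsorContainer();
        windsor.Register(Component.For<IFoo>().ImplementedBy<Foo>());

        // Autofac
        var builder = new ContainerBuilder();
        builder.RegisterType<Foo>().As<IFoo>();
        var autofac = builder.Build();

        // StructureMap
        var structureMap = new Container(x => x.For<IFoo>().Use<Foo>());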

  • NSTableView with columns bound to different NSArrayControllers

    - by Vyacheslav Karpukhin
    I have an NSTableView and two columns in it:

        NSTableColumn *column = [[[NSTableColumn alloc] initWithIdentifier:@"custId"] autorelease];
        [column bind:@"value" toObject:arrC2 withKeyPath:@"arrangedObjects.custId" options:nil];
        [table addTableColumn:column];

        column = [[[NSTableColumn alloc] initWithIdentifier:@"totalGrams"] autorelease];
        [column bind:@"value" toObject:valuationArrC withKeyPath:@"arrangedObjects.totalGrams_double" options:nil];
        [table addTableColumn:column];

    As you can see, the columns are bound to different NSArrayControllers. The first column shows correct values, but the second just shows a "(" symbol. But if I swap the columns like this:

        NSTableColumn *column = [[[NSTableColumn alloc] initWithIdentifier:@"totalGrams"] autorelease];
        [column bind:@"value" toObject:valuationArrC withKeyPath:@"arrangedObjects.totalGrams_double" options:nil];
        [table addTableColumn:column];

        column = [[[NSTableColumn alloc] initWithIdentifier:@"custId"] autorelease];
        [column bind:@"value" toObject:arrC2 withKeyPath:@"arrangedObjects.custId" options:nil];
        [table addTableColumn:column];

    then I see the values of the first column (which was second in the first example) and again "(" in the second column. I don't understand that behaviour. How can I bind two array controllers to one table?

  • How to get a template tag to auto-check a checkbox in Django

    - by Daniel Quinn
    I'm using a ModelForm class to generate a bunch of checkboxes for a ManyToManyField, but I've run into one problem: while the default behaviour automatically checks the appropriate boxes (when I'm editing an object), I can't figure out how to get that information in my own custom template tag. Here's what I've got in my model:

        ...
        from django.forms import CheckboxSelectMultiple, ModelMultipleChoiceField
        interests = ModelMultipleChoiceField(widget=CheckboxSelectMultiple(),
                                             queryset=Interest.objects.all(),
                                             required=False)
        ...

    And here's my template tag:

        @register.filter
        def alignboxes(boxes, cls):
            """
            Details on how this works can be found here:
            http://docs.djangoproject.com/en/1.1/howto/custom-template-tags/
            """
            r = ""
            i = 0
            for box in boxes.field.choices.queryset:
                r += "<label for=\"id_%s_%d\" class=\"%s\"><input type=\"checkbox\" name=\"%s\" value=\"%s\" id=\"id_%s_%d\" /> %s</label>\n" % (
                    boxes.name, i, cls, boxes.name, box.id, boxes.name, i, box.name
                )
                i = i + 1
            return mark_safe(r)

    The thing is, I'm only doing this so I can wrap some simpler markup around these boxes, so if someone knows an easier way to make that happen, I'm all ears. I'd be happy just knowing a way to access whether or not a box should be checked, though.
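
    A sketch of one way to get at the checked state inside the filter (assuming boxes is the BoundField, as the code above suggests; newer Django versions expose boxes.value(), while on older ones boxes.form.initial.get(boxes.name) carries the same information): the bound field's value is the list of selected primary keys when editing an existing object, so each rendered box can be compared against it.

        @register.filter
        def alignboxes(boxes, cls):
            # pks of the currently-chosen options, normalised to strings
            selected = set(str(pk) for pk in (boxes.value() or []))
            r = ""
            for i, box in enumerate(boxes.field.queryset):
                checked = ' checked="checked"' if str(box.pk) in selected else ''
                r += ('<label for="id_%s_%d" class="%s">'
                      '<input type="checkbox" name="%s" value="%s" '
                      'id="id_%s_%d"%s /> %s</label>\n') % (
                          boxes.name, i, cls, boxes.name, box.pk,
                          boxes.name, i, checked, box.name)
            return mark_safe(r)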

  • Entity Framework and associations between string keys

    - by fredrik
    Hi, I am new to Entity Framework, and to ORMs for that matter. In the project I'm involved in we have a legacy database with all its keys as strings, case-insensitive. We are converting to MSSQL and want to use EF as the ORM, but we have run into a problem. Here is an example that illustrates it: TableA has a primary string key, and TableB has a reference to that primary key. In LINQ we write something like:

        var result = from t in context.TableB
                     select t.TableA;

        foreach (var r in result)
            Console.WriteLine(r.someFieldInTableA);

    Say TableA contains a primary key that reads "A", and TableB contains two rows that reference TableA but with different cases in the referencing field, "a" and "A". We want both of the rows to end up in the result, but only the one with the matching case does. Using the SQL Profiler, I have noticed that both of the rows are selected. Is there a way to tell Entity Framework that the keys are case-insensitive? Edit: we have now tested this with NHibernate and come to the conclusion that NHibernate works with case-insensitive keys, so NHibernate might be a better choice for us. I am, however, still interested in finding out whether there is any way to change the behaviour of Entity Framework.

  • How Can I Find What's Causing My Transaction to Get Promoted?

    - by Damian Powell
    I have a web site which serves web services (a mixture of .asmx and WCF) and mostly uses LINQ to SQL and System.Transactions. Occasionally we see a transaction get promoted to a distributed transaction, which causes problems because our web servers are isolated from our databases in such a way that it is not possible for us to use MSDTC. I have configured tracing for System.Transactions by adding the following to my web.config:

        <system.diagnostics>
            <sources>
                <source name="System.Transactions" switchValue="Information">
                    <listeners>
                        <add name="tx"
                             type="System.Diagnostics.XmlWriterTraceListener"
                             initializeData="tx.log" />
                    </listeners>
                </source>
            </sources>
        </system.diagnostics>

    It's very interesting and shows me when the transaction is promoted, but I find that it doesn't really help me discover why. Is there an equivalent tracing mechanism for ADO.NET that will show me when connections are created, including the variables that affect pooling (user, connection string, transaction scope)?
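
    Not the tracing answer being asked for, but a sketch of the most common trigger may help when reading the traces (behaviour as of SQL Server 2005; the SQL Server 2008 provider can enlist several sequential connections with the same connection string without promoting): opening a second connection inside one TransactionScope, even to the same database, promotes the transaction to MSDTC. The connStr variable below is a placeholder.

        using (var scope = new TransactionScope())
        {
            using (var cn1 = new SqlConnection(connStr))
            {
                cn1.Open();                  // lightweight local transaction
                // ... first command ...
            }                                // cn1 closed and returned to the pool

            using (var cn2 = new SqlConnection(connStr))
            {
                cn2.Open();                  // on SQL 2005, this second Open
                                             // promotes the scope to MSDTC
                // ... second command ...
            }

            scope.Complete();
        }

    With LINQ to SQL, the same thing happens when two DataContexts (each owning its own connection) do work inside one scope.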

  • Is it safe to silently catch ClassCastException when searching for a specific value?

    - by finnw
    Suppose I am implementing a sorted collection (a simple example: a Set based on a sorted array). Consider this (incomplete) implementation:

        import java.util.*;

        public class SortedArraySet<E> extends AbstractSet<E> {
            @SuppressWarnings("unchecked")
            public SortedArraySet(Collection<E> source, Comparator<E> comparator) {
                this.comparator = (Comparator<Object>) comparator;
                this.array = source.toArray();
                Collections.sort(Arrays.asList(array), this.comparator);
            }

            @Override
            public boolean contains(Object key) {
                return Collections.binarySearch(Arrays.asList(array), key, comparator) >= 0;
            }

            private final Object[] array;
            private final Comparator<Object> comparator;
        }

    Now let's create a set of integers:

        Set<Integer> s = new SortedArraySet<Integer>(Arrays.asList(1, 2, 3), null);

    And test whether it contains some specific values:

        System.out.println(s.contains(2));
        System.out.println(s.contains(42));
        System.out.println(s.contains("42"));

    The third line above will throw a ClassCastException. Not what I want. I would prefer it to return false (as HashSet does). I can get this behaviour by catching the exception and returning false:

        @Override
        public boolean contains(Object key) {
            try {
                return Collections.binarySearch(Arrays.asList(array), key, comparator) >= 0;
            } catch (ClassCastException e) {
                return false;
            }
        }

    Assuming the source collection is correctly typed, what could go wrong if I do this?
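
    One concrete pitfall worth a sketch (an illustration, not an exhaustive answer): the catch also swallows cases where two "reasonable" key types simply aren't mutually comparable, or where the comparator itself is buggy, silently turning programming errors into false.

        // With natural ordering (comparator == null), this returns false rather
        // than throwing, even though 42L "looks like" it should be present:
        Set<Integer> s = new SortedArraySet<Integer>(Arrays.asList(1, 2, 42), null);
        System.out.println(s.contains(42L)); // Long vs Integer -> CCE -> false

    That happens to match HashSet's answer here, but a comparator that throws ClassCastException for a legitimately comparable key would be masked the same way.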

  • Any useful suggestions to figure out where memory is being free'd in a Win32 process?

    - by LeopardSkinPillBoxHat
    An application I am working with is exhibiting the following behaviour:

        - During a particular high-memory operation, the memory usage of the process under Task Manager (the Mem Usage stat) reaches a peak of approximately 2.5GB. (Note: a registry key has been set to allow this, as there is usually a maximum of 2GB per process under 32-bit Windows.)
        - After the operation is complete, the process size slowly starts decreasing at a rate of 1MB per second.

    I am trying to figure out the easiest way to quickly determine who is freeing this memory, and where it is being freed. I am having trouble attaching a memory profiler to my code, and I don't particularly want to override the new/delete operators to track the allocations/deallocations (in other words, I want to do this without recompiling my code). Can anyone offer any useful suggestions on how I could do this via the Visual Studio debugger? Update: I should also mention that it's a multi-threaded application, so pausing the application and analysing the call stack through the debugger is not the most desirable option. I considered freezing different threads one at a time to see if the memory stops reducing, but I'm fairly certain this would cause the application to crash.

  • MSSQL: Views that use SELECT * need to be recreated if the underlying table changes

    - by cbp
    Is there a way to make views that use SELECT * stay in sync with the underlying table? What I have discovered is that if changes are made to the underlying table from which all columns are selected, the view needs to be 'recreated'. This can be achieved simply by running an ALTER VIEW statement. However, this can lead to some pretty dangerous situations. If you forget to recreate the view, it will not be returning the correct data. In fact, it can be returning seriously messed-up data, with the names of the columns all wrong and out of order. Nothing will pick up that the view is wrong unless you happen to have it covered by a test, or a data integrity check fails. For example, Red Gate SQL Compare doesn't pick up the fact that the view needs to be recreated. To replicate the problem, try these statements:

        CREATE TABLE Foobar (Bar varchar(20))
        CREATE VIEW v_Foobar AS SELECT * FROM Foobar
        INSERT INTO Foobar (Bar) VALUES ('Hi there')
        SELECT * FROM v_Foobar

        ALTER TABLE Foobar ADD Baz varchar(20)
        SELECT * FROM v_Foobar

        DROP VIEW v_Foobar
        DROP TABLE Foobar

    I am tempted to stop using SELECT * in views, which will be a PITA. Is there a setting somewhere, perhaps, that could fix this behaviour?
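
    Two built-in mechanisms are worth sketching here (standard SQL Server features, though whether either fits the workflow is a judgment call): sp_refreshview rebinds a view's column metadata after the table changes, and WITH SCHEMABINDING prevents the table from being altered out from under the view in the first place.

        -- Option 1: refresh the view's metadata after altering the table
        EXEC sp_refreshview 'v_Foobar';

        -- Option 2: schema-bind the view; ALTER TABLE on a referenced column
        -- then fails until the view is dropped or altered first.
        -- (SCHEMABINDING requires two-part names and disallows SELECT *.)
        CREATE VIEW v_Foobar WITH SCHEMABINDING AS
            SELECT Bar FROM dbo.Foobar;

    Note that option 2 forces the explicit column list anyway, which may be the strongest argument for giving up SELECT * in views.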

  • maximum memory which malloc can allocate!

    - by Vikas
    I was trying to figure out the maximum amount of memory I can malloc on my machine (1GB RAM, 160GB HD, Windows platform). I read that the maximum memory malloc can allocate is limited to physical memory (on the heap). Also, when a program's memory consumption exceeds a certain level, the computer stops working because other applications do not get the memory they require. So to confirm this, I wrote a small program in C:

        int main() {
            int *p;
            while (1) {
                p = (int *) malloc(4);
                if (!p)
                    break;
            }
        }

    I hoped there would come a point when memory allocation failed and the loop would break, but my computer hung, as it was effectively an infinite loop. I waited for about an hour and finally had to forcibly shut down my computer. Some questions:

        - Does malloc also allocate memory from the HD?
        - What was the reason for the above behaviour? Why didn't the loop break at any point in time? Why wasn't there any allocation failure?
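
    A sketch of a gentler probe (C, same spirit as the program above but bounded): instead of looping on 4-byte allocations - which mostly measures allocator overhead and drives the machine into swapping - binary-search for the largest single block malloc will hand out, freeing each attempt immediately.

        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            size_t lo = 0, hi = (size_t)1 << 31; /* probe up to 2GB */
            while (lo + 1 < hi) {
                size_t mid = lo + (hi - lo) / 2;
                void *p = malloc(mid);
                if (p) {
                    free(p);   /* give it straight back; we only want the size */
                    lo = mid;
                } else {
                    hi = mid;
                }
            }
            printf("largest single malloc: %lu bytes\n", (unsigned long)lo);
            return 0;
        }

    Because the memory is never touched, this measures available address space rather than physical RAM; Windows only commits pages when they are written, which is also part of why the original loop ran so long.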

  • ASP.NET GridView "Client-Side Confirmation when Deleting" stopped working on ie - how come?

    - by tarnold
    A few months ago, I programmed an ASP.NET GridView with a custom "Delete" LinkButton and client-side JavaScript confirmation according to this MSDN article: http://msdn.microsoft.com/en-us/library/bb428868.aspx (published in April 2007), or e.g. http://stackoverflow.com/questions/218733/javascript-before-aspbuttonfield-click. The code looks like this:

        <ItemTemplate>
            <asp:LinkButton ID="deleteLinkButton" runat="server"
                Text="Delete"
                OnCommand="deleteLinkButtonButton_Command"
                CommandName='<%# Eval("id") %>'
                OnClientClick='<%# Eval("id", "return confirm(\"Delete Id {0}?\")") %>' />
        </ItemTemplate>

    Surprisingly, "Cancel" no longer works in my IE (version 6.0.2900.2180.xpsp_sp2_qfe.080814-1242): it always deletes the row. With Opera (version 9.62) it still works as expected and as described in the MSDN article. More surprisingly, on a fellow worker's machine with the same IE version it still works ("Cancel" does not delete the row). The generated code looks like:

        <a onclick="return confirm(...);" href="javascript:__doPostBack('...')">

    As confirm(...) returns false on "Cancel", I expect the __doPostBack event in the href not to be fired. Are there any strange IE settings I might have accidentally changed? What else could be the cause of this weird behaviour? Or is this a "please reinstall WinXP" issue?

  • Testing a method used from an abstract class

    - by Bas
    I have to unit test a method (runMethod()) that uses a method from an inherited abstract class to create a boolean. The method in the abstract class uses XmlDocuments and nodes to retrieve information. The code looks somewhat like this (it is extremely simplified, but it states my problem):

        namespace AbstractTestExample
        {
            public abstract class AbstractExample
            {
                public string propertyValues;
                protected XmlNode propertyValuesXML;

                protected string getProperty(string propertyName)
                {
                    XmlDocument doc = new XmlDocument();
                    doc.Load(new System.IO.StringReader(propertyValues));
                    propertyValuesXML = doc.FirstChild;
                    XmlNode node = propertyValuesXML.SelectSingleNode(
                        String.Format("property[name='{0}']/value", propertyName));
                    return node.InnerText;
                }
            }

            public class AbstractInheret : AbstractExample
            {
                public void runMethod()
                {
                    bool addIfContains = (getProperty("AddIfContains") == null
                                          || getProperty("AddIfContains") == "True");
                    // Do something with the boolean
                }
            }
        }

    So, the code wants to get a property from a created XmlDocument and uses it to form the result into a boolean. Now my question is: what is the best way to make sure I have control over the boolean's resulting behaviour? I'm using Moq for possible mocking. I know this code example is probably a bit fuzzy, but it's the best I could show. Hope you guys can help.
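
    Since Moq is already in the picture, one sketch that may fit (it assumes getProperty can be marked virtual, since Moq cannot intercept non-virtual members): mock the protected method directly via Moq's Protected() extension and pin the value runMethod() sees, while CallBase keeps the real runMethod() logic running.

        using Moq;
        using Moq.Protected;

        var mock = new Mock<AbstractInheret> { CallBase = true };

        // Force getProperty("AddIfContains") to return "True" for this test.
        mock.Protected()
            .Setup<string>("getProperty", ItExpr.IsAny<string>())
            .Returns("True");

        mock.Object.runMethod(); // addIfContains evaluates to true

    The alternative, if changing the class is acceptable, is to push the XML access behind an injectable interface and stub that instead.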

  • Permuting a binary tree without the use of lists

    - by Banang
    I need to find an algorithm for generating every possible permutation of a binary tree, and need to do so without using lists (this is because the tree itself carries semantics and restraints that cannot be translated into lists). I've found an algorithm that works for trees with a height of three or less, but whenever I get to greater heights, I lose one set of possible permutations per height added. Each node carries information about its original state, so that one node can determine if all possible permutations have been tried for that node. Also, the node carries information on whether or not it's been 'swapped', i.e. if it has seen all possible permutations of its subtree. The tree is left-centered, meaning that the right node should always (except in some cases that I don't need to cover for this algorithm) be a leaf node, while the left node is always either a leaf or a branch. The algorithm I'm using at the moment can be described sort of like this:

        if the left child node has been swapped
            swap my right node with the left child node's right node
            set the left child node as 'unswapped'
            if the current node is back to its original state
                swap my right node with the lowest left node's right node
                swap the lowest left node's two child nodes
                set my left node as 'unswapped'
                set my left child node to use this as its original state
                set this node as swapped
                return null
            return this
        else if the left child has not been swapped
            if the result of trying to permute the left child is null
                return the permutation of this node
            else
                return the permutation of the left child node
        if this node has a left node and a right node that are both leaves
            swap them
            set this node to be 'swapped'

    The desired behaviour of the algorithm would be something like this:

                        branch
                        /    |
                   branch    3
                   /    |
              branch    2
              /    |
             0      1

                        branch
                        /    |
                   branch    3
                   /    |
              branch    2
              /    |
             1      0            <-- first swap

                        branch
                        /    |
                   branch    3
                   /    |
              branch    1        <-- second swap
              /    |
             2      0

                        branch
                        /    |
                   branch    3
                   /    |
              branch    1
              /    |
             0      2            <-- third swap

                        branch
                        /    |
                   branch    3
                   /    |
              branch    0        <-- fourth swap
              /    |
             1      2

    and so on... Sorry for the ridiculously long and waddly explanation; I would really, really appreciate any sort of help you guys could offer me. Thanks a bunch!

  • Issue scrolling table view with content inset

    - by Rog
    Hi all, I am experiencing a weird scrolling issue and I was hoping someone could give me a hand in trying to identify why this is happening. I have included the part of my code that I think is relevant to the question, but am happy to update this post with whatever else is needed. I have implemented a pull-to-refresh view in the table view's content inset area. The refresh fires an async NSURLConnection that pulls data from a web server, parses the relevant information and refreshes the table as required. This is where the refresh process kicks off:

        - (void)scrollViewDidEndDragging:(UIScrollView *)scrollView willDecelerate:(BOOL)decelerate {
            if (scrollView.contentOffset.y <= -65.0f && !self.reloading) {
                self.reloading = YES;
                [self reloadTableViewDataSource];
                [refreshHeaderView setState:EGOOPullRefreshLoading];
                [UIView beginAnimations:nil context:NULL];
                [UIView setAnimationDuration:0.2];
                self.tableView.contentInset = UIEdgeInsetsMake(60.0f, 0.0f, 0.0f, 0.0f);
                [UIView commitAnimations];
            }
        }

    The problem is that if I start to scroll while the content inset is "visible" (i.e. during reload), I get this weird behaviour where my table sections do not scroll all the way to the top - see the screenshots for a clear visual of what I am trying to describe. Has anyone experienced this before? Any ideas on what I should be looking at to try and fix it? Many thanks in advance, Rog. The second screenshot shows the result if I start scrolling the table: the orange bit at the top of the image is the actual navigation bar, where I would expect the table section (date 1 December 2010) to be.

  • Partial template specialization for more than one typename

    - by Matt Joiner
    In the following code, I want functions (Ops) that have a void return to instead be considered to return true. The type Retval and the return value of Op always match. I'm not able to discriminate using the type traits shown here, and attempts to create a partial template specialization based on Retval have failed due to the presence of the other template parameters, Op and Args. How do I specialize only some parameters in a template specialization without getting errors? Is there any other way to alter behaviour based on the return type of Op?

        template <typename Retval, typename Op, typename... Args>
        Retval single_op_wrapper(
                Retval const failval,
                char const *const opname,
                Op const op,
                Cpfs &cpfs,
                Args... args) {
            try {
                CallContext callctx(cpfs, opname);
                Retval retval;
                if (std::is_same<bool, Retval>::value) {
                    (callctx.*op)(args...);
                    retval = true;
                } else {
                    retval = (callctx.*op)(args...);
                }
                assert(retval != failval);
                callctx.commit(cpfs);
                return retval;
            } catch (CpfsError const &exc) {
                cpfs_errno_set(exc.fserrno);
                LOGF(Info, "Failed with %s", cpfs_errno_str(exc.fserrno));
            }
            return failval;
        }
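
    A sketch of one way around this (function templates cannot be partially specialized, but they can be overloaded, and partial ordering prefers the void overload for void-returning member functions): split the invocation itself into a helper pair and let the wrapper stay generic. CallContext here stands in for the real class.

        // More specialized: chosen when the op's return type is void.
        template <typename... Args>
        bool invoke_op(CallContext &ctx, void (CallContext::*op)(Args...), Args... args) {
            (ctx.*op)(args...);  // void-returning op: success becomes 'true'
            return true;
        }

        // General case: forwards the op's own return value.
        template <typename Retval, typename... Args>
        Retval invoke_op(CallContext &ctx, Retval (CallContext::*op)(Args...), Args... args) {
            return (ctx.*op)(args...);
        }

        // Inside single_op_wrapper, the if/else then collapses to:
        //     Retval retval = invoke_op(callctx, op, args...);

    The key design point is that the branch the original code tried to express with a runtime if is resolved at overload-resolution time, so the compiler never instantiates retval = void-expression.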

  • Message sent to deallocated instance which has never been released

    - by Jakub
    Hello, I started dealing with NSOperations and (as usual with concurrency) I'm observing strange behaviour. In my class I've got an instance variable:

        NSMutableArray *postResultsArray;

    When one button in the UI is pressed I initialize the array:

        postResultsArray = [NSMutableArray array];

    and set up the operations (together with dependencies). In the operations I create a custom object and try to add it to the array:

        PostResult *result = [[PostResult alloc] initWithServiceName:@"Sth" andResult:someResult];
        [self.postResultsArray addObject:result];

    and while adding I get:

        -[CFArray retain]: message sent to deallocated instance 0x3b40c30

    which is strange, as I don't release the array anywhere in my code (I did, but when the problem started to appear I commented out all the release calls to be sure they are not the cause). I also used to have a @synchronized section like below:

        PostResult *result = [[PostResult alloc] initWithServiceName:@"Sth" andResult:someResult];
        @synchronized (self.postResultsArray) {
            [self.postResultsArray addObject:result];
        }

    but the problem was the same (however, the error was then for the synchronized operation). Any ideas what I may be doing wrong?
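
    One thing that stands out in the snippets above (worth checking, though only the full code would confirm it): [NSMutableArray array] returns an autoreleased object, and assigning it straight to the ivar never retains it, so by the time a background operation touches the array it may already have been deallocated - which matches "deallocated instance which has never been released" from the title. A sketch of the manual retain/release fix:

        // Either retain explicitly when initializing the ivar...
        postResultsArray = [[NSMutableArray array] retain];

        // ...or, cleaner, declare a retaining property and go through the setter:
        // @property (retain) NSMutableArray *postResultsArray;
        self.postResultsArray = [NSMutableArray array];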

  • ExecutorService memory leak on exception

    - by TofuBeer
    I am having a hard time tracking this down since the profiler keeps crashing (a HotSpot error). Before I go too deep into figuring it out, I'd like to know if I really have a problem or not :-) I have a few thread pools created via:

        Executors.newFixedThreadPool(10);

    The threads connect to different web sites and, on occasion, I get connection refused and wind up throwing an exception. When I later call Future.get() to retrieve the result, it catches the ExecutionException that wraps the exception thrown when the connection could not be made. The program uses a fairly constant amount of memory up until the point when the exceptions get thrown (they tend to happen in batches when a particular site is overloaded). After that point the memory again remains constant, but at a higher level. So my question is along these lines: is the memory behaviour (reported by "top" on Unix) expected because the exceptions just triggered something, or do I probably have an actual leak that I'll need to track down? Additionally, when Future.get() throws an exception, is there anything else I need to do besides catch the exception (such as call Future.cancel() on it)?
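
    A sketch that may help separate "the JVM grabbed more address space" from "my objects are leaking" (plain java.lang APIs; log this periodically alongside top's numbers): top reports the process footprint, which the JVM rarely gives back to the OS, while used-heap-after-GC is what actually tracks live objects.

        Runtime rt = Runtime.getRuntime();
        System.gc(); // a hint only, but good enough for a rough trend line
        long used = rt.totalMemory() - rt.freeMemory();
        System.out.printf("used heap: %,d bytes (total: %,d, max: %,d)%n",
                used, rt.totalMemory(), rt.maxMemory());
        // If 'used' returns to its old baseline after the exception batches
        // while top stays high, the JVM simply grew the heap; it is not a leak.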
