Search Results

Search found 26908 results on 1077 pages for 'asynchronous wcf call'.

  • Stop Child Elements from Inheriting Parent Style, Broken Tabs JavaScript

    - by WillingLearner
    I am using the flowplayer jQuery tabs plugin: http://flowplayer.org/tools/tabs/index.html

    Problem 1: when I place child elements inside the pane divs, they inherit the style of the containing pane div. This breaks my layout to pieces, and I need to know how to override the style so it applies only to the container pane div itself.

    Problem 2: I'm also having a devil of a time trying to wire up two different sets of tabs and panes. I'm not getting the classes and IDs right, or the JavaScript, or something along those lines. How would I set this up, CSS-wise and JavaScript-wise, so I can call (tabs A / panes A) and then (tabs B / panes B)? My current JavaScript is:

        <!-- This JavaScript snippet activates the tabs -->
        <script>
        // perform JavaScript after the document is scriptable.
        $(function() {
            // setup ul.tabs to work as tabs for each div directly under div.panes1
            $("ul.tabs").tabs("div.panes1 > div");
            //$("ul.tabs.myprofile").tabs("div.panes > div");
        });
        </script>

    This only works for one set of tabs and panes on a page, which doesn't help me much if I want two totally different sets. I've gone over the documentation many times but I'm still not getting it. Please help me find a solution to BOTH of my problems. Thanks.
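
    A minimal sketch of one way to wire up two independent tab sets, assuming the plugin accepts any selector pair (the A/B class names are made up):

        // Each tab set is scoped by its own class names, so the two
        // widgets never touch each other's panes.
        $(function() {
            $("ul.tabsA").tabs("div.panesA > div");
            $("ul.tabsB").tabs("div.panesB > div");
        });

    Styling can be scoped the same way, e.g. a rule on div.panesA > div that deeper descendants override.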

  • Rails - eager load the number of associated records, but not the records themselves

    - by Max Williams
    I have a page that's taking ages to render out. Half of the time (3 seconds) is spent on a .find call with a bunch of eager-loaded associations. All I actually need is the number of associated records in each case, to display in a table: I don't need the actual records themselves. Is there a way to eager load just the count? Here's a simplified example:

        @subjects = Subject.find(:all, :include => [:questions])

    In my table, each row (i.e. each subject) just shows the values of the subject's fields and the number of associated questions. Can I optimise the above find call to suit these requirements? I thought about using a group field, but my full call includes a few different associations, some of them second-order, so I don't think group by will work:

        @subjects = Subject.find(:all,
          :include => [{:questions => :tags}, {:quizzes => :tags}],
          :order => "subjects.name")

    :tags in this case is a second-order association, via taggings. Here are my associations in case it's not clear what's going on:

        Subject
          has_many :questions
          has_many :quizzes

        Question
          belongs_to :subject
          has_many :taggings
          has_many :tags, :through => :taggings

        Quiz
          belongs_to :subject
          has_many :taggings
          has_many :tags, :through => :taggings

    Grateful for any advice - max
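
    A sketch of one common alternative: skip the :include and fetch the counts with grouped COUNT queries (Rails 2-era syntax to match the question; no Question or Quiz rows are instantiated):

        @subjects = Subject.find(:all, :order => "subjects.name")
        # Each returns a hash of {subject_id => count} from a single query.
        question_counts = Question.count(:group => :subject_id)
        quiz_counts     = Quiz.count(:group => :subject_id)

    The view then looks up question_counts[subject.id] || 0 per row. A counter_cache column on the association is the other standard option if the counts are read often.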

  • Designing DAOs around a JSON API for iPhone Development

    - by Bob Spryn
    So I've been trying to design a clean way of grabbing data for my models in iPhone land. All the data for my application comes from JSON APIs. Right now, when a VC needs some models, it makes the JSON call itself (async) and builds the models when it receives the data. It works, but I'm trying to think of a cleaner method whereby DAOs retrieve the information and return the models, all in an async manner.

    My initial thought is to build a protocol for my DAOs, such that the VC would instantiate a DAO and make itself the delegate. When data is requested ([DAOinstance getAllUsers]), the DAO would do all the network request work, and when it had the data it would call a method on its delegate (the VC) to pass the data back. I think that's a cool solution, but I realized that if I needed to use the same DAO for different purposes in the same VC, my delegate method would have to branch its logic depending on which DAO instance initiated the request.

    So my second thought was to pass 'handler' selectors to the DAO object, a la typical JavaScript patterns. Instead of a formal protocol, I would say something like [DAOinstance getAllUsersWithSelector:"TheHandlerFunctionOnMyVC:"]. When the DAO completed its network activity, it would call the passed selector on the VC and pass the data back.

    So am I headed in the wrong direction entirely here? Seems like maybe an OK way to go. Any pointers or articles on designing this kind of data layer would be sweet. Thanks! Bob
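
    A third variant worth sketching: completion blocks (iOS 4+) keep each request's handling at the call site, which sidesteps both the delegate branching and the stringly-typed selectors. Names below are hypothetical, including the fetchJSONFromPath:completion: helper:

        // In the DAO: one method per query, each taking its own handler.
        typedef void (^UsersHandler)(NSArray *users, NSError *error);

        - (void)getAllUsersWithHandler:(UsersHandler)handler {
            [self fetchJSONFromPath:@"/users" completion:^(id json, NSError *error) {
                if (error) { handler(nil, error); return; }
                handler([self usersFromJSON:json], nil); // build models from raw JSON
            }];
        }

        // In the VC: two calls to the same DAO no longer collide.
        [dao getAllUsersWithHandler:^(NSArray *users, NSError *error) {
            if (users) [self reloadTableWithUsers:users];
        }];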

  • How to execute PHPUnit?

    - by user1280667
    PHPUnit can execute a script like this: phpunit --log-junit report.xml ClassName filename.php (I need the XML report for my continuous integration platform). My problem is that I work with an MVC framework where all pages are called through pathofproject/indexCLI.php module=moduleName class=className etc., with 3 arguments in total (or path/index.php argum=... when using a URL). So I can't run phpunit pathofproject/indexCLI.php module=moduleName class=className.

    I can think of a few possible approaches, and I hope you can help me make one of them work:

    - Can the phpunit command be used with this style of invocation at all, given that by default it expects a class name and a file name?
    - When I call the same entry point in a shell, php path/indexCLI.php module="blabla" etc., I get the assertion results in my console, but I can't use the JUnit XML option. Can I?
    - My last resort is to call the URL in a browser such as Mozilla, but I don't know how to tell the PHPUnit runner to choose the XML report instead of the HTML report.

    The aim, for me, is to get an XML report.
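
    A sketch of one workaround: wrap the framework call in a normal test class so the standard runner (and therefore --log-junit) can drive it. Paths and names here are assumptions:

        <?php
        // ModuleWrapperTest.php
        class ModuleWrapperTest extends PHPUnit_Framework_TestCase
        {
            public function testModule()
            {
                // Simulate the arguments the front controller expects.
                $_GET['module'] = 'moduleName';
                $_GET['class']  = 'className';
                require 'pathofproject/indexCLI.php';
            }
        }
        // Run with: phpunit --log-junit report.xml ModuleWrapperTest ModuleWrapperTest.php

    This only produces useful JUnit output if the framework's checks are (or can be rewritten as) PHPUnit assertions; plain echo output will not show up in the XML.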

  • for loop not working right

    - by joseph
    Hello. I call the addNotify() method in the class posted here. The problem is that when I call addNotify() as it is in the code, setKeys(objs) does nothing; nothing appears in the explorer of my running app. But when I call addNotify() without the loop (for int...), adding only one item to the ArrayList, it shows that one item correctly. Does anybody know where the problem can be? See the code:

        class ProjectsNode extends Children.Keys {
            private ArrayList objs = new ArrayList();

            public ProjectsNode() {
            }

            @Override
            protected Node[] createNodes(Object o) {
                MainProject obj = (MainProject) o;
                AbstractNode result = new AbstractNode(new DiagramsNode(), Lookups.singleton(obj));
                result.setDisplayName(obj.getName());
                return new Node[] { result };
            }

            @Override
            protected void addNotify() {
                // this loop causes nothing to appear in my explorer,
                // but when I replace it with the single line
                // objs.add(new MainProject("project1000")); it shows that one item
                for (int i = 0; i == 10; i++) {
                    objs.add(new MainProject("project1000"));
                }
                setKeys(objs);
            }
        }
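
    The middle clause of a Java for statement is a continuation condition, not a stop value: i == 10 is false on the very first check, so the body never executes and objs stays empty. A sketch of the corrected loop:

        // Runs for i = 0..9, adding ten projects before setKeys() is called.
        for (int i = 0; i < 10; i++) {
            objs.add(new MainProject("project1000"));
        }
        setKeys(objs);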

  • "NT AUTHORITY\ANONYMOUS LOGON" error in Windows 7 (ASP.NET & Web Service)

    - by Tony_Henrich
    I have an ASP.NET web app which works fine on a Windows XP machine in a domain. I am porting it to a standalone Windows 7 machine. The app uses a web service which makes a call to SQL Server. The web server (IIS 7.5) and SQL Server are on the same standalone machine. I enabled Windows authentication for the website and the web service. The web service uses a trusted-connection connection string, and its credentials use System.Net.CredentialCache.DefaultCredentials. I noticed the username, password and domain name are blank after that call! The web service and web site use an application pool whose identity is "Network Service". I am getting an "NT AUTHORITY\ANONYMOUS LOGON" exception on the database call in the web service; I assume it's related to the blank credentials. I was expecting the ASPNET user to be the security token presented to the database. Why is this not happening? (Usually this happens when SQL Server and the web server are on two different machines in a domain - delegation and double hopping - but in my case everything is on one dev box.)
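
    Two things worth checking, sketched as assumptions rather than a definitive diagnosis: on IIS 7.5 the ASPNET account is no longer used (the pool identity is), and ANONYMOUS LOGON on a trusted connection usually means impersonation is on while the authenticated identity cannot make the extra hop. With everything on one box, letting the call go out as the pool identity is the simplest arrangement:

        <!-- web.config of the web service (hypothetical) -->
        <system.web>
          <authentication mode="Windows" />
          <!-- false: the SQL connection is made as the app pool identity
               (NT AUTHORITY\NETWORK SERVICE), which can be granted a login. -->
          <identity impersonate="false" />
        </system.web>

    The matching step is to grant NT AUTHORITY\NETWORK SERVICE (or a dedicated pool identity) a SQL Server login with rights on the database.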

  • How to benchmark on multi-core processors

    - by Pascal Cuoq
    I am looking for ways to perform micro-benchmarks on multi-core processors.

    Context: At about the same time desktop processors introduced out-of-order execution that made performance hard to predict, they, perhaps not coincidentally, also introduced special instructions to get very precise timings. Examples of these instructions are rdtsc on x86 and mftb on PowerPC. These instructions gave timings that were more precise than could ever be allowed by a system call, and allowed programmers to micro-benchmark their hearts out, for better or for worse. On yet more modern processors with several cores, some of which sleep some of the time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been dozing off when the alternative solutions were explained.

    Question: Some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is for any operating system, please let us know in an answer. Some systems may allow turning off cores, leaving only one running. I know Mac OS X Leopard does when the right Preference Pane is installed from the Developer Tools. Do you think that this makes rdtsc safe to use again?

    More context: Please assume I know what I am doing when trying to do a micro-benchmark. If you are of the opinion that if an optimization's gains cannot be measured by timing the whole application then it's not worth optimizing, I agree with you, but I cannot time the whole application until the alternative data structure is finished, which will take a long time. In fact, if the micro-benchmark were not promising, I could decide to give up on the implementation now; I need figures to provide in a publication whose deadline I have no control over.
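
    For the API-call part of the question, one portable candidate is sketched below: POSIX clock_gettime with CLOCK_MONOTONIC, which the kernel keeps consistent across cores (coarser than rdtsc, typically tens of nanoseconds, but immune to core migration and sleep):

        #include <stdio.h>
        #include <time.h>

        /* Nanoseconds between two monotonic timestamps. */
        static double elapsed_ns(struct timespec a, struct timespec b)
        {
            return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
        }

        int main(void)
        {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            /* ... code under test ... */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            printf("%.0f ns\n", elapsed_ns(t0, t1));
            return 0;
        }

    On Linux this links with -lrt on older glibc; Mac OS X of that era lacks clock_gettime and offers mach_absolute_time instead.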

  • Dynamic Hierarchical JavaScript Object Loop

    - by user1684586
        var treeData = {"name": "A", "children": [
            {"name": "B", "children": [
                {"name": "C", "children": []}
            ]}
        ]};

    The children array should start empty, and should then be populated with however many nodes are needed, a number that comes from a dynamic value passed in. I would like to build the hierarchy dynamically, with each node created as a layer/level in the hierarchy having its own array of child nodes, forming a tree structure. This hierarchy structure is described in the code above; it has three levels simply to demonstrate the layout of the hierarchy of values. There should be a root node and an undefined number of nodes and levels making up the hierarchy; nothing is fixed besides the root node. I do not need to read the hierarchy, I need to construct it. The structure should start as {"name": "A", "children": []} and every new node (level) would be created as {"name": "A", "children": [HERE - {"name": "A", "children": []}]}, in the child array, going deeper and deeper. Basically the structure should have no values before the call, except maybe the root node. After the function call it should comprise the required nodes, whose number may vary with every call. Every child array will contain one or more node values, and there should be a minimum of 2 node levels, including the root. It should initially be a blank canvas, with no predefined array values.
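
    A minimal sketch of a recursive builder, assuming the dynamic value is a branching factor plus a depth (the naming scheme for generated nodes is made up):

        // Each node gets `breadth` children until `depth` is exhausted.
        function buildTree(name, depth, breadth) {
            var node = { name: name, children: [] };
            if (depth > 0) {
                for (var i = 0; i < breadth; i++) {
                    node.children.push(buildTree(name + "." + i, depth - 1, breadth));
                }
            }
            return node;
        }

        var treeData = buildTree("A", 2, 3); // a root plus two populated levels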

  • LINQ to SQL repository pattern: some doubts

    - by MindlessProgrammer
    I am using the repository pattern with LINQ to SQL, with one repository class per table. I want to know whether I am doing this in a good/standard way:

        ContactRepository
          Contact GetByID()
          List<Contact> GetAll()

        ContactTagRepository
          List<ContactTag> Get(long contactID)
          List<ContactTag> GetAll()
          List<ContactTagDetail> GetAllDetails()

        class ContactTagDetail
        {
            public Contact Contact { get; set; }
            public ContactTag ContactTag { get; set; }
        }

    When I need a contact I call a method on ContactRepository, and likewise for a tag on ContactTagRepository. But when I need a contact and its tags together, I call GetAllDetails() on ContactTagRepository. It does not return the ContactTag entity generated by the ORM; instead it returns a ContactTagDetail projection containing both the Contact and the ContactTag generated by the ORM. I know I could simply call GetAll() on ContactTagRepository and access Contact.ContactTag, but since this is LINQ to SQL there is no way to defer-load at the query level, so whenever I need an entity together with a related entity I create a projection class. My other doubt is where the method really belongs: I could put it in either the Contact or the ContactTag repository - e.g. GetAllWithTags() or something on ContactRepository - but I am doing it in ContactTagRepository. What are your suggestions?
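
    A sketch of what GetAllDetails() might look like with the projection approach described, assuming a generated DataContext with the usual association properties:

        // One shaped query; no reliance on deferred loading.
        public List<ContactTagDetail> GetAllDetails()
        {
            var query =
                from tag in db.ContactTags
                select new ContactTagDetail
                {
                    Contact = tag.Contact, // association back to the parent row
                    ContactTag = tag
                };
            return query.ToList();
        }

    An alternative that keeps the ORM entities is DataLoadOptions (LoadWith) set on the DataContext before querying, which eager-loads the association in one go.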

  • window.setInterval from inside an object

    - by Keith Rousseau
    I'm currently having an issue where I have a JavaScript object that is trying to use setInterval to call a private function inside itself. However, it can't find the function when I try to call it. I have a feeling that it's because window.setInterval is trying to call into the object from outside without a reference to it. FWIW, I can't get it to work with the function being public either. The basic requirement is that I may need multiple instances of this object to track multiple uploads occurring at once. If you have a better design than the current one, or can get the current one working, I'm all ears. The following code is meant to continuously ping a web service to get the status of my file upload:

        var FileUploader = function(uploadKey) {
            var intervalId;
            var UpdateProgress = function() {
                $.get('someWebService', {}, function(json) {
                    alert('success');
                });
            };
            return {
                BeginTrackProgress: function() {
                    intervalId = window.setInterval('UpdateProgress()', 1500);
                },
                EndTrackProgress: function() {
                    clearInterval(intervalId);
                }
            };
        };

    This is how it is being called:

        var fileUploader = new FileUploader('myFileKey');
        fileUploader.BeginTrackProgress();
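
    The likely culprit, sketched below: a string passed to setInterval is evaluated in global scope, where the closure-private UpdateProgress does not exist. Passing the function reference instead keeps the closure, and each FileUploader instance keeps its own:

        BeginTrackProgress: function() {
            // The function value carries its closure; no global lookup happens.
            intervalId = window.setInterval(UpdateProgress, 1500);
        }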

  • COCOS2D - Animation Stops During Move

    - by maiko
    Hey guys! I am trying to run a "walk" style animation on my main game sprite. The animations work fine, and my sprite is hooked up to my joystick all fine and dandy. However, I think where I set up the call for my walk animations is wrong, because every time the sprite is moving, the animation stops. I know putting the animation in such a weak if statement is probably bad, but please tell me how I could get my sprite to animate properly while it is being moved by the joystick. The sprite faces the right direction, so I can tell the action's first frame is being called; however, it doesn't animate until I stop touching my joystick. Here is how I call the action:

        // WALK LEFT
        if (joypadCap.position.x <= 69 /* && joypadCap.position.y < 100 && joypadCap.position.y > 40 */) {
            [tjSprite runAction:walkLeft];
        };

        // WALK RIGHT
        if (joypadCap.position.x >= 71 /* && joypadCap.position.y < 100 && joypadCap.position.y > 40 */) {
            [tjSprite runAction:walkRight];
        };

    THIS is how the joystick controls the character:

        CGPoint newLocation = ccp(tjSprite.position.x - distance/8 * cosf(touchAngle),
                                  tjSprite.position.y - distance/8 * sinf(touchAngle));
        tjSprite.position = newLocation;

    Please help. Any alternative ways to call the character's walk animation would be greatly appreciated!
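
    A common cause worth sketching: if this handler runs on every touch event/frame, each runAction: restarts the animation at its first frame, so the walk cycle never advances until the joystick is released. Guarding the call so the action starts only once is the usual fix (assuming walkLeft/walkRight are repeating actions):

        // WALK LEFT - start the cycle only if nothing is already running.
        if (joypadCap.position.x <= 69 && [tjSprite numberOfRunningActions] == 0) {
            [tjSprite runAction:walkLeft];
        }

    numberOfRunningActions is on CCNode; a bookkeeping flag or [tjSprite stopAllActions] on direction change works as well.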

  • Erlang design of a card game [closed]

    - by user601836
    I would like to discuss with you a possible implementation of a card game in Erlang. The only full example I found online is OpenPoker. I would like to create one myself, so here is the implementation I have in mind:

    - A gen_server to represent a deck: when started, it creates a deck of cards (shuffled) and stores it in its state. It provides a handle_call (draw_card); a sketch of this server follows the list.
    - A gen_server to represent the chat room. It stores in its state the registered names of player processes (e.g. player1, player2, luke, etc.). It exports a handle_cast to join the chat (executed by default when somebody successfully joins the game) and one to broadcast a chat message to all users by calling a handle_cast on each gen_server representing a player.
    - A gen_fsm to represent a game instance. It has two states (wait_join and turn) and exports join/1 to join the game, play_card/2 and send_msg/2. One parameter is the pid of the player process.
    - A gen_server to represent the player. It exports only start_link/1, where the parameter is the name used to register the process (inside init I call the gen_fsm's join). It has various handle_calls (e.g. get_hand, draw_card) and handle_casts (e.g. play_card, deliver_msg and send_msg).
    - A gen_server to represent the main process. It exports join_game/1 (which calls player:start_link/1), send_msg/2 (to call the player's send_msg) and play_card/3 (to call the player's play_card).

    What do you think of this architecture? Thanks in advance.
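
    A minimal sketch of the deck server described above (card representation and shuffle strategy are my assumptions):

        -module(deck_server).
        -behaviour(gen_server).
        -export([start_link/0, draw_card/1]).
        -export([init/1, handle_call/3, handle_cast/2,
                 handle_info/2, terminate/2, code_change/3]).

        start_link() -> gen_server:start_link(?MODULE, [], []).

        draw_card(Pid) -> gen_server:call(Pid, draw_card).

        init([]) ->
            Cards = [{Suit, Rank} || Suit <- [clubs, diamonds, hearts, spades],
                                     Rank <- lists:seq(2, 14)],
            %% Shuffle by sorting on random keys.
            Shuffled = [C || {_, C} <- lists:sort([{random:uniform(), C} || C <- Cards])],
            {ok, Shuffled}.

        handle_call(draw_card, _From, []) ->
            {reply, empty, []};
        handle_call(draw_card, _From, [Card | Rest]) ->
            {reply, {ok, Card}, Rest}.

        handle_cast(_Msg, State) -> {noreply, State}.
        handle_info(_Info, State) -> {noreply, State}.
        terminate(_Reason, _State) -> ok.
        code_change(_Old, State, _Extra) -> {ok, State}.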

  • How to make the LanguageBinder take precedence over the DynamicBinder

    - by rudimenter
    Hi. I have a class which implements IDynamicMetaObjectProvider, and I implement the BindGetMember method on its DynamicMetaObject. Now, when I create a dynamic object, every property access gets passed implicitly through the BindGetMember method. I want the language binder to get its chance first, before my code comes in. It is somehow doable with binder.FallbackGetMember, but I am not sure how the expression has to look. I call:

        dynamic com = CommandFactory.GetCommand();
        var x = com.testprop; // expected: "test"; but "test2" comes back

        public class Command : System.Dynamic.IDynamicMetaObjectProvider
        {
            public string testprop
            {
                get { return "test"; }
            }

            public object GetValue(string name)
            {
                return "test2";
            }

            System.Dynamic.DynamicMetaObject System.Dynamic.IDynamicMetaObjectProvider.GetMetaObject(
                System.Linq.Expressions.Expression parameter)
            {
                return new MetaCommand(parameter, this);
            }

            private class MetaCommand : System.Dynamic.DynamicMetaObject
            {
                public MetaCommand(Expression expression, Command value)
                    : base(expression, System.Dynamic.BindingRestrictions.Empty, value) { }

                public override System.Dynamic.DynamicMetaObject BindGetMember(
                    System.Dynamic.GetMemberBinder binder)
                {
                    var self = this.Expression;
                    var bag = (Command)base.Value;
                    Expression target = Expression.Call(
                        Expression.Convert(self, typeof(Command)),
                        typeof(Command).GetMethod("GetValue"),
                        Expression.Constant(binder.Name));
                    var restrictions = BindingRestrictions.GetInstanceRestriction(self, bag);
                    return new DynamicMetaObject(target, restrictions);
                }
            }
        }
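
    A sketch of the usual DLR pattern for this: hand the bind back to the language binder and supply your dynamic lookup as the errorSuggestion, so that real members such as testprop win and GetValue only runs when the static bind would fail:

        public override DynamicMetaObject BindGetMember(GetMemberBinder binder)
        {
            var self = this.Expression;
            var bag = (Command)base.Value;

            Expression target = Expression.Call(
                Expression.Convert(self, typeof(Command)),
                typeof(Command).GetMethod("GetValue"),
                Expression.Constant(binder.Name));

            var dynamicLookup = new DynamicMetaObject(
                target, BindingRestrictions.GetInstanceRestriction(self, bag));

            // Language binder first; dynamicLookup is used only as the fallback.
            return binder.FallbackGetMember(this, dynamicLookup);
        }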

  • SharePoint and Log4Net

    - by Nico
    I'm looking for best practices for integrating log4net into SharePoint - for web requests, feature activation and all the timer stuff. I have several subprojects in my farm, and I would like to have only one Log4Net.config file.

    [Edit] Not only do I need to configure log4net for the web application, which is easy to do (I use global.asax and a log4net.config file, so I can modify log settings without reloading the web app), but I also need to log asynchronous events:

    - Event handlers (like ItemAdded)
    - Timer jobs
    - ...
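
    A sketch of one way to cover the non-web entry points, pointing everything at a single shared config file (the path and class name are my assumptions):

        using System.IO;
        using log4net.Config;

        public static class LoggingBootstrap
        {
            private static readonly object Gate = new object();
            private static bool configured;

            // Call at the top of feature receivers, event receivers and timer
            // jobs; the web app can keep using global.asax with the same file.
            public static void EnsureConfigured()
            {
                lock (Gate)
                {
                    if (configured) return;
                    XmlConfigurator.ConfigureAndWatch(
                        new FileInfo(@"C:\Program Files\Common Files\MyFarm\Log4Net.config"));
                    configured = true;
                }
            }
        }

    ConfigureAndWatch keeps the no-redeploy property: edits to the shared file are picked up by the timer service and the w3wp processes alike.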

  • Calling SDL/OpenGL from Assembly code on Linux

    - by Lie Ryan
    I'm writing a simple graphics program in assembly for learning purposes; for this I intend to use either OpenGL or SDL, calling their functions from assembly. The problem is that, unlike in many assembly and OpenGL/SDL tutorials I found on the internet, OpenGL/SDL on my machine apparently doesn't use the C calling convention I expected. I wrote a simple program in C and compiled it to assembly (using the -S switch), and the assembly code generated by GCC calls the OpenGL/SDL functions by passing parameters in registers instead of pushing them onto the stack.

    Now, the question is: how do I determine how to pass arguments to these OpenGL/SDL functions? That is, how do I figure out which argument corresponds to which register? Obviously, since GCC can compile C code that calls OpenGL/SDL, there must be a way to figure out the correspondence between function arguments and registers. In the C calling convention as I knew it, the rule is easy - push parameters backwards, return value in eax/rax - so I could simply read the C documentation and easily figure out how to pass the parameters. But what about these? Is there a way to call OpenGL/SDL using that convention? BTW, I'm using yasm, with gcc/ld as the linker, on Gentoo Linux amd64.
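
    What GCC emits here is in fact the C calling convention for this platform: on amd64 Linux the System V AMD64 ABI passes the first six integer/pointer arguments in rdi, rsi, rdx, rcx, r8, r9, floating-point arguments in xmm0-xmm7, and returns integers in rax (the push-backwards convention is the older 32-bit i386 ABI the tutorials describe). A hedged yasm sketch of a one-argument call:

        ; yasm, 64-bit System V ABI. SDL_Init takes one uint32 flags argument;
        ; SDL_INIT_VIDEO is 0x00000020 in SDL 1.2.
        extern SDL_Init

        section .text
        global init_video
        init_video:
            mov     edi, 0x20       ; 1st integer argument goes in rdi/edi
            xor     eax, eax        ; al = # of vector registers used (varargs rule)
            call    SDL_Init
            ret                     ; SDL_Init's int result is now in eax

    So the mapping comes from the function's C prototype plus the ABI's argument-class rules, not from anything OpenGL/SDL-specific.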

  • Help with jQuery Ajax calling an ASMX service that returns a string

    - by Jason Evans
    Hi there. I have the following jQuery Ajax call set up:

        function Testing() {
            var result = '';
            $.ajax({
                url: 'BookingUtils.asmx/GetNonBookableSlots',
                dataType: 'text',
                error: function(error) {
                    alert('GetNonBookableSlots Error');
                },
                success: function(data) {
                    alert('GetNonBookableSlots');
                    result = data;
                }
            });
            return result;
        }

    Here is the web service I'm trying to call:

        [WebMethod]
        public string GetNonBookableSlots()
        {
            return "fhsdfuhsiufhsd";
        }

    When I run the jQuery code, neither the error nor the success handler fires (none of the alerts are shown). In fact, nothing happens at all; the JavaScript just moves on to the return statement at the end. I put a breakpoint in the web service code and it does get hit, but when I leave that method I end up on the return statement anyway. Can someone give me some tips on how to configure the Ajax call correctly? I feel that I'm doing this wrong. The web service just needs to return a string - no XML or JSON involved. Cheers. Jas.
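
    Two points, sketched under the assumption that the service class is marked [ScriptService]: $.ajax is asynchronous, so `return result` runs before either callback can fire - the value has to be consumed in the callback; and ASMX script methods expect a JSON POST by default, with the response wrapped in a "d" property:

        function Testing(onDone) {
            $.ajax({
                type: 'POST',
                url: 'BookingUtils.asmx/GetNonBookableSlots',
                data: '{}',
                contentType: 'application/json; charset=utf-8',
                dataType: 'json',
                success: function(data) { onDone(data.d); }, // the string lives in .d
                error: function(xhr) { alert('Failed: ' + xhr.status); }
            });
        }

        // Usage: the result arrives via the callback, not a return value.
        Testing(function(slots) { alert(slots); });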

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat-file parser that can perform different data-scrubbing operations per call, return the emitted results from each scrubbing operation to the caller, and perform bulk SQL insertions?

    Now, before anyone gets narky/concerned: I have written this code already, in a very unDRY fashion, which is why I am asking the Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops with custom processing in between, buffer each processed line into an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is copy-pasted every time. Not very DRY! Here is a Ruby/pseudocode example of what I keep repeating:

        lines_from_file.each do |line|
          line.match(/some regex/).each do |sub_str|
            # Process substring into useful format
            # EG1: Simple gsub() call
            # EG2: Custom function call to do complex scrubbing
            #      and matching, emitting results to array
            # EG3: Loop to match opening/closing/nested brackets
            #      or other delimiters and emit results to array
          end

          # Add processed lines to a buffer as SQL insert statements
          @buffer << PREPARED INSERT STATEMENT

          # Flush buffer when "buffer size limit reached" or "end of file"
          if sql_buffer_full || last_line_reached
            @dbc.insert(SQL INSERTS FROM BUFFER)
            @buffer = nil
          end
        end

    I am familiar with Proc/lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea of how to solve this, but I would really like to see what real Rubyists suggest. Over to you. Thanks in advance :D
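
    A sketch of one answer to the two-procs question: Ruby methods happily take several lambdas as ordinary arguments (only the block-argument slot is limited to one), so the loop/buffer/flush skeleton can live in one place while the scrubber and statement builder vary per call. Names below are made up:

        # Reusable skeleton: scrubbing and SQL-building are injected.
        def bulk_process(lines, dbc, batch_size, scrub, to_sql)
          buffer = []
          lines.each do |line|
            scrub.call(line).each { |record| buffer << to_sql.call(record) }
            if buffer.size >= batch_size
              dbc.insert(buffer)
              buffer = []
            end
          end
          dbc.insert(buffer) unless buffer.empty? # final flush
        end

        # Each caller supplies its own pair of operations.
        scrub  = lambda { |line| line.scan(/some regex/) }
        to_sql = lambda { |r| "INSERT INTO things (value) VALUES ('#{r}')" }
        bulk_process(lines_from_file, @dbc, 500, scrub, to_sql)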

  • How can I alter an external variable from inside my AJAX call?

    - by tmedge
    I keep running into the same two problems. I have been trying to use Remy Sharp's wonderful tagSuggest plugin, and it works great - until I try to use an AJAX call to get tags from my database. setGlobalTags() works fine with the myTagList defined at the top of the function. What I want to do is set myTagList to the result from my AJAX call. My problem is that I can neither call setGlobalTags() from inside my success or error functions, nor actually alter the original myTagList.

    I also keep having another issue. I keep this code in my master page, and the AJAX call returns success on almost every page; I only (and always) get the error alert when I navigate to a page that actually contains something with id="ParentTags". I don't see why this is happening, because $('#ParentTags').tagSuggest(); definitely comes after my AJAX call. I realize this is probably just some dumb convention error, but I am new to this and I'm here to learn from you guys. Thanks in advance!

        $(function() {
            var myTagList = ['test', 'testMore', 'testALot'];
            $.ajax({
                type: "POST",
                url: 'Admin/GetTagList',
                dataType: 'json',
                success: function(resultTags) {
                    myTagList = resultTags;
                    alert(myTagList[0]);
                    setGlobalTags(myTagList);
                },
                error: function() {
                    alert('Error');
                    setGlobalTags(myTagList);
                }
            });
            setGlobalTags(myTagList);
            $('#ParentTags').tagSuggest();
        });
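
    A sketch of the usual fix: because $.ajax is asynchronous, the trailing setGlobalTags()/tagSuggest() lines run before the response arrives, so the plugin initializes with the defaults. Moving everything that depends on the tags into the callbacks addresses both symptoms (and inspecting the status in error will show why those particular requests fail):

        $(function() {
            var defaultTags = ['test', 'testMore', 'testALot'];

            function initTags(tags) {
                setGlobalTags(tags);
                $('#ParentTags').tagSuggest(); // only after tags are in place
            }

            $.ajax({
                type: 'POST',
                url: 'Admin/GetTagList',
                dataType: 'json',
                success: function(resultTags) { initTags(resultTags); },
                error: function(xhr) {
                    // xhr.status / xhr.responseText explain the failure
                    initTags(defaultTags);
                }
            });
        });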

  • XmlHttpRequest bug?

    - by valdo
    Hello all. I'm writing a program that, among other things, needs to download a file given its URL. I'm too lazy to implement the HTTP/HTTPS protocols manually, so I need some library/object/function that'll do the job.

    Critical requirement: the download must be asynchronous. That is, the thread that issued the download must be able to do something else "while" downloading the file, and the download must be abortable at any time without barbaric side effects (such as an internal call to TerminateThread).

    Nice-to-have requirements: it should be able to download the file "into memory", i.e. read the contents of the file as they arrive, not necessarily saving them to a file-system file. It would also be nice to have a convenient Win32 progress-notification mechanism (waitable event, semaphore, completion port, etc.) rather than periodically polling the download status.

    I chose the XmlHttpRequest COM object to do the work. It seemed to work well enough, and it supports asynchronous mode. However, I noticed that after some period it just stops working. That is, after several successful file downloads it stops downloading anything. I periodically poll its status; it reports "in progress", but nothing actually happens and there is no network activity. Moreover, when the same process creates another instance of the XmlHttpRequest object to perform new downloads, the effect is the same: the object reports "in progress" while not even trying to connect to the server (according to network sniffers and system TCP state). The only way to make the object work again is to restart the process.

    This makes me suspect there's a sort of a bug (sorry, I meant undocumented feature) in the object. And it's not a bug at the level of an individual object, since the problem persists when the object is destroyed and another one is created; it's probably some global state of the DLL that implements the object. Does anyone know something about this? Is this a known bug? I'm pretty sure there's no chance I have another bug in my code that only makes it seem like the bug is in XmlHttpRequest; I've done enough tests and spent enough time with the debugger to conclude, beyond reasonable doubt, that the object just stops working.

    BTW, while the object should work, I do all the waiting via MsgWaitXXXX API calls, so if this object needs the message loop to work properly (for instance, it may create a hidden notification window and bind it to a socket via WSAAsyncSelect), I give it the opportunity.

  • UIAlertView choice causing resignFirstResponder to fail

    - by Chazbot
    Hi everyone. I'm having an issue similar to Anthony Chan's question, and after trying every suggested solution I'm still stuck. Somehow, only after interacting with my UIAlertView, I'm unable to dismiss the keyboard in another view of my app. It's as though the alert is breaking my UITextField's ability to resignFirstResponder.

    Below I create my UIAlertView, which then calls its didDismissWithButtonIndex delegate method. That, in turn, calls my showInfo method, which loads another UIViewController:

        UIAlertView *emailFailAlert = [[UIAlertView alloc] initWithTitle:@"Error"
                                                                 message:@"error message text."
                                                                delegate:self
                                                       cancelButtonTitle:@"Not now"
                                                       otherButtonTitles:@"Settings", nil];
        [emailFailAlert setTag:2];
        [emailFailAlert show];
        [emailFailAlert release];

    Once the 'Settings' option is pressed, this method is called:

        - (void)alertView:(UIAlertView *)alertView didDismissWithButtonIndex:(NSInteger)buttonIndex {
            if ([alertView tag] == 2) {
                if (buttonIndex == 1) {
                    [self showInfo:nil];
                }
            }
        }

    My showInfo method loads the other view controller via the code below:

        - (IBAction)showInfo:(id)sender {
            FlipsideViewController *fscontroller = [[FlipsideViewController alloc]
                initWithNibName:@"FlipsideView" bundle:nil];
            fscontroller.delegate = self;
            fscontroller.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
            [self presentModalViewController:fscontroller animated:YES];
            [fscontroller release];
        }

    Upon tapping any text field in this flipside VC, I'm unable to dismiss the keyboard as I normally can with - (BOOL)textFieldShouldReturn:(UITextField *)textField and [textField resignFirstResponder]. I've omitted that code because this question is getting long, but I'm happy to post it if necessary. The interesting part is that if I comment out the [self showInfo:nil] call made when the button is clicked and trigger it instead from a test button (outside the alertView:didDismissWithButtonIndex: method), everything works fine. Any idea what's happening here? Thanks in advance!
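
    One workaround worth sketching, on the assumption that the alert's teardown and the modal presentation are fighting over first-responder state: defer the presentation by one run-loop pass so the alert is fully gone before the flipside VC appears:

        - (void)alertView:(UIAlertView *)alertView didDismissWithButtonIndex:(NSInteger)buttonIndex {
            if ([alertView tag] == 2 && buttonIndex == 1) {
                // Present after the alert has finished dismissing.
                [self performSelector:@selector(showInfo:) withObject:nil afterDelay:0.0];
            }
        }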

  • activemessaging with stomp and activemq.prefetchSize=1

    - by Clint Miller
    I have a situation where I have a single ActiveMQ broker with 2 queues, Q1 and Q2. I have two Ruby-based consumers using activemessaging; let's call them C1 and C2. Both consumers subscribe to each queue. I'm setting activemq.prefetchSize=1 when subscribing to each queue. I'm also setting ack=client. Consider the following sequence of events:

    1) A message that triggers a long-running job is published to queue Q1. Call this M1.
    2) M1 is dispatched to consumer C1, kicking off a long operation.
    3) Two messages that trigger short jobs are published to queue Q2. Call these M2 and M3.
    4) M2 is dispatched to C2, which quickly runs the short job.
    5) M3 is dispatched to C1, even though C1 is still running M1. It's able to dispatch to C1 because prefetchSize=1 is set on the queue subscription, not on the connection, so the fact that a Q1 message has already been dispatched doesn't stop one Q2 message from being dispatched.

    Since activemessaging consumers are single-threaded, the net result is that M3 sits and waits on C1 for a long time, until C1 finishes processing M1. So M3 is not processed for a long time, despite the fact that consumer C2 is sitting idle (since it quickly finished with message M2). Essentially, whenever a long Q1 job is run and then a whole bunch of short Q2 jobs are created, exactly one of the short Q2 jobs gets stuck on a consumer waiting for the long Q1 job to finish.

    Is there a way to set prefetchSize at the connection level rather than at the subscription level? I really don't want any messages dispatched to C1 while it is processing M1. The other alternative is to create a consumer dedicated to processing Q1 and have other consumers dedicated to processing Q2. But I'd rather not do that, since Q1 messages are infrequent - Q1's dedicated consumers would sit idle most of the day, tying up memory.

  • What is the purpose of the s==NULL case for mbrtowc?

    - by R..
    mbrtowc is specified to handle a NULL pointer for the s (multibyte character pointer) argument as follows:

        If s is a null pointer, the mbrtowc() function shall be equivalent to the call:
            mbrtowc(NULL, "", 1, ps)
        In this case, the values of the arguments pwc and n are ignored.

    As far as I can tell, this usage is largely useless. If ps is not storing any partially-converted character, the call will simply return 0 with no side effects. If ps is storing a partially-converted character, then since '\0' is not valid as the next byte in a multibyte sequence ('\0' can only be a string terminator), the call will return (size_t)-1 with errno==EILSEQ, and leave ps in an undefined state.

    The intended usage seems to have been to reset the state variable, particularly when NULL is passed for ps and the internal state has been used, analogous to mbtowc's behavior with stateful encodings; but this is not specified anywhere as far as I can tell, and it conflicts with the semantics of mbrtowc's storage of partially-converted characters (if mbrtowc were to reset state when encountering a 0 byte after a potentially-valid initial subsequence, it would be unable to detect this dangerous invalid sequence). If mbrtowc were specified to reset the state variable only when s is NULL, but not when it points to a 0 byte, a desirable state-reset behavior would be possible; but such behavior would violate the standard as written.

    Is this a defect in the standard? As far as I can tell, there is absolutely no way to reset the internal state (used when ps is NULL) once an illegal sequence has been encountered, and thus no correct program can use mbrtowc with ps==NULL.
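
    For contrast, a sketch of the reset that does work when the caller owns the state object (which is precisely what the internal, ps==NULL state lacks): a zero-filled mbstate_t denotes the initial conversion state by definition:

        #include <string.h>
        #include <wchar.h>

        /* After mbrtowc() returns (size_t)-1, the state is unspecified;
           recover by forcing it back to the initial conversion state. */
        static void reset_conversion(mbstate_t *ps)
        {
            memset(ps, 0, sizeof *ps);
        }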

  • Anyone know exactly which JMS messages will be redelivered in CLIENT_ACKNOWLEDGE mode if the client crashes?

    - by user360612
    The spec says "Acknowledging a consumed message automatically acknowledges the receipt of all messages that have been delivered by its session" - but what I need to know is what it means by 'delivered'. For example, if I call consumer.receive() 6 times and then call .acknowledge on the 3rd message, is it (a) just the first 3 messages that are acked, or (b) all 6?

    I'm really hoping it's option (a), i.e. that messages received after the one you called acknowledge on WILL be redelivered; otherwise it's hard to see how you could prevent message loss in the event of my receiver process crashing before I've had a chance to persist and acknowledge the messages. But the spec is worded in a way that leaves this unclear. I get the impression the authors of the JMS spec considered broker failure but didn't spend too long thinking about how to protect against client failure :o(

    Anyway, I've been able to test with SonicMQ and found that it implements (a): messages 'received' later than the message you call .acknowledge on DO get redelivered in the event of a crash. But I'd love to know how other people read the standard, and if anyone knows how other providers have implemented CLIENT_ACKNOWLEDGE (i.e. what the de facto standard is). Thanks, Ben
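
    For reference, a sketch of the scenario in JMS 1.1 API terms (connection and queue setup elided as provider-specific); the marked line is exactly where provider behavior diverges:

        import javax.jms.*;

        public class AckDemo {
            static void consumeSix(Connection connection, Queue queue) throws JMSException {
                Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(queue);
                Message[] m = new Message[6];
                for (int i = 0; i < 6; i++) {
                    m[i] = consumer.receive();
                }
                // persist m[0]..m[2] here, then:
                m[2].acknowledge(); // certainly acks m[0]-m[2]; whether m[3]-m[5]
                                    // are covered too is exactly the question
            }
        }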

  • Constructors and Destructors in C++ [closed]

    - by Jack
    I am using gcc. Please tell me if I am wrong. Let's say I have two classes, A and B:

        class A {
        public:
            A()  { cout << "A constructor" << endl; }
            ~A() { cout << "A destructor"  << endl; }
        };

        class B : public A {
        public:
            B()  { cout << "B constructor" << endl; }
            ~B() { cout << "B destructor"  << endl; }
        };

    1) The first thing B's constructor does is call A's constructor (I assume the compiler inserts this automatically). Likewise, the last thing B's destructor does is call A's destructor (the compiler does it again). Why was it built this way?

    2) When I say A *a = new B(); the compiler creates a new B object and checks whether A is a base class of B; if it is, it allows 'a' to point to the newly created object. I guess that is why we don't need any virtual constructors (with help from @Tyler McHenry and @Konrad Rudolph).

    3) When I write delete a, the compiler sees that a is of type A*, so it calls A's destructor, leading to a problem which is solved by making A's destructor virtual. As user Little Bobby Tables pointed out to me, all destructors have the same name destroy() in memory, so we can implement virtual destructors, and now the call is made to B's destructor and all is well in C++ land.

    Please comment.
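
    A runnable sketch of all three points with the virtual destructor in place (without virtual ~A(), delete through the base pointer would run only ~A - formally undefined behavior):

        #include <iostream>
        using std::cout;
        using std::endl;

        class A {
        public:
            A()          { cout << "A constructor" << endl; }
            virtual ~A() { cout << "A destructor"  << endl; }  // virtual: safe delete via A*
        };

        class B : public A {
        public:
            B()  { cout << "B constructor" << endl; }
            ~B() { cout << "B destructor"  << endl; }
        };

        int main() {
            A *a = new B();  // prints: A constructor, B constructor
            delete a;        // prints: B destructor, A destructor
            return 0;
        }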

  • How to find Tomcat's PID and kill it in Python?

    - by 4herpsand7derpsago
    Normally, one shuts down Apache Tomcat by running its shutdown.sh script (or batch file). In some cases, such as when Tomcat's web container hosts a web app that does some crazy things with multi-threading, running shutdown.sh gracefully shuts down some parts of Tomcat (I can see more memory returning to the system), but the Tomcat process keeps running.

    I'm trying to write a simple Python script that:

    1. Calls shutdown.sh
    2. Runs ps -aef | grep tomcat to find any process referencing Tomcat
    3. If applicable, kills the process with kill -9 <PID>

    Here's what I've got so far (as a prototype - I'm brand new to Python, BTW):

        #!/usr/bin/python

        # Imports
        import sys
        import subprocess

        # Main entry point.
        def main():
            # Shut down Tomcat.
            shutdownCmd = "sh ${TOMCAT_HOME}/bin/shutdown.sh"
            subprocess.call([shutdownCmd], shell=True)

            # Check for a leftover PID.
            grepCmd = "ps -aef | grep tomcat"
            grepResults = subprocess.call([grepCmd], shell=True)
            if(grepResults.length > 1):
                # Get PID and kill it.
                pid = ???
                killPidCmd = "kill -9 $pid"
                subprocess.call([killPidCmd], shell=True)

            # Exit.
            sys.exit()

        if __name__ == "__main__":
            main()

    I'm struggling with the middle part: obtaining the grep results, checking whether their size is greater than 1 (since grep always returns a reference to itself, at least one result will always come back, methinks), and then parsing the returned PID and passing it into killPidCmd. Thanks in advance!
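
    A sketch of the middle part, assuming pgrep is available (pgrep -f matches full command lines and never matches itself, which sidesteps the grep-sees-grep problem):

        #!/usr/bin/python
        import os
        import signal
        import subprocess

        def main():
            # Ask Tomcat to shut down gracefully first.
            subprocess.call("sh ${TOMCAT_HOME}/bin/shutdown.sh", shell=True)

            # One PID per output line for anything whose command line mentions tomcat.
            proc = subprocess.Popen(["pgrep", "-f", "tomcat"], stdout=subprocess.PIPE)
            output, _ = proc.communicate()

            for line in output.splitlines():
                os.kill(int(line), signal.SIGKILL)  # equivalent to kill -9 <pid>

        if __name__ == "__main__":
            main()

    A short sleep between the graceful shutdown and the kill would give Tomcat a chance to exit on its own before being forced.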
