Search Results

Search found 7436 results on 298 pages for 'simultaneous calls'.

Page 73/298 | < Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >

  • How can I use single-table inheritance and a single controller to make this more DRY?

    - by Angela
    I have three models, Calls, Emails, and Letters, which are basically templates of what gets sent to individuals, modeled as Contacts. When a Call is made, a row gets created in ContactCalls; when an Email is sent, an entry is made in ContactEmails. Each has its own controller: contact_calls_controller.rb and contact_emails_controller.rb. I would like to create a single-table-inheritance model called ContactEvents which has types Calls, Emails, and Letters, but I'm not clear how to pass the type information or how to consolidate the controllers. Here are the two controllers I have. As you can see, there's a lot of duplication, but there are some differences that need to be preserved; in the case of letters and postcards (another model), even more so.

        class ContactEmailsController < ApplicationController
          def new
            @contact_email = ContactEmail.new
            @contact_email.contact_id = params[:contact]
            @contact_email.email_id = params[:email]
            @contact = Contact.find(params[:contact])
            @company = Company.find(@contact.company_id)
            contacts = @company.contacts.collect(&:full_name)
            contacts.each do |contact|
              @colleagues = contacts.reject{ |c| c == @contact.full_name }
            end
            @email = Email.find(@contact_email.email_id)
            @contact_email.subject = @email.subject
            @contact_email.body = @email.message
            @email.message.gsub!("{FirstName}", @contact.first_name)
            @email.message.gsub!("{Company}", @contact.company_name)
            @email.message.gsub!("{Colleagues}", @colleagues.to_sentence)
            @email.message.gsub!("{NextWeek}", (Date.today + 7.days).strftime("%A, %B %d"))
            @contact_email.status = "sent"
          end

          def create
            @contact_email = ContactEmail.new(params[:contact_email])
            @contact = Contact.find_by_id(@contact_email.contact_id)
            @email = Email.find_by_id(@contact_email.email_id)
            if @contact_email.save
              flash[:notice] = "Successfully created contact email."
              # send email using class in outbound_mailer.rb
              OutboundMailer.deliver_campaign_email(@contact, @contact_email)
              redirect_to todo_url
            else
              render :action => 'new'
            end
          end
        end

    And:

        class ContactCallsController < ApplicationController
          def new
            @contact_call = ContactCall.new
            @contact_call.contact_id = params[:contact]
            @contact_call.call_id = params[:call]
            @contact_call.status = params[:status]
            @contact = Contact.find(params[:contact])
            @company = Company.find(@contact.company_id)
            @contact = Contact.find(@contact_call.contact_id)
            @call = Call.find(@contact_call.call_id)
            @contact_call.title = @call.title
            contacts = @company.contacts.collect(&:full_name)
            contacts.each do |contact|
              @colleagues = contacts.reject{ |c| c == @contact.full_name }
            end
            @contact_call.script = @call.script
            @call.script.gsub!("{FirstName}", @contact.first_name)
            @call.script.gsub!("{Company}", @contact.company_name)
            @call.script.gsub!("{Colleagues}", @colleagues.to_sentence)
          end

          def create
            @contact_call = ContactCall.new(params[:contact_call])
            if @contact_call.save
              flash[:notice] = "Successfully created contact call."
              redirect_to contact_path(@contact_call.contact_id)
            else
              render :action => 'new'
            end
          end
        end

    Read the article

  • socket timeout and remove O_NONBLOCK option

    - by juxstapose
    Hello, I implemented a socket timeout and retry, but in order to do it I had to set the socket as a non-blocking socket. However, I need the socket to block. This was my attempt at a solution to these two problems, and it is not working: subsequent send calls block but never send any data. When I connect without the select and the timeout, subsequent send calls work normally.

    References:
    - C: socket connection timeout
    - How to reset a socket back to blocking mode (after I set it to nonblocking mode)?

    Code:

        fd_set fdset;
        struct timeval tv;
        fcntl(dsock, F_SETFL, O_NONBLOCK);
        tv.tv_sec = theDeviceTimeout;
        tv.tv_usec = 0;
        int retries = 0;
        logi(theLogOutput, LOG_INFO, "connecting to device socket num retrys: %i", theDeviceRetry);
        for (retries = 0; retries < theDeviceRetry; retries++) {
            connect(dsock, (struct sockaddr *)&daddr, sizeof daddr);
            FD_ZERO(&fdset);
            FD_SET(dsock, &fdset);
            if (select(dsock + 1, NULL, &fdset, NULL, &tv) == 1) {
                int so_error;
                socklen_t slen = sizeof so_error;
                getsockopt(dsock, SOL_SOCKET, SO_ERROR, &so_error, &slen);
                if (so_error == 0) {
                    logi(theLogOutput, LOG_INFO, "connected to socket on port %i on %s", theDevicePort, theDeviceIP);
                    break;
                } else {
                    logi(theLogOutput, LOG_WARN, "connect to %i failed on ip %s because %s retries %i", theDevicePort, theDeviceIP, strerror(errno), retries);
                    logi(theLogOutput, LOG_WARN, "failed to connect to device %s", strerror(errno));
                    logi(theLogOutput, LOG_WARN, "error: %i %s", so_error, strerror(so_error));
                    continue;
                }
            }
        }
        int opts;
        opts = fcntl(dsock, F_GETFL);
        logi(theLogOutput, LOG_DEBUG, "clearing nonblock option %i retries %i", opts, retries);
        opts ^= O_NONBLOCK;
        fcntl(dsock, F_SETFL, opts);

    Read the article

  • Weird output of Throwable getMessage()

    - by Ravi Gupta
    Hi, I have the pseudo code below, which throws an exception like this:

        throw new MyException("Bad thing happened", "com.stuff.errorCode");

    where MyException extends Exception. The problem is that when I try to get the message from MyException by calling myEx.getMessage(), it returns ???en_US.Bad thing happened??? instead of my original message, i.e. Bad thing happened. I have checked that MyException does not override Throwable's getMessage() behavior. Below is how the call passes from MyException's constructor down to Throwable:

        public MyException(String msg, String sErrorCode) {
            super(msg);
            this.sErrorCode = sErrorCode;
            this.iSeverity = 0;
        }

    which then calls

        public Exception(String message) {
            super(message);
        }

    and finally

        public Throwable(String message) {
            fillInStackTrace();
            detailMessage = message;
        }

    When I call getMessage on my exception, it calls Throwable's getMessage, as below:

        public String getMessage() {
            return detailMessage;
        }

    So ideally it should return the original message I set when throwing the exception. What's the ???en_US thing?
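
    For reference, a minimal, self-contained Java sketch (class and field names here are illustrative, not taken from the question) showing that a bare Exception subclass returns its message from getMessage() unchanged. If the text comes back wrapped as ???en_US. ... ???, that pattern is typically produced by a localization layer that treats the message as a resource-bundle key and fails to resolve it; it does not come from Throwable itself.

        // A bare Exception subclass that only forwards the message to super()
        // gets it back verbatim from getMessage(). Any ???en_US....??? wrapping
        // therefore has to be added somewhere between the throw site and the
        // code that reads the message.
        public class MessagePassThroughDemo {

            static class MyException extends Exception {
                private final String errorCode;

                MyException(String msg, String errorCode) {
                    super(msg);               // stored in Throwable.detailMessage
                    this.errorCode = errorCode;
                }
            }

            public static void main(String[] args) {
                try {
                    throw new MyException("Bad thing happened", "com.stuff.errorCode");
                } catch (MyException e) {
                    System.out.println(e.getMessage()); // prints: Bad thing happened
                }
            }
        }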

    Read the article

  • jQuery ajax form submit - how to ensure dynamically loaded form's action is used

    - by kenny99
    Hi, I'm having a problem with dynamically loaded forms: instead of using the action attribute of the newly loaded form, my jQuery code keeps using the action attribute of the first form loaded. I have the following code:

        //generic ajax form handler - calls next page load on success
        $('input.next:not(#eligibility)').live("click", function(){
            $(".form_container form").validationEngine({
                ajaxSubmit: true,
                ajaxSubmitFile: $(this).attr('action'),
                success : function() {
                    var url = $('input.next').attr('rel');
                    ajaxFormStage(url);
                },
                failure : function() {}
            });
        });

    But when the next form is loaded, the code above does not pick up the new action attribute. I have tried adding the code to my callback on successful ajax load (shown below), but this makes no difference. Can anyone help? Many thanks.

        function ajaxFormStage(url) {
            var $data = $('#main_body #content');
            $.validationEngine.closePrompt('body'); //close any validation messages
            $data.fadeOut('fast', function(){
                $data.load(url, function(){
                    $data.animate({ opacity: 'show' }, 'fast');
                    //generic ajax form handler - calls next page load on success
                    $('input.next:not(#eligibility)').live("click", function(){
                        $(".form_container form").validationEngine({
                            ajaxSubmit: true,
                            ajaxSubmitFile: $(this).attr('action'),
                            success : function() {
                                var url = $('input.next').attr('rel');
                                ajaxFormStage(url);
                            },
                            failure : function() {}
                        });
                    });
                });
            });
        }

    Read the article

  • How can a SVN::Error callback identify the context from which it was called

    - by Colin Fine
    I've written some fairly extensive Perl modules and scripts using the Perl bindings SVN::Client etc. Since the calls to SVN::Client are all deep in a module, I have overridden the default error handling. So far I have done so by setting $SVN::Error::handler = undef as described in [1], but this makes the individual calls a bit messy because you have to remember to make each call to SVN::Client in list context and test the first value for errors.

    I would like to switch to using an error handler I would write; but $SVN::Error::handler is global, so I can't see any way that my callback can determine where the error came from, and what object to set an error code in. I wondered if I could use a pool for this purpose: so far I have ignored pools as irrelevant to working in Perl, but if I call a SVN::Client method with a pool I have created, will any SVN::Error object be created in the same pool? Has anybody any knowledge or experience which bears on this?

    [1] SVN::Core POD: http://search.cpan.org/~mschwern/Alien-SVN-1.4.6.0/src/subversion/subversion/bindings/swig/perl/native/Core.pm#svn_error_t_-_SVN::Error

    Read the article

  • stop android emulator call

    - by Shahzad Younis
    I am working on an Android application with voicemail-like functionality. I am using a BroadcastReceiver to get dialing events. I have to get the event "WHEN CALL IS UNANSWERED (not picked up after a few rings) FROM RECEIVER", and I will perform some actions on the caller end for this event. I am using the AVD emulator and calling from one instance to another, and the call goes through perfectly, but the problem is that it rings continuously until I reject or accept the call. This way I cannot detect that "CALL IS UNANSWERED AFTER A NUMBER OF RINGS". So I want the caller emulator to drop the call after a number of rings (if unanswered), like a normal phone. I can drop the call after some time by writing some code, but I need the natural phone behavior in the emulator. Can anyone please guide me? Is there a setting in the emulator, or something else? The code is shown below in case it helps:

        public class MyPhoneReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                Bundle extras = intent.getExtras();
                if (extras != null) {
                    String state = "my call state = " + extras.getString(TelephonyManager.EXTRA_STATE);
                    Log.w("DEBUG", state);
                }
            }
        }
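
    As an aside, the sketch below does not make the emulator hang up on its own, but it shows one common way (hedged; the class name and logic are made up, not from the question) for the receiving side to classify a call as unanswered once the caller gives up: watch the broadcast state sequence, since RINGING followed directly by IDLE, with no OFFHOOK in between, means the call was never picked up.

        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.telephony.TelephonyManager;
        import android.util.Log;

        public class MissedCallReceiver extends BroadcastReceiver {
            // Good enough for an emulator experiment; a persisted flag would be
            // more robust, since the process can be killed between broadcasts.
            private static String lastState = TelephonyManager.EXTRA_STATE_IDLE;

            @Override
            public void onReceive(Context context, Intent intent) {
                String state = intent.getStringExtra(TelephonyManager.EXTRA_STATE);
                if (state == null) {
                    return;
                }
                if (TelephonyManager.EXTRA_STATE_IDLE.equals(state)
                        && TelephonyManager.EXTRA_STATE_RINGING.equals(lastState)) {
                    // Ringing ended without ever going off-hook: unanswered call.
                    Log.w("DEBUG", "call was not answered");
                }
                lastState = state;
            }
        }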

    Read the article

  • how to elegantly duplicate a graph (neural network)

    - by macias
    I have a graph (network) which consists of layers, which contain nodes (neurons). I would like to write a procedure to duplicate the entire graph in the most elegant way possible -- i.e. with minimal or no overhead added to the structure of the node or layer. In other words, the procedure could be complex, but the complexity should not "leak" into the structures; they should not become complex just because they are copyable. I wrote the code in C#; so far it looks like this:

    - a neuron has an additional field, copy_of, which is a pointer to the neuron it was copied from -- this is my additional overhead
    - a neuron has a parameterless method Clone()
    - a neuron has a method Reconnect(), which exchanges a connection from a "source" neuron (parameter) to a "target" neuron (parameter)
    - a layer has a parameterless method Clone() -- it simply calls Clone() for all neurons
    - a network has a parameterless method Clone() -- it calls Clone() for every layer, then iterates over all neurons, creates the neuron=copy_of mappings, and then calls Reconnect to exchange all the "wiring"

    I hope my approach is clear. The question is: is there a more elegant method? I particularly don't like keeping an extra pointer in the neuron class just in case it gets copied. I would like to gather the data in one point (the network's Clone) and then dispose of it completely (the Clone method cannot take an argument, though).

    Read the article

  • How to return ArrayList results from an IntentService

    - by gcl1
    I have an IntentService that loads up an ArrayList with data from a network source (AWS SDB tables). The ArrayList lives in a global space accessible to both the calling Activity and the IntentService (like this: appState = ((App)getApplicationContext())). When the IntentService is done, it notifies the Activity through a ResultReceiver, and the Activity calls adapter.notifyDataSetChanged() to update the ListView. This solution works most of the time, but it violates the rule that only the UI thread should make changes to the data underlying a ListView, so as it is I sometimes get the error: "The content of the adapter has changed but ListView did not receive a notification." I think this must be a common situation. Please let me know if you have any suggestions or best practices for this problem. Here are three options I'm aware of:

    1. Keep the IntentService, and have it store the results in another "working" ArrayList, also in the global space. When the result is ready, the IntentService calls the ResultReceiver (on the UI thread), which can then a) copy the result to the ArrayList associated with the ListView and b) call adapter.notifyDataSetChanged(). CONS: I don't like the idea of putting temp/working data in a global space, and copying the result list seems inefficient.
    2. Keep the IntentService, and have it pass the results back through a bundle loaded with a ParcelableArrayList (sketched below). CONS: I'm not sure this approach would scale for very large result sets, and it also requires copying the result list.
    3. Switch to a Service which builds a local copy of the result list, and have the Activity directly access the address space of the Service in order to read the result list. CON: Still requires copying results to the ArrayList associated with the ListView.

    Thank you.
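
    A hedged sketch of the second option above (the "receiver" extra, the "rows" key, and the result code are illustrative, not from the question): the IntentService packs its results into a Bundle and hands them to the ResultReceiver, whose onReceiveResult runs on the thread of the Handler it was created with, so an Activity-side receiver built on the main thread can swap the adapter's data and call notifyDataSetChanged() safely there.

        import java.util.ArrayList;

        import android.app.IntentService;
        import android.content.Intent;
        import android.os.Bundle;
        import android.os.ResultReceiver;

        public class LoaderService extends IntentService {
            static final int RESULT_OK_CODE = 1;

            public LoaderService() {
                super("LoaderService");
            }

            @Override
            protected void onHandleIntent(Intent intent) {
                ResultReceiver receiver = intent.getParcelableExtra("receiver");
                ArrayList<String> rows = fetchRowsFromNetwork(); // hypothetical helper

                Bundle data = new Bundle();
                data.putStringArrayList("rows", rows); // parcelable list inside the bundle
                receiver.send(RESULT_OK_CODE, data);   // delivered via the receiver's Handler
            }

            private ArrayList<String> fetchRowsFromNetwork() {
                return new ArrayList<String>();        // placeholder for the SDB query
            }
        }

    On the Activity side, the ResultReceiver would be constructed with a Handler on the main thread, passed along in the Intent, and its onReceiveResult would read getStringArrayList("rows") and update the adapter.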

    Read the article

  • Why is BorderLayout calling setSize() and setBounds()?

    - by ags
    I'm trying to get my head around proper use of the different LayoutManagers to make my GUI design skills more efficient and effective. For me, that usually requires a detailed understanding of what is going on under the hood. I've found some good discussion of the interaction and consequences of a Container using BorderLayout containing a Container using FlowLayout. I understand it for the most part, but wanted to confirm my mental model, and to do so I am looking at the code for BorderLayout. In the snippet below, taken from BorderLayout.layoutContainer(), note the calls to the child Component's setSize() method followed by setBounds(). Looking at the source for these methods of Component, setSize() actually calls setBounds() with the current values of Component.x and Component.y. Why is this done (and is it not entirely redundant)? Doesn't the setBounds() call completely overwrite the results of the setSize() call?

        if ((c=getChild(NORTH,ltr)) != null) {
            c.setSize(right - left, c.height);
            Dimension d = c.getPreferredSize();
            c.setBounds(left, top, right - left, d.height);
            top += d.height + vgap;
        }

    I'm also trying to understand where/when the child Component's size is initially set (before the LayoutManager.layoutContainer() method is called). Finally, this post itself raises a "meta-question": in a situation like this, where the source is available elsewhere, is the accepted protocol to include the entire method? Or is there some other way to make it easier for folks to participate in the thread? Thanks.
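
    A small runnable illustration (not taken from the JDK source) of the observation above: setSize() only forwards to setBounds() with the component's current location, so the later setBounds() call in layoutContainer() does supersede it. Its effect is indirect -- getPreferredSize() is queried after the width has been set, which matters for components whose preferred height depends on their current width.

        import java.awt.Component;

        public class SetSizeVsSetBounds {
            public static void main(String[] args) {
                Component c = new Component() {};   // bare component, no display needed

                c.setSize(300, 20);                 // keeps the current location (0, 0)
                System.out.println(c.getBounds());  // java.awt.Rectangle[x=0,y=0,width=300,height=20]

                c.setBounds(10, 10, 300, 50);       // overwrites both location and size
                System.out.println(c.getBounds());  // java.awt.Rectangle[x=10,y=10,width=300,height=50]
            }
        }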

    Read the article

  • Entity Framework and WCF

    - by Nihilist
    Hi, I am a little confused about designing WCF services with EF. When using WCF and EF, where do we draw the line on what properties to return with an entity and what not to? Here is my scenario. I have a User, with these relations: User [1 to many] Address, User [1 to many] Email, User [1 to many] Phone.

    On page1 of the web form I can edit user information: I can edit a few properties on the User entity and can also edit the address, phone, and email entities (add, delete, and update any of them). On page2, I can only update User properties and nothing related to the navigation properties (address, email, phone). So when I return the User entity (or DTO), should I be returning the navigation properties too, or should the client make multiple calls to get the navigation properties?

    Also, how does it go with Save? Should the client make multiple calls to save the user and the related entities, or just one call to save the graph? Let's say I just have Save(User user), where user has all the related entities too; both page1 and page2 will call Save and pass me the user, but page1 needs a lot more information while page2 just needs the user's primitive properties. So my question is, where do we draw this line, and how do we design these services? Is the WCF operation designed around the page and the fields it has? I hope I explained my problem well enough.

    Read the article

  • What are the elegant ways to do MixIns in Python?

    - by Slava Vishnyakov
    I need to find an elegant way to do two kinds of MixIns. First:

        class A(object):
            def method1(self):
                do_something()

    Now, a MixInClass should make method1 do this: do_other() -> A.method1() -> do_smth_else() -- i.e. basically "wrap" the older function. I'm pretty sure there must exist a good solution to this. Second:

        class B(object):
            def method1(self):
                do_something()
                do_more()

    In this case, I want MixInClass2 to be able to inject itself between do_something() and do_more(), i.e.: do_something() -> MixIn.method1 -> do_more(). I understand that this would probably require modifying class B -- that's ok, I'm just looking for the simplest ways to achieve this. These are pretty trivial problems and I actually solved them, but my solution is tainted. The first one I solved by using self._old_method1 = self.method1(); self.method1() = self._new_method1(); and writing _new_method1() so that it calls _old_method1(). Problem: multiple MixIns will all rename to _old_method1, and it is inelegant. The second one was solved by creating a dummy method call_mixin(self): pass, injecting it between the calls, and defining self.call_mixin(). Again inelegant, and it will break with multiple MixIns. Any ideas?

    Read the article

  • How to define a batching routing service in WCF

    - by mattx
    I have designed a custom Silverlight WCF channel that I want to leverage to selectively batch calls from the client to the server, and possibly to cache on the client and short-circuit calls. So far I'm just using this channel as a transport and sending the resulting generic WCF messages to the WCF router service example here http://msdn.microsoft.com/en-us/magazine/cc500646.aspx?pr=blog to prototype this on the server side. So my scenario looks like this: IFooClient -> MyTransportChannel -> IRouterService -> IFooService -> Return.

    I now need to be able to send more than one message per call through the router and carve them up and service them on the server side. Since this is just an experiment and I'm taking baby steps, I will dispatch and service all the messages right away on the server side and return the batch of results. Immediately I noticed that simply making the router interface take Message[] instead of Message doesn't work, due to serialization problems. I guess this makes sense; I'm not sure SOAP envelopes can contain other SOAP envelopes, etc. Is there a simple way to take a collection of WCF Message objects and send them to a single method on a service where they can be split up and forwarded as appropriate? If not, I'd love suggestions on how I should approach this. I want to have minimal work to do on the router service side, so the goal should be to get as close to being able to "slice and forward" as possible.

    Read the article

  • jQuery/ajax working on IIS5.1 but not IIS6

    - by Mikejh99
    I'm running into a weird issue here. I have code that makes jQuery ajax calls to a web service and dynamically adds controls using jQuery. Everything works fine on my dev machine running IIS 5.1, but not when deployed to IIS 6. I'm using VS2010/ASP.NET 4.0, C#, jQuery 1.4.2 and jQuery UI 1.8.1, and the same browser for each. It partially works, though: the code adds the controls to the page, but they aren't visible until I click them (and even then they aren't visible). I thought this was a CSS issue, but the styles are there too. The ajax calls look like this:

        $.ajax({
            url: "/WebServices/AssetManager.asmx/Assets",
            type: "POST",
            datatype: "json",
            async: false,
            data: "{'q':'" + req.term + "', 'type':'Condition'}",
            contentType: "application/javascript; charset=utf-8",
            success: function (data) {
                res($.map(data.d, function (item) {
                    return { label: item.Name, value: item.Name, id: item.Id, datatype: item.DataType }
                }))
            }
        })

    Changing the content type makes the autocomplete fail. I've quadruple-checked that all the paths are correct, there is no document footer enabled in IIS, and I'm not using IIS compression. Any idea why the page displays and works properly in IIS 5 but only partially in IIS 6? (If it failed completely, that would make more sense!) Is it a jQuery or CSS issue?

    Read the article

  • How to make an AJAX call immediately on document loading

    - by Ankur
    I want to execute an ajax call as soon as a document is loaded. What I am doing is loading a string that contains data that I will use for an autocomplete feature. This is what I have done, but it is not calling the servlet. (I have removed the calls to the various JS scripts to make it clearer.) I have done several similar AJAX calls in my code, but they are usually triggered by a click event. I am not sure what the syntax is for doing it as soon as the document loads, but I thought it would be this (and it's not):

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
        <script type="text/javascript">
        $(document).ready(function(){
            $.ajax({
                type: "GET",
                url: "AutoComplete",
                dataType: 'json',
                data: queryString,
                success: function(data) {
                    var dataArray = data;
                    alert(dataArray);
                }
            });
            $("#example").autocomplete(dataArray);
        });
        </script>
        <title></title>
        </head>
        <body>
        API Reference: <form><input id="example"> (try "C" or "E")</form>
        </body>
        </html>

    Read the article

  • callbacks via objective-c selectors

    - by codemonkey
    I have a "BSjax" class that I wrote that lets me make async calls to our server to get json result sets, etc using the ASIHTTPRequest class. I set it up so that the BSjax class parses my server's json response, then passes control back to the calling view controller via this call: [[self delegate] performSelectorOnMainThread:@selector(bsRequestFinished:) withObject:self waitUntilDone:YES]; ... where "bsRequestFinished" is the callback method in the calling view controller. This all worked fine and well until I realized that some pages are going to need to make different types of requests... i.e. I'll want to do different types of things in that callback function depending on which type of request was made. To me it seems like being able to pass different callback function names to my BSjax class would be the cleanest fix... but I'm having trouble (and am not even sure if it's possible) to pass in a variable that holds the callback function name and then replace the call above with something like this: [[self delegate] performSelectorOnMainThread:@selector(self.variableCallbackFunctionName) withObject:self waitUntilDone:YES]; ... where "self.variableCallbackFunctionName" is set by the calling view controller when it calls BSjax to make a new request. Is this even possible? If so, advisable? If not, alternatives? EDIT: Note that whatever fix I arrive at will need to take into account the reality that this class is making async requests... so I need to make sure that the callback function processing is correctly tied to the specific requests... as I can't rely on FIFO processing sequence.

    Read the article

  • Why do .NET developers offer 32-bit/64-bit versions of .NET assemblies?

    - by Tyler
    Every now and then I see both x86 and x64 versions of a .NET assembly; consider the following web part for SharePoint. Why wouldn't the developer just offer a single version and let the JIT compiler sort out the rest? When I see these kinds of offerings, is it just that the developer decided to create a native image using a tool like ngen in order to avoid a JIT? Someone please help me out here; I feel like I'm missing something of note.

    Updated: from what I got below, both x86 and x64 builds are offered for one or more of the following reasons:
    - The developer wanted to avoid JITing and created a native image of the code, targeting a given architecture using a tool like ngen.exe.
    - The assembly contains platform-specific COM calls, so there is no point building it as AnyCPU. In these cases builds that target different platforms may contain different code.
    - The assembly contains Win32 calls using P/Invoke, which won't get remapped by the JIT, so the build should target the platform it is bound to.

    Read the article

  • Why is this Java Calendar comparison bad?

    - by joe7pak
    Hello folks. I'm having an inexplicable problem with the Java Calendar class when trying to compare two dates. I'm trying to compare two Calendars, determine whether their difference is more than one day, and do things based on that difference or not. But it doesn't work. If I do this with the two dates:

        String currDate = aCurrentUTCCalendar.getTime().toString();
        String localDate = aLocalCalendar.getTime().toString();

    I get these results:

        currDate = "Thu Jan 06 05:58:00 MST 2010"
        localDate = "Tue Jan 05 00:02:00 MST 2010"

    This is correct. But if I do this:

        long curr = aCurrentUTCCalendar.getTime().getTime();
        long local = aLocalCalendar.getTime().getTime();

    I get these results (in milliseconds since the epoch):

        curr = -125566110120000
        local = 1262674920000

    Since there is only about a 30-hour difference between the two, the magnitudes are vastly different, not to mention that annoying negative sign. This causes problems if I do this:

        long day = 60 * 60 * 24 * 1000; // 86400000 millis, one day
        if( local - curr > day ) {
            // do something
        }

    What's wrong? Why are the getTime().toString() calls correct, but the getTime().getTime() calls vastly different? I'm using JDK 1.6_06 on WinXP. I can't upgrade the JDK for various reasons.
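
    For comparison, a hedged sketch (variable names are illustrative) of differencing two Calendars by epoch milliseconds. Worth noting: -125566110120000 ms works out to roughly the year 2009 BC, which usually points at the "UTC" calendar having been populated with unintended field values (a lenient Calendar accepts them silently) rather than at getTime() misbehaving, so printing getTimeInMillis() right after each calendar is built should narrow it down.

        import java.util.Calendar;
        import java.util.TimeZone;
        import java.util.concurrent.TimeUnit;

        public class CalendarDiffDemo {
            public static void main(String[] args) {
                Calendar curr = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
                curr.set(2010, Calendar.JANUARY, 6, 5, 58, 0);

                Calendar local = Calendar.getInstance();
                local.set(2010, Calendar.JANUARY, 5, 0, 2, 0);

                long diffMillis = curr.getTimeInMillis() - local.getTimeInMillis();
                long oneDay = TimeUnit.DAYS.toMillis(1); // 86400000 millis

                System.out.println("difference in millis: " + diffMillis);
                System.out.println("more than a day apart: " + (Math.abs(diffMillis) > oneDay));
            }
        }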

    Read the article

  • Calling into a saved java object via JNI from a different thread

    - by Drake Amara
    I have a Java object which calls into a C++ shared object via JNI. In C++, I am saving a reference to the JNIEnv and the jobject:

        JavaVM * jvm;
        JNIEnv * myEnv;
        jobject myobj;

        JNIEXPORT void JNICALL Java_org_api_init (JNIEnv *env, jobject jObj)
        {
            myEnv = env;
            myobj = jObj;
        }

    I also have a GLSurface renderer, and it eventually calls the C++ shared object mentioned above on a different thread, the GLThread. I am then trying to call back into my original Java object using the jobject I saved initially, but I think because I am on the GLThread, I get the following error:

        W/dalvikvm(16101): JNI WARNING: 0x41ded218 is not a valid JNI reference
        I/dalvikvm(16101): "GLThread 981" prio=5 tid=15 RUNNABLE
        I/dalvikvm(16101): | group="main" sCount=0 dsCount=0 obj=0x41d6e220 self=0x5cb11078
        I/dalvikvm(16101): | sysTid=16133 nice=0 sched=0/0 cgrp=apps handle=1555429136
        I/dalvikvm(16101): | schedstat=( 0 0 0 ) utm=42 stm=32 core=1

    The code calling back into Java:

        void setData() {
            jvm->AttachCurrentThread(&myEnv, 0);
            jclass javaClass = myEnv->FindClass("com/myapp/myClass");
            if(javaClass == NULL){
                LOGD("ERROR - cant find class");
            }
            jmethodID method = myEnv->GetMethodID(javaClass, "updateDataModel", "()V");
            if(method == NULL){
                LOGD("ERROR - cant access method");
            }
            // this works, but its a new java object
            //jobject myobj2 = myEnv->NewObject(javaClass, method);

            // this is where the crash occurs
            myEnv->CallVoidMethod(myobj, method, NULL);
        }

    If instead I create a new jobject using env->NewObject, I can successfully call back into Java, but it is a new object and I don't want that: I need to get back to my original Java object. Is it a matter of switching threads before I call back into Java? If so, how do I do so?

    Read the article

  • Releasing Autoreleasepool crashes on iOS 4.0 (and only on 4.0)

    - by samsam
    Hi there. I'm wondering what could cause this. I have several methods in my code that I call using performSelectorInBackground. Within each of these methods I have an autorelease pool that is alloc'd/initialized at the beginning and released at the end of the method. This works perfectly on iOS 3.1.3 / 3.2 / 4.2 / 4.2.1, but it fatally crashes on iOS 4.0 with an EXC_BAD_ACCESS exception that happens after calling [myPool release]. After I noticed this strange behaviour, I thought about rewriting portions of my code to make the app "less parallel" in case the client OS is 4.0. After I did that, the next point where the app crashed was within the ReachabilityCallback method from Apple's Reachability "framework". Well, now I'm not quite sure what to do. What I do within my threaded methods is pretty simple XML parsing (no Cocoa calls or stuff that would affect the UI). After each method finishes, it posts a notification which the coordinating thread listens to, and once all the parallelized methods have finished, the coordinating thread calls view controllers etc... I have absolutely no clue what could cause this weird behaviour, especially because Apple's code fails as well. Any help is greatly appreciated! Thanks, sam

    Read the article

  • Method having an abstract class as a parameter

    - by Ferhat
    I have an abstract class A, from which I have derived the classes B and C. Class A provides an abstract method DoJOB(), which is implemented by both derived classes. There is a class X which has methods that need to call DoJOB(). Class X may not contain any code like B.DoJOB() or C.DoJOB(). Example:

        public class X {
            private A foo;

            public X(A concrete) {
                foo = concrete;
            }

            public FunnyMethod() {
                foo.DoJOB();
            }
        }

    While instantiating class X I want to decide which derived class (B or C) must be used. I thought about passing an instance of B or C using the constructor of X:

        X kewl = new X(new C());
        kewl.FunnyMethod(); // calls C.DoJOB()

        kewl = new X(new B());
        kewl.FunnyMethod(); // calls B.DoJOB()

    My test showed that declaring a method with a parameter of type A is not working. Am I missing something? How can I implement this correctly? (A is abstract; it cannot be instantiated.)
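
    Below is a minimal Java rendering of the pattern described (the question's snippet could just as well be C#; names here follow Java conventions). An abstract-typed constructor parameter dispatches to the concrete override at runtime, so X never has to name B or C directly.

        abstract class A {
            abstract void doJob();
        }

        class B extends A {
            @Override
            void doJob() { System.out.println("B.doJob"); }
        }

        class C extends A {
            @Override
            void doJob() { System.out.println("C.doJob"); }
        }

        public class X {
            private final A foo;

            public X(A concrete) {
                foo = concrete;
            }

            public void funnyMethod() {
                foo.doJob();
            }

            public static void main(String[] args) {
                new X(new C()).funnyMethod(); // prints "C.doJob"
                new X(new B()).funnyMethod(); // prints "B.doJob"
            }
        }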

    Read the article

  • How do I get C# to garbage collect aggressively?

    - by mmr
    I have an application that is used in image processing, and I find myself typically allocating arrays of size 4000x4000 ushort, as well as the occasional float array and the like. Currently, the .NET framework tends to crash in this app apparently randomly, almost always with an out-of-memory error. 32 MB is not a huge allocation, but if .NET is fragmenting memory, then it's very possible that such large contiguous allocations aren't behaving as expected. Is there a way to tell the garbage collector to be more aggressive, or to defragment memory (if that's the problem)? I realize that there are the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling DLL routines that use native code a lot, but I'm not sure. I've gone over that C++ code and made sure that I delete any memory I declare, but I still get these C# crashes, so I'm pretty sure the problem isn't there. I wonder if the C++ calls could be interfering with the GC, making it leave memory behind because it once interacted with a native call -- is that possible? If so, can I turn that functionality off?

    Read the article

  • Extremely CPU Intensive Alarm Clock

    - by SoulBeaver
    For some reason my program, a console alarm clock I made for laughs and practice, is extremely CPU intensive. It consumes about 2 MB of RAM, which is already quite a bit for such a small program, but it devastates my CPU with over 50% usage at times. Most of the time my program is doing nothing except counting down the seconds, so I guess this part of my program is the one causing so much strain on my CPU, though I don't know why. If so, could you please recommend a way of reducing it, or perhaps a library to use instead if the problem can't be easily solved?

        /* The wait function waits exactly one second before returning to the *
         * called function.                                                   */
        void wait( const int &seconds )
        {
            clock_t endwait; // Type needed to compare with clock()
            endwait = clock() + ( seconds * CLOCKS_PER_SEC );
            while( clock() < endwait ) {} // Nothing need be done here.
        }

    In case anybody browses CPlusPlus.com, this is a genuine copy/paste of the example they have written for clock(), which is why the comment // Nothing need be done here is so lackluster. I'm not entirely sure what exactly clock() does yet. The rest of the program calls two other functions that only activate every sixty seconds, otherwise returning to the caller and counting down another second, so I don't think that's too CPU intensive -- though I wouldn't know, this is my first attempt at optimizing code. The first function is a console clear using system("cls") which, I know, is really, really slow and not a good idea. I will be changing that post-haste, but since it only activates every 60 seconds and there is a noticeable lag spike, I know this isn't the problem most of the time. The second function rewrites the content of the screen with the updated remaining time, also only every sixty seconds. I will edit in the function that calls wait, clearScreen and display if it becomes clear that this function is not the problem. I have already tried passing most variables by reference so they are not copied, as well as avoiding endl, as I heard that it's a little slow compared to \n.

    Read the article

  • javascript watching for variable change via a timer

    - by DA
    I have a slide show where every 10 seconds the next image loads. Once an image loads, it waits 10 seconds (setTimeout), then calls a function which fades it out, then calls the function again to fade in the next image. It repeats indefinitely. This works. However, at times I'm going to want to override this "function1 call - pause - function2 call - function1 call" loop from outside the function (for instance, I may click on a slide number and want to jump directly to that slide, cancelling the current pause). One solution seems to be to set up a variable, allow other events to change that variable, and then, in my original function chain, check for any changes before continuing the loop. I have this set up:

        var ping = 500;
        var pausetime = 10000;
        for(i=0; i<=pausetime; i=i+ping){
            setTimeout(function(){
                console.log(gocheckthevariable());
                console.log(i);
                pseudoIf('the variable is true AND the timer is done then...do the next thing')
            },i)
        }

    The idea is that I'd like to pause for 10 seconds and, during this pause, check every half second whether this should be stopped. The above writes the following to the console every half second:

        true
        10000
        true
        10000
        true
        10000
        etc...

    But what I want/need is this:

        true
        0
        true
        500
        true
        1000
        etc...

    The basic problem is that the for loop executes before each of the setTimeouts. While I can check the value of the variable every half second, I can't match that against the current loop index. I'm sure this is a common issue, but I'm at a loss as to what I should actually be searching for.

    Read the article

  • Are there equivalents to Ruby's method_missing in other languages?

    - by Justin Ethier
    In Ruby, objects have a handy method called method_missing which allows one to handle method calls for methods that have not even been (explicitly) defined: "Invoked by Ruby when obj is sent a message it cannot handle. symbol is the symbol for the method called, and args are any arguments that were passed to it. By default, the interpreter raises an error when this method is called. However, it is possible to override the method to provide more dynamic behavior. The example below creates a class Roman, which responds to methods with names consisting of roman numerals, returning the corresponding integer values."

        class Roman
          def romanToInt(str)
            # ...
          end
          def method_missing(methId)
            str = methId.id2name
            romanToInt(str)
          end
        end

        r = Roman.new
        r.iv    #=> 4
        r.xxiii #=> 23
        r.mm    #=> 2000

    For example, Ruby on Rails uses this to allow calls to methods such as find_by_my_column_name. My question is: what other languages support an equivalent to method_missing, and how do you implement the equivalent in your code?
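
    As one illustrative answer (not from the question): in Java the closest standard-library analogue is a dynamic proxy, where a single InvocationHandler receives every call made through an interface and can interpret the method name itself. Unlike method_missing, the callable names must still be declared on the interface up front, and the romanToInt helper below is a stand-in for the question's Ruby method.

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;

        public class MethodMissingDemo {

            interface Roman {
                int iv();
                int xxiii();
                int mm();
            }

            // Stand-in for the question's romanToInt(str).
            static int romanToInt(String s) {
                int[] vals = new int[128];
                vals['i'] = 1; vals['v'] = 5; vals['x'] = 10; vals['l'] = 50;
                vals['c'] = 100; vals['d'] = 500; vals['m'] = 1000;
                int total = 0;
                for (int k = 0; k < s.length(); k++) {
                    int cur = vals[s.charAt(k)];
                    int next = (k + 1 < s.length()) ? vals[s.charAt(k + 1)] : 0;
                    total += (cur < next) ? -cur : cur;
                }
                return total;
            }

            public static void main(String[] args) {
                Roman r = (Roman) Proxy.newProxyInstance(
                        Roman.class.getClassLoader(),
                        new Class<?>[] { Roman.class },
                        new InvocationHandler() {
                            @Override
                            public Object invoke(Object proxy, Method method, Object[] a) {
                                // The "missing method" hook: dispatch on the name.
                                return romanToInt(method.getName());
                            }
                        });

                System.out.println(r.iv());    // 4
                System.out.println(r.xxiii()); // 23
                System.out.println(r.mm());    // 2000
            }
        }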

    Read the article

  • Modules vs. Classes and their influence on descendants of ActiveRecord::Base

    - by Chris
    Here's a Ruby OO head-scratcher for you, brought about by this Rails scenario:

        class Product < ActiveRecord::Base
          has_many(:prices)
          # define private helper methods
        end

        module PrintProduct
          attr_accessor(:isbn)
          # override methods in ActiveRecord::Base
        end

        class Book < Product
          include PrintProduct
        end

    Product is the base class of all products. Books are kept in the products table via STI. The PrintProduct module brings some common behavior and state to descendants of Product. Book is used inside fields_for blocks in views. This works for me, but I found some odd behavior: after form submission, inside my controller, if I call a method on a book that is defined in PrintProduct, and that method calls a helper method defined in Product, which in turn calls the prices method defined by has_many, I'll get an error complaining that Book#prices is not found. Why is that? Book is a direct descendant of Product! More interesting is the following: as I developed this hierarchy, PrintProduct started to become more of an abstract ActiveRecord::Base, so I thought it prudent to redefine everything as such:

        class Product < ActiveRecord::Base
        end

        class PrintProduct < Product
        end

        class Book < PrintProduct
        end

    All method definitions, etc. are the same. In this case, however, my web form won't load because the attributes defined by attr_accessor (which are "virtual attributes" referenced by the form but not persisted in the DB) aren't found. I'll get an error saying that there is no method Book#isbn. Why is that?? I can't see a reason why the attr_accessor attributes are not found inside my form's fields_for block when PrintProduct is a class, but they are found when PrintProduct is a module. Any insight would be appreciated. I'm dying to know why these errors are occurring!

    Read the article
