Search Results

Search found 27144 results on 1086 pages for 'tail call optimization'.


  • Recursion in Unity and Dispose pattern implementation

    - by Budda
    My class inherits from UnityContainer (Unity 2.0); here is the source code:

        public class UnityManager : UnityContainer
        {
            private UnityManager()
            {
                _context = new MyDataClassesDataContext();
                // ...
            }

            protected override void Dispose(bool disposing)
            {
                if (disposing)
                {
                    _context.Dispose();
                }
                base.Dispose(disposing);
            }

            private readonly MyDataClassesDataContext _context;
        }

    When the Dispose method is called on an instance of UnityManager, it drops into recursion... Why? As far as I know, base.Dispose should call the Dispose method of the base class only... shouldn't it? What calls back into UnityManager's Dispose(bool), and how can I prevent that? Thanks.
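
    A minimal guard against re-entrant disposal, whatever turns out to be triggering the second call, is the standard disposed-flag pattern. A sketch (the _disposed field is an addition for illustration, not part of the original class):

        private bool _disposed;

        protected override void Dispose(bool disposing)
        {
            if (_disposed) return;   // break any re-entrant call
            _disposed = true;
            if (disposing)
            {
                _context.Dispose();
            }
            base.Dispose(disposing);
        }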


  • Android: Is it better to start and stop a service each time it is needed, or to let a service run and bind/unbind as needed?

    - by Flo
    I'm developing an app that checks several conditions during an incoming phone call. The main parts of the app are a BroadcastReceiver listening for Intents related to the phone's status, and a local Service checking the conditions. At the moment the service is started each time an incoming call is detected and is stopped when the phone status changes back to idle. Now I'm wondering whether this procedure is correct, and whether it is reasonable to start and stop the service in step with the phone's status. Or would it be better to let the service run regardless of the phone's status and bind/unbind to/from it when needed? Are there any performance issues I would have to think about? Perhaps it is more expensive to start/stop a service than to let it run and communicate with it. Are there any best practices out there regarding the implementation of services?


  • How to import a WCF web service using a Java client

    - by JRP
    I have a WCF web service using wsHttpBinding that I am consuming from a Java client. I generated code from the WSDL using wsimport. The Java client appears to create the service fine, but when I call a method on the service the client just spins.

        MyService s = new MyService();
        IMyService i = s.getWSHttpBindingIMyService();
        returnedValue = i.getSomething(2); // method call

    Can a Java client communicate with a WCF web service that is using wsHttpBinding? I have read that I might need to use WSIT (Metro) but am confused about how to proceed with that. Any help will be appreciated.
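
    If Metro proves hard to set up, a workaround often suggested is to expose an additional basicHttpBinding endpoint on the WCF side, which plain JAX-WS clients interoperate with well. A hedged sketch of the extra endpoint in the service configuration (the service and contract names are illustrative, not taken from the original service):

        <system.serviceModel>
          <services>
            <service name="MyNamespace.MyService">
              <endpoint address=""      binding="wsHttpBinding"    contract="MyNamespace.IMyService" />
              <endpoint address="basic" binding="basicHttpBinding" contract="MyNamespace.IMyService" />
            </service>
          </services>
        </system.serviceModel>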


  • What's the right way to kill child processes in perl before exiting?

    - by rarbox
    I'm running an IRC bot (Bot::BasicBot) which has two child processes running File::Tail, but when exiting, they don't terminate. So I'm killing them using Proc::ProcessTable like this before exiting:

        my $parent = $$;
        my $proc_table = Proc::ProcessTable->new();
        for my $proc (@{$proc_table->table()}) {
            kill(15, $proc->pid) if ($proc->ppid == $parent);
        }

    It works, but I get this warning:

        14045: !!! Child process PID:14047 reaped:
        14045: !!! Child process PID:14048 reaped:
        14045: !!! Your program may not be using sig_child() to reap processes.
        14045: !!! In extreme cases, your program can force a system reboot
        14045: !!! if this resource leakage is not corrected.

    What else can I do to kill the child processes? The forked processes are created using the forkit method in Bot::BasicBot.


  • How to stop an endless EJB 3 timer?

    - by worldpython
    Hi, I am new to EJB 3. I use the following code to start an endless EJB 3 timer, then deploy it on JBoss 4.2.3:

        @Stateless
        public class SimpleBean implements SimpleBeanRemote, TimerService
        {
            @Resource
            TimerService timerService;
            private Timer timer;

            @Timeout
            public void timeout(Timer timer)
            {
                System.out.println("Hello EJB");
            }
        }

    and then start it with:

        timer = timerService.createTimer(10, 5000, null);

    It works well. I created a client class that calls a method that creates the timer, and a method that is called when the timer times out. I forgot to call cancel, and now it does not stop: redeploying with a cancel call never stops it, and restarting JBoss 4.2.3 never stops it either. How can I stop the EJB timer? Thanks for helping.
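
    Container-managed timers survive redeploys and restarts by design, so the usual fix is to ask the container for the timers it still holds and cancel them explicitly. A sketch using the standard javax.ejb.TimerService API, run once from any method of the timer bean:

        // Cancel every timer the container is still holding for this bean.
        for (Object o : timerService.getTimers()) {
            ((javax.ejb.Timer) o).cancel();
        }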


  • How to set the cell height dynamically on iPhone?

    - by Pugal Devan
    Hi friends, in my app I have created a table view that displays images and labels. The images are downloaded and their heights are set in another class, so the image height arrives through a delegate. My problem is that heightForRowAtIndexPath: is called very early and is never called again, so how do I make the cell height depend on the image height?

        - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath

        // Console output at this point:
        //   the height of the image is -------- 0.000000
        //   the height of the image is -------- 0.000000
        //   the height of the image is -------- 0.000000

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath

        // Console output at this point:
        //   the height of the image is -------- 98.567039
        //   the height of the image is -------- 300.000000
        //   the height of the image is -------- 232.288406

    Is it possible to call heightForRowAtIndexPath: from inside cellForRowAtIndexPath:? I set the cell's frame size and height depending on the image height, so how can I make the cell height track the image height? Please guide me. Thanks!


  • Converting python objects for rpy2

    - by bgbg
    The following code is supposed to create a heatmap in rpy2:

        import numpy as np
        from rpy2.robjects import r

        data = np.random.random((10, 10))
        r.heatmap(data)

    However, it results in the following error:

        Traceback (most recent call last):
          File "z.py", line 8, in <module>
            labRow=rowNames, labCol=colNames)
          File "C:\Python25\lib\site-packages\rpy2\robjects\__init__.py", line 418, in __call__
            new_args = [conversion.py2ri(a) for a in args]
          File "C:\Python25\lib\site-packages\rpy2\robjects\__init__.py", line 93, in default_py2ri
            raise(ValueError("Nothing can be done for the type %s at the moment." % (type(o))))
        ValueError: Nothing can be done for the type <type 'numpy.ndarray'> at the moment.

    From the documentation I learn that r.heatmap expects "a numeric matrix". How do I convert np.array to the required data type?
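
    rpy2 ships a numpy bridge for exactly this; once it is enabled, ndarrays convert to R matrices automatically. A sketch, assuming an rpy2 release that provides the numpy2ri module (in the oldest releases the conversion was enabled simply by importing the module rather than by calling activate()):

        import numpy as np
        import rpy2.robjects as robjects
        import rpy2.robjects.numpy2ri
        rpy2.robjects.numpy2ri.activate()   # register numpy <-> R conversion

        data = np.random.random((10, 10))
        robjects.r.heatmap(data)            # the ndarray is now passed as an R matrix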


  • How to create a function with vectors and vertical asymptote using MATLAB

    - by Pedro J Trinidad
    Plot the function f(x) = 1.5x / (x - 4) for -10 <= x <= 10. Notice that the function has a vertical asymptote at x = 4. Plot the function by creating two vectors for the domain of x: the first vector (call it x1) with elements from -10 to 3.7, and the second vector (call it x2) with elements from 4.3 to 10. For each of the x vectors, create a y vector (call them y1 and y2) with the corresponding values of y according to the function. To plot the function, make two curves in the same plot (y1 vs. x1 and y2 vs. x2).
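
    A minimal MATLAB sketch of the exercise as stated (the 0.01 step size is an arbitrary choice):

        x1 = -10:0.01:3.7;           % domain left of the asymptote
        x2 = 4.3:0.01:10;            % domain right of the asymptote
        y1 = 1.5*x1 ./ (x1 - 4);     % element-wise evaluation of f
        y2 = 1.5*x2 ./ (x2 - 4);
        plot(x1, y1, x2, y2)         % both branches in one figure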


  • Oracle EXECUTE IMMEDIATE changes the explain plan of a query

    - by Gunny
    I have a stored procedure that I am calling using EXECUTE IMMEDIATE. The issue I am facing is that the explain plan is different when I call the procedure directly vs. when I use EXECUTE IMMEDIATE to call it. This is causing the execution time to increase 5x. The main difference between the plans is that when I use EXECUTE IMMEDIATE the optimizer isn't unnesting the subquery (I'm using a NOT EXISTS condition). We are using the rule-based optimizer here at work. Example:

    Fast:

        begin
          package.procedure;
        end;
        /

    Slow:

        begin
          execute immediate 'begin package.' || proc_name || '; end;';
        end;
        /


  • Stack and queue operations on the same array

    - by Passonate Learner
    Hi. I've been thinking about a piece of program logic, but I cannot draw a conclusion to my problem. Here, I've implemented stack and queue operations on a fixed array:

        int A[1000];
        int size = 1000;
        int top;
        int front;
        int rear;

        bool StackIsEmpty() { return (top == 0); }
        bool StackPush( int x )
        {
            if ( top >= size ) return false;
            A[top++] = x;
            return true;
        }
        int StackTop( ) { return A[top-1]; }
        bool StackPop()
        {
            if ( top <= 0 ) return false;
            A[--top] = 0;
            return true;
        }

        bool QueueIsEmpty() { return (front == rear); }
        bool QueuePush( int x )
        {
            if ( rear >= size ) return false;
            A[rear++] = x;
            return true;
        }
        int QueueFront( ) { return A[front]; }
        bool QueuePop()
        {
            if ( front >= rear ) return false;
            A[front++] = 0;
            return true;
        }

    It is presumed (or obvious) that the bottom of the stack and the front of the queue point at the same location, and vice versa (the top of the stack points at the same location as the rear of the queue). For example, with the integers 1 and 2 in the array in order of writing, if I call StackPop(), the integer 2 will be popped out, and if I call QueuePop(), the integer 1 will be popped out. My problem is that I don't know what happens if I do both stack and queue operations on the same array. The example above is easy to work out, because there are only two values involved. But what if there are more than 2 values involved? For example, if I call

        StackPush(1); QueuePush(2); QueuePush(4); StackPop(); StackPush(5); QueuePop();

    what values will be returned, in order from the bottom (front) of the final array? I know that if I coded it up, I would get a quick answer. But the reason I'm asking is that I want to hear a logical explanation from a human being, not a computer.
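
    For reference, tracing the quoted sequence against the code exactly as written (top, front, and rear all starting at 0) shows the two structures clobbering each other, because the three indices move independently over the same cells. A runnable sketch with the trace in comments:

        #include <cstdio>

        int A[1000];
        const int size = 1000;
        int top = 0, front = 0, rear = 0;

        bool StackPush(int x) { if (top >= size)   return false; A[top++]  = x; return true; }
        bool StackPop()       { if (top <= 0)      return false; A[--top]  = 0; return true; }
        bool QueuePush(int x) { if (rear >= size)  return false; A[rear++] = x; return true; }
        bool QueuePop()       { if (front >= rear) return false; A[front++] = 0; return true; }

        int main()
        {
            StackPush(1);  // A[0]=1, top=1
            QueuePush(2);  // A[0]=2, rear=1  -- overwrites the stack's 1
            QueuePush(4);  // A[1]=4, rear=2
            StackPop();    // A[0]=0, top=0   -- zeroes the queue's front element
            StackPush(5);  // A[0]=5, top=1
            QueuePop();    // A[0]=0, front=1 -- zeroes the value the stack just pushed
            std::printf("front=%d rear=%d top=%d A[0]=%d A[1]=%d\n",
                        front, rear, top, A[0], A[1]);
            // prints: front=1 rear=2 top=1 A[0]=0 A[1]=4
            return 0;
        }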


  • When binding a client TCP socket to a specific local port with Winsock, SO_REUSEADDR does not have any effect

    - by Checkers
    I'm binding a client TCP socket to a specific local port. To handle the situation where the socket remains in the TIME_WAIT state for some time, I use setsockopt() with SO_REUSEADDR on the socket. It works on Linux, but does not work on Windows: I get WSAEADDRINUSE on the connect() call when the previous connection is still in TIME_WAIT. MSDN is not exactly clear about what should happen with client sockets:

        [...] For server applications that need to bind multiple sockets to the same port
        number, consider using setsockopt (SO_REUSEADDR). Client applications usually need
        not call bind at all—connect chooses an unused port automatically. [...]

    How do I avoid this?


  • Insert an ajaxified Webpart into an existing MOSS site

    - by mamoo
    Hi everybody, I need to code a web part whose purpose is to asynchronously fetch some documents and display them in an existing page. Unfortunately I face a lot of restrictions, and my struggle to find a solution has been useless so far:

    1) I cannot use Microsoft ASP.NET AJAX.
    2) I must use JSONP, because the called service (page, whatever...) is outside the site's domain. That's not a big problem.
    3) I have no way to alter the existing page code, so I cannot reference an external library such as jQuery.
    4) For the same reason I cannot hook my methods to the window.onload event, so the question here is: how can I be sure that everything is correctly loaded before triggering my AJAX call?
    5) Since several instances of the same web part can be placed on the same page, can there be conflicts among the various JS functions? (A sketch addressing this follows below.)
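
    For points 2) and 5), a dependency-free JSONP helper is small enough to inline in the web part. A sketch, using a per-request callback name to avoid collisions between web part instances (the "callback" query-parameter name depends on what the remote service expects):

        // Inject a <script> tag and route the response to a uniquely named global callback.
        function jsonp(url, callback) {
            var name = 'jsonp_cb_' + Math.floor(Math.random() * 1e9);
            window[name] = function (data) {
                window[name] = undefined;                 // free the global slot
                script.parentNode.removeChild(script);    // drop the script tag
                callback(data);
            };
            var script = document.createElement('script');
            script.src = url + (url.indexOf('?') >= 0 ? '&' : '?') + 'callback=' + name;
            document.getElementsByTagName('head')[0].appendChild(script);
        }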


  • Winforms: calling entry form function from a different class

    - by samy
    I'm kind of new to programming and have a question about good practice. I created a class that represents a ball, with a function Jump() that uses two timers to move the ball up and down. I know that in WinForms you have to call Invalidate() every time you want to repaint the screen, or part of it. I didn't find a good way to do that, so I keep a reference to the form in my class and call Invalidate() inside my ball class every time I need to repaint the ball's movement. (This works, but I have a feeling it is not good practice.) Here is the class I created:

        public class Ball
        {
            public Form1 parent; // ----> here is the reference to the form
            public Rectangle ball;
            Size size;
            public Point p;
            Timer timerBallGoUp = new Timer();
            Timer timerBallGDown = new Timer();
            public int ballY;

            public Ball(Size _size, Point _p)
            {
                size = _size;
                p = _p;
                ball = new Rectangle(p, size);
            }

            public void Jump()
            {
                ballY = p.Y;
                timerBallGDown.Elapsed += ballGoDown;
                timerBallGDown.Interval = 50;
                timerBallGoUp.Elapsed += ballGoUp;
                timerBallGoUp.Interval = 50;
                timerBallGoUp.Start();
            }

            private void ballGoUp(object obj, ElapsedEventArgs e)
            {
                p.Y++;
                ball.Location = new Point(ball.Location.X, p.Y);
                if (p.Y >= ballY + 50)
                {
                    timerBallGoUp.Stop();
                    timerBallGDown.Start();
                }
                parent.Invalidate(); // here I call parent.Invalidate() 1
            }

            private void ballGoDown(object obj, ElapsedEventArgs e)
            {
                p.Y--;
                ball.Location = new Point(ball.Location.X, p.Y);
                if (p.Y <= ballY)
                {
                    timerBallGDown.Stop();
                    timerBallGoUp.Start();
                }
                parent.Invalidate(); // here I call parent.Invalidate() 2
            }
        }

    I'm wondering if there is a better way to do this? (Sorry for my English.)
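
    A common way to break the dependency on Form1 (a sketch, not the only option) is for the ball to raise an event whenever it moves, and let the form subscribe and invalidate itself:

        public class Ball
        {
            // Raised whenever the ball's position changes.
            public event EventHandler Moved;

            protected void OnMoved()
            {
                EventHandler handler = Moved;
                if (handler != null) handler(this, EventArgs.Empty);
            }

            // ... inside ballGoUp/ballGoDown, call OnMoved() instead of parent.Invalidate()
        }

        // In the form, once, after creating the ball:
        //     ball.Moved += delegate { this.Invalidate(); };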


  • WCF - Network Cost

    - by Mubashar Ahmad
    Dear devs, I have a WCF service deployed on IIS with basicHttpBinding and aspNetCompatibilityEnabled=true. I also have a test client which invokes multiple service functions simultaneously. To check the performance of service calls on the client and server, I measured the average time it takes to complete a request on the client (in the proxy code) and on the server as well. After a test of 8 hours (server and client on the same machine), I found that the average response time on the client is around 34 ms, whereas the average execution time on the server is around 3 ms, so the difference is 31 ms. I would like to know why every call carries this extra 31 ms: is it justified, and how can I reduce it?


  • Is it possible to do AJAX calls in a liquid template?

    - by Brian Armstrong
    I'm looking at the Liquid templating language for Rails apps: http://wiki.github.com/tobi/liquid/ I'd like my users to also be able to make AJAX calls (just like the ones in Rails for periodically_call_remote, observe_field, etc.). Is this possible? Assuming the Rails helpers can be added as filters, how will the user be able to modify what gets returned by the AJAX call? They cannot modify an RJS file on the server or anything like that. I suppose the AJAX call could return JSON (instead of rendered HTML) and then the JavaScript could use that to render something, but I'm having a little trouble envisioning how it would work exactly. If anyone can point me to an example of this or clarify it, it'd be much appreciated. Thanks!


  • jQuery validate() special function

    - by kevin
    Hi there, I'm using the jQuery validate() plugin for some forms and it works great. The only thing is that I have an input field that requires a special validation process. Here is how it goes: the validate plugin is called in the domready for all the required fields. Here is an example for an input:

        <li>
          <label for="nome">Nome completo*</label>
          <input name="nome" type="text" id="nome" class="required"/>
        </li>

    And here is how I call my special function:

        <li>
          <span id="sprytextfield1">
            <label for="cpf">CPF* (xxxxxxxxxxx)</label>
            <input name="cpf" type="text" id="cpf" maxlength="15" class="required" />
            <span class="textfieldInvalidFormatMsg">CPF Inv&aacute;lido.</span>
          </span>
        </li>

    And at the bottom of the file I call the Spry function:

        <script type="text/javascript">
        <!--
        var sprytextfield1 = new Spry.Widget.ValidationTextField("sprytextfield1", "cpf");
        //-->
        </script>

    Of course I reference the Spry CSS and JS files in the head section, as well as my special-validate.js. When I just use the jQuery validate() plugin and click on the send button, the page automatically goes back to the first mistaken input field and shows the error type (not a number, not a valid email, etc.). But with this new function, this "going back to the first mistake" feature doesn't work, of course, because the validate() function sees it all as good. I already added a rule for another form (about picture uploads) and it goes like this:

        $("#commentForm").validate({
            rules: {
                foto34: {
                    required: true,
                    accept: "jpg|png|gif"
                }
            }
        });

    Now my question is: how can I add the special validation function as a rule of the whole validation process? Here is the page to understand it better: link text — the special field is the first one: CPF. Hope I was clear explaining my problem. Thanks in advance. kevin
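
    The plugin's own extension point for this is jQuery.validator.addMethod, which turns the custom check into a first-class rule, so the "go back to the first mistake" behavior works again. A sketch, with the real CPF check left as a placeholder pattern and the form selector illustrative:

        // Register a custom rule named "cpf" with the validate() plugin.
        $.validator.addMethod("cpf", function (value, element) {
            // Placeholder: accept 11-15 digits; substitute the real CPF checksum here.
            return this.optional(element) || /^\d{11,15}$/.test(value);
        }, "CPF inv\u00e1lido.");

        $("#myForm").validate({
            rules: {
                cpf: { required: true, cpf: true }
            }
        });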


  • Design pattern question: encapsulation or inheritance

    - by Matt
    Hey all, I have a question I have been toiling over for quite a while. I am building a templating engine with two main classes, Template.php and Tag.php, and a bunch of extension classes like Img.php and String.php. The program works like this: a Template object creates Tag objects, and each Tag object determines which extension class (Img, String, etc.) to implement. The point of the Tag class is to provide helper functions for each extension class, such as wrap('div'), addClass('slideshow'), etc. Each Img or String class renders the code required for its tag, so $Img->render() would give something like <img src='blah.jpg' />.

    My question is: should I encapsulate all extension functionality within the Tag object, like so:

        // Tag.php
        function __construct($namespace, $args)
        {
            // Sort out namespace to determine which extension to call
            $this->extension = new $namespace($this); // pass in the Tag object so it can be used within the extension
            return $this; // Tag object
        }

        function render()
        {
            return $this->extension->render();
        }

        // Img.php
        function __construct(Tag $T)
        {
            $args = $T->getArgs();
            $T->addClass('img');
        }

        function render()
        {
            return '<img src="blah.jpg" />';
        }

    Usage:

        $T = new Tag("img", array(...));
        $T->render();

    ... or should I create more of an inheritance structure, because "Img is a Tag":

        // Tag.php
        public static function create($namespace, $args)
        {
            // Sort out namespace to determine which extension to call
            return new $namespace($args);
        }

        // Img.php
        class Img extends Tag
        {
            function __construct($args)
            {
                // Determine namespace, then call the parent constructor
                $T = parent::__construct($namespace, $args);
            }

            function render()
            {
                return '<img src="blah.jpg" />';
            }
        }

    Usage:

        $Img = Tag::create('img', array(...));
        $Img->render();

    One thing I do need is a common interface for creating custom tags, i.e. I can instantiate Img(...) then instantiate String(...), but I do need to instantiate each extension through Tag. I know this is a somewhat vague question; I'm hoping some of you have dealt with this in the past and can foresee certain issues with each design pattern. If you have any other suggestions I would love to hear them. Thanks! Matt Mueller


  • OpenSSL "Seal" in C (or via shell)

    - by chpwn
    I'm working on porting some PHP code to C that contacts a web API. The issue I've come across is that the PHP code uses the function openssl_seal(), but I can't seem to find any way to do the same thing in C, or even via openssl in a call to system(). From the PHP manual on openssl_seal():

        int openssl_seal ( string $data , string &$sealed_data , array &$env_keys , array $pub_key_ids )

        openssl_seal() seals (encrypts) data by using RC4 with a randomly generated
        secret key. The key is encrypted with each of the public keys associated with
        the identifiers in pub_key_ids and each encrypted key is returned in env_keys.
        This means that one can send sealed data to multiple recipients (provided one
        has obtained their public keys). Each recipient must receive both the sealed
        data and the envelope key that was encrypted with the recipient's public key.

    What would be the best way to implement this? I'd really prefer not to call out to a PHP script every time, for obvious reasons.
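
    OpenSSL's EVP "envelope" API in C is the direct counterpart: EVP_SealInit() generates the random symmetric key, encrypts it under each recipient's public key, and returns the per-recipient encrypted keys, just as openssl_seal() does. A compressed sketch (error handling and public-key loading omitted; RC4 chosen to match what openssl_seal() historically used):

        #include <openssl/evp.h>
        #include <stdlib.h>

        /* pubkeys: n recipient public keys, already loaded into EVP_PKEY objects;
           out, ek, ekl: caller-provided buffers for the sealed data and envelope keys */
        int seal(EVP_PKEY **pubkeys, int n, const unsigned char *in, int inlen,
                 unsigned char *out, unsigned char **ek, int *ekl)
        {
            EVP_CIPHER_CTX ctx;
            int outlen = 0, tmplen = 0, i;

            for (i = 0; i < n; i++)
                ek[i] = malloc(EVP_PKEY_size(pubkeys[i]));  /* room for each envelope key */

            EVP_CIPHER_CTX_init(&ctx);
            EVP_SealInit(&ctx, EVP_rc4(), ek, ekl, NULL, pubkeys, n);  /* RC4 needs no IV */
            EVP_SealUpdate(&ctx, out, &outlen, in, inlen);
            EVP_SealFinal(&ctx, out + outlen, &tmplen);
            EVP_CIPHER_CTX_cleanup(&ctx);

            /* send out[0..outlen+tmplen) plus ek[i]/ekl[i] to each recipient i */
            return outlen + tmplen;
        }

    Note that the stack-allocated context with EVP_CIPHER_CTX_init()/EVP_CIPHER_CTX_cleanup() matches the OpenSSL versions of that era; with OpenSSL 1.1 and later the context must instead be obtained via EVP_CIPHER_CTX_new() and released with EVP_CIPHER_CTX_free().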


  • MVC DateTime binding with incorrect date format

    - by Sam Wessel
    ASP.NET MVC now allows for implicit binding of DateTime objects. I have an action along the lines of:

        public ActionResult DoSomething(DateTime startDate)
        {
            ...
        }

    This successfully converts a string from an AJAX call into a DateTime. However, we use the date format dd/MM/yyyy, and MVC is converting as MM/dd/yyyy. For example, submitting a call to the action with the string '09/02/2009' results in a DateTime of '02/09/2009 00:00:00', or September 2nd in our local settings. I don't want to roll my own model binder for the sake of a date format, but it seems needless to have to change the action to accept a string and then use DateTime.Parse if MVC is capable of doing this for me. Is there any way to alter the date format used in the default model binder for DateTime? Shouldn't the default model binder use your localisation settings anyway?
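
    For reference, should it come to that after all, a binder that honours the current culture is only a few lines. A sketch against the MVC 2-era IModelBinder interface (the class name is illustrative):

        // Parses DateTime action parameters with the thread's current culture.
        public class DateTimeModelBinder : IModelBinder
        {
            public object BindModel(ControllerContext controllerContext,
                                    ModelBindingContext bindingContext)
            {
                var value = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
                if (value == null || string.IsNullOrEmpty(value.AttemptedValue))
                    return null;
                return DateTime.Parse(value.AttemptedValue,
                                      System.Globalization.CultureInfo.CurrentCulture);
            }
        }

        // Registration, once, in Application_Start:
        //     ModelBinders.Binders[typeof(DateTime)] = new DateTimeModelBinder();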


  • ResolveURL not resolving in a user control

    - by WebJunk
    I'm trying to use ResolveUrl() to set some paths in the code-behind of a custom ASP.NET user control. The user control contains a navigation menu. I'm loading it on a page that uses a master page. When I call ResolveUrl("~") in my user control it returns "~" instead of the root of the site; when I call it in a page I get the root path as expected. I've stepped through with the debugger and confirmed that ResolveUrl("~") returns "~" in my user control code-behind. Is there some other way I should be calling the function in my user control code-behind to get the root path of the site?


  • Why Is Vertical Monitor Resolution So Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary or is there something more at work?

    Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers: YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable.) Is it merely a pleasing-the-eye situation?

    So why have the display be a multiple of 360?

    The Answer

    SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists but a history of screen design in the process:

    Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing.

    First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there’s 480p, and for the computer crowd there is 360p for 2×ISDN/multiuser landlines, 720p for DSL, and 1080p for higher-speed internet. For a while there were some codecs other than h.264, but these are slowly being phased out, with h.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for it.

    Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution.

    So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they’d rather use their bandwidth for other stuff. This was researched quite a long time ago and is the big reason why we went from 720×576 (415 kpix) to 1280×720 (922 kpix), and then again from 1280×720 to 1920×1080 (2 Mpix). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7 Mpix, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.

    Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays; back in the day they used to be multiples of 128. This is something that just grew out of the LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry-standard line drivers became drivers with 360 line outputs (1 per subpixel). If you would tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’.

    Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from, when films were shot in ever wider aspect ratios.

    Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s.

    However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media-consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen.

    But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not amount of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9, and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up, and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh…

    I think that about covers all the major aspects here. There’s more of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today’s considerations, this is about all you need to know.


  • Address Book callback not called

    - by Oliver
    I have an iPhone app that makes use of AddressBook.framework and uses Core Data to store these contacts. In order to make sure I update my own database when the Address Book is updated (whether via MobileMe or by editing within my own app), I subscribe to the notification for Address Book changes. I call this on startup:

        ABAddressBookRef book = ABAddressBookCreate();
        ABAddressBookRegisterExternalChangeCallback(book, addressBookChanged, self);

    This is (supposedly) called on any edit:

        void addressBookChanged(ABAddressBookRef reference, CFDictionaryRef dictionary, void *context)
        {
            // The contacts controller we need to call
            ContactsController *contacts = (ContactsController *)context;

            // Sync with the Address Book
            [contacts synchronizeWithAddressBook:reference];
        }

    I have an ABPersonViewController that allows editing, and addressBookChanged never seems to get called. Is there any reason for it not to be called?


  • How to fetch output when calling R using QProcess or system

    - by SYK
    Hi experts, I would like to execute an R script simply as:

        R --file=x.R

    It runs well on the command line. However, when I try the system call in C++ with:

        QProcess::execute("R --file=x.R");

    or:

        system("R --file=x.R");

    the program R runs and quits, but I can't see the output the program is supposed to generate. If a program uses no stdout (such as R), how do I fetch the output after a system call, either as an output file or in the program's own console? Thanks for your time.
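
    A sketch of the usual QProcess approach: run R through a QProcess instance instead of the static execute(), then read the captured channels once it finishes (both channels are read here, since R writes much of its console output to stderr):

        #include <QCoreApplication>
        #include <QProcess>
        #include <QDebug>

        int main(int argc, char *argv[])
        {
            QCoreApplication app(argc, argv);

            QProcess proc;
            proc.start("R", QStringList() << "--file=x.R");
            proc.waitForFinished(-1);   // block until R exits

            qDebug() << proc.readAllStandardOutput();   // captured stdout
            qDebug() << proc.readAllStandardError();    // captured stderr
            return 0;
        }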


  • Exposing C# COM server events to Delphi client applications

    - by hectorsosajr
    My question is very similar to these two:

        http://stackoverflow.com/questions/1140984/c-component-events
        http://stackoverflow.com/questions/1638372/c-writing-a-com-server-events-not-firing-on-client

    However, what worked for them is not working for me. The type library file does not have any hint of the event definitions, so Delphi doesn't see them. The class works fine for other C# applications, as you would expect.

    COM server tools: Visual Studio 2010, .NET 4.0. Delphi applications: Delphi 2010, Delphi 7.

    Here's a simplified version of the code:

        /// <summary>
        /// Call has arrived delegate.
        /// </summary>
        [ComVisible(false)]
        public delegate void CallArrived(object sender, string callData);

        /// <summary>
        /// Interface to expose SimpleAgent events to COM
        /// </summary>
        [ComVisible(true)]
        [GuidAttribute("1FFBFF09-3AF0-4F06-998D-7F4B6CB978DD")]
        [InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
        public interface IAgentEvents
        {
            ///<summary>
            /// Handles incoming calls from the predictive manager.
            ///</summary>
            ///<param name="sender">The class that initiated this event</param>
            ///<param name="callData">The data associated with the incoming call.</param>
            [DispId(1)]
            void OnCallArrived(object sender, string callData);
        }

        /// <summary>
        /// Represents the agent side of the system. This is usually related to UI interactions.
        /// </summary>
        [ComVisible(true)]
        [GuidAttribute("EF00685F-1C14-4D05-9EFA-538B3137D86C")]
        [ClassInterface(ClassInterfaceType.None)]
        [ComSourceInterfaces(typeof(IAgentEvents))]
        public class SimpleAgent
        {
            /// <summary>
            /// Occurs when a call arrives.
            /// </summary>
            public event CallArrived OnCallArrived;

            public SimpleAgent() {}

            public string AgentName { get; set; }
            public string CurrentPhoneNumber { get; set; }

            public void FireOffCall()
            {
                if (OnCallArrived != null)
                {
                    OnCallArrived(this, "555-123-4567");
                }
            }
        }

    The type library file has the definitions for the properties and methods, but no events are visible. I even opened the type library in Delphi's viewer to make sure. The Delphi app can see and use any property, method, and function just fine; it just doesn't see the events. I would appreciate any pointers or articles to read. Thanks!


  • Calling a method in a view controller from a view

    - by Lakshmie
    I have to invoke a method present in a view controller whose reference is available in the view. When I try to call the method like any other method, for some reason the iPhone just ignores the call. Can somebody explain why this happens, and how I can go about invoking this method? In the view I have this method:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            NSArray *mySubViews = [self subviews];
            for (UITouch *touch in touches) {
                int i = 0;
                for (; i < [mySubViews count]; i++) {
                    if (CGRectContainsPoint([[mySubViews objectAtIndex:i] frame],
                                            [touch locationInView:self])) {
                        break;
                    }
                }
                if (i < [mySubViews count]) {
                    // viewController is the reference to the View Controller.
                    [viewController pointToSummary:[touch locationInView:self].y];
                    NSLog(@"Helloooooo");
                    break;
                }
            }
        }

    Whenever the touches event is triggered, "Helloooooo" gets printed in the console, but the method call before it is simply ignored.

