Search Results

Search found 1382 results on 56 pages for 'async await'.

Page 5/56 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on the allocation). I think this makes sense if we're talking about a server with many client connections where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them. But in my app, I'm the client and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues. First, the threadpool threads will have default priority, so it seems they will be strictly worse than my own thread, which has Highest priority. Second, I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads, because TCP is stream-based and there's no guarantee that they represent N complete market data messages. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me. Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?
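
    A minimal sketch of the single dedicated blocking-receive thread described above (class and member names are hypothetical, and parsing of the byte stream is left out):

        using System.Net.Sockets;
        using System.Threading;

        class TickReceiver
        {
            private readonly Socket _socket;                 // already-connected TCP socket
            private readonly byte[] _buffer = new byte[64 * 1024];

            public TickReceiver(Socket socket)
            {
                _socket = socket;
            }

            public void Start()
            {
                var thread = new Thread(ReceiveLoop)
                {
                    IsBackground = true,
                    Priority = ThreadPriority.Highest        // as described in the question
                };
                thread.Start();
            }

            private void ReceiveLoop()
            {
                while (true)
                {
                    int read = _socket.Receive(_buffer);     // blocks until data arrives
                    if (read == 0)
                        break;                               // remote side closed the connection
                    // parse _buffer[0..read) into complete market-data messages here
                }
            }
        }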

    Read the article

  • Some Async Socket Code - Help with Garbage Collection?

    - by divinci
    Hi all, I think this question is really about my understanding of garbage collection and variable references. But I will go ahead and throw out some code for you to look at. // Please note: do not use this code for async sockets, it is just to highlight my question // SocketTransport // This is a simple wrapper class that is used as the 'state' object // when performing async socket reads/writes public class SocketTransport { public Socket Socket; public byte[] Buffer; public SocketTransport(Socket socket, byte[] buffer) { this.Socket = socket; this.Buffer = buffer; } } // Entry point - creates a SocketTransport, then passes it as the state // object when asynchronously reading from the socket. public void ReadOne(Socket socket) { SocketTransport socketTransport_One = new SocketTransport(socket, new byte[10]); socketTransport_One.Socket.BeginReceive ( socketTransport_One.Buffer, // Buffer to store data 0, // Buffer offset 10, // Read length SocketFlags.None, // SocketFlags new AsyncCallback(OnReadOne), // Callback when BeginReceive completes socketTransport_One // 'state' object to pass to the callback ); } public void OnReadOne(IAsyncResult ar) { SocketTransport socketTransport_One = ar.AsyncState as SocketTransport; ProcessReadOneBuffer(socketTransport_One.Buffer); // Do processing // New read // Create another SocketTransport (what happens to the first one?) SocketTransport socketTransport_Two = new SocketTransport(socketTransport_One.Socket, new byte[10]); socketTransport_Two.Socket.BeginReceive ( socketTransport_Two.Buffer, 0, 10, SocketFlags.None, new AsyncCallback(OnReadTwo), socketTransport_Two ); } public void OnReadTwo(IAsyncResult ar) { SocketTransport socketTransport_Two = ar.AsyncState as SocketTransport; .............. So my question is: the first SocketTransport to be created (socketTransport_One) has a strong reference to a Socket object (let's call it ~SocketA~). Once the async read is completed, a new SocketTransport object is created (socketTransport_Two), also with a strong reference to ~SocketA~. Q1. Will socketTransport_One be collected by the garbage collector when method OnReadOne exits, even though it still contains a strong reference to ~SocketA~? Thanks all!

    Read the article

  • Silverlight Async Design Pattern Issue

    - by Mike Mengell
    I'm in the middle of a Silverlight application and I have a function which needs to call a web service and, using the result, complete the rest of the function. My issue is that I would normally have made a synchronous web service call, got the result, and using that carried on with the function. As Silverlight doesn't support synchronous web service calls without additional custom classes to mimic them, I figure it would be best to go with the flow of async rather than fight it. So my question is about the best design pattern for working with async calls in program flow. In the following example I want to use myFunction's TypeId parameter depending on the return value of the web service call. But I don't want to call the web service until this function is called. How can I alter my code design to allow for the async call? string _myPath; bool myFunction(Guid TypeId) { WS_WebService1.WS_WebService1SoapClient proxy = new WS_WebService1.WS_WebService1SoapClient(); proxy.GetPathByTypeIdCompleted += new System.EventHandler<WS_WebService1.GetPathByTypeIdCompletedEventArgs>(proxy_GetPathByTypeIdCompleted); proxy.GetPathByTypeIdAsync(TypeId); // Get return value if (_myPath == "\\Server1") { //Use the TypeId parameter in here } } void proxy_GetPathByTypeIdCompleted(object sender, WS_WebService1.GetPathByTypeIdCompletedEventArgs e) { string server = e.Result.Server; _myPath = "\\" + server; } Thanks in advance, Mike
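
    One way to go with the flow of the async call is to move the dependent logic into the completed handler and carry the parameter along in a closure, handing the result back through a callback. A rough sketch only (the proxy type and event come from the snippet above; the onDone callback is hypothetical):

        void MyFunction(Guid typeId, Action<bool> onDone)
        {
            var proxy = new WS_WebService1.WS_WebService1SoapClient();
            proxy.GetPathByTypeIdCompleted += (s, e) =>
            {
                string myPath = "\\" + e.Result.Server;
                bool ok = false;
                if (myPath == "\\Server1")
                {
                    // use typeId here; the lambda has captured it
                    ok = true;
                }
                onDone(ok);   // hand the outcome back to whoever called MyFunction
            };
            proxy.GetPathByTypeIdAsync(typeId);
        }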

    Read the article

  • Multiple instances of the same Async task (Windows Phone)

    - by Bart Teunissen
    After googling for ages, and reading some stuff about async tasks in books, I made my first program with an async task in it, only to find out that I can only start one task. I want to run the task more than once, and this is where I found out that that doesn't seem to work. To be a little bit clearer, here are some parts of my code: InitFunction(var); This is the task itself: public async Task InitFunction(string var) { _VarHandle = await _AdsClient.GetSymhandleByNameAsync(var); _Data = await _AdsClient.ReadAsync<T>(_VarHandle); _AdsClient.AddNotificationAsync<T>(_VarHandle, AdsTransmissionMode.OnChange, 1000, this); } This works like a charm when I execute the task only once. But is there a possibility to run it multiple times, something like this? InitFunction(var1); InitFunction(var2); InitFunction(var3); Because if I do this now (multiple tasks at once), the task it wants to start is still running, and it throws an exception. If someone could help me with this, that would be awesome! ~ Bart
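
    For what it's worth, since each call returns a Task, one common shape for starting several of them and waiting for all is Task.WhenAll rather than fire-and-forget. A sketch (assuming the InitFunction signature shown above; whether the underlying client allows overlapping calls is a separate question):

        public async Task InitAllAsync()
        {
            // each call starts its own independent operation;
            // WhenAll completes only when every one of them has finished
            await Task.WhenAll(
                InitFunction("var1"),
                InitFunction("var2"),
                InitFunction("var3"));
        }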

    Read the article

  • jQuery async ajax query and returning value problem (scope, closure)

    - by glebovgin
    Hi. My code is not working because of the async query and a variable scope problem, and I can't understand how to solve this. Changing to the $.ajax method with async: false is not an option. I know about closures, but I don't know how I can implement one here. I've seen all the topics here about closures in JS and jQuery async problems, but still nothing. Help, please. Here is the code: var map = null; var marker; var cluster = null; function refreshMap() { var markers = []; var markerImage = new google.maps.MarkerImage('/images/image-1_32_t.png', new google.maps.Size(32, 32)); $.get('/get_users.php',{},function(data){ if(data.status == 'error') return false; var users = data.users; // here users.length = 1 - this is ok; for(var i in users) { // here I have every value from users - ok var latLng = new google.maps.LatLng(users[i].lat, users[i].lng); var mark = new google.maps.Marker({ position: latLng, icon: markerImage }); markers.push(mark); alert(markers.length); // length 1 } },'json'); alert(markers.length); // length 0 // if I have the alert() above - I get a result cluster = new MarkerClusterer(map, markers, { maxZoom: null, gridSize: null }); } Thanks.

    Read the article

  • Why is HttpClient's GetStringAsync unbelievably slow?

    - by Jason94
    I have a Windows Phone 8 project where I've taken to using the PCL (Portable Class Library) project too, since I'm going to build a Win8 app too. However, while calling my API (in Azure) my HttpClient's GetStringAsync is so slow. I threw in a couple of debugs with DateTime and GetStringAsync took like 14 seconds! And sometimes it takes longer. What I'm doing is retrieving simple JSON from my Azure API site. My Android client has no problem getting that same data in a split second... so is there something I'm missing? The setup is pretty straightforward: HttpClient client = new HttpClient(); client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json")); client.DefaultRequestHeaders.Add("X-Token", "something"); string responseJSON = await client.GetStringAsync("url"); I've placed the debug times right before and after the await there; in between it is 14 seconds! Does someone know why?
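
    A minimal way to time just the request itself, isolating it from everything around it (a sketch; the header value and URL are placeholders carried over from the snippet above):

        using System.Diagnostics;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        static async Task<string> TimedGetAsync(string url)
        {
            var client = new HttpClient();
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));
            client.DefaultRequestHeaders.Add("X-Token", "something");

            var sw = Stopwatch.StartNew();
            string json = await client.GetStringAsync(url);   // the call being measured
            sw.Stop();
            Debug.WriteLine("GetStringAsync took {0} ms", sw.ElapsedMilliseconds);
            return json;
        }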

    Read the article

  • async tree jquery easy ui

    - by user765368
    I'm trying to create an async tree in jQuery EasyUI. I understand the idea behind it, but will this work if my root node is not a node that is coming from the database or something (and therefore does not have any id)? My root node is a node that I define myself. How would I make an async tree in jQuery EasyUI load a bunch of nodes as children of my root node (which, again, has no id because it's not coming from the database; my child nodes are coming from the database, though)? I hope y'all understand what I'm trying to do here. Any help, please.

    Read the article

  • Boost asio async vs blocking reads, udp speed/quality

    - by Dolphin
    I have a quick and dirty proof-of-concept app that I wrote in C# that reads high data rate multicast UDP packets from the network. For various reasons the full implementation will be written in C++ and I am considering using Boost.Asio. The C# version used a thread to receive the data using blocking reads. I had some problems with dropped packets if the computer was heavily loaded (generally with processing those packets in another thread). What I would like to know is if the async read operations in Boost (which use overlapped I/O on Windows) will help ensure that I receive the packets and/or reduce the CPU time needed to receive the packets. The single thread doing blocking reads is pretty straightforward; using the async reads seems like a step up in complexity, but I think it would be worth it if it provided higher performance or dropped fewer packets on a heavily loaded system. Currently the data rate should be no higher than 60 Mb/s.

    Read the article

  • Silverlight, dealing with Async calls

    - by Billy
    Hi there, I have some code as below: foreach (var position in mAllPositions) { DoAsyncCall(position); } //I want to execute code here after each async call has finished So how can I do the above? I could do something like: while (count < mAllPositions.Count) { //Run my code here } and increment count after each async call completes... but this doesn't seem like a good way of doing it. Any advice? Is there some design pattern for the above problem, as I'm sure it's a common scenario?
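
    One event-based way to run some code exactly once after every call has come back is to count completions. A sketch (it assumes DoAsyncCall can be handed a completion callback; the callback parameter and AllCallsFinished are hypothetical):

        int pending = mAllPositions.Count;

        foreach (var position in mAllPositions)
        {
            DoAsyncCall(position, () =>
            {
                // runs once per completed call, possibly on different threads;
                // only the last completion falls through to the continuation
                if (Interlocked.Decrement(ref pending) == 0)
                {
                    AllCallsFinished();   // the "code here after each async call has finished"
                }
            });
        }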

    Read the article

  • Drag and Drop in Silverlight with F# and Asynchronous Workflows

    - by knotig
    Hello everyone! I'm trying to implement drag and drop in Silverlight using F# and asynchronous workflows. I'm simply trying to drag around a rectangle on the canvas, using two loops for the two states (waiting and dragging), an idea I got from Tomas Petricek's book "Real-World Functional Programming", but I ran into a problem: unlike WPF or WinForms, Silverlight's MouseEventArgs do not carry information about the button state, so I can't return from the drag loop by checking whether the left mouse button is no longer pressed. I only managed to solve this by introducing a mutable flag. Would anyone have a solution for this that does not involve mutable state? Here's the relevant code part (please excuse the sloppy dragging code, which snaps the rectangle to the mouse pointer): type MainPage() as this = inherit UserControl() do Application.LoadComponent(this, new System.Uri("/SilverlightApplication1;component/Page.xaml", System.UriKind.Relative)) let layoutRoot : Canvas = downcast this.FindName("LayoutRoot") let rectangle1 : Rectangle = downcast this.FindName("Rectangle1") let mutable isDragged = false do rectangle1.MouseLeftButtonUp.Add(fun _ -> isDragged <- false) let rec drag() = async { let! args = layoutRoot.MouseMove |> Async.AwaitEvent if (isDragged) then Canvas.SetLeft(rectangle1, args.GetPosition(layoutRoot).X) Canvas.SetTop(rectangle1, args.GetPosition(layoutRoot).Y) return! drag() else return() } let wait() = async { while true do let! args = Async.AwaitEvent rectangle1.MouseLeftButtonDown isDragged <- true do! drag() } Async.StartImmediate(wait()) () Thank you very much for your time!

    Read the article

  • generic async loading method for page web scripts?

    - by boomhauer
    The Google Analytics code went to an async load model some time back. I've noticed that a lot of the other scripts I use on many sites are causing slow load times - specifically the AddThis script and the Facebook Like button. I'm noticing that the slow load times of these scripts are causing the Google bot to calculate my page load times as being much slower than previously. I'd like to know if there is a standard/generic way of causing these scripts to load asynchronously as well, or perhaps a pointer to someone who has done the work for this already. It seems this would be a popular thing to do, but I haven't had much luck searching around.

    Read the article

  • Ruby and Rails Async

    - by Edijs Petersons
    I need to perform a long-running operation in Ruby/Rails asynchronously. Googling around, one of the options I found is Sidekiq. class WeeklyReportWorker include Sidekiq::Worker def perform(user, product, year = Time.now.year, week = Date.today.cweek) report = WeeklyReport.build(user, product, year, week) report.save end end # call WeeklyReportWorker.perform_async('user', 'product') Everything works great! But there is a problem. If I keep calling this async method every few seconds, but the actual heavy operation takes a minute to perform, things won't work. Let me put it in an example. 5.times { WeeklyReportWorker.perform_async('user', 'product') } Now my heavy operation will be performed 5 times. Optimally it should have been performed only once or twice, depending on whether execution of the first operation started before the 5th async call was made. Do you have tips on how to solve this?

    Read the article

  • Perfect Forwarding to async lambda

    - by Alexander Kondratskiy
    I have a function template, where I want to do perfect forwarding into a lambda that I run on another thread. Here is a minimal test case which you can directly compile: #include <thread> #include <future> #include <utility> #include <iostream> #include <vector> /** * Function template that does perfect forwarding to a lambda inside an * async call (or at least tries to). I want both instantiations of the * function to work (one for lvalue references T&, and rvalue reference T&&). * However, I cannot get the code to compile when calling it with an lvalue. * See main() below. */ template <typename T> std::string accessValueAsync(T&& obj) { std::future<std::string> fut = std::async(std::launch::async, [](T&& vec) mutable { return vec[0]; }, std::forward<T>(obj)); return fut.get(); } int main(int argc, char const *argv[]) { std::vector<std::string> lvalue{"Testing"}; // calling with what I assume is an lvalue reference does NOT compile std::cout << accessValueAsync(lvalue) << std::endl; // calling with rvalue reference compiles std::cout << accessValueAsync(std::move(lvalue)) << std::endl; // I want both to compile. return 0; } For the non-compiling case, here is the last line of the error message which is intelligible: main.cpp|13 col 29| note: no known conversion for argument 1 from ‘std::vector<std::basic_string<char> >’ to ‘std::vector<std::basic_string<char> >&’ I have a feeling it may have something to do with how T&& is deduced, but I can't pinpoint the exact point of failure and fix it. Any suggestions? Thank you! EDIT: I am using gcc 4.7.0 just in case this could be a compiler issue (probably not)

    Read the article

  • Implementing a robust async stream reader

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a Stream in an event-based manner. The stream, in my scenario, is guaranteed to be a FileStream and there is also an associated StreamReader already present to leverage. The public interface of the class is this: public class MyStreamManager { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } Obviously this specific scenario has to do with a console's standard output, but that is a detail and does not play an important role. StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents() { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Since we are only handing off data from the stream to a consumer, and that consumer may well have inside knowledge about the size and/or format of these chunks, I want to call event subscribers exactly once for each chunk. Otherwise the abstraction breaks down and the subscribers have to buffer the incoming data and reconstruct the chunks themselves using said knowledge. This is much less convenient to the calling code, and detracts from the usefulness of my class. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream (thus preserving the chunks).
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer, all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Maintains the "chunkiness" of the data; this allows the calling code to use inside knowledge of the data without doing any extra work Is almost agnostic to the buffer size (it will work correctly with any size buffer irrespective of the data being read) The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.

    Read the article

  • Implementing a robust async stream reader for a console

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a Stream in an event-based manner. The stream, in my scenario, is guaranteed to be a FileStream and there is also an associated StreamReader already present to leverage. The public interface of the class is this: public class MyStreamManager { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } Obviously this specific scenario has to do with a console's standard output. StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents() { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Since we are only handing off data from the stream to a consumer, and that consumer may well have inside knowledge about the size and/or format of these chunks, I want to call event subscribers exactly once for each chunk. Otherwise the abstraction breaks down and the subscribers have to buffer the incoming data and reconstruct the chunks themselves using said knowledge. This is much less convenient to the calling code, and detracts from the usefulness of my class. Edit: There are comments below correctly stating that since the data is coming from a stream, there is absolutely nothing that the receiver can infer about the structure of the data unless it is fully prepared to parse it. What I am trying to do here is leverage the "flush the output" "structure" that the owner of the console imparts while writing on it. I am prepared to assume (better: allow my caller to have the option to assume) that the OS will pass me the data written between two flushes of the stream in exactly one piece. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream (thus preserving the chunks).
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer, all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Maintains the "chunkiness" of the data; this allows the calling code to use inside knowledge of the data without doing any extra work Is almost agnostic to the buffer size (it will work correctly with any size buffer irrespective of the data being read) The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.

    Read the article

  • How can I use Windows Workflow for validation of a Silverlight application?

    - by Josh C.
    I want to use Windows Workflow to provide a validation service. The validation that will be provided may have multiple tiers, with chaining and redirecting to other stages of validation. The application that will generate the data for validation is a Silverlight app. I imagine the validation will take longer than the blink of an eye, so I don't want to tie the user up. Instead, I would like the user to submit the current data for validation. If the validation happens quickly, the service will perform an asynchronous callback to the app. The viewmodel that made the call would receive the validation output and post it into the view. If the validation takes a long time, the user can move forward in the Silverlight app, disregarding the potential output of the validation. The viewmodel that made the call would be gone. I expect there would be another viewmodel that would contain the current validation output in its model. The validation value would change, causing the user to get a notification in a smaller notification area. I can see how the current view's viewmodel would call the validation through the viewmodel that contains the validation output, but I am concerned that the service call will time out. Also, I think the user may have already changed the values from the original validation, invalidating the feedback. I am sure asynchronous validation is a problem solved many times over; I am looking to glean from your experience in solving this kind of problem. Is this the right approach to the problem, or is there a better way to approach this?

    Read the article

  • Asynchronous update design/interaction patterns

    - by Andy Waite
    These days many apps support asynchronous updates. For example, if you're looking at a list of widgets and you delete one of them, then rather than wait for the round trip to the server, the app can hide the one you deleted, giving immediate feedback. The actual deletion on the server will happen in the background. This can be seen in web apps, desktop apps, iOS apps, etc. But what about when the background operation fails? How should you feed that back to the user? Should you restore the UI to the pre-deletion state? What about when multiple background operations fail together? Does this behaviour/pattern have a name? Perhaps something based on the Command pattern?

    Read the article

  • What scenarios are implementations of Object Management Group (OMG) Data Distribution Service best suited for?

    - by mindcrime
    I've always been a big fan of asynchronous messaging and pub/sub implementations, but coming from a Java background, I'm most familiar with using JMS-based messaging systems, such as JBoss MQ, HornetQ, ActiveMQ, OpenMQ, etc. I've also loosely followed the discussion of AMQP. But I recently became aware of the Data Distribution Service specification from the Object Management Group, and found there are a couple of open-source implementations: OpenSplice and OpenDDS. It sounds like this stuff is focused on the kind of high-volume scenarios one tends to associate with financial trading exchanges and what-not. My current interest is more along the lines of notifications related to activity stream processing (think Twitter / Facebook), and I am wondering if the DDS servers are worth looking into further. Could anyone who has practical experience with this technology, and/or a deep understanding of it, comment on how useful it is, and what scenarios it is best suited for? How does it stack up against more "traditional" JMS servers, and/or AMQP (or even STOMP or OpenWire, etc.)? Edit: FWIW, I found some information at this StackOverflow thread. Not a complete answer, but anybody else finding this question might also find that thread useful, hence the added link.

    Read the article

  • What determines which Javascript functions are blocking vs non-blocking?

    - by Sean
    I have been doing web-based JavaScript (vanilla JS, jQuery, Backbone, etc.) for a few years now, and recently I've been doing some work with Node.js. It took me a while to get the hang of "non-blocking" programming, but I've now gotten used to using callbacks for IO operations and whatnot. I understand that JavaScript is single-threaded by nature. I understand the concept of the Node "event queue". What I DON'T understand is what determines whether an individual JavaScript operation is "blocking" vs. "non-blocking". How do I know which operations I can depend on to produce an output synchronously for me to use in later code, and which ones I'll need to pass callbacks to so I can process the output after the initial operation has completed? Is there a list of JavaScript functions somewhere that are asynchronous/non-blocking, and a list of ones that are synchronous/blocking? What is preventing my JavaScript app from being one giant race condition? I know that operations that take a long time, like IO operations in Node and AJAX operations on the web, need to be asynchronous and therefore use callbacks - but who is determining what qualifies as "a long time"? Is there some sort of trigger within these operations that removes them from the normal "event queue"? If not, what makes them different from simple operations like assigning values to variables or looping through arrays, which it seems we can depend on to finish in a synchronous manner? Perhaps I'm not even thinking of this correctly - hoping someone can set me straight. Thanks!

    Read the article

  • At what point is asynchronous reading of disk I/O more efficient than synchronous?

    - by blesh
    Assuming there is some bit of code that reads files for multiple consumers, and the files are of any arbitrary size: at what size does it become more efficient to read the file asynchronously? Or to put it another way, how small must a file be for it to be faster just to read it synchronously? I've noticed (and perhaps I'm incorrect) that when reading very small files, it takes longer to read them asynchronously than synchronously (in particular with .NET). I'm assuming this has to do with setup time for things like I/O completion ports, threads, etc. Is there any rule of thumb to help out here? Or is it dependent on the system and the environment?
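
    There probably isn't a universal cut-off, so one practical approach is to measure on the target system. A rough sketch (assuming .NET 4.5-style APIs; a single run is noisy, so it would need repeating across a range of file sizes):

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Threading.Tasks;

        static async Task CompareAsync(string path)
        {
            var sw = Stopwatch.StartNew();
            File.ReadAllBytes(path);                               // blocking, synchronous read
            Console.WriteLine("sync:  {0} ms", sw.ElapsedMilliseconds);

            sw.Restart();
            using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read,
                                           FileShare.Read, 4096, useAsync: true))
            {
                var buffer = new byte[fs.Length];
                int offset = 0;
                while (offset < buffer.Length)                     // ReadAsync may return fewer bytes than asked for
                    offset += await fs.ReadAsync(buffer, offset, buffer.Length - offset);
            }
            Console.WriteLine("async: {0} ms", sw.ElapsedMilliseconds);
        }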

    Read the article

  • Futures/Monads vs Events

    - by c69
    So, the question is quite simple: in an application framework, when the performance impact can be ignored (10-20 events per second at max), what is more maintainable and flexible to use as a preferred medium for communication between modules - Events or Futures/Promises/Monads? It is often said that Events (pub/sub, mediator) allow loose coupling and thus a more maintainable app... My experience denies this: once you have more than 20+ events, debugging becomes hard, and so does refactoring - because it is very hard to see who, when and why uses what. Promises (I'm coding in JavaScript) are much uglier and dumber than Events. But you can clearly see the connections between function calls, so the application logic becomes more straightforward. What I'm afraid of, though, is that Promises will bring more hard coupling with them... P.S.: the answer does not have to be based on JS; experience from other functional languages is very welcome.

    Read the article

  • How to optimize calls to multiple APIs at once and return as one set?

    - by Martin
    I have a web app that searches across 2 APIs right now. I have my own RESTful web service that I call, and it does all the work on the backend to asynchronously call the 2 APIs and concatenate them into one result set for my web app to use. I want to scale this out and add as many other APIs as I can (currently looking at about 10 more). But as I add APIs, the call to my service gets (potentially) slower and more complex. How do I handle one API not responding ... and other issues that arise? What would be the best way to approach this? Should I create a service call for each API, so that each one is independent and not coupled to all the other calls? Is there a way on the backend to handle the multiple API calls without all the extra complexity they add? If I go the route of a service call per API, my client code gets more complex (and I have a lot of clients). It's also more work for the client, and since I have mobile apps, it will cost the client more data usage. If I go with one service call, is there a way to set up some sort of connection so I can return data as I get it, in case one service call hangs?
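
    For the single aggregated call, one way to keep a slow or failing API from holding up the whole result set is to wrap each call so it can only ever succeed, fail, or time out on its own, then merge whatever came back. A sketch in C# (the question doesn't name a language or API shape, so the types and timeout here are purely illustrative):

        static async Task<List<string>> SearchAllAsync(IEnumerable<Func<Task<string>>> apiCalls)
        {
            // start every call up front, each wrapped with its own timeout and error handling
            var tasks = apiCalls.Select(CallSafelyAsync).ToList();
            var results = await Task.WhenAll(tasks);

            // drop the calls that failed or timed out and return the rest as one set
            return results.Where(r => r != null).ToList();
        }

        static async Task<string> CallSafelyAsync(Func<Task<string>> call)
        {
            try
            {
                var task = call();
                var finished = await Task.WhenAny(task, Task.Delay(TimeSpan.FromSeconds(5)));
                return finished == task ? await task : null;   // null means this API timed out
            }
            catch
            {
                return null;                                    // null means this API failed
            }
        }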

    Read the article

  • What are the benefits of Android way of "saving memory" - explicitly passing Context objects everywhere?

    - by Sarge Borsch
    It turned out this question is not easy for me to formulate, but let's try. In Android, pretty much any UI object depends on a Context and has a defined lifetime. The system can also destroy and recreate UI objects and even the whole application process at any time, and so on. This makes coding asynchronous operations correctly not straightforward (and sometimes very cumbersome). But I have never seen a real explanation of why it's done that way. There are other OSes, including mobile OSes (iOS, for example), that don't do such things. So, what are the wins of the Android way (Activities & Contexts)? Does that allow Android applications to use much less RAM, or are there maybe other benefits?

    Read the article

  • How to Avoid a Busy Loop Inside a Function That Returns the Object That's Being Waited For

    - by Carl Smith
    I have a function which has the same interface as Python's input builtin, but it works in a client-server environment. When it's called, the function, which runs in the server, sends a message to the client, asking it to get some input from the user. The user enters some stuff, or dismisses the prompt, and the result is passed back to the server, which passes it to the function. The function then returns the result. The function must work like Python's input [that's the spec], so it must block until it has the result. This is all working, but it uses a busy loop, which, in practice, could easily be spinning for many minutes. Currently, the function tells the client to get the input, passing an id. The client returns the result with the id. The server puts the result in a dictionary, with the id as the key. The function basically waits for that key to exist. def input(): '''simplified example''' key = unique_key() tell_client_to_get_input(key) while key not in dictionary: pass return dictionary.pop(key) Using a callback would be the normal way to go, but the input function must block until the result is available, so I can't see how that could work. The spec can't change, as Python will be using the new input function for stuff like help and pdb, which provide their own little REPLs. I have a lot of flexibility in terms of how everything works overall, but I just can't budge on the function acting exactly like Python's. Is there any way to return the result as soon as it's available, without the busy loop?

    Read the article
