Search Results

Search found 5262 results on 211 pages for 'operation'.

Page 171/211

  • Some clarification needed about synchronous versus asynchronous asio operations

    - by Old newbie
    As far as I know, the main difference between synchronous and asynchronous operations, i.e. write() or read() vs async_write() and async_read(), is that the former don't return until the operation finishes (or fails), while the latter return immediately, the asynchronous operations being driven by an io_service.run() that does not return until the operations it controls have completed. It seems to me that in sequential operations such as those involved in TCP/IP connections with protocols like POP3, where the exchange is a sequence such as: C: <connect> S: Ok. C: User... S: Ok. C: Password S: Ok. C: Command S: answer C: Command S: answer ... C: bye S: <close> the difference between synchronous and asynchronous operations does not make much sense. Of course, in both cases there is always the risk that the program flow stops indefinitely due to some circumstance (hence the use of timers), but I would like to hear some more authoritative opinions on this matter. I must admit that the question is rather ill-defined, but I would like some advice about when to use one or the other, because I'm having problems debugging asynchronous SSL operations with MS Visual Studio in a POP3 client I'm working on now (about some of which I will surely write here soon), and I sometimes think that perhaps using asynchronous operations for this is a bad idea. Not to mention that I'm an absolute newbie with these libraries and that, in addition to the difficulty with the language and some obscure concepts in the STL, I have to suffer the brevity of the asio documentation.

    Read the article

  • Is there a way to use Linq projections with extension methods

    - by Acoustic
    I'm trying to use AutoMapper and a repository pattern along with a fluent interface, and running into difficulty with the Linq projection. For what it's worth, this code works fine when simply using in-memory objects. When using a database provider, however, it breaks when constructing the query graph. I've tried both SubSonic and Linq to SQL with the same result. Thanks for your ideas. Here's an extension method used in all scenarios - it's the source of the problem, since everything works fine without using extension methods: public static IQueryable<MyUser> ByName(this IQueryable<MyUser> users, string firstName) { return from u in users where u.FirstName == firstName select u; } Here's the in-memory code that works fine: var userlist = new List<User> {new User{FirstName = "Test", LastName = "User"}}; Mapper.CreateMap<User, MyUser>(); var result = (from u in userlist select Mapper.Map<User, MyUser>(u)) .AsQueryable() .ByName("Test"); foreach (var x in result) { Console.WriteLine(x.FirstName); } Here's the same thing using SubSonic (or Linq to SQL or whatever), which fails. This is what I'd like to make work somehow with extension methods... Mapper.CreateMap<User, MyUser>(); var result = from u in new DataClasses1DataContext().Users select Mapper.Map<User, MyUser>(u); var final = result.ByName("Test"); foreach(var x in final) // Fails here when the query graph is built. { Console.WriteLine(x.FirstName); } The goal here is to avoid having to manually map the generated "User" object to the "MyUser" domain object - in other words, I'm trying to find a way to use AutoMapper so I don't have this kind of mapping code everywhere a database read operation is needed: var result = from u in new DataClasses1DataContext().Users select new MyUser // Can this be avoided with AutoMapper AND extension methods? { FirstName = u.FirstName, LastName = u.LastName };
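
    A minimal sketch of one common workaround, not necessarily the poster's intended solution: the usual cause of the failure is that the LINQ to SQL/SubSonic provider cannot translate Mapper.Map into SQL, so the filter can be applied to the entity IQueryable first and the AutoMapper projection done in memory after AsEnumerable(). ByNameEntity and ByNameMapped below are hypothetical helpers, and the sketch assumes Mapper.CreateMap<User, MyUser>() has already been called as in the question.

        using System.Collections.Generic;
        using System.Linq;
        using AutoMapper;

        public static class UserQueries
        {
            // Hypothetical variant of the poster's ByName, defined on the entity type so the
            // database provider can translate the filter to SQL.
            public static IQueryable<User> ByNameEntity(this IQueryable<User> users, string firstName)
            {
                return users.Where(u => u.FirstName == firstName);
            }

            // Filter in the database, then project with AutoMapper in memory.
            public static IEnumerable<MyUser> ByNameMapped(this IQueryable<User> users, string firstName)
            {
                return users.ByNameEntity(firstName)
                            .AsEnumerable()               // leave the SQL provider before mapping
                            .Select(u => Mapper.Map<User, MyUser>(u));
            }
        }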

    Read the article

  • How to handle authenticated user access to resources in document oriented system?

    - by Jeremy Raymond
    I'm developing a document-oriented application and need to manage user access to the documents. I have a module that handles user authentication, and another module that handles document CRUD operations on the data store. Once a user is authenticated I need to enforce what operations the user can and cannot perform on documents based upon the user's permissions. The best option I could think of to integrate these two pieces together would be to create another module that duplicates the data API but that also takes the authenticated user as a parameter. The module would delegate the authorization check to the auth module and delegate the document operation to the data access module. Something like: -module(auth_data_access). % User is authenticated (logged into the system) % save_doc validates if the user is allowed to save the given document and if so % saves it returning ok, else returns {error, permission_denied} save_doc(Doc, User) -> case auth:save_allowed(Doc, User) of ok -> data_access:save_doc(Doc); denied -> {error, permission_denied} end. Is there a better way I can handle this?

    Read the article

  • Why does System.Net.Mail work in one part of my c#.net web app, but not in another?

    - by Marc
    I have a web application that is running on IIS within my company's domain, and is being accessed via intranet. I have this application sending out email based on some user actions. For example, it's a scheduling application in part, so if a task is completed, an email is sent out notifying other users of that. The problem is, the email works flawlessly in some cases, and not at all in others. I have a login.aspx page which sends out report emails when the page is loaded (it's loaded once a day via Windows Task Scheduler) - this always seems to work perfectly. I have an update page which is supposed to send email when text is entered and the "Update" button is clicked - this operation will fail most of the time. Both of these tasks use the same static overloaded method I wrote to send email using System.Net.Mail. I have tried using Gmail as my SMTP server (instead of our internal one), and get the same results. I investigated whether having the local SMTP Service running makes any difference, and it doesn't seem to. Besides, since C# is server-side code, shouldn't it only matter what's running on the server, and not the client? Please help me figure out what's wrong! Where should I look? What can I try?
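
    For reference, a minimal sketch of the kind of static send helper described above, with the exception surfaced instead of swallowed; the helper name, host, port and credentials are placeholders, not the poster's actual code. Logging the SmtpException at each call site is often the quickest way to see why the button-click path fails while the scheduled page load succeeds.

        using System.Net;
        using System.Net.Mail;

        public static class MailHelper
        {
            public static bool TrySend(string from, string to, string subject, string body, out string error)
            {
                error = null;
                try
                {
                    using (var message = new MailMessage(from, to, subject, body))
                    {
                        var client = new SmtpClient("smtp.example.com", 25)   // assumed host and port
                        {
                            Credentials = new NetworkCredential("user", "password")
                        };
                        client.Send(message);
                        return true;
                    }
                }
                catch (SmtpException ex)
                {
                    error = ex.ToString();   // log this server-side; the status code often explains the failure
                    return false;
                }
            }
        }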

    Read the article

  • get html content of a page with Silverlight

    - by Yustme
    Hi, I'm trying to get the HTML content of a page using Silverlight. The WebResponse and WebRequest classes don't work in Silverlight. I did some googling and I found something. This is what I tried: public partial class MainPage : UserControl { string result; WebClient client; public MainPage() { InitializeComponent(); this.result = string.Empty; this.client = new WebClient(); this.client.DownloadStringCompleted += ClientDownloadStringCompleted; } private void btn1_Click(object sender, RoutedEventArgs e) { string url = "http://www.nu.nl/feeds/rss/algemeen.rss"; this.client.DownloadStringAsync(new Uri(url, UriKind.Absolute)); if (this.result != string.Empty && this.result != null) { this.txbSummery.Text = this.result; } } private void ClientDownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) { this.result = e.Result; //handle the response. } } It gives me a runtime error after pressing the button: Microsoft JScript runtime error: Unhandled Error in Silverlight Application An exception occurred during the operation, making the result invalid. Check InnerException for exception details. at System.ComponentModel.AsyncCompletedEventArgs.RaiseExceptionIfNecessary() at System.Net.DownloadStringCompletedEventArgs.get_Result() at JWTG.MainPage.ClientDownloadStringCompleted(Object sender, DownloadStringCompletedEventArgs e) at System.Net.WebClient.OnDownloadStringCompleted(DownloadStringCompletedEventArgs e) at System.Net.WebClient.DownloadStringOperationCompleted(Object arg) I've tried numerous things but all failed. What am I missing? Or does anyone know how I could achieve this in a different way? Thanks in advance!
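
    A small sketch of the asynchronous pattern, assuming the rest of the class stays as posted: DownloadStringAsync returns before any data has arrived, so the result has to be consumed in the completed handler, and e.Error should be checked before touching e.Result (in Silverlight the target site's cross-domain policy is a common cause of the wrapped exception).

        // Button click only starts the download; nothing is read here.
        private void btn1_Click(object sender, RoutedEventArgs e)
        {
            string url = "http://www.nu.nl/feeds/rss/algemeen.rss";
            this.client.DownloadStringAsync(new Uri(url, UriKind.Absolute));
        }

        // The response (or the real error) is only available in this callback.
        private void ClientDownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
        {
            if (e.Error != null)
            {
                this.txbSummery.Text = e.Error.Message;   // inspect the inner exception here
                return;
            }
            this.result = e.Result;
            this.txbSummery.Text = this.result;
        }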

    Read the article

  • Representing complex scheduled recurrence in a database

    - by David Pfeffer
    I have the interesting problem of representing complex schedule data in a database. As a guideline, I need to be able to represent the entirety of what the iCalendar -- ics -- format can represent, but in a database. I'm not actually implementing anything relating to ics, but it gives a good scope of the type of rules I need to be able to model. I need to allow representation of a single event or a recurring event based on multiple times per day, days of the week, week of a month, month, year, or some combination of those. For example, the third Thursday in November annually, or the 25th of December annually, or every two weeks starting November 2 and continuing until September 8 the following year. I don't care about insertion efficiency but query efficiency is critical. The operation I will be doing most often is providing either a single date/time or a date/time range, and trying to determine if the defined schedule matches any part of the date/time range. Other operations can be slower. For example, given January 15, 2010 at 10:00 AM through January 15, 2010 at 11:00 AM, find all schedules that match at least part of that time. (i.e. a schedule that covers 10:30 - 11:00 still matches.) Any suggestions? I looked at http://stackoverflow.com/questions/1016170/how-would-one-represent-scheduled-events-in-an-rdbms but it doesn't cover the scope of the type of recurrence rules I'd like to model.
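
    One hedged sketch of a common design, assumed here rather than taken from the question: keep the raw recurrence rule (RRULE-style fields) in one table, but also materialize concrete occurrences into an indexed occurrence table over a bounded horizon, so the frequent query becomes a plain interval-overlap test.

        using System;

        // Illustrative types only; the real schema and rule expansion depend on the chosen RDBMS.
        public sealed class Occurrence
        {
            public int ScheduleId { get; set; }
            public DateTime Start { get; set; }
            public DateTime End { get; set; }
        }

        public static class OccurrenceQueries
        {
            // Half-open interval overlap: an occurrence covering 10:30-11:00 matches a 10:00-11:00 query.
            public static bool Overlaps(Occurrence o, DateTime rangeStart, DateTime rangeEnd)
            {
                return o.Start < rangeEnd && o.End > rangeStart;
            }
        }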

    Read the article

  • Concurrent WCF calls via shared channel

    - by Kent Boogaart
    I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled. But they are not being called concurrently. If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I wanted to avoid that cost since it is functionally unnecessary for my scenario. I have no session state, nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible. I found this article on MSDN that states: While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete. Firstly, I'm not sending large messages (just a lot of small ones since I'm doing load testing) but am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation. It says they "might not" support writing more than one message but doesn't explain the scenarios under which they would support concurrent messages. Can anyone shed some light on this?
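
    A rough sketch of the middle ground between a single shared channel and a channel per call: cache the expensive ChannelFactory once and create a cheap channel per request, closing or aborting it afterwards. IMyService and the endpoint configuration name are placeholders for the poster's contract and config, so this is an illustration rather than a drop-in fix.

        using System;
        using System.ServiceModel;

        public static class ServiceClient
        {
            // The factory is the expensive part; channels created from it are comparatively cheap.
            private static readonly ChannelFactory<IMyService> Factory =
                new ChannelFactory<IMyService>("MyServiceEndpoint"); // endpoint configuration name assumed

            public static TResult Call<TResult>(Func<IMyService, TResult> invoke)
            {
                IMyService channel = Factory.CreateChannel();
                var clientChannel = (IClientChannel)channel;
                try
                {
                    TResult result = invoke(channel);
                    clientChannel.Close();
                    return result;
                }
                catch
                {
                    clientChannel.Abort();   // never Close a faulted channel
                    throw;
                }
            }
        }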

    Read the article

  • Truncate C++ string fields generated by ostringstream, iomanip:setw

    - by Ian Durkan
    In C++ I need string representations of integers with leading zeroes, where the representation has 8 digits and no more than 8 digits, truncating digits on the right side if necessary. I thought I could do this using just ostringstream and iomanip.setw(), like this: int num_1 = 3000; ostringstream out_target; out_target << setw(8) << setfill('0') << num_1; cout << "field: " << out_target.str() << " vs input: " << num_1 << endl; The output here is: field: 00003000 vs input: 3000 Very nice! However if I try a bigger number, setw lets the output grow beyond 8 characters: int num_2 = 2000000000; ostringstream out_target; out_target << setw(8) << setfill('0') << num_2; cout << "field: " << out_target.str() << " vs input: " << num_2 << endl; out_target.str(""); output: field: 2000000000 vs input: 2000000000 The desired output is "20000000". There's nothing stopping me from using a second operation to take only the first 8 characters, but is field truncation truly missing from iomanip? Would the Boost formatting do what I need in one step?

    Read the article

  • 32/64 bit problems with Eclipse CDT on Ubuntu

    - by waffleShirt
    I have just recently started running Linux on my PC and I am trying to start learning OpenGL. I am using the latest version of Eclipse CDT as my IDE, and my system is Ubuntu 10.10, 64 bit version. The problem I am having is that whenever I try to run a build from within the IDE I get the error message "Launch Failed. Binary Not Found." I've done a lot of looking around on the internet but I still can't solve the problem. I know for a fact that the binary is built; it can be run from a terminal window. According to posts I have seen, the problem is that Eclipse tries to run a 32 bit binary, but GCC 4.4.5 defaults to 64 bit binaries on a 64 bit system. I've seen a lot of information about using the -m32 flag in makefiles, but then I still get the following output in Eclipse: make all g++ -o HelloWorld2 main.o /usr/bin/ld: i386 architecture of input file `main.o' is incompatible with i386:x86-64 output /usr/bin/ld: final link failed: Invalid operation collect2: ld returned 1 exit status make: *** [HelloWorld2] Error 1 What I would like to know is how to either get Eclipse to launch the 64 bit binaries, or have Eclipse correctly compile 32 bit binaries.

    Read the article

  • JSF: Avoid nested update calls

    - by dhroove
    I don't know if I am on the right track or not, but I'm still asking this question. In my JSF project I have this tree structure: <p:tree id="resourcesTree" value="#{someBean.root}" var="node" selectionMode="single" styleClass="no-border" selection="#{someBean.selectedNode}" dynamic="true"> <p:ajax listener="#{someBean.onNodeSelect}" update=":centerPanel :tableForm :tabForm" event="select" onstart="statusDialog.show();" oncomplete="statusDialog.hide();" /> <p:treeNode id="resourcesTreeNode" > <h:outputText value="#{node}" id="lblNode" /> </p:treeNode> </p:tree> I have to update this tree after I add or delete something. But whenever I update it, the nested updates are also triggered; I mean it also tries to update the components ":centerPanel :tableForm :tabForm"... This gives me the error that :tableForm was not found in the view, because those forms load in my central panel while this tree is in my right panel. So when I am doing some operation on the tree, it is not always the case that :tableForm is present in my central panel (the design is something like this only). So now my question is: can I put some condition, or is there any way to specify when the nested components should also be updated and when not? In a nutshell, is there any way to update only :resourcesTree in such a way that the nested updates are not called, so that I can avoid the error? Thanks in advance.

    Read the article

  • Can't send smtp email from network using C#, asp.net website

    - by Kaysar
    Hi, I have my code here; it works fine from my home, where my user is an administrator and I am connected to the internet via a cable network. But the problem is that when I try this code from my workplace, it does not work. It shows the error: "unable to connect to the remote server" From a different machine in the same network: "A socket operation was attempted to an unreachable network 209.xxx.xx.52:25" I checked with our network admin, and he assured me that all the mail ports are open [25, 110, and other ports for Gmail]. Then I logged in with administrative privileges; there was a little improvement: it did not show any error, but the actual email was never received. Please note that the code was tested from the development environment, Visual Studio 2005 and 2008. Any suggestion will be much appreciated. Thanks in advance try { MailMessage mail_message = new MailMessage("[email protected]", txtToEmail.Text, txtSubject.Text, txtBody.Text); SmtpClient mail_client = new SmtpClient("SMTP.y7mail.com"); NetworkCredential Authentic = new NetworkCredential("[email protected]", "xxxxx"); mail_client.UseDefaultCredentials = true; mail_client.Credentials = Authentic; mail_message.IsBodyHtml = true; mail_message.Priority = MailPriority.High; try { mail_client.Send(mail_message); lblStatus.Text = "Mail Sent Successfully"; } catch (Exception ex) { System.Diagnostics.Debug.WriteLine(ex.Message); lblStatus.Text = "Mail Sending Failed\r\n" + ex.Message; } } catch (Exception ex) { lblStatus.Text = "Mail Sending Failed\r\n" + ex.Message; }
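
    A hedged variant of the send setup above, shown only to highlight the settings worth checking: explicit credentials are normally used with UseDefaultCredentials left false, and many hosted SMTP servers expect SSL on port 587 (or 465) rather than plain port 25, which some corporate networks block outbound. The port and EnableSsl values below are assumptions to verify with the mail provider, not known y7mail settings.

        // Same send as above with explicit port, SSL and credentials; values are illustrative.
        SmtpClient mail_client = new SmtpClient("SMTP.y7mail.com", 587);   // port assumed, check with provider
        mail_client.UseDefaultCredentials = false;
        mail_client.Credentials = new NetworkCredential("[email protected]", "xxxxx");
        mail_client.EnableSsl = true;
        mail_client.Send(mail_message);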

    Read the article

  • Pushing a local mercurial repository to a remote server or cloning at server from local

    - by Samaursa
    I have a local repository that I have now decided to push to a remote server (for example, I have a host that allows Mercurial repositories and I am also trying to push to Bitbucket). The repository has a lot of files and is a little more than 200 MB. Locally, I am able to clone the repository without problems. Now I have a lot of changes in this repository, and I have wasted a couple of days trying to figure out how to get the remote server to clone my repository. I cannot get hg serve to work outside of the LAN. I have tried everything. So instead, I created a new repository at the remote servers (both at the host and Bitbucket) with nothing in it. Now I am pushing the complete repository that I have locally to these remote locations. So far it has been unsuccessful, as the push operation is stuck on searching for changes and does not give me any other useful output. I have let it go for about an hour with no change. Now my first question is: what am I doing wrong as far as hg serve is concerned? I can access it locally but not remotely (through DynDNS - I have configured it properly and the router forwards the ports correctly), so that I can get the server to clone the repository the first time, after which I will be pushing to it. My second question is, assuming the clone at the server does not work (for example, if I were to push my current repository to Bitbucket), is creating an empty repository at the server and then pushing a local repository to the new remote repository OK? Is that the source of the searching-for-changes problem? Any help in this regard would be greatly appreciated.

    Read the article

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat file parser that can perform different data scrubbing operations per call, return the emitted results from each scrubbing operation to the caller and perform bulk SQL insertions? Now, before anyone gets narky/concerned, I have written this code already in a very unDRY fashion. Which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops, with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/pseudocode example of what I am repeating. lines_from_file.each do |line| line.match(/some regex/).each do |sub_str| # Process substring into useful format # EG1: Simple gsub() call # EG2: Custom function call to do complex scrubbing # and matching, emitting results to array # EG3: Loop to match opening/closing/nested brackets # or other delimiters and emit results to array end # Add processed lines to a buffer as SQL insert statement @buffer << PREPARED INSERT STATEMENT # Flush buffer when "buffer size limit reached" or "end of file" if sql_buffer_full || last_line_reached @dbc.insert(SQL INSERTS FROM BUFFER) @buffer = nil end end I am familiar with Proc/Lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what the real Rubyists suggest. Over to you. Thanks in advance :D

    Read the article

  • Delphi - Help calling threaded dll function from another thread

    - by cloudstrif3
    I'm using Delphi 2006 and have a bit of a problem with an application I'm developing. I have a form that creates a thread which calls a function that performs a lengthy operation, let's call it LengthyProcess. Inside the LengthyProcess function we also call several DLL functions which also create threads of their own. The problem that I am having is that if I don't use the Synchronize function of my thread to call LengthyProcess, the thread stops responding (the main thread is still responding fine). I don't want to use Synchronize because that means the main thread is waiting for LengthyProcess to finish and therefore defeats the purpose of creating a separate thread. I have tracked the problem down to a function inside the DLL that creates a thread and then calls WaitFor; this is all done using TThread, by the way. WaitFor checks to see if the CurrentThreadID is equal to the MainThreadID, and if it is then it will call CheckSynchronization, and all is fine. So if we use Synchronize then the CurrentThreadID will equal the MainThreadID; however, if we do not use Synchronize then of course CurrentThreadID <> MainThreadID, and when this happens WaitFor tells the current thread (the thread I created) to wait for the thread created by the DLL, and so CheckSynchronization never gets called and my thread ends up waiting forever for the thread created in the DLL. I hope this makes sense, sorry I don't know any better way to explain it. Has anyone else had this issue and knows how to solve it, please?

    Read the article

  • Check if file is locked or catch error for trying to open

    - by Duncan Matheson
    I'm trying to handle the problem where a user can try to open, with an OpenFileDialog, a file that is open in Excel. Using the simple FileInfo.OpenRead(), it chucks an IOException, "The process cannot access the file 'cakes.xls' because it is being used by another process." This would be fine to display to the user except that the user will actually get "Debugging resource strings are unavailable" nonsense. It seems not possible to open a file that is open in another process, since using FileInfo.Open(FileMode.Open, FileAccess.Read, FileShare.ReadWrite) chucks a SecurityException, "File operation not permitted. Access to path 'C:\whatever\cakes.xls' is denied.", for any file. Rather unhelpful. So it's down to either finding some way of checking if the file is locked, or trying to catch the IOException. I don't want to catch all IOExceptions and assume they're all locked-file errors, so I need some sort of way of classifying this type of exception as this error... but the "Debugging resource strings" nonsense along with the fact that that message itself is probably localized makes it tricky. I'm running in partial trust, so I can't use Marshal.GetHRForException. So: is there any sensible way of either checking if a file is locked, or at least determining if this problem occurred, without just catching all IOExceptions?
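
    A minimal sketch of the pragmatic fallback mentioned above: classify the failure at the point of the open itself rather than by message text, under the assumption that an IOException from OpenRead on a file the user just picked in the dialog is overwhelmingly a sharing violation. The helper name and message are illustrative, and this approach cannot distinguish other IO failures, which is exactly the limitation the question worries about.

        using System.IO;

        public static class FileOpenHelper
        {
            // Hypothetical helper: try the read the dialog already permits and report an
            // IOException as "file in use" without ever parsing the localized message.
            public static bool TryOpenRead(FileInfo file, out Stream stream, out string userMessage)
            {
                userMessage = null;
                try
                {
                    stream = file.OpenRead();
                    return true;
                }
                catch (IOException)
                {
                    stream = null;
                    userMessage = "The file appears to be open in another program. Close it and try again.";
                    return false;
                }
            }
        }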

    Read the article

  • Problem saving file on Motorola Droid, Android 2.1?

    - by Rob Kent
    Two of my users have reported a problem with my Android application, OftSeen Gestures. Both of them are using a Motorola Droid. The app saves a text file which is just a list of gesture names and phone numbers, both strings. It saves the file to the private data area. I don't know that it is this code that is failing but they report the assigned numbers disappearing after the phone comes out of screen sleep. Since the file is reread in OnCreate each time, I'm assuming the file doesn't exist on return. As soon as I can get my hands on a Droid I will debug it but in the meantime can you see a reason why this save operation would fail on Droid (no other users have reported this)? OutputStreamWriter out = new OutputStreamWriter(AppGlobal.getContext().openFileOutput(MAPPINGS_FILE_NAME, 0)); for (String key : mMap.keySet()) { String number = mMap.get(key).number; out.write(String.format("%s,%s\n", key, number == null ? "" : number)); } out.close(); AppGlobal.getContext returns the application context and the MAPPINGS_FILE_NAME resolves to "gesture_mappings.txt". Like I say, I don't know that this is the problem. It could be something else to do with state management inside the app. If anyone has a Droid, maybe they could download the app from Market and test it for me? Note this is a genuine request for help - not an attempt to increase my downloads.

    Read the article

  • Should I return an NSMutableString in a method that returns NSString

    - by Casey Marshall
    Ok, so I have a method that takes an NSString as input, does an operation on the contents of this string, and returns the processed string. So the declaration is: - (NSString *) processString: (NSString *) str; The question: should I just return the NSMutableString instance that I used as my "work" buffer, or should I create a new NSString around the mutable one, and return that? So should I do this: - (NSString *) processString: (NSString *) str { NSMutableString *work = [NSMutableString stringWithString: str]; // process 'work' return work; } Or this: - (NSString *) processString: (NSString *) str { NSMutableString *work = [NSMutableString stringWithString: str]; // process 'work' return [NSString stringWithString: work]; // or [work stringValue]? } The second one makes another copy of the string I'm returning, unless NSString does smart things like copy-on-modify. But the first one is returning something the caller could, in theory, go and modify later. I don't care if they do that, since the string is theirs. But are there valid reasons for preferring the latter form over the former? And, is either stringWithString or stringValue preferred over the other?

    Read the article

  • JSF - Creating an overlay for popup panels.

    - by Ben
    Hi, I've created an overlay that will pop up whenever someone wants to upload a file to the system. The GUI looks like this (when the overlay is up). I have two problems with this: I attached an a4j:support object that, onclick, makes the overlay disappear. The problem with this is that when I click the upload button on the upload component, support catches the click event and closes the overlay with the upload component before I have the chance to finish the operation. I chose two different style classes. One for the overlay and one for the upload panel. But the styling of the overlay takes over the upload component and it becomes transparent as well. The implementation looks something like this: <h:panelGroup layout="block" styleClass="overlayClass"> <rich:fileUpload styleClass="uploadStyleClass"... /> <a4j:support event="onclick" action="#{mrBean.switchOverlayState}" reRender="..."/> </h:panelGroup> The CSS: .overlayClass { opacity: 0.5; position: fixed; left: 0; right: 0; top: 0; bottom: 0; background: #000; } .uploadStyleClass { opacity: 1.0; ... } Thanks for the help!

    Read the article

  • activemessaging with stomp and activemq.prefetchSize=1

    - by Clint Miller
    I have a situation where I have a single activemq broker with 2 queues, Q1 and Q2. I have two ruby-based consumers using activemessaging. Let's call them C1 and C2. Both consumers subscribe to each queue. I'm setting activemq.prefetchSize=1 when subscribing to each queue. I'm also setting ack=client. Consider the following sequence of events: 1) A message that triggers a long-running job is published to queue Q1. Call this M1. 2) M1 is dispatched to consumer C1, kicking off a long operation. 3) Two messages that trigger short jobs are published to queue Q2. Call these M2 and M3. 4) M2 is dispatched to C2 which quickly runs the short job. 5) M3 is dispatched to C1, even though C1 is still running M1. It's able to dispatch to C1 because prefetchSize=1 is set on the queue subscription, not on the connection. So the fact that a Q1 message has already been dispatched doesn't stop one Q2 message from being dispatched. Since activemessaging consumers are single-threaded, the net result is that M3 sits and waits on C1 for a long time until C1 finishes processing M1. So, M3 is not processed for a long time, despite the fact that consumer C2 is sitting idle (since it quickly finishes with message M2). Essentially, whenever a long Q1 job is run and then a whole bunch of short Q2 jobs are created, exactly one of the short Q2 jobs gets stuck on a consumer waiting for the long Q1 job to finish. Is there a way to set prefetchSize at the connection level rather than at the subscription level? I really don't want any messages dispatched to C1 while it is processing M1. The other alternative is that I could create a consumer dedicated to processing Q1 and then have other consumers dedicated to processing Q2. But, I'd rather not do that since Q1 messages are infrequent--Q1's dedicated consumers would sit idle most of the day tying up memory.

    Read the article

  • jquery: How to completely remove a dialog on close

    - by Svish
    When an ajax operation fails, I create a new div with the errors and then show it as a dialog. When the dialog is closed I would like to completely destroy and remove the div again. How can I do this? My code looks something like this at the moment: $('<div>We failed</div>') .dialog( { title: 'Error', close: function(event, ui) { $(this).destroy().remove(); } }); When I run this the dialog box shows up correctly, but when I close it the dialog is still visible in the html (using FireBug). What am I missing here? Something I have forgotten? Update: Just noticed my code gives me an error in the firebug console. $(this).destroy is not a function Anyone able to help me out? Update: If I do just $(this).remove() instead, the item is removed from the html. But is it completely removed from the DOM? Or do I somehow need to call that destroy function first as well?

    Read the article

  • Ajaxcontrol toolkit ConfirmButtonExtender with radiobuttonlist control

    - by chugh97
    I have a Yes/No RadioButtonList. When the ASP.NET page is loaded, if there are items in the GridView this radio is defaulted to Yes. Now if the user clicks No, I have to delete all the items from the GridView and persist that in the db. But I want to show a confirmation of whether the user wants to go ahead with this operation. If the user clicks Yes, then go ahead and delete the rows in the GridView; if No, then keep the original radio setting of Yes. I have been struggling to get this to work, as the ConfirmButtonExtender opens up a modal popup extender even before I know what has been clicked on the radio. If the radio is preselected with Yes, then if I click No, the modal extender is shown, and in the Page PreRender event handler the value of the radio is still Yes and not No, as the extender is run on click using Ajax and it doesn't know the correct value of the radio. Even if I use onClick client-side JavaScript to find out which option has been chosen, the difficulty is then executing the server-side delete command to the db. Has anyone encountered this problem before? Any solutions would be appreciated.

    Read the article

  • Need an explanation on this code - c#

    - by ltech
    I am getting familiar with C# day by day and I came across this piece of code in an article: public static void CopyStreamToStream( Stream source, Stream destination, Action<Stream,Stream,Exception> completed) { byte[] buffer = new byte[0x1000]; AsyncOperation asyncOp = AsyncOperationManager.CreateOperation(null); Action<Exception> done = e => { if (completed != null) asyncOp.Post(delegate { completed(source, destination, e); }, null); }; AsyncCallback rc = null; rc = readResult => { try { int read = source.EndRead(readResult); if (read > 0) { destination.BeginWrite(buffer, 0, read, writeResult => { try { destination.EndWrite(writeResult); source.BeginRead( buffer, 0, buffer.Length, rc, null); } catch (Exception exc) { done(exc); } }, null); } else done(null); } catch (Exception exc) { done(exc); } }; source.BeginRead(buffer, 0, buffer.Length, rc, null); } What I fail to follow is how the delegate gets notified that the copy is done. Say after the copy is done I want to perform an operation on the copied file.
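
    On the question of how the caller finds out: the code only reports completion through the completed delegate. done(...) is invoked exactly once, either when EndRead returns 0 bytes (copy finished) or when an exception is caught, and it posts the delegate back via the AsyncOperation. A small usage sketch, with illustrative file names and assuming CopyStreamToStream from above is in scope:

        // Follow-up work on the copied file belongs inside this callback.
        var source = File.OpenRead("input.dat");
        var destination = File.Create("output.dat");

        CopyStreamToStream(source, destination, (src, dst, error) =>
        {
            src.Close();
            dst.Close();
            if (error != null)
                Console.WriteLine("Copy failed: " + error.Message);
            else
                Console.WriteLine("Copy finished; the copied file can be processed now.");
        });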

    Read the article

  • KeyStore, HttpClient, and HTTPS: Can someone explain this code to me?

    - by stormin986
    I'm trying to understand what's going on in this code. KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType()); FileInputStream instream = new FileInputStream(new File("my.keystore")); try { trustStore.load(instream, "nopassword".toCharArray()); } finally { instream.close(); } SSLSocketFactory socketFactory = new SSLSocketFactory(trustStore); Scheme sch = new Scheme("https", socketFactory, 443); httpclient.getConnectionManager().getSchemeRegistry().register(sch); My Questions: trustStore.load(instream, "nopassword".toCharArray()); is doing what exactly? From reading the documentation load() will load KeyStore data from an input stream (which is just an empty file we just created), using some arbitrary "nopassword". Why not just load it with null as the InputStream parameter and an empty string as the password field? And then what is happening when this empty KeyStore is being passed to the SSLSocketFactory constructor? What's the result of such an operation? Or -- is this simply an example where in a real application you would have to actually put a reference to an existing keystore file / password?

    Read the article

  • Find min. "join" operations for sequence

    - by utyle
    Let's say we have a list/array of positive integers x1, x2, ..., xn. We can do a join operation on this sequence, which means that we can replace two elements that are next to each other with one element, which is the sum of these elements. For example: - array/list: [1;2;3;4;5;6] we can join 2 and 3, and replace them with 5; we can join 5 and 6, and replace them with 11; we cannot join 2 and 4; we cannot join 1 and 3, etc. The main problem is to find the minimum number of join operations for a given sequence, after which the sequence will be sorted in increasing order. Note: empty and one-element sequences are sorted in increasing order. Basic examples: for [4; 6; 5; 3; 9] the solution is 1 (we join 5 and 3); for [1; 3; 6; 5] the solution is also 1 (we join 6 and 5). What I am looking for is an algorithm that solves this problem. It could be in pseudocode, C, C++, PHP, OCaml or similar (I mean: I would understand the solution if you wrote it in one of these languages). I would appreciate your help.
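
    A hedged greedy sketch (written in C# purely for illustration, since the question accepts any readable language), under the assumption that minimizing joins is equivalent to maximizing the number of contiguous segments whose sums are in order, and that closing each segment as early as possible keeps the previous sum small and never hurts later choices:

        static int MinJoins(int[] xs)
        {
            if (xs.Length == 0) return 0;

            long previousSegmentSum = long.MinValue; // sum of the last closed segment
            long currentSum = 0;                     // sum of the segment being built
            int segments = 0;

            foreach (int x in xs)
            {
                currentSum += x;
                if (currentSum >= previousSegmentSum) // use > instead if strictly increasing is required
                {
                    previousSegmentSum = currentSum;
                    currentSum = 0;
                    segments++;
                }
            }
            // Leftover elements are merged into the last closed segment; that only raises
            // its sum, so the resulting sequence stays in order.
            return xs.Length - segments;
        }

        // MinJoins(new[] { 4, 6, 5, 3, 9 }) == 1 and MinJoins(new[] { 1, 3, 6, 5 }) == 1,
        // matching the examples above.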

    Read the article

  • How do I rewrite a for loop with a shared dependency using actors

    - by Thomas Rynne
    We have some code which needs to run faster. It's already profiled, so we would like to make use of multiple threads. Usually I would set up an in-memory queue and have a number of threads taking jobs off the queue and calculating the results. For the shared data I would use a ConcurrentHashMap or similar. I don't really want to go down that route again. From what I have read, using actors will result in cleaner code, and if I use Akka, migrating to more than one JVM should be easier. Is that true? However, I don't know how to think in actors so I am not sure where to start. To give a better idea of the problem here is some sample code: case class Trade(price:Double, volume:Int, stock:String) { def value(priceCalculator:PriceCalculator) = (priceCalculator.priceFor(stock) - price)*volume } class PriceCalculator { def priceFor(stock:String) = { Thread.sleep(20)//a slow operation which can be cached 50.0 } } object ValueTrades { def valueAll(trades:List[Trade], priceCalculator:PriceCalculator):List[(Trade,Double)] = { trades.map { trade => (trade,trade.value(priceCalculator)) } } def main(args:Array[String]) { val trades = List( Trade(30.5, 10, "Foo"), Trade(30.5, 20, "Foo") //usually much longer ) val priceCalculator = new PriceCalculator val values = valueAll(trades, priceCalculator) } } I'd appreciate it if someone with experience using actors could suggest how this would map onto actors.

    Read the article
