Search Results

Search found 21220 results on 849 pages for 'oracle events'.


  • How to break a Hibernate session?

    - by Péter Török
    In the Hibernate reference, it is stated several times that:

        All exceptions thrown by Hibernate are fatal. This means you have to roll back the
        database transaction and close the current Session. You aren't allowed to continue
        working with a Session that threw an exception.

    One of our legacy apps uses a single session to update/insert many records from files into a DB table. Each record update/insert is done in a separate transaction, which is then duly committed (or rolled back in case an error occurred). Then a new transaction is opened for the next record, and so on. But the same session is used throughout the whole process, even if a HibernateException was caught in the middle. We are using Oracle 9i btw with Hibernate 3.24.sp1 on JBoss 4.2.

    Reading the above in the book, I realized that this design may fail. So I refactored the app to use a separate session for each record update. In a unit test with a mock session factory, I could prove that it now requests a new session for each record update. So far, so good.

    However, we found no way to reproduce the session failure while testing the whole app (would this be a stress test btw, or ...?). We thought of shutting down the listener of the DB, but we realized that the app keeps a bunch of connections open to the DB, and the listener would not affect those connections. (This is a web app, activated once every night by a scheduler, but it can also be activated via the browser.) Then we tried to kill some of those connections in the DB while the app was processing updates - this resulted in some failed updates, but then the app happily continued. Apparently Hibernate is clever enough to reopen broken connections under the hood without breaking the whole session.

    So this might not be a critical issue, as our app seems to be robust enough even in its original form. However, the issue keeps bugging me. I would like to know: Under what circumstances does the Hibernate session really become unusable after a HibernateException was thrown? How can I reproduce this in a test? (What's the proper term for such a test?)

    Read the article

  • How to Add an Attachment to a User Story using Rally REST .NET

    - by user1373451
    We're in the process of porting our .NET Rally code from SOAP to the REST .NET API. One thing I'm looking to replicate is the ability to upload attachments. I'm following a very similar procedure to the one outlined in this posting: Rally SOAP API - How do I add an attachment to a Hierarchical Requirement. The image is read into a System.Drawing.Image. We use the ImageToBase64 function to convert the image to a byte array, which then gets assigned to the AttachmentContent, which is created first. Then the Attachment gets created and wired up to both the AttachmentContent and the HierarchicalRequirement.

    All of the creation events work great. A new attachment called "Image.png" gets created on the Story. However, when I download the resulting attachment from Rally, Image.png has zero bytes! I've tried this with different images, JPEGs, PNGs, etc., all with the same results. An excerpt of the code showing the process is included below. Is there something obvious that I'm missing? Thanks in advance.

        // .... Read content into a System.Drawing.Image....

        // Convert Image to Base64 format
        byte[] imageBase64Format = ImageToBase64(imageObject, System.Drawing.Imaging.ImageFormat.Png);
        var imageLength = imageBase64Format.Length;

        // AttachmentContent
        DynamicJsonObject attachmentContent = new DynamicJsonObject();
        attachmentContent["Content"] = imageBase64Format;

        CreateResult cr = restApi.Create("AttachmentContent", myAttachmentContent);
        String contentRef = cr.Reference;
        Console.WriteLine("Created: " + contentRef);

        // Tee up attachment
        DynamicJsonObject newAttachment = new DynamicJsonObject();
        newAttachment["Artifact"] = story;
        newAttachment["Content"] = attachmentContent;
        newAttachment["Name"] = "Image.png";
        newAttachment["ContentType"] = "image/png";
        newAttachment["Size"] = imageLength;
        newAttachment["User"] = user;

        cr = restApi.Create("Attachment", newAttachment);
        String attachRef = attachRef.Reference;
        Console.WriteLine("Created: " + attachRef);
        }

        public static byte[] ImageToBase64(Image image, System.Drawing.Imaging.ImageFormat format)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                image.Save(ms, format);
                // Convert Image to byte[]
                byte[] imageBytes = ms.ToArray();
                return imageBytes;
            }
        }
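
    One variant I still want to try, in case the problem is either the attachmentContent/myAttachmentContent mix-up in the excerpt above, or "Content" wanting a base64-encoded string rather than the raw byte array (that last part is an assumption on my part, not something I've confirmed in the docs):

        // Sketch only - reuses the restApi, imageObject, story and user objects from above.
        // Note: ImageToBase64 above actually returns raw bytes, so encode them explicitly here.
        byte[] imageBytes = ImageToBase64(imageObject, System.Drawing.Imaging.ImageFormat.Png);
        string encodedContent = Convert.ToBase64String(imageBytes);

        DynamicJsonObject content = new DynamicJsonObject();
        content["Content"] = encodedContent;
        CreateResult contentResult = restApi.Create("AttachmentContent", content);

        DynamicJsonObject attachment = new DynamicJsonObject();
        attachment["Artifact"] = story;
        attachment["Content"] = contentResult.Reference;   // wire up the created AttachmentContent by reference
        attachment["Name"] = "Image.png";
        attachment["ContentType"] = "image/png";
        attachment["Size"] = imageBytes.Length;             // size of the raw image bytes
        attachment["User"] = user;
        CreateResult attachmentResult = restApi.Create("Attachment", attachment);
        Console.WriteLine("Created: " + attachmentResult.Reference);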

    Read the article

  • ObjectDisposedException from core .NET code

    - by John
    I'm having this issue with a live app. (Unfortunately this is post-mortem debugging - I only have this stack trace. I've never seen this personally, nor am I able to reproduce). I get this Exception: message=Cannot access a disposed object. Object name: 'Button'. exceptionMessage=Cannot access a disposed object. Object name: 'Button'. exceptionDetails=System.ObjectDisposedException: Cannot access a disposed object. Object name: 'Button'. at System.Windows.Forms.Control.CreateHandle() at System.Windows.Forms.Control.get_Handle() at System.Windows.Forms.Control.PointToScreen(Point p) at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ButtonBase.WndProc(Message& m) at System.Windows.Forms.Button.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) exceptionSource=System.Windows.Forms exceptionTargetSite=Void CreateHandle() It looks like a mouse event is arriving at a form after the form has been disposed. Note there is none of my code in this stack trace. The only weird (?) thing I'm doing, is that I do tend to Dispose() Forms quite aggressively when I use them with ShowModal() (see "Aside" below). But I only do this after ShowModal() has returned (that should be safe right)? I think I read that events might be queued up in the event queue, but I can't believe this would be the problem. I mean surely the framework must be tolerant to old messages? I can well imagine that under stress messages might back-log and surely the window might go away at any time? Any ideas? If you could even suggest ways of reproducing, that might be useful. John Aside: TBH I've never quite understood whether calling Dispose() after Form.ShowDialog() is strictly necessary - the MSDN docs for ShowDialog() are to my mind a bit ambiguous.
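
    For reference, the dispose-after-ShowDialog pattern I use looks roughly like the sketch below (MyDialog is just a placeholder name for one of my own Form subclasses):

        // Sketch of how I typically show and dispose modal dialogs.
        using (MyDialog dialog = new MyDialog())
        {
            DialogResult result = dialog.ShowDialog(ownerForm);
            if (result == DialogResult.OK)
            {
                // read values off the dialog before it is disposed
            }
        } // Dispose() runs here, only after ShowDialog() has returned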

    Read the article

  • Loading external pngs into an AS2 swf that is loaded into an AS3 swf wrapper

    - by James Fassett
    I have a Wrapper SWF that loads a series of AS2 movies. Each AS2 movie loads a series of .png files. AS3_wrapper.swf |-> AS2_1.swf |-> image_1.png |-> image_2.png |-> AS2_2.swf |-> image_1.png |-> image_2.png Inside of the AS2 I listen for the load of the pngs using onLoadInit and update my UI. This works fine for the first AS2 swf. But when I load the second AS2 swf the onLoadInit isn't triggered for the pngs. My guess is that the images are in a cache or something like that. I put a random string on the end of the request to try and avoid the cache but that doesn't seem to work. The code in the as2 looks roughly like this: var flagLoader:MovieClipLoader = new MovieClipLoader(); var listener:Object = new Object(); listener.onLoadInit = Delegate.create(this, handleImageLoad); flagLoader.addListener(listener); var row:MovieClip = frame1["row" + (numLoaded + 1)]; flagLoader.loadClip(predictionData[numLoaded].flag + "?r="+Math.random(), row.flag); I'm making sure to load only one image at a time (I've read anecdotal evidence loading more than one thing at a time can confuse the MovieClipLoader). For the first as2 file everything works great. When I load the second as2 file the handleImageLoad never gets called. Update: Even more perplexing is if I reload the first AS2 movie (after the second AS2 movie fails to load the images) the first AS2 movie loads the images again fine. Update 2: After trying to change from using a MovieClipLoader to polling (as was helpfully suggested) I have found some more evidence that is relevant. When I load the first AS2 files and trace from the top level clip it prints out _root. The second AS2 file when loaded traces the same _root. This lead me to check if they were clashing on names and they are. Both have a child called frame. The first one, when I trace it comes out as _root.frame as expected. The second AS2 file traces _level0.instance3.instance118.instance111.frame. I'm guessing this is related to the problem. Flash is keeping the _root of the two files the same but it is changing the locations of their children (for subsequently loaded files that have children with the same names). So either the onLoad is going to the wrong clip or the events about it loading are.

    Read the article

  • Build path issues learning Guice

    - by Preston
    I can't figure out why I'm getting the error below. I have included all the appropriate jars as far as I can tell (I have included Eclipse's .classpath file below), and all of the classpath entries resolve just fine. What am I missing? On the "extends GuiceServletContextListener" line I get:

        The type javax.servlet.ServletContextListener cannot be resolved.
        It is indirectly referenced from required .class files

    The class:

        import com.google.inject.Guice;
        import com.google.inject.Injector;
        import com.google.inject.servlet.GuiceServletContextListener;
        import com.google.inject.servlet.ServletModule;

        public class ServletConfig extends GuiceServletContextListener {
            @Override
            protected Injector getInjector() {
                return Guice.createInjector(new ServletModule() {
                    @Override
                    protected void configureServlets() {
                        // TODO: add necessary code to bind
                    }
                });
            }
        }

    .classpath:

        <?xml version="1.0" encoding="UTF-8"?>
        <classpath>
            <classpathentry kind="src" path="src"/>
            <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.7.0_21">
                <attributes>
                    <attribute name="owner.project.facets" value="java"/>
                </attributes>
            </classpathentry>
            <classpathentry kind="con" path="oracle.eclipse.tools.glassfish.lib.system">
                <attributes>
                    <attribute name="owner.project.facets" value="jst.web"/>
                </attributes>
            </classpathentry>
            <classpathentry kind="con" path="org.eclipse.jst.j2ee.internal.web.container"/>
            <classpathentry kind="con" path="org.eclipse.jst.j2ee.internal.module.container"/>
            <classpathentry kind="lib" path="guice-3.0/aopalliance.jar"/>
            <classpathentry kind="lib" path="guice-3.0/guice-3.0.jar"/>
            <classpathentry kind="lib" path="guice-3.0/guice-servlet-3.0.jar"/>
            <classpathentry kind="lib" path="guice-3.0/javax.inject.jar"/>
            <classpathentry kind="output" path="build/classes"/>
        </classpath>

    Read the article

  • Segmentation fault in std function std::_Rb_tree_rebalance_for_erase ()

    - by Sarah
    I'm somewhat new to programming and am unsure how to deal with a segmentation fault that appears to be coming from a std function. I hope I'm doing something stupid (i.e., misusing a container), because I have no idea how to fix it. The precise error is:

        Program received signal EXC_BAD_ACCESS, Could not access memory.
        Reason: KERN_INVALID_ADDRESS at address: 0x000000000000000c
        0x00007fff8062b144 in std::_Rb_tree_rebalance_for_erase ()
        (gdb) backtrace
        #0  0x00007fff8062b144 in std::_Rb_tree_rebalance_for_erase ()
        #1  0x000000010000e593 in Simulation::runEpidSim (this=0x7fff5fbfcb20) at stl_tree.h:1263
        #2  0x0000000100016078 in main () at main.cpp:43

    The function that exits successfully just before the segmentation fault updates the contents of two containers. One is a boost::unordered_multimap called carriage; it contains one or more struct Infection objects that contain two doubles. The other container is of type std::multiset< Event, std::less< Event > > (typedef'd EventPQ) and is called ce. It is full of Event structs.

        void Host::recover( int s, double recoverTime, EventPQ & ce ) {
            // Clearing all serotypes in carriage
            // and their associated recovery events in ce
            // and then updating susceptibility to each serotype
            double oldRecTime;
            int z;
            for ( InfectionMap::iterator itr = carriage.begin(); itr != carriage.end(); itr++ ) {
                z = itr->first;
                oldRecTime = (itr->second).recT;
                EventPQ::iterator epqItr = ce.find( Event(oldRecTime) );
                assert( epqItr != ce.end() );
                ce.erase( epqItr );
                immune[ z ]++;
            }
            carriage.clear();
            calcSusc(); // a function that edits an array
            cout << "Done with sync_recovery event." << endl;
        }

    The last cout << line appears immediately before the seg fault. I hope this is enough (but not too much) information. My idea so far is that the rebalancing is being attempted on ce after this function, but I am unsure why it would be failing. (It's unfortunately very hard for me to test this code by removing particular lines, since they would create logical inconsistencies and further problems, but if experienced programmers still think this is the way to go, I'll try.)

    Read the article

  • ASP.NET Dynamically filtering data

    - by Jasper
    For a project I'm working on, we're looking for a way to dynamically add filters to a page which then control the data output in, for instance, a grid. We want to add the filters dynamically because we want the customer to be able to change which properties can be filtered and what filter type (textbox, dropdown, colourpicker, etc.) should be used.

    The filter should work as follows:
    - The customer links a filter to a certain property and specifies the filter type (for this example: dropdown).
    - A user control which contains all the filters loads all the filters specified.
    - The filters load all values of the specified property as options. The first time the page loads, this would be the values of all items.
    - Now the user selects a value from one of the filters; the page reloads.
    - Only items which have the specified filter value are retrieved; the user may specify one or more filters at the same time.
    - Once a user drills down by filtering, only filter values of the retrieved items should be used in the other filters.

    I have the following problems:
    - When I create the filters at runtime, events are lost because the controls get recreated each postback.
    - I could place the filters in PreInit, which should solve this, but then determining which controls should be loaded becomes a problem since loading all environment vars isn't finished yet.
    - I don't know a good way of returning all the filter values to a central point from which I can make a good query.
    - The query has to be dynamic. I'm using LINQ, which I want to make dynamic so I don't have to select everything every time. How do I make a dynamic select query based on a string stored in the database?
    - I have to select items based on the filter values and then adjust the rest of the filters to the already made selection. That kind of messes up the whole regular databinding sequence.

    Any help on one of the above would be great!

    PS: One thing I thought about was passing along filter values in the postback, which would have to be recognizable. That way the server could use them for selection and then create the filters and autoselect the previously selected filter values. I'm not quite sure how to achieve this though...
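
    Edit: to make the first problem concrete, this is roughly the pattern I'm experimenting with - recreating the filter controls with stable IDs on every request in Page_Init so that their events survive postback (FilterDefinition, LoadFilterDefinitions and filterPlaceholder are placeholder names for my own code):

        // Sketch only: FilterDefinition / LoadFilterDefinitions stand in for whatever
        // the customer's filter configuration ends up looking like.
        protected void Page_Init(object sender, EventArgs e)
        {
            // Recreate the same controls, with the same IDs, on every request,
            // otherwise their events and posted values are lost after postback.
            foreach (FilterDefinition def in LoadFilterDefinitions())
            {
                if (def.FilterType == "dropdown")
                {
                    DropDownList list = new DropDownList();
                    list.ID = "filter_" + def.PropertyName;
                    list.AutoPostBack = true;
                    list.SelectedIndexChanged += FilterChanged;
                    filterPlaceholder.Controls.Add(list);
                }
                // ... textbox, colourpicker, etc.
            }
        }

        private void FilterChanged(object sender, EventArgs e)
        {
            // Collect the selected value of every filter control here and rebuild the query.
        }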

    Read the article

  • Running same powershell script multiple asynchronous times with separate runspace

    - by teqnomad
    Hi, I have a powershell script which is called by a batch script which is called by Trap Receiver (which also passes environment variables) (running on windows 2008). The traps are flushed out at times in sets of 2-4 trap events, and the batch script will echo the trap details for each message to a logfile, but the powershell script on the next line of the batch script will only appear to process the first trap message (the powershell script writes to the same logfile). My interpetation is that the defaultrunspace is common to all iterations of the script running and this is why the others appear to be ignored. I've tried adding "-sta" when I invoke the powershell script using "powershell.exe -command", but this didn't help. I've researched and found a method using C# but I don't know this language, and busy enough learning powershell, so hoping to find a more direct solution especially as interleaving a "wrapper" between batch and powershell will involve passing the environment variables. http://www.codeproject.com/KB/threads/AsyncPowerShell.aspx I've hunted through stackoverflow, and again the only question of similar vein was using C#. Any suggestions welcome. Some script background: The powershell script is actually a modification of a great script found at gregorystrike website - cant post the link as I'm limited to one link but its the one for Lefthand arrays. Lots of mods so it can do multiple targets from one .ini file, taking in the environment variables, and options to run portions of the script interactively with winform. But you can see the gist of the original script. The batch script is pretty basic. The keys things are I'm trying to filter out trap noise using the :~ operator, and I tried -sta option to see if this would compartmentalise the powershell script. set debug=off set CMD_LINE_ARGS="%*" set LHIPAddress="%2" set VARBIND8="%8" shift shift shift shift shift shift shift set CHASSIS="%9" echo %DATE% %TIME% "Trap Received: %LHIPAddress% %CHASSIS% %VARBIND8%" >> C:\Logs\trap_out.txt set ACTION="%VARBIND8:~39,18%" echo %DATE% %TIME% "Action substring is %ACTION%" 2>&1 >> C:\Logs\trap_out.txt if %ACTION%=="Remote Copy Volume" ( echo Prepostlefthand_env_v2.9 >> C:\Logs\trap_out.txt c:\Windows\System32\WindowsPowerShell\v1.0\PowerShell.exe -sta -executionpolicy unrestricted -command " & 'C:\Scripts\prepostlefthand_env_v2.9.ps1' Backupsettings.ini ALL" 2>&1 >> C:\Logs\trap_out.txt ) ELSE ( echo %DATE% %TIME% Action substring is %ACTION% so exiting" 2>&1 >> C:\Logs\trap.out.txt ) exit

    Read the article

  • Debugging site written mainly in JScript with AJAX code injection

    - by blumidoo
    Hello, I have a legacy code to maintain and while trying to understand the logic behind the code, I have run into lots of annoying issues. The application is written mainly in Java Script, with extensive usage of jQuery + different plugins, especially Accordion. It creates a wizard-like flow, where client code for the next step is downloaded in the background by injecting a result of a remote AJAX request. It also uses callbacks a lot and pretty complicated "by convention" programming style (lots of events handlers are created on the fly based on certain object names - e.g. current page name, current step name). Adding to that, the code is very messy and there is no obvious inner structure - the functions are scattered in the code, file names do not reflect the business role of the code, lots of functions and code snippets are most likely not used at all etc. PROBLEM: How to approach this code base, so that the inner flow of the code can be sort-of "reverse engineered" using a suite of smart debugging tools. Ideally, I would like to be able to attach to the running application and step through the code, breaking on each new function call. Also, it would be nice to be able to create a "diagram of calls" in the application (i.e. in order to run a particular page logic, this particular flow of function calls was executed in a particular order). Not to mention to be able to run a coverage analysis, identifying potentially orphaned code fragments. I would like to stress out once more, that it is impossible to understand the inner logic of the application just by looking at the code itself, unless you have LOTS of spare time and beer crates, which I unfortunately do not have :/ (shame...) An IDE of some sort that would aid in extending that code would be also great, but I am currently looking into possibility to use Visual Studio 2010 to do the job, as the site itself is a mix of Classic ASP and ASP.NET (I'd say - 70% Java Script with jQuery, 30% ASP). I have obviously tried FireBug, but I was unable to find a way to define a breakpoint or step into the code, which is "injected" into the client JS using AJAX calls (i.e. the application retrieves the code by invoking an URL and injects it to the client local code). Venkman debugger had similar issues. Any hints would be welcome. Feel free to ask additional questions.

    Read the article

  • Structuring Win32 GUI code

    - by kraf
    I wish to improve my code and file structure in larger Win32 projects with plenty of windows and controls. Currently, I tend to have one header and one source file for the entire implementation of a window or dialog. This works fine for small projects, but now it has come to the point where these implementations are starting to reach 1000-2000 lines, which is tedious to browse. A typical source file of mine looks like this: static LRESULT CALLBACK on_create(const HWND hwnd, WPARAM wp, LPARAM lp) { setup_menu(hwnd); setup_list(hwnd); setup_context_menu(hwnd); /* clip */ return 0; } static LRESULT CALLBACK on_notify(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) { const NMHDR* header = (const NMHDR*)lp; /* At this point I feel that the control's event handlers doesn't * necessarily belong in the same source file. Perhaps I could move * each control's creation code and event handlers into a separate * source file? Good practice or cause of confusion? */ switch (header->idFrom) { case IDC_WINDOW_LIST: switch (header->code) { case NM_RCLICK: return on_window_list_right_click(hwnd, wp, lp); /* clip */ } } } static LRESULT CALLBACK wndmain_proc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) { switch (msg) { case WM_CREATE: return on_create(hwnd, wp, lp); case WM_CLOSE: return on_close(hwnd, wp, lp); case WM_NOTIFY: return on_notify(hwnd, wp, lp); /* It doesn't matter much how the window proc looks as it just forwards * events to the appropriate handler. */ /* clip */ default: return DefWindowProc(hwnd, msg, wp, lp); } } But now as the window has a lot more controls, and these controls in turn have their own message handlers, and then there's the menu click handlers, and so on... I'm getting lost, and I really need advice on how to structure this mess up in a good and sensible way. I have tried to find good open source examples of structuring Win32 code, but I just get more confused since there are hundreds of files, and within each of these files that seem GUI related, the Win32 GUI code seems so far encapsulated away. And when I finally find a CreateWindowEx statement, the window proc is nowhere to be found. Any advice on how to structure all the code while remaining sane would be greatly appreciated. Thanks! I don't wish to use any libraries or frameworks as I find the Win32 API interesting and valuable for learning. Any insight into how you structure your own GUI code could perhaps serve as inspiration.

    Read the article

  • Best Practices / Patterns for Enterprise Protection/Remediation of SSNs (Social Security Numbers)

    - by Erik Neu
    I am interested in hearing about enterprise solutions for SSN handling. (I looked pretty hard for any pre-existing post on SO, including reviewing the terriffic SO automated "Related Questions" list, and did not find anything, so hopefully this is not a repeat.) First, I think it is important to enumerate the reasons systems/databases use SSNs: (note—these are reasons for de facto current state—I understand that many of them are not good reasons) Required for Interaction with External Entities. This is the most valid case—where external entities your system interfaces with require an SSN. This would typically be government, tax and financial. SSN is used to ensure system-wide uniqueness. SSN has become the default foreign key used internally within the enterprise, to perform cross-system joins. SSN is used for user authentication (e.g., log-on) The enterprise solution that seems optimum to me is to create a single SSN repository that is accessed by all applications needing to look up SSN info. This repository substitutes a globally unique, random 9-digit number (ASN) for the true SSN. I see many benefits to this approach. First of all, it is obviously highly backwards-compatible—all your systems "just" have to go through a major, synchronized, one-time data-cleansing exercise, where they replace the real SSN with the alternate ASN. Also, it is centralized, so it minimizes the scope for inspection and compliance. (Obviously, as a negative, it also creates a single point of failure.) This approach would solve issues 2 and 3, without ever requiring lookups to get the real SSN. For issue #1, authorized systems could provide an ASN, and be returned the real SSN. This would of course be done over secure connections, and the requesting systems would never persist the full SSN. Also, if the requesting system only needs the last 4 digits of the SSN, then that is all that would ever be passed. Issue #4 could be handled the same way as issue #1, though obviously the best thing would be to move away from having users supply an SSN for log-on. There are a couple of papers on this: UC Berkely: http://bit.ly/bdZPjQ Oracle Vault: bit.ly/cikbi1

    Read the article

  • Help with infrequent segmentation fault in accessing struct

    - by Sarah
    I'm having trouble debugging a segmentation fault. I'd appreciate tips on how to go about narrowing in on the problem.

    The error appears when an iterator tries to access an element of a struct Infection, defined as:

        struct Infection {
        public:
            explicit Infection( double it, double rt ) : infT( it ), recT( rt ) {}
            double infT; // infection start time
            double recT; // scheduled recovery time
        };

    These structs are kept in a special structure, InfectionMap:

        typedef boost::unordered_multimap< int, Infection > InfectionMap;

    Every member of class Host has an InfectionMap carriage. Recovery times and associated host identifiers are kept in a priority queue. When a scheduled recovery event arises in the simulation for a particular strain s in a particular host, the program searches through carriage of that host to find the Infection whose recT matches the recovery time (double recoverTime). (For reasons that aren't worth going into, it's not as expedient for me to use recT as the key to InfectionMap; the strain s is more useful, and coinfections with the same strain are possible.)

        assert( carriage.size() > 0 );
        pair<InfectionMap::iterator,InfectionMap::iterator> ret = carriage.equal_range( s );
        InfectionMap::iterator it;
        for ( it = ret.first; it != ret.second; it++ ) {
            if ( ((*it).second).recT == recoverTime ) { // produces seg fault
                carriage.erase( it );
            }
        }

    I get a "Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address..." on the line specified above. The recoverTime is fine, and the assert(...) in the code is not tripped. As I said, this seg fault appears 'randomly' after thousands of successful recovery events. How would you go about figuring out what's going on? I'd love ideas about what could be wrong and how I can further investigate the problem.

    Read the article

  • Complex relationship between tables in NHibernate

    - by Ilya Kogan
    Hi all, I'm writing a Fluent NHibernate mapping for a legacy Oracle database. The challenge is that the tables have composite primary keys. If I were at total freedom, I would redesign the relationships and auto-generate primary keys, but other applications must write to the same database and read from it, so I cannot do it. These are the two tables I'll focus on: Example data Trips table: 1, 10:00, 11:00 ... 1, 12:00, 15:00 ... 1, 16:00, 19:00 ... 2, 12:00, 13:00 ... 3, 9:00, 18:00 ... Faults table: 1, 13:00 ... 1, 23:00 ... 2, 12:30 ... In this case, vehicle 1 made three trips and has two faults. The first fault happened during the second trip, and the second fault happened while the vehicle was resting. Vehicle 2 had one trip, during which a fault happened. Constraints Trips of the same vehicle never overlap. So the tables have an optional one-to-many relationship, because every fault either happens during a trip or it doesn't. If I wanted to join them in SQL, I would write: select ... from Faults left outer join Trips on Faults.VehicleId = Trips.VehicleId and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime and then I'd get a dataset where every fault appears exactly once (one-to-many as I said). Note that there is no Vehicles table, and I don't need one. But I did create a view that contains all VehicleIds from both tables, so I can use it as a junction table. What am I actually looking for? The tables are huge because they cover years of data, and every time I only need to fetch a range of a few hours. So I need a mapping and a criteria that will run something like the following SQL underneath: select ... from Faults left outer join Trips on Faults.VehicleId = Trips.VehicleId and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime where Faults.FaultTime between :p0 and :p1 Do you have any ideas how to achieve it? Note 1: Currently the application shouldn't write to the database, so persistence is not a must, although if the mapping supports persistence, it may help at some point in the future. Note 2: I know it's a tough one, so if you give me a great answer, you will be properly rewarded :) Thank you for reading this long question, and now I only hope for the best :)
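
    Note 3: in case it helps frame the question, the fallback I can live with (but would rather avoid) is dropping to native SQL for the fetch - roughly the sketch below, with parameter names of my own choosing; I'd still much prefer a proper mapping/criteria answer.

        // Sketch only: issuing the legacy join as a native SQL query through NHibernate.
        // "Faults" and "Trips" are the table names from the question; fromTime/toTime
        // are the bounds of the few-hours window I need.
        var sql = @"select f.*, t.*
                    from Faults f
                    left outer join Trips t
                      on  f.VehicleId = t.VehicleId
                      and f.FaultTime between t.TripStartTime and t.TripEndTime
                    where f.FaultTime between :fromTime and :toTime";

        IList rows = session.CreateSQLQuery(sql)
            .SetDateTime("fromTime", fromTime)
            .SetDateTime("toTime", toTime)
            .List();   // each element is an object[] holding the selected columns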

    Read the article

  • NHibernate (3.1.0.4000) NullReferenceException using Query<> and NHibernate Facility

    - by TigerShark
    I have a problem with NHibernate that I can't seem to find any solution for. In my project I have a simple entity (Batch), but whenever I try to run the following test, I get an exception. I've tried a couple of different ways to perform a similar query, but I get an almost identical exception for all of them (it differs only in which LINQ method is being executed).

    The first test:

        [Test]
        public void QueryLatestBatch()
        {
            using (var session = SessionManager.OpenSession())
            {
                var batch = session.Query<Batch>()
                    .FirstOrDefault();

                Assert.That(batch, Is.Not.Null);
            }
        }

    The exception:

        System.NullReferenceException : Object reference not set to an instance of an object.
        at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery)
        at NHibernate.Linq.NhQueryProvider.Execute(Expression expression)
        at System.Linq.Queryable.FirstOrDefault(IQueryable`1 source)

    The second test:

        [Test]
        public void QueryLatestBatch2()
        {
            using (var session = SessionManager.OpenSession())
            {
                var batch = session.Query<Batch>()
                    .OrderBy(x => x.Executed)
                    .Take(1)
                    .SingleOrDefault();

                Assert.That(batch, Is.Not.Null);
            }
        }

    The exception:

        System.NullReferenceException : Object reference not set to an instance of an object.
        at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery)
        at NHibernate.Linq.NhQueryProvider.Execute(Expression expression)
        at System.Linq.Queryable.SingleOrDefault(IQueryable`1 source)

    However, this one is passing (using QueryOver<>):

        [Test]
        public void QueryOverLatestBatch()
        {
            using (var session = SessionManager.OpenSession())
            {
                var batch = session.QueryOver<Batch>()
                    .OrderBy(x => x.Executed).Asc
                    .Take(1)
                    .SingleOrDefault();

                Assert.That(batch, Is.Not.Null);
                Assert.That(batch.Executed, Is.LessThan(DateTime.Now));
            }
        }

    Using the QueryOver<> API is not bad at all, but I'm just kind of baffled that the Query<> API isn't working, which is kind of sad, since the First() operation is very concise and our developers really enjoy LINQ. I really hope there is a solution to this, as it seems strange if these methods are failing such a simple test.

    EDIT: I'm using Oracle 11g, my mappings are done with FluentNHibernate registered through Castle Windsor with the NHibernate Facility. As I wrote, the odd thing is that the query works perfectly with the QueryOver<> API, but not through LINQ.

    Read the article

  • What are the benefits of using ORM over XML Serialization/Deserialization?

    - by Tequila Jinx
    I've been reading about NHibernate and Microsoft's Entity Framework to perform Object Relational Mapping against my data access layer. I'm interested in the benefits of having an established framework to perform ORM, but I'm curious as to the performance costs of using it against standard XML Serialization and Deserialization. Right now, I develop stored procedures in Oracle and SQL Server that use XML Types for either input or output parameters and return or shred XML depending on need. I use a custom database command object that uses generics to deserialize the XML results into a specified serializable class. By using a combination of generics, xml (de)serialization and Microsoft's DAAB, I've got a process that's fairly simple to develop against regardless of the data source. Moreover, since I exclusively use Stored Procedures to perform database operations, I'm mostly protected from changes in the data structure. Here's an over-simplified example of what I've been doing. static void main() { testXmlClass test = new test(1); test.Name = "Foo"; test.Save(); } // Example Serializable Class ------------------------------------------------ [XmlRootAttribute("test")] class testXmlClass() { [XmlElement(Name="id")] public int ID {set; get;} [XmlElement(Name="name")] public string Name {set; get;} //create an instance of the class loaded with data. public testXmlClass(int id) { GenericDBProvider db = new GenericDBProvider(); this = db.ExecuteSerializable("myGetByIDProcedure"); } //save the class to the database... public Save() { GenericDBProvider db = new GenericDBProvider(); db.AddInParameter("myInputParameter", DbType.XML, this); db.ExecuteSerializableNonQuery("mySaveProcedure"); } } // Database Handler ---------------------------------------------------------- class GenericDBProvider { public T ExecuteSerializable<T>(string commandText) where T : class { XmlSerializer xml = new XmlSerializer(typeof(T)); // connection and command code is assumed for the purposes of this example. // the final results basically just come down to... return xml.Deserialize(commandResults) as T; } public void ExecuteSerializableNonQuery(string commandText) { // once again, connection and command code is assumed... // basically, just execute the command along with the specified // parameters which have been serialized. } public void AddInParameter(string name, DbType type, object value) { StringWriter w = new StringWriter(); XmlSerializer x = new XmlSerializer(value.GetType()); //handle serialization for serializable classes. if (type == DbType.Xml && (value.GetType() != typeof(System.String))) { x.Serialize(w, value); w.Close(); // store serialized object in a DbParameterCollection accessible // to my other methods. } else { //handle all other parameter types } } } I'm starting a new project which will rely heavily on database operations. I'm very curious to know whether my current practices will be sustainable in a high-traffic situation and whether or not I should consider switching to NHibernate or Microsoft's Entity Framework to perform what essentially seems to boil down to the same thing I'm currently doing. I appreciate any advice you may have.

    Read the article

  • "Emulating" Application.Run using Application.DoEvents

    - by Luca
    I'm getting in trouble. I'm trying to emulate the call Application.Run using Application.DoEvents... this sounds bad, and then I accept also alternative solutions to my question... I have to handle a message pump like Application.Run does, but I need to execute code before and after the message handling. Here is the main significant snippet of code. // Create barrier (multiple kernels synchronization) sKernelBarrier = new KernelBarrier(sKernels.Count); foreach (RenderKernel k in sKernels) { // Create rendering contexts (one for each kernel) k.CreateRenderContext(); // Start render kernel kernels k.mThread = new Thread(RenderKernelMain); k.mThread.Start(k); } while (sKernelBarrier.KernelCount > 0) { // Wait untill all kernel loops has finished sKernelBarrier.WaitKernelBarrier(); // Do application events Application.DoEvents(); // Execute shared context services foreach (RenderKernelContextService s in sContextServices) s.Execute(sSharedContext); // Next kernel render loop sKernelBarrier.ReleaseKernelBarrier(); } This snippet of code is execute by the Main routine. Pratically I have a list of Kernel classes, which runs in separate threads, these threads handle a Form for rendering in OpenGL. I need to synchronize all the Kernel threads using a barrier, and this work perfectly. Of course, I need to handle Form messages in the main thread (Main routine), for every Form created, and indeed I call Application.DoEvents() to do the job. Now I have to modify the snippet above to have a common Form (simple dialog box) without consuming the 100% of CPU calling Application.DoEvents(), as Application.Run does. The goal should be to have the snippet above handle messages when arrives, and issue a rendering (releasing the barrier) only when necessary, without trying to get the maximum FPS; there should be the possibility to switch to a strict loop to render as much as possible. How could it be possible? Note: the snippet above must be executed in the Main routine, since the OpenGL context is created on the main thread. Moving the snippet in a separated thread and calling Application.Run is quite unstable and buggy...
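
    One compromise I've sketched (I have not convinced myself it interacts correctly with the barrier, so treat it as a rough idea only) is to release the kernels only when a render was actually requested, and otherwise just pump messages on a short timeout so the CPU isn't pegged; a flag would switch back to the render-as-fast-as-possible loop:

        // Rough sketch only. renderRequested and benchmarkMode are placeholder fields of mine;
        // the barrier calls are the same ones used in the snippet above.
        AutoResetEvent renderRequested = new AutoResetEvent(false);
        bool benchmarkMode = false;   // true = render as fast as possible

        while (sKernelBarrier.KernelCount > 0)
        {
            if (!benchmarkMode)
                renderRequested.WaitOne(10);      // wait for a redraw request, or time out after 10 ms

            Application.DoEvents();               // keep the common Form responsive

            sKernelBarrier.WaitKernelBarrier();   // wait until all kernel loops have finished
            foreach (RenderKernelContextService s in sContextServices)
                s.Execute(sSharedContext);
            sKernelBarrier.ReleaseKernelBarrier();
        }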

    Read the article

  • Unique element ID, even if element doesn't have one

    - by Robert J. Walker
    I'm writing a GreaseMonkey script where I'm iterating through a bunch of elements. For each element, I need a string ID that I can use to reference that element later. The element itself doesn't have an id attribute, and I can't modify the original document to give it one (although I can make DOM changes in my script). I can't store the references in my script because when I need them, the GreaseMonkey script itself will have gone out of scope. Is there some way to get at an "internal" ID that the browser uses, for example? A Firefox-only solution is fine; a cross-browser solution that could be applied in other scenarios would be awesome. Edit: If the GreaseMonkey script is out of scope, how are you referencing the elements later? They GreaseMonkey script is adding events to DOM objects. I can't store the references in an array or some other similar mechanism because when the event fires, the array will be gone because the GreaseMonkey script will have gone out of scope. So the event needs some way to know about the element reference that the script had when the event was attached. And the element in question is not the one to which it is attached. Can't you just use a custom property on the element? Yes, but the problem is on the lookup. I'd have to resort to iterating through all the elements looking for the one that has that custom property set to the desired id. That would work, sure, but in large documents it could be very time consuming. I'm looking for something where the browser can do the lookup grunt work. Wait, can you or can you not modify the document? I can't modify the source document, but I can make DOM changes in the script. I'll clarify in the question. Can you not use closures? Closuses did turn out to work, although I initially thought they wouldn't. See my later post. It sounds like the answer to the question: "Is there some internal browser ID I could use?" is "No."

    Read the article

  • Cocoa NSOutputStream send to a connection

    - by Chuck
    Hi, I am new to Cocoa, but managed to get a connection (to a FTP) up and running, and I've set up an eventhandler for the NSInputStream iStream to alert every response (which also works). What I manage to get is simply the hello message and a connection timeout 60 sec, closing control connection. After searching stackoverflow and finding a lot of NSOutputStream write problems (e.g. http://stackoverflow.com/questions/703729/how-to-use-nsoutputstreams-write-message) and a lot of confusion in my google hits, I figured I'd try to ask my own question: I've tried reading the developer.apple.com doc on OutputStream, but it seems almost impossible for me to send some data (in this case just a string) to the "connection" via the NSOutputStream oStream. - (IBAction) send_something: sender { const char *send_command_char = [@"USER foo" UTF8String]; send_command_buffer = [NSMutableData dataWithBytes:send_command_char length:strlen(send_command_char) + 1]; uint8_t *readBytes = (uint8_t *)[send_command_buffer mutableBytes]; NSInteger byteIndex = 0; readBytes += byteIndex; int data_len = [send_command_buffer length]; unsigned int len = ((data_len - byteIndex >= 1024) ? 1024 : (data_len-byteIndex)); uint8_t buf[len]; (void)memcpy(buf, readBytes, len); len = [oStream write:(const uint8_t *)buf maxLength:len]; byteIndex += len; } the above seems not to result in any useable events. typing it under NSStreamEventHasSpaceAvailable sometimes give a response if I spam the ftp by keep creating new connection instances and keep sending some command whenever oStream has free space. In other words, nothing "right" and so I'm still unclear how to properly send a command to the connection. Should I open - write - close every time i want to write to oStream (and thus to the ftp) and can I then expect a reply (hasBytesAvailable event on iStream)? For some reason I find it very difficult to find any clear tutorials on this matter. Seems like there are more than a few in the same position as me: unclear how to use oStream write? Any little bit that can help clear this up is greatly appreciated! If needed I can write the rest of the code. Chuck

    Read the article

  • Cancel outlook meeting requests via MailMessage in C#

    - by BTmuney
    I'm creating an application using the ASP.NET MVC 1 framework in C#, where I have users that register for events. Upon registering, I create an outlook meeting request public string BuildMeetingRequest(DateTime start, DateTime end, string attendees, string organizer, string subject, string description, string UID, string location) { System.Text.StringBuilder sw = new System.Text.StringBuilder(); sw.AppendLine("BEGIN:VCALENDAR"); sw.AppendLine("VERSION:2.0"); sw.AppendLine("METHOD:REQUEST"); sw.AppendLine("BEGIN:VEVENT"); sw.AppendLine(attendees); sw.AppendLine("CLASS:PUBLIC"); sw.AppendLine(string.Format("CREATED:{0:yyyyMMddTHHmmssZ}", DateTime.UtcNow)); sw.AppendLine("DESCRIPTION:" + description); sw.AppendLine(string.Format("DTEND:{0:yyyyMMddTHHmmssZ}", end)); sw.AppendLine(string.Format("DTSTAMP:{0:yyyyMMddTHHmmssZ}", DateTime.UtcNow)); sw.AppendLine(string.Format("DTSTART:{0:yyyyMMddTHHmmssZ}", start)); sw.AppendLine("ORGANIZER;CN=\"NAME\":mailto:" + organizer); sw.AppendLine("SEQUENCE:0"); sw.AppendLine("UID:" + UID); sw.AppendLine("LOCATION:" + location); sw.AppendLine("SUMMARY;LANGUAGE=en-us:" + subject); sw.AppendLine("BEGIN:VALARM"); sw.AppendLine("TRIGGER:-PT720M"); sw.AppendLine("ACTION:DISPLAY"); sw.AppendLine("DESCRIPTION:Reminder"); sw.AppendLine("END:VALARM"); sw.AppendLine("END:VEVENT"); sw.AppendLine("END:VCALENDAR"); return sw.ToString(); } And once built, I use MailMessage, with an alternate view to send out the meeting request: meetingInfo = BuildMeetingRequest(start, end, attendees, organizer, subject, description, UID, location); System.Net.Mime.ContentType mimeType = new System.Net.Mime.ContentType("text/calendar; method=REQUEST"); AlternateView ICSview = AlternateView.CreateAlternateViewFromString(meetingInfo,mimeType); MailMessage message = new MailMessage(); message.To.Add(to); message.From = new MailAddress(from); message.AlternateViews.Add(ICSview); SmtpClient client = new SmtpClient(); client.Send(message); When users get the email in outlook, it shows up as a meeting request, as opposed to a normal email. This works well for sending out updates to the meeting request as well. The only problem that I am having is that I do not know the proper format for sending out a cancellation. I've attempted to examine some meeting request cancellations in text editors and can't seem to pinpoint the difference in the format between cancelling/creating. Any help on this is greatly appreciated.
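
    What I've pieced together so far (unverified - this is exactly the part I'd like confirmation on) is that a cancellation should be the same VEVENT sent with METHOD:CANCEL, STATUS:CANCELLED and a bumped SEQUENCE, keeping the original UID. A sketch of what I'm about to try, mirroring BuildMeetingRequest above:

        // Sketch only - my assumptions: same UID as the original request, METHOD:CANCEL,
        // STATUS:CANCELLED, and a SEQUENCE higher than the request's.
        public string BuildMeetingCancellation(DateTime start, DateTime end, string attendees,
            string organizer, string subject, string UID, string location)
        {
            System.Text.StringBuilder sw = new System.Text.StringBuilder();
            sw.AppendLine("BEGIN:VCALENDAR");
            sw.AppendLine("VERSION:2.0");
            sw.AppendLine("METHOD:CANCEL");
            sw.AppendLine("BEGIN:VEVENT");
            sw.AppendLine(attendees);
            sw.AppendLine("STATUS:CANCELLED");
            sw.AppendLine(string.Format("DTSTAMP:{0:yyyyMMddTHHmmssZ}", DateTime.UtcNow));
            sw.AppendLine(string.Format("DTSTART:{0:yyyyMMddTHHmmssZ}", start));
            sw.AppendLine(string.Format("DTEND:{0:yyyyMMddTHHmmssZ}", end));
            sw.AppendLine("ORGANIZER;CN=\"NAME\":mailto:" + organizer);
            sw.AppendLine("SEQUENCE:1");   // must be greater than the SEQUENCE of the original request
            sw.AppendLine("UID:" + UID);   // must match the UID of the original request
            sw.AppendLine("LOCATION:" + location);
            sw.AppendLine("SUMMARY;LANGUAGE=en-us:" + subject);
            sw.AppendLine("END:VEVENT");
            sw.AppendLine("END:VCALENDAR");
            return sw.ToString();
        }

        // ...and the alternate view would presumably use method=CANCEL instead of method=REQUEST:
        // new System.Net.Mime.ContentType("text/calendar; method=CANCEL");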

    Read the article

  • How do you implement position-sensitive zooming inside a JScrollPane?

    - by tucuxi
    I am trying to implement position-sensitive zooming inside a JScrollPane. The JScrollPane contains a component with a customized 'paint' that will draw itself inside whatever space it is allocated - so zooming is as easy as using a MouseWheelListener that resizes the inner component as required. But I also want zooming into (or out of) a point to keep that point as central as possible within the resulting zoomed-in (or -out) view (this is what I refer to as 'position-sensitive' zooming), similar to how zooming works in google maps. I am sure this has been done many times before - does anybody know the "right" way to do it under Java Swing?. Would it be better to play with Graphic2D's transformations instead of using JScrollPanes? Sample code follows: package test; import java.awt.*; import java.awt.event.*; import java.awt.geom.*; import javax.swing.*; public class FPanel extends javax.swing.JPanel { private Dimension preferredSize = new Dimension(400, 400); private Rectangle2D[] rects = new Rectangle2D[50]; public static void main(String[] args) { JFrame jf = new JFrame("test"); jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); jf.setSize(400, 400); jf.add(new JScrollPane(new FPanel())); jf.setVisible(true); } public FPanel() { // generate rectangles with pseudo-random coords for (int i=0; i<rects.length; i++) { rects[i] = new Rectangle2D.Double( Math.random()*.8, Math.random()*.8, Math.random()*.2, Math.random()*.2); } // mouse listener to detect scrollwheel events addMouseWheelListener(new MouseWheelListener() { public void mouseWheelMoved(MouseWheelEvent e) { updatePreferredSize(e.getWheelRotation(), e.getPoint()); } }); } private void updatePreferredSize(int n, Point p) { double d = (double) n * 1.08; d = (n > 0) ? 1 / d : -d; int w = (int) (getWidth() * d); int h = (int) (getHeight() * d); preferredSize.setSize(w, h); getParent().doLayout(); // Question: how do I keep 'p' centered in the resulting view? } public Dimension getPreferredSize() { return preferredSize; } private Rectangle2D r = new Rectangle2D.Float(); public void paint(Graphics g) { super.paint(g); g.setColor(Color.red); int w = getWidth(); int h = getHeight(); for (Rectangle2D rect : rects) { r.setRect(rect.getX() * w, rect.getY() * h, rect.getWidth() * w, rect.getHeight() * h); ((Graphics2D)g).draw(r); } } }

    Read the article

  • WPF Application Slow Unresponsive when demonstrating using remote sharing software

    - by Kev
    After spending 14 hours on this I think its time to share my woes and see if anyone has experienced this issue before. Ill describe the issue and tests I have done to rule out certain things. Ok so I have a WPF application which loads in data from an SQL database. I am using DevExpress Components for datagrids, ribbons etc.. and FluentNhibernate to provide a session for database operations. I am also using log4net to log events to a textfile. Using the application on my laptop with SQL Express 2008 works fine.. the application starts up, retrieves 1000 records and I can tab through the controls on the ribbon. Now, I decided to demo the application to a third party and used remote login/sharing software online to share my desktop with the other person so as I could load the application on my laptop and they could view me using the application. Now, the application takes approx 45 seconds to load... 30 seconds with a blank database where as, when im not sharing out my screen using the online software the application loads in about 7-10 seconds. As well as that, even using the controls in the application during the demo were very sticky, slow and unresponsive. During the sharing session though however I was able to use other applications without any problems.. everything else worked fine. But I cannot understand how my application works ok under normal conditions , even browsing the net at the same time etc... BUT totally fails to perform correctly when I am sharing a session with another user... the CPU usage shot up to 100% too at times when the application was trying to start up... Please see below a list of 3rd party dlls I am using as references in my project. DevExpress dlls FluidKit PixelLab.WPF PixelLab.Common Galasoft WPF Kit FluentNHibernate NHibernate Nhibernate.ByteCode.Castle Skype4ComLib TXTEXTControl log4net LinqKit All of these DLLs are in the output folder with the application dlls created from the class assemblys in the project. So when installed via an installer on a machine the dlls will be in the same application folder as the application file itself. Many thanks
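
    One thing I still plan to test (an assumption on my part, since some screen-sharing tools interfere with hardware-accelerated rendering) is forcing WPF into software rendering at startup, to see whether the slowdown is graphics-related; this needs .NET 4:

        // Sketch only: force software rendering for the whole process, purely as a
        // diagnostic to see if the remote-sharing slowdown is GPU/driver related.
        using System.Windows;
        using System.Windows.Interop;
        using System.Windows.Media;

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                RenderOptions.ProcessRenderMode = RenderMode.SoftwareOnly;
                base.OnStartup(e);
            }
        }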

    Read the article

  • Javascript code inside updatepanel usercontrol.

    - by Ed Woodcock
    Ok: I've got an UpdatePanel on an aspx page that contains a single PlaceHolder. Inside this placeholder I'm appending one of a selection of user controls depending on certain external conditions (this is a configuration page). In each of these user controls there is a bindUcEvents() javascript function that binds the various jQuery and javascript events to buttons and validators inside the user control.

    The issue I'm having is that the user control's javascript is not being recognised. Normally, javascript inside an UpdatePanel is executed when the UpdatePanel posts back; however, none of this code can be found by the page (I've tried running the function manually via Firebug's console, but it tells me it cannot find the function). Does anyone have any suggestions?

    Cheers, Ed.

    EDIT: cut down (but functional) example:

    Markup:

        <script src="/js/jquery-1.3.2.min.js"></script>
        <form id="form1" runat="server">
            <div>
                <asp:ScriptManager ID="Script" runat="server" />
                <asp:Button ID="Postback" runat="server" Text="Populate" OnClick="PopulatePlaceholder" />
                <asp:UpdatePanel ID="UpdateMe" runat="server">
                    <Triggers>
                        <asp:AsyncPostBackTrigger ControlID="Postback" EventName="Click" />
                    </Triggers>
                    <ContentTemplate>
                        <asp:Literal ID="Code" runat="server" />
                        <asp:PlaceHolder ID="PlaceMe" runat="server" />
                    </ContentTemplate>
                </asp:UpdatePanel>
            </div>
        </form>

    C#:

        protected void PopulatePlaceholder(object sender, EventArgs e)
        {
            Button button = new Button();
            button.ID = "Push";
            button.Text = "push";
            button.OnClientClick = "javascript:return false;";

            Code.Text = "<script type=\"text/javascript\"> function bindEvents() { $('#" + button.ClientID + "').click(function() { alert('hello'); }); } bindEvents(); </script>";

            PlaceMe.Controls.Add(button);
        }
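
    EDIT 2: for what it's worth, the workaround I'm currently trying (not yet confirmed as the right approach) is registering the script through the ScriptManager instead of writing it into the Literal, so that ASP.NET AJAX re-executes it after the async postback:

        protected void PopulatePlaceholder(object sender, EventArgs e)
        {
            Button button = new Button();
            button.ID = "Push";
            button.Text = "push";
            button.OnClientClick = "javascript:return false;";
            PlaceMe.Controls.Add(button);

            // Register the script with the ScriptManager so it runs after the partial postback,
            // instead of injecting it through the Literal control.
            string script = "function bindEvents() { $('#" + button.ClientID +
                            "').click(function() { alert('hello'); }); } bindEvents();";
            ScriptManager.RegisterStartupScript(UpdateMe, typeof(UpdatePanel), "bindEvents", script, true);
        }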

    Read the article

  • can we use both custom button and inbuilt button in datagridview

    - by Srikanth Mattihalli
    HI all, I am using Datagridview in asp.net. I have used custom buttons of up and down in the datagridview along with edit,delete and paging options. I am handling the up down buttons by raising events in rowcommand and the code is as below string command = e.CommandName; Response.Write(e.CommandArgument.ToString()); int index = Convert.ToInt32(e.CommandArgument.ToString()); int count = GridView1.Rows.Count; int keyValue = Convert.ToInt32(GridView1.Rows[index].Cells1.Text); string value = GridView1.Rows[index].Cells[4].Text; SqlConnection conn = new SqlConnection(SqlDataSource1.ConnectionString); SqlCommand cmd = new SqlCommand(); if (command == "up") { if (index > 0) { index = index - 1; int keyValue1 = Convert.ToInt32(GridView1.Rows[index].Cells[1].Text); string value1 = GridView1.Rows[index].Cells[4].Text; cmd.Connection = conn; cmd.CommandText = "UPDATE [category] SET [order_id] = '" + value + "' WHERE [category_id]=" + keyValue1 + ";UPDATE [category] SET [order_id] = '" + value1 + "' WHERE [category_id]=" + keyValue + ";"; conn.Open(); cmd.ExecuteNonQuery(); conn.Close(); } } else if (command == "down") { if (index < count - 1) { index = index + 1; int keyValue1 = Convert.ToInt32(GridView1.Rows[index].Cells[1].Text); string value1 = GridView1.Rows[index].Cells[4].Text; cmd.Connection = conn; cmd.CommandText = "UPDATE [category] SET [order_id] = '" + value + "' WHERE [category_id]=" + keyValue1 + ";UPDATE [category] SET [order_id] = '" + value1 + "' WHERE [category_id]=" + keyValue + ";"; conn.Open(); cmd.ExecuteNonQuery(); conn.Close(); } } Response.Redirect("Default.aspx"); Designer file " DeleteCommand="DELETE FROM [category] WHERE [category_id] = @category_id" InsertCommand="INSERT INTO [category] ([categoryname], [navigation_url], [order_id]) VALUES (@categoryname, @navigation_url, @order_id)" SelectCommand="SELECT * FROM [category] order by order_id" UpdateCommand="UPDATE [category] SET [categoryname] = @categoryname, [navigation_url] = @navigation_url, [order_id] = @order_id WHERE [category_id] = @category_id" After this my edit,delete and paging is not working bcoz of event conflicts. Can anyone plz help me on this, so that i will be able to use both custom buttons(up and down) and edit,delete and paging features.
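
    My current suspicion (not verified yet) is that my RowCommand code also runs for the built-in Edit/Delete/Page commands, and the Convert.ToInt32 on the CommandArgument plus the Response.Redirect then break them. The guard I'm about to try looks like this:

        protected void GridView1_RowCommand(object sender, GridViewCommandEventArgs e)
        {
            // Only handle my own custom commands; let Edit/Update/Delete/Page
            // fall through to the GridView's built-in handling.
            if (e.CommandName != "up" && e.CommandName != "down")
                return;

            int index = Convert.ToInt32(e.CommandArgument.ToString());
            // ... existing up/down reordering code from above ...

            // Rebind instead of redirecting, so paging/edit state is not thrown away.
            GridView1.DataBind();
        }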

    Read the article

  • Complex SQL query with group by and two rows in one

    - by Ricket
    Okay, I need help. I'm usually pretty good at SQL queries but this one baffles me. By the way, this is not a homework assignment, it's a real situation in an Access database and I've written the requirements below myself. Here is my table layout. It's in Access 2007 if that matters; I'm writing the query using SQL. Id (primary key) PersonID (foreign key) EventDate NumberOfCredits SuperCredits (boolean) There are events that people go to. They can earn normal credits, or super credits, or both at one event. The SuperCredits column is true if the row represents a number of super credits earned at the event, or false if it represents normal credits. So for example, if there is an event which person 174 attends, and they earn 3 normal credits and 1 super credit at the event, the following two rows would be added to the table: ID PersonID EventDate NumberOfCredits SuperCredits 1 174 1/1/2010 3 false 2 174 1/1/2010 1 true It is also possible that the person could have done two separate things at the event, so there might be more than two columns for one event, and it might look like this: ID PersonID EventDate NumberOfCredits SuperCredits 1 174 1/1/2010 1 false 2 174 1/1/2010 2 false 3 174 1/1/2010 1 true Now we want to print out a report. Here will be the columns of the report: PersonID LastEventDate NumberOfNormalCredits NumberOfSuperCredits The report will have one row per person. The row will show the latest event that the person attended, and the normal and super credits that the person earned at that event. What I am asking of you is to write, or help me write, the SQL query to SELECT the data and GROUP BY and SUM() and whatnot. Or, let me know if this is for some reason not possible, and how to organize my data to make it possible. This is extremely confusing and I understand if you do not take the time to puzzle through it. I've tried to simplify it as much as possible, but definitely ask any questions if you give it a shot and need clarification. I'll be trying to figure it out but I'm having a real hard time with it, this is grouping beyond my experience...

    Read the article

  • How can I capture the keystroke that triggers "CellEndEdit" on a DataGridView in C#?

    - by Andy Stampor
    I have a DataGridView that is set to EditOnF2. I do some special processing of data in the CellEndEdit eventhandler that sets the value of the cell. I still want the functionality of the EditOnKeystrokeOrF2 of reverting to the original value when the Esc key is pressed. Unfortunately, at the CellEndEdit eventhandler, I don't see a way to tell what caused the CellEndEdit event to be fired. I only want to change the value of the cell if the Esc key is not pressed. How can I tell if it was or not? Edit: It is worth noting that the KeyDown event does not get fired when the cell is being edited, nor for the final ending keystroke. Edit2: I have tried the KeyPreview suggestion, but the form still does not capture the Escape key being pressed. Edit3: I've been experimenting with trying to get this working. I originally posted some of the following as a separate post, but feel it might be more relevant to include it here. I have a cell in a DataGridView that is now set to EditProgrammatically. To capture the keystroke that starts an edit, I am setting the cell.Value equal to the keystroke. However, this ruins the "Escape" functionality of the cell - when you press escape, instead of reverting to the original value, it reverts to the keystroke that I programmatically inserted into the cell. I believe that if I could set the "EditedFormattedValue" on a cell, this would be where I want to put my keystroke value, however this appears to be read only. How can I accomplish what I am attempting? An example to clarify: If the cell has a value of "54.3" in it, and I press the "9" key, it begins editing the cell and places a "9" there. If I hit Escape, instead of reverting to "54.3" it reverts to "9". What I want is for it to return to its original value of "54.3". So, I am trying to tackle this issue from both the beginning and the end. I think the real problem is that I am overwriting the original value and have no way to determine if I should revert it or not. Edit4: It looks like CellValidating might be worth using, but I am seeing strange behavior when I experiment with it. In a new project, I create the DataGridView and register for the various events and see that CellValidating is fired before the CellEndEdit. However, in my project where I am trying to get this to work, CellEndEdit is firing BEFORE CellValidating. Any ideas on what the difference might be?
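
    Edit5: the direction I'm currently experimenting with is a derived grid that records whether Escape ended the edit, so CellEndEdit can check a flag afterwards (a sketch only - I haven't proven it against the CellValidating ordering issue above):

        // Sketch: a derived DataGridView that remembers whether the last edit
        // was cancelled with Escape, so CellEndEdit can skip the special processing.
        public class TrackingDataGridView : DataGridView
        {
            public bool LastEditCancelledByEscape { get; private set; }

            protected override bool ProcessDialogKey(Keys keyData)
            {
                if ((keyData & Keys.KeyCode) == Keys.Escape && IsCurrentCellInEditMode)
                    LastEditCancelledByEscape = true;
                return base.ProcessDialogKey(keyData);
            }

            protected override void OnCellBeginEdit(DataGridViewCellCancelEventArgs e)
            {
                LastEditCancelledByEscape = false;   // reset at the start of each edit
                base.OnCellBeginEdit(e);
            }
        }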

    Read the article
