Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 640/837

  • How can I stop SQL Server Reporting Services 2008 going to sleep?

    - by Nick
    I have SSRS 2008 set up on a server. All works fine, except that if it is left inactive for a length of time, the next request made to the server takes a long time to be serviced. I think this is down to the worker process being shut down after it has been idle for a certain length of time. However, as SSRS 2008 isn't managed through IIS, I can't find any settings I can adjust to stop this from happening. In IIS I'd go to the Performance tab of the Application Pool Properties and choose not to shut down the worker process. How can I do this for SSRS 2008?

    Read the article

  • Silverlight Threading and its usage

    - by Harryboy
    Hello Experts, Scenario: I am working on an LOB application. Since every service call in Silverlight is async, the UI is automatically not blocked while a request is processed on the server side. Silverlight also supports threading; as I understand it, threads are most useful in an LOB application when you need to do some I/O, but as I am not building an OOB application it is not possible to access client resources, and every server request is already async by default. In the above scenario, is there any use for threading, or can anyone provide a good example where using threads improves performance? I have searched a lot on this topic, but everywhere I find only simple threading examples from which it is very difficult to understand the real benefit. Thanks for help.
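
    Not from the original post, but one case where a background thread can still pay off in Silverlight is CPU-bound shaping of data that an async service call has already returned. The sketch below uses hypothetical proxy, result and control names (they are not from the question) and marshals the finished work back to the UI thread:

        // Hypothetical completed-event handler for an async service call; the
        // types and member names are illustrative, not from the question.
        void OnCustomersLoaded(object sender, LoadCustomersCompletedEventArgs e)
        {
            // Heavy, CPU-bound aggregation would stutter the UI thread, so push it
            // to the thread pool instead of doing it inside the event handler.
            ThreadPool.QueueUserWorkItem(_ =>
            {
                var summaries = e.Result
                    .GroupBy(c => c.Region)
                    .Select(g => new { Region = g.Key, Total = g.Sum(c => c.Balance) })
                    .ToList();

                // Marshal the finished data back to the UI thread before touching controls.
                Deployment.Current.Dispatcher.BeginInvoke(() =>
                    summaryGrid.ItemsSource = summaries);
            });
        }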

    Read the article

  • Design for Vacation Tracking System

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time off (vacation, sick days, etc.). At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever, and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now.

    I started with a basic MS Access schema like this:

        Employees (EmpID int, EmpName varchar(50), AllowedDays int)
        Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime)

    But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema:

        Vacations (VacationID int, PropName varchar(50), PropValue varchar(50))

    And the table will be populated with data like this:

        VacationID | PropName     | PropValue
        -----------+--------------+------------------
        1          | EmpID        | 4
        1          | EmpName      | James Jones
        1          | Reason       | Vacation
        1          | BeginDate    | 2/24/2010
        1          | EndDate      | 2/30/2010
        1          | Destination  | Spectate Swamp
        2          | ...          | ...

    I think this is a pretty good, extensible design; we can easily add new properties to the vacation, like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties. I thought of putting them in a separate PropNames table, but it gets complicated to manage all the different data types, and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead. Here is the schema:

        <VacationProperties>
          <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames>
          <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes>
          <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired>
        </VacationProperties>

    I might need more fields than that, I'm not completely sure. I'm parsing the XML like this (would like some feedback on the parsing code):

        string xml = File.ReadAllText("properties.xml");
        Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>");
        string[] pn = m.Value.Split(',');
        // do the same for PropertyTypes, PropertiesRequired

    Then I use the following code to persist configuration changes to the database:

        string sql = "DROP TABLE VacationProperties";
        sql = sql + " CREATE TABLE VacationProperties ";
        sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) ";
        sql = sql + "IsRequired varchar(100))";
        for (int i = 0; i < pn.Length; i++)
        {
            sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")";
        }
        // GlobalConnection is a singleton
        new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader();

    So far so good, but after a few days of this I realized that a lot of this was just a more specific kind of generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it. In order to support routing these configurations through the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services:

        public class VacationConfigurationService : WebService
        {
            [WebMethod]
            public void UpdateConfiguration(string xml)
            {
                // Above code goes here
            }
        }

    Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema, as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this:

    And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use, so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read, but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections; they're all analogous to the services in the SOA diagram above).

    The requirements for this system are (note - I don't control these, they were given to me by management):

        - Main workflow must interface with the Excel spreadsheet, probably through VBA macros (to ease the transition to the new system).
        - Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company Voice Mail system, but that is not a "hard" requirement.
        - Performance requirements: Must handle 250,000 Transactions Per Second. Should be able to handle up to 20,000 employees (right now we have 3). 99.99% uptime ("four nines") expected.
        - Must be secure against outside hacking, but users cannot be required to enter a username/password.
        - Platforms: Must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc.
        - Time to complete: 6 to 8 weeks.

    My questions are:

        - Is this a good design for the system so far?
        - Am I using all of the recommended best practices for these technologies?
        - How do I integrate the Visio diagram above with Windows Workflow Foundation to call the ConfigurationService and persist workflow changes?
        - Am I missing any important components?
        - Will this be extensible enough to support any scenario via end-user configuration?
        - Will the system scale to the above performance requirements? Will we need any expensive hardware to run it?
        - Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example, would it be difficult to convert this to an iPhone app?
        - How long would you expect this to take? (We've dedicated 1 week for testing, so I'm thinking maybe 5 weeks?)

    Read the article

  • Best way to cache resized images using PHP and MySQL

    - by Chris Hawes
    What would be the best-practice way to handle the caching of images using PHP? The filename is currently stored in a MySQL database and is renamed to a GUID on upload, along with the original filename and alt tag. When the image is put into the HTML pages it is done so using a URL such as '/images/get/200x200/{guid}.jpg', which is rewritten to a PHP script. This allows my designers to specify (roughly - the source image may be smaller) the file size. The PHP script then creates a hash of the size (200x200 in the URL) and the GUID filename, and if the file has been generated before (a file with the name of the hash exists in the TMP directory) it sends the file from the application TMP directory. If the hashed filename does not exist, then it is created, written to disk and served up in the same manner. Is this as efficient as it could be? (It also supports watermarking the images, and the watermarking settings are stored in the hash as well, but that's out of scope for this.)

    Read the article

  • Returning IQueryable or Enumerated Object

    - by Tarik
    Hello everyone, I was wondering about the performance difference between these two scenarios and what the disadvantages of each over the other might be.

    First scenario:

        public class Helper // returns IQueryable
        {
            public IQueryable<Customer> CurrentCustomer
            {
                get { return new DataContext().Where(t => t.CustomerId == 1); }
            }
        }

        public class SomeClass
        {
            public void Main()
            {
                Console.WriteLine(new Helper().CurrentCustomer.First().Name);
            }
        }

    The second scenario:

        public class Helper // returns enumerated result
        {
            public Customer CurrentCustomer
            {
                get { return new DataContext().First(t => t.CustomerId == 1); }
            }
        }

        public class SomeClass
        {
            public void Main()
            {
                Console.WriteLine(new Helper().CurrentCustomer.Name);
            }
        }

    Thanks in advance.
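
    Not part of the question, but the key behavioural difference being asked about is deferred versus immediate execution. A rough sketch, assuming the two getters above work as written:

        // Scenario 1: the getter only composes a query; nothing hits the database
        // until the IQueryable is enumerated, and it runs again on each enumeration.
        var query = new Helper().CurrentCustomer;   // no SQL issued yet
        var name1 = query.First().Name;             // query executes here
        var name2 = query.First().Name;             // ...and executes a second time here

        // Scenario 2: the getter already called First(), so the query ran exactly once
        // inside the property and the caller only sees the materialized Customer.
        var customer = new Helper().CurrentCustomer;
        var name3 = customer.Name;                  // no further database work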

    Read the article

  • web2py or grok (zope) on a big portal,

    - by Robert
    Hi, I am planning a big project (1,000,000 users, approximately 500 requests per second at peak times). For performance I'm going to use a non-relational DBMS (each request could cost a lot of instructions in a relational DBMS like MySQL), so I can't use the DAL. My question is: how does web2py cope with heavy traffic, and does it handle requests concurrently? I'm considering web2py or Grok (Zope). How does the ZODB (Zope Object Database) perform with a lot of data? Is there some comparison with object-relational PostgreSQL? Could you advise me, please?

    Read the article

  • Problem with Writing files using FileWriter automatically with Quartz Scheduler

    - by Jeeva
    I have nearly 200 files that are written to a particular position automatically at a particular time. I created a separate job name for each in the Quartz scheduler, and each job is triggered at its scheduled time. The problem is that I can read the files only after all of them have been written; I cannot read a file as soon as it has been written, even though I close the FileWriter after each file is written. What is the solution for accessing and reading a file that has already been written to the hard disk?

    Read the article

  • Difference between Thread.Sleep(0) and Thread.Yield()

    - by Xose Lluis
    As Java has had Sleep and Yield for a long time, I've found answers for that platform, but not for .NET. .NET 4 includes the new Thread.Yield() static method. Previously the common way to hand the CPU over to other threads was Thread.Sleep(0). Apart from Thread.Yield() returning a boolean, are there other performance or OS-internals differences? For example, I'm not sure whether Thread.Sleep(0) checks that another thread is ready to run before changing the current thread to the waiting state... if that's not the case, then when no other threads are ready, Thread.Sleep(0) would seem rather worse than Thread.Yield().
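
    Not from the question, but a minimal sketch of where the two calls differ in shape: Thread.Yield() reports whether the scheduler actually switched to another ready thread, while Thread.Sleep(0) gives no such feedback. Which of the two is cheaper internally is exactly what is being asked.

        // Illustrative busy-wait only; 'done' stands for some flag set by another thread.
        private volatile bool done;

        private void SpinUntilDoneOldStyle()
        {
            // Old style: give up the remainder of the time slice on every pass.
            while (!done)
            {
                Thread.Sleep(0);
            }
        }

        private void SpinUntilDoneNet4()
        {
            // .NET 4 style: ask the scheduler to run another ready thread and learn whether it did.
            while (!done)
            {
                if (!Thread.Yield())
                {
                    // Nothing else was ready; we could back off harder here (e.g. Thread.Sleep(1)).
                }
            }
        }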

    Read the article

  • Delphi Pascal - Using SetFilePointerEx and GetFileSizeEx, Getting Physical Media exact size when reading as a file

    - by SuicideClutchX2
    I am having trouble understanding how to declare GetFileSizeEx and SetFilePointerEx in Delphi 2009 so that I can use them, since they are not in the RTL or Windows.pas. I was able to compile with the following:

        function GetFileSizeEx(hFile: THandle; lpFileSizeHigh: Pointer): DWORD; external 'kernel32';

    Then I used GetFileSizeEx(PD, Pointer(DriveSize)); to get the size, but could not get it to work. The disk handle I am using is valid, and I have had no problem reading the data or working under the 2 GB mark with the older APIs; GetFileSize of course returns 4294967295. I have had greater trouble trying to use SetFilePointerEx with the data types it uses.

    The overall project needs to read the data from a flash card, which is not a problem at all - I can do this. My problem is that I cannot find the length or size of the media I will be reading. I have code I have used in the past to do this with media under 2 GB, but now that I need to read media over 2 GB it is a problem. To be clear, I am dumping a card with all data including the boot record, etc. This is the code I would normally use to read from the physical disk to grab, say, the boot record and dump it to file:

        SetFilePointer(PD, 0, nil, FILE_BEGIN);
        SetLength(Buffer, 512);
        ReadFile(PD, Buffer[0], 512, BytesReturned, nil);

    I just need to figure out how to find the end of an 8 GB card and so on, as well as how to set a file pointer beyond the 2 GB barrier. Any help with the external declarations, as well as with understanding the values that SetFilePointerEx uses (I do not understand the whole high/low thing), would be of great help.

        var
          Form1: TForm1;

        function GetFileSizeEx(hFile: THandle; var FileSize: Int64): DWORD; stdcall; external 'kernel32';

        implementation

        {$R *.dfm}

        function GetLD(Drive: Char): Cardinal;
        var
          Buffer: String;
        begin
          Buffer := Format('\\.\%s:', [Drive]);
          Result := CreateFile(PChar(Buffer), GENERIC_READ Or GENERIC_WRITE, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
          If Result = INVALID_HANDLE_VALUE Then
          begin
            Result := CreateFile(PChar(Buffer), GENERIC_READ, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
          end;
        end;

        function GetPD(Drive: Byte): Cardinal;
        var
          Buffer: String;
        begin
          If Drive = 0 Then
          begin
            Result := INVALID_HANDLE_VALUE;
            Exit;
          end;
          Buffer := Format('\\.\PHYSICALDRIVE%d', [Drive]);
          Result := CreateFile(PChar(Buffer), GENERIC_READ Or GENERIC_WRITE, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
          If Result = INVALID_HANDLE_VALUE Then
          begin
            Result := CreateFile(PChar(Buffer), GENERIC_READ, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
          end;
        end;

        function GetPhysicalDiskNumber(Drive: Char): Byte;
        var
          LD            : DWORD;
          DiskExtents   : PVolumeDiskExtents;
          DiskExtent    : TDiskExtent;
          BytesReturned : Cardinal;
        begin
          Result := 0;
          LD := GetLD(Drive);
          If LD = INVALID_HANDLE_VALUE Then Exit;
          Try
            DiskExtents := AllocMem(Max_Path);
            DeviceIOControl(LD, IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS, nil, 0, DiskExtents, Max_Path, BytesReturned, nil);
            If DiskExtents^.NumberOfDiskExtents > 0 Then
            begin
              DiskExtent := DiskExtents^.Extents[0];
              Result := DiskExtent.DiskNumber;
            end;
          Finally
            CloseHandle(LD);
          end;
        end;

        procedure TForm1.Button1Click(Sender: TObject);
        var
          PD            : DWORD;
          BytesReturned : Cardinal;
          Buffer        : Array Of Byte;
          myFile        : File;
          DriveSize     : Int64;
        begin
          PD := GetPD(GetPhysicalDiskNumber(Edit1.Text[1]));
          If PD = INVALID_HANDLE_VALUE Then Exit;
          Try
            GetFileSizeEx(PD, DriveSize);
            //SetFilePointer(PD,0,nil,FILE_BEGIN);
            //SetLength(Buffer,512);
            //ZeroMemory(@Buffer,SizeOf(Buffer));
            //ReadFile(PD,Buffer[0],512,BytesReturned,nil);
            //AssignFile(myFile, 'StickDump.bin');
            //ReWrite(myFile, 512);
            //BlockWrite(myFile, Buffer[0], 1);
            //CloseFile(myFile);
          Finally
            CloseHandle(PD);
          End;
        end;

    Read the article

  • how Postfix anti spam configuration works with DNS-based Blackhole List providers

    - by Ashish
    Hello, I have set up a Postfix mail server for incoming mail that is required never to reply to the external environment, i.e. it will accept all incoming mail and never send back anything that could be used as a trace to locate and verify its existence. I have implemented the Postfix anti-UCE configuration by using the following setting in the Postfix main.cf to counter spam-generating mail servers:

        smtpd_recipient_restrictions = reject_rbl_client zen.spamhaus.org, reject_rbl_client bl.spamcop.net

    Now I have certain doubts/questions:

        - How is Postfix able to communicate with the blackhole-list providers, i.e. how does this whole process work (here they are zen.spamhaus.org and bl.spamcop.net; see the sketch below), so that I can test the performance of the whole process?
        - Can a header be added to the received mail with the result of the above verification? Since my incoming Postfix server will never send replies, I need this feature.

    Please post relevant links for reference. Thanks in advance!!! Ashish
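
    Not from the original question, but a minimal C# sketch (example IP and names are illustrative) of the DNS lookup that reject_rbl_client causes Postfix to perform for each connecting client: the client's IP octets are reversed, the blacklist zone is appended, and an ordinary A-record lookup is made. Any answer (typically 127.0.0.x) means the address is listed; no record means it is not.

        using System;
        using System.Linq;
        using System.Net;
        using System.Net.Sockets;

        class RblLookupSketch
        {
            static void Main()
            {
                var clientIp = "203.0.113.7";                               // example address from TEST-NET-3
                var reversed = string.Join(".", clientIp.Split('.').Reverse().ToArray());
                var query = reversed + ".zen.spamhaus.org";                 // 7.113.0.203.zen.spamhaus.org

                try
                {
                    var answers = Dns.GetHostAddresses(query);              // listed: an A record comes back
                    Console.WriteLine("Listed: " + string.Join(", ",
                        answers.Select(a => a.ToString()).ToArray()));
                }
                catch (SocketException)
                {
                    Console.WriteLine("Not listed (no DNS record)");        // Postfix would accept the client
                }
            }
        }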

    Read the article

  • How do I pass credentials to a machine so I can use Microsoft.Win32.RegistryKey.OpenRemoteBaseKey()

    - by JCCyC
    This .NET API works OK if I'm trying to open the Registry on a machine that's in the same domain as I am (and my logged-on user has admin rights on the target machine). It gets tricky when it's an out-of-domain machine with a different, local administrative user (whose password I do have). I tried using WNetUseConnection() (which has served me well in the past in situations where all I wanted was to read a remote disk file) prior to calling OpenRemoteBaseKey(), but no dice -- I get an access denied exception. Clearly, I must pass the credentials some other way, but how?
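
    Not from the question, and whether the remote Registry service honours it is exactly what is being asked here, but one commonly suggested alternative to WNetUseConnection() is to impersonate with LogonUser using the LOGON32_LOGON_NEW_CREDENTIALS logon type before calling OpenRemoteBaseKey(). A rough, hedged sketch; machine name, account and error handling are placeholders:

        // Assumes: using System; using System.Runtime.InteropServices;
        //          using System.Security.Principal; using Microsoft.Win32;
        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern bool LogonUser(string user, string domain, string password,
                                     int logonType, int logonProvider, out IntPtr token);

        const int LOGON32_LOGON_NEW_CREDENTIALS = 9;   // use these credentials for outbound connections only
        const int LOGON32_PROVIDER_WINNT50      = 3;

        static void ReadRemoteRegistry(string machine, string user, string password)
        {
            IntPtr token;
            if (!LogonUser(user, machine, password,
                           LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
                throw new System.ComponentModel.Win32Exception();

            // While impersonating, remote calls go out with the target machine's local admin credentials.
            // (A complete version would also CloseHandle(token) afterwards.)
            using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token))
            using (RegistryKey hklm = RegistryKey.OpenRemoteBaseKey(RegistryHive.LocalMachine, machine))
            {
                // ... read the keys of interest here ...
            }
        }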

    Read the article

  • SQL Server: Profiling statements inside a User-Defined Function

    - by Craig Walker
    I'm trying to use SQL Server Profiler (2005) to track down some application performance problems. One of the calls being made is to a table-valued user-defined function. This function wraps a select that joins several tables together. In SQL Server Profiler, the call to the UDF is logged. However, the select that underlies the UDF isn't being logged at all. Because of this, I'm not getting useful data on which tables & indexes are being hit. I'd like to feed this info into the Database Tuning Advisor for some indexing advice. Is there any way (short of unwrapping the queries themselves) to log the tables called by UDFs in Profiler?

    Read the article

  • Nhibernate equivalent of LinqToEntitiesDomainService in RIA

    - by VexXtreme
    Hi, When using Entity Framework with RIA domain services, the domain services inherit from LinqToEntitiesDomainService, which, I suppose, allows you to write LINQ queries at a low level (client-side) that propagate into the ORM, meaning that all queries are performed on the database and only the relevant results are retrieved to the server and thus the client. Example:

        var query = context.GetCustomersQuery().Where(x => x.Age > 50);

    Right now we have a domain service which inherits from DomainService and retrieves data through an NHibernate session, as in:

        virtual public IQueryable<Customer> GetCustomers()
        {
            return sessionManager.Session.Linq<Customer>();
        }

    The problem with this approach is that it's impossible to make specific queries without retrieving entire tables to the server (or client) and filtering them there. Is there a way to make LINQ querying work with NHibernate over RIA like it works with EF? If not, we're willing to switch to EF because of this, because the performance impact would be just too severe. Thanks
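
    Not from the original post, but one workaround pattern (purely illustrative, building on the GetCustomers() method above) is to expose parameterized query methods so the filtering is composed onto the NHibernate LINQ provider on the server rather than applied after the whole table has been pulled back:

        // Hypothetical domain-service method; the filter runs inside NHibernate's
        // LINQ provider, so only matching rows leave the database.
        virtual public IQueryable<Customer> GetCustomersOlderThan(int age)
        {
            return sessionManager.Session.Linq<Customer>().Where(x => x.Age > age);
        }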

    Read the article

  • Fastest way to convert datatable to generic list

    - by Joel Coehoorn
    I have a data tier select method that returns a datatable. It's called from a business tier method that should then return a strongly typed generic List. What I want to do is very similar to (but not the same as) this question: http://stackoverflow.com/questions/208532/how-do-you-convert-a-datatable-into-a-generic-list What's different is that I want the list to contain strongly-typed objects rather than datarows (also, I don't have LINQ available here yet). I'm concerned about performance. The business tier method will in turn be called from the presentation tier, and the results will be iterated for display to the user. It seems very wasteful to add an extra iteration at the business tier, only to do it again right away for the presentation, so I want this to be as quick as possible. This is a common task, so I'm really looking for a good pattern that can be repeated over and over.
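
    Not part of the question, but a minimal LINQ-free sketch of the kind of pattern being asked about: a reusable converter that maps each DataRow to a strongly typed object through a caller-supplied delegate, so the only per-row cost is one delegate call. The Customer type and column names are illustrative.

        // Assumes: using System.Collections.Generic; using System.Data;
        // Reusable helper: one pass over the table, no reflection, works on .NET 2.0+.
        public delegate T RowMapper<T>(DataRow row);

        public static List<T> ToList<T>(DataTable table, RowMapper<T> map)
        {
            List<T> list = new List<T>(table.Rows.Count);
            foreach (DataRow row in table.Rows)
            {
                list.Add(map(row));
            }
            return list;
        }

        // Usage in the business tier (hypothetical Customer type and columns):
        // List<Customer> customers = ToList<Customer>(dataTable, delegate(DataRow r)
        // {
        //     return new Customer((int)r["CustomerId"], (string)r["Name"]);
        // });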

    Read the article

  • What is this message?

    - by kumar
    c:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(0,0): warning MSB3245: Could not resolve this reference. Could not locate the assembly "System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors. Can anybody tell me what this message is? I am getting it when deploying my ASP.NET MVC application using TFS. Thanks

    Read the article

  • FlowDocument contents as text

    - by tyndall
    What is the best way to get back the XAML/XML value of a FlowDocument? I noticed there isn't a .Value, .Text, .Caption, .ToXml(), etc... UPDATE: I'd like to be able to get access to it initially to serialize it to disk or a database - treat it as its own document format. Translating it to other formats later would be nice. I've also been wondering: is there any equivalent to a hyperlink (opens in a new browser window) in a FlowDocument? Any workaround?
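
    Not from the question, but two WPF-provided ways to round-trip a FlowDocument as markup are XamlWriter/XamlReader and TextRange.Save/Load; a short sketch (document variable and file name are illustrative):

        // Serialize the whole document object graph to a XAML string.
        string xaml = System.Windows.Markup.XamlWriter.Save(myFlowDocument);

        // Or save just the document content via TextRange, which can also target a database blob.
        var range = new System.Windows.Documents.TextRange(myFlowDocument.ContentStart,
                                                           myFlowDocument.ContentEnd);
        using (var stream = System.IO.File.Create("document.xaml"))
        {
            range.Save(stream, System.Windows.DataFormats.Xaml);
        }

        // Loading back later: new TextRange(...).Load(stream, DataFormats.Xaml)
        // for the TextRange form, or XamlReader.Load(...) for the XamlWriter form.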

    Read the article

  • Invoking .Net COM assembly from Powerbuilder application (without registration)

    - by as
    We have a PowerBuilder 10 application that is using .NET COM assemblies. We are trying to embed a manifest in the PB application (to invoke the COM assemblies without registration). The merged manifest file has added sections for dependencies on the .NET COM assemblies. We have tried various tools to inject the new manifest, with different results: using GenMan32 to inject truncates the application from 6 MB to 45 KB; using ResourceTuner, the file size looks okay, but trying to launch the application gives "Fatal Disk Error". Any suggestions on invoking a .NET COM-enabled assembly from PB without registration?

    Read the article

  • Difference between performSelectorInBackground and NSOperation Subclass

    - by AmitSri
    I have created a test app that runs a deep counter loop. I run the loop function in a background thread using performSelectorInBackground, and also via an NSOperation subclass, separately. I am also using performSelectorOnMainThread to notify the main thread within the background-thread method, and [NSNotificationCenter defaultCenter] postNotificationName within the NSOperation subclass, to notify the main thread to update the UI. Initially both implementations give me the same result, and I am able to update the UI without any problem. The only difference I found is the thread count between the two implementations. The performSelectorInBackground implementation created one thread, which got terminated after the loop finished, and my app's thread count went back to 1. The NSOperation subclass implementation created two new threads which remain in the application, and I can see 3 threads after the loop has finished in the main() function. So, my question is: why are two threads created by NSOperation, and why don't they get terminated just like in the first background-thread implementation? I am a little bit confused and unable to decide which implementation is best in terms of performance and memory management. Thanks

    Read the article

  • Cassandra and asp.net (C#)

    - by Sergey Osypchuk
    I am interested in building a portal on Cassandra services, since I have faced some performance and scaling issues starting from 1 million records. They could certainly be solved, but I am interested in other options. My main issue is the cost of updating all the necessary indexes to make reading fast. First, is Cassandra a good way to go for ASP.NET programmers? I mean, maybe there are some other projects worth taking a look at. And second, can you provide any documentation or samples on how to start with Cassandra programming from C#?

    Read the article

  • Why is WPFToolkit DataGrid so slow when binding?

    - by Schneider
    I have a very simple test application where I have two objects, each with a small collection of items. When I select an object I display its collection in a WPFToolkit DataGrid. The problem is there is a noticeable delay, such that if you press the up/down keys to toggle the selection between objects you can see it can't keep up. Why is the performance so bad?

        <Window x:Class="SlowGridBinding.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:Controls="clr-namespace:Microsoft.Windows.Controls;assembly=WPFToolkit"
                Title="MainWindow" Height="350" Width="525">
          <StackPanel>
            <ListBox ItemsSource="{Binding Shops}" DisplayMemberPath="Name" IsSynchronizedWithCurrentItem="True"/>
            <Controls:DataGrid ItemsSource="{Binding Shops/Vegetables}" AutoGenerateColumns="True"/>
          </StackPanel>
        </Window>

    The DataContext is populated with some test classes filled with 50 items of random test data.

    Read the article

  • What are modern and old compilers written in?

    - by ulum
    As a compiler, unlike an interpreter, only needs to translate the input and not run it, its own performance should not be as problematic as an interpreter's. Therefore, you wouldn't write an interpreter in, let's say, Ruby or PHP, because it would be far too slow. However, what about compilers? If you wrote a compiler in a scripting language, maybe even featuring rapid development, you could possibly cut the source code and initial development time in half, at least I think so. To be sure: by scripting language I mean interpreted languages with the typical features that make programming faster, easier and more enjoyable for the programmer, usually at least. Examples: PHP, Ruby, Python, maybe JavaScript, though that may be an odd choice for a compiler. What are compilers normally written in? As I suppose you will respond with something low-level like C, C++ or even assembler: why? Are there compilers written in scripting languages? What are the (dis)advantages of using low- or high-level programming languages for compiler writing?

    Read the article

  • Why are listener lists Lists?

    - by Joonas Pulakka
    Why are listener lists (e.g. in Java, those that use addXxxListener() and removeXxxListener() to register and unregister listeners) called lists, and usually implemented as Lists? Wouldn't a Set be a better fit, since in the case of listeners: the order in which they get called doesn't matter (although there may well be such needs, but they're special cases; ordinary listener mechanisms make no such guarantees), and there's no need to register the same listener more than once (whether doing that should result in calling the same listener 1 time or N times, or be an error, is another question). Is it just a matter of tradition? Sets are some kind of lists under the hood anyway. Are there performance differences? Is iterating through a List faster or slower than iterating through a Set? Does either take more or less memory? The differences are certainly almost negligible.

    Read the article

  • What is the leading LINQ for JavaScript library?

    - by Tom Tresansky
    I'm looking for a JavaScript library that will allow me to query complex JSON objects using a LINQ-like syntax. A quick search found a couple of promising options that look like they might offer what I need: LINQ to JavaScript and jLinq. Does anyone have any experience using them? What are some pros and cons? Is the performance comparable? Does the function-passing syntax of LINQ to JavaScript offer any hidden benefits (I personally find the syntax of jLinq more appealing at first glance)? What have you found lacking in either project? Did you ever try contacting the authors? How responsive were they? Which project is more widely used?

    Read the article

  • jquery error() calls showing up in firebug profile

    - by Aros
    I am working on an ASP.NET application that makes a lot of jQuery and JavaScript calls, and I am trying to optimize the client-side code as much as possible. (This web application is only designed to run on special hardware that has very low memory and processing power.) The profiler in Firebug is great for figuring out which calls take up the most time. I have already optimized a lot of my selectors and it is much faster. However, the profile shows a lot of jQuery error() calls. In the attached image of the Firebug profile window you can see it was called 52 times, accounting for 15.4 of the processing time. Is it normal for jQuery to call its error() like that? My code works flawlessly, and there are no error messages in the Firefox error console. It seems like that is a significant performance hit. Is there any way to get more info on what the errors are? Thanks.

    Read the article

  • Ext GWT (GXT) tooltip over a grid row

    - by Eduardo Palma
    I'm developing a custom tooltip using Ext GWT (GXT) for a project of mine, and this tooltip has to appear over Grid rows when they're selected. I can't use the default GXT tooltip or quicktip because I need to be able to add Components (like buttons) to this tooltip. The problem is that the GXT Grid component doesn't expose an event related to mousing over a row (although there are RowClick and RowMouseDown). I tried adding a listener to the Grid with the OnMouseOver and OnMouseOut events anyway, but it doesn't work as expected: it fires these events whenever you mouse over any of the divs and spans that compose a row. The only way I see to solve this is to subclass the GridView component and make each row become a Component itself, but that would be a lot of work and would probably impact performance as well. I can't help but think there's a better way to do this. Could someone more experienced with GXT shed some light on this?

    Read the article
