Search Results

Search found 19393 results on 776 pages for 'reference count'.


  • Why doesn't my cursor change to an Hourglass in my FindDialog in Delphi?

    - by lkessler
    I am simply opening my FindDialog with: FindDialog.Execute; In my FindDialog.OnFind event, I want to change the cursor to an hourglass for searches through large files, which may take a few seconds. So in the OnFind event I do this: Screen.Cursor := crHourglass; (code that searches for the text and displays it) ... Screen.Cursor := crDefault; What happens is while searching for the text, the cursor properly changes to the hourglass (or rotating circle in Vista) and then back to the pointer when the search is completed. However, this only happens on the main form. It does not happen on the FindDialog itself. The default cursor remains on the FindDialog during the search. While the search is happening if I move the cursor over the FindDialog it changes to the default, and if I move it off and over the main form it becomes the hourglass. This does not seem like what is supposed to happen. Am I doing something wrong or does something special need to be done to get the cursor to be the hourglass on all forms? For reference, I'm using Delphi 2009.

    Read the article

  • How to scroll programmatically to the selected cover in a coverflow view?

    - by hib
    Hi all, I am using a coverflow component for my application. It has been working nicely so far, but now I want to scroll to the selected cover in the coverflow. I am currently doing this: TKCoverView *cover = [self.coverflow coverAtIndex:currentIndex]; CGRect frame = cover.frame; frame.origin.x = frame.origin.x * 1.2; // **main setting** [self.coverflow bringCoverAtIndexToFront:currentIndex animated:YES]; [self.coverflow scrollRectToVisible:frame animated:YES]; The cover-at-index method is: - (TKCoverView*) coverflowView:(TKCoverflowView*)coverflowView coverAtIndex:(int)index{ TKCoverView *cover = [coverflowView dequeueReusableCoverView]; NSLog(@"Index value :%i",index); if(cover == nil){ cover = [[[TKCoverView alloc] initWithFrame:CGRectMake(0, 0, 224, 300)] autorelease]; // 224 cover.baseline = 224; cover.image = [UIImage imageNamed:@"Placeholder_2.png"]; cover.tag = index; }else { cover.image = ((object *)[array objectAtIndex:index]).appIconNormal; } return cover; } and the dequeue method is: - (TKCoverView*) dequeueReusableCoverView{ if(yard == nil || [yard count] < 1) return nil; TKCoverView *v = [[[yard lastObject] retain] autorelease]; v.layer.transform = CATransform3DIdentity; [yard removeLastObject]; return v; } So can anyone tell me the correct, standard way of computing the frame to pass to scrollRectToVisible:animated:?

    Read the article

  • How to correctly set the ORACLE_HOME variable on Ubuntu 9.x?

    - by nowarninglabel
    I have the same problem as listed here: http://stackoverflow.com/questions/52239/oracle-lost-sysdba-password, although I did not lose the password: I entered it twice in the configure script originally, and then when I went to log in (localhost:8080/apex) the password was not accepted. I don't have anything in the database; I just want to install and use Oracle XE. I have tried apt-get removing it twice and reinstalling, but if I try to run /etc/init.d/oracle-xe configure again I get "Oracle Database 10g Express Edition is already configured", despite removing any folders I could find for Oracle XE the second time. I tried running sqlplus "/ as sysdba" but all I get is: "Error 6 initializing SQL*Plus Message file sp1.msb not found SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory". I tried setting the variable via export (also tried set). I tried "export ORACLE_HOME=/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin/sqlplus" and all the subdirectories of that path; same error every time. What is ORACLE_HOME supposed to be set to? The references I have seen either stay general or give the path above up to the version number followed by "/db_1", and I do not have a db_1. Let me know if you need any clarification; I don't understand what I did wrong in this process.

    Read the article

  • Episerver Scheduled Job fails (scheduler service)

    - by Igor
    Our scheduled jobs started failing yesterday with the following error message: CustomUpdate.Execute - System.NullReferenceException: Object reference not set to an instance of an object. at System.Web.Security.Roles.GetRolesForUser(String username) at EPiServer.Security.PrincipalInfo.CreatePrincipal(String username) The scheduled job uses anonymous execution and logs in programmatically using the following call: if (PrincipalInfo.CurrentPrincipal.Identity.Name == string.Empty) { PrincipalInfo.CurrentPrincipal = PrincipalInfo.CreatePrincipal(ApplicationSettings.ScheduledJobUsername); } I have put in some more logging around the PrincipalInfo.CreatePrincipal call, which is in EPiServer.Security, and noticed that PrincipalInfo.CreatePrincipal calls System.Web.Security.Roles.GetRolesForUser(username), and Roles.GetRolesForUser(username) returns an empty string array. There were no changes code-wise or on the server (updates, etc.). I checked that the user name used to run the task is in the database and has roles associated with it. I checked that applicationName is set up correctly and is associated with the user. If I run the job manually using the same user it executes with no issues (I know there is a difference between running the job manually and using the scheduler). I also tried creating a new user; that didn’t work either. Has anyone come across the same or a similar issue? Any thoughts on how to resolve it?

    Read the article

  • SqlBulkCopy causes Deadlock on SQL Server 2000.

    - by megatoast
    I have a customized data import executable in .NET 3.5 which uses SqlBulkCopy to do faster inserts on large amounts of data. The app basically takes an input file, massages the data and bulk uploads it into a SQL Server 2000 database. It was written by a consultant who was building it against a SQL Server 2008 database environment. Could that environment difference be causing this? SQL Server 2000 does have the bcp utility, which is what BulkCopy is based on. When we ran this, it triggered a deadlock error. Error details: "Transaction (Process ID 58) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction." I've tried numerous ways to resolve it, like temporarily setting the connection string variable MultipleActiveResultSets=true, which wasn't ideal, but it still gives a deadlock error. I also made sure it wasn't a connection timeout problem. Here's the function. Any advice? /// <summary> /// Bulks the insert. /// </summary> public void BulkInsert(string destinationTableName, DataTable dataTable) { SqlBulkCopy bulkCopy; if (this.Transaction != null) { bulkCopy = new SqlBulkCopy ( this.Connection, SqlBulkCopyOptions.TableLock, this.Transaction ); } else { bulkCopy = new SqlBulkCopy ( this.Connection.ConnectionString, SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.UseInternalTransaction ); } bulkCopy.ColumnMappings.Add("FeeScheduleID", "FeeScheduleID"); bulkCopy.ColumnMappings.Add("ProcedureID", "ProcedureID"); bulkCopy.ColumnMappings.Add("AltCode", "AltCode"); bulkCopy.ColumnMappings.Add("AltDescription", "AltDescription"); bulkCopy.ColumnMappings.Add("Fee", "Fee"); bulkCopy.ColumnMappings.Add("Discount", "Discount"); bulkCopy.ColumnMappings.Add("Comment", "Comment"); bulkCopy.ColumnMappings.Add("Description", "Description"); bulkCopy.BatchSize = dataTable.Rows.Count; bulkCopy.DestinationTableName = destinationTableName; bulkCopy.WriteToServer(dataTable); bulkCopy = null; }

    Read the article

  • Pseudo random numbers in php

    - by pg
    I have a function that outputs items in a different order depending on a random number. For example, half the time Popeye's and its logo will be #1 on the list and Taco Bell and its logo will be #2, and half the time it will be the other way around. The problem is that when a user reloads or comes back to the page, the order is re-randomized. $range here is the number of items in the db, so it's using a random number between 1 and $range. $random = mt_rand(1,$range); for ($i = 0 ; $i < count($variants); $i++) { $random -= $variants[$i]['weight']; if ($random <= 0) { $chosenoffers[$tag] = $variants[$i]; break; } } I went to the beginning of the session and set this: if (!isset($_SESSION['rannum'])){ $_SESSION['rannum']=rand(1,100); } with the idea that I could replace the mt_rand in the function with some sort of pseudo-random generator that uses the same 1-100 random number as a seed throughout the session. That way I won't have to rewrite all the code that was already written. Am I barking up the wrong tree or is this a good idea?
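
    A minimal sketch of the seeding idea described above, assuming the session value is set exactly as in the question: seeding the Mersenne Twister with the per-session number makes the subsequent mt_rand() calls return the same sequence on every page load within that session, so the existing weighted-pick loop can stay as it is.

      <?php
      // Sketch only: reuse the per-session seed so the "random" order stays stable for the session.
      session_start();
      if (!isset($_SESSION['rannum'])) {
          $_SESSION['rannum'] = rand(1, 100);
      }
      mt_srand($_SESSION['rannum']);   // same seed => same mt_rand() sequence
      $random = mt_rand(1, $range);    // $range assumed to be set as in the question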

    Read the article

  • Cannot connect Linux XAMPP PHP to SQL Server database.

    - by Jim
    I've searched many sites without success. I'm using XAMPP 1.7.3a on Ubuntu 9.1. I have used the methods found at http://www.webcheatsheet.com/PHP/connect_mssql_database.php, they all fail. I am able to "connect" with a linked database through MS Access, however, that is not an acceptable solution as not all users will have Access. The first method (at webcheatsheet) uses mssql_connect, et.al. but I get this error from the mssql_connect() call: Warning: mssql_connect() [function.mssql-connect]: Unable to connect to server: [my server] in [my code] [my server] is the server address, I have used both the host name and the IP address. [my code] is a reference to the file and line number in my .php file. Is there a log file somewhere that would have more information about the failure, both on my machine and SQL Server? We do not have a bona-fide DBA, so I will need specific information to pass on if the issue seems to be on the server side. All assistance is appreciated, including RTFM when the location of the M is provided! Thanks

    Read the article

  • The cost of passing by shared_ptr

    - by Artem
    I use std::tr1::shared_ptr extensively throughout my application. This includes passing objects in as function arguments. Consider the following: class Dataset {...} void f( shared_ptr< Dataset const > pds ) {...} void g( shared_ptr< Dataset const > pds ) {...} ... While passing a dataset object around via shared_ptr guarantees its existence inside f and g, the functions may be called millions of times, which causes a lot of shared_ptr objects being created and destroyed. Here's a snippet of the flat gprof profile from a recent run:
      Each sample counts as 0.01 seconds.
        %   cumulative    self                 self    total
       time     seconds  seconds      calls  s/call   s/call  name
       9.74      295.39    35.12 2451177304    0.00     0.00  std::tr1::__shared_count::__shared_count(std::tr1::__shared_count const&)
       8.03      324.34    28.95 2451252116    0.00     0.00  std::tr1::__shared_count::~__shared_count()
    So, ~17% of the runtime was spent on reference counting with shared_ptr objects. Is this normal? A large portion of my application is single-threaded and I was thinking about re-writing some of the functions as void f( const Dataset& ds ) {...} and replacing the calls shared_ptr< Dataset > pds( new Dataset(...) ); f( pds ); with f( *pds ); in places where I know for sure the object will not get destroyed while the flow of the program is inside f(). But before I run off to change a bunch of function signatures / calls, I wanted to know what the typical performance hit of passing by shared_ptr was. Seems like shared_ptr should not be used for functions that get called very often. Any input would be appreciated. Thanks for reading. -Artem
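
    For comparison, a minimal sketch of the signatures being weighed here (Dataset is stubbed out, and std::shared_ptr stands in for std::tr1::shared_ptr, which behaves the same way for this purpose): passing the smart pointer by value touches the shared count on every call, while passing the pointee by const reference, or the smart pointer itself by const reference, avoids that cost as long as the caller keeps the object alive for the duration of the call.

      #include <memory>

      struct Dataset { /* stub for illustration */ };

      // Copies the shared_ptr: one count increment on entry, one decrement on exit.
      void f_by_value(std::shared_ptr<const Dataset> pds) { /* ... */ }

      // No reference-count traffic; the caller must guarantee the object outlives the call.
      void f_by_const_ref(const Dataset& ds) { /* ... */ }

      // Middle ground: still no count traffic, but the callee can copy the pointer
      // if it ever needs to extend the object's lifetime.
      void f_by_sp_const_ref(const std::shared_ptr<const Dataset>& pds) { /* ... */ }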

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question, just looking for suggestions here. So, my application is 'modernizing' a desktop application by converting it to the web, with an ICEFaces UI and server side written in Java. However, they are keeping around the same Oracle database, which at current count has about 700-900 tables and probably a billion total records in the tables. Some individual tables have 250 million rows, many have over 25 million. Needless to say, the database is not scaling well. As a result, the performance of the application is looking to be abysmal. The architects / decision makers-that-be have all either refused or are unwilling to restructure the persistence. So, basically we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their job. So, my question is, what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put in between the database and the Java code to speed up performance while at the same time keeping the database structure intact? Caching is obviously an option, but I don't see that as being a cure-all. Is it possible to layer a NoSQL DB in between or something?

    Read the article

  • XML Serialize and Deserialize Problem XML Structure

    - by Ph.E
    Comrades, I'm having the following problem. I take a list of structs, serialize it (it validates against the W3C) and send it to a web service. In the web service I receive it, transform it to a string, validate it with the W3C and then deserialize it, but when I try to run it an error always occurs, saying that some objects were not closed. Any help? Sent code: #region ListToXML private XmlDocument ListToXMLDocument(object __Lista) { XmlDocument _ListToXMLDocument = new XmlDocument(); try { XmlDocument _XMLDoc = new XmlDocument(); MemoryStream _StreamMem = new MemoryStream(); XmlSerializer _XMLSerial = new XmlSerializer(__Lista.GetType()); StreamWriter _StreamWriter = new StreamWriter(_StreamMem, Encoding.UTF8); _XMLSerial.Serialize(_StreamWriter, __Lista); _StreamMem.Position = 0; _XMLDoc.Load(_StreamMem); if (_XMLDoc.ChildNodes.Count > 0) _ListToXMLDocument = _XMLDoc; } catch (Exception __Excp) { new uException(__Excp).GerarLogErro(CtNomeBiblioteca); } return _ListToXMLDocument; } #endregion Receive code: #region XMLDocumentToTypedList private List<T> XMLDocumentToTypedList<T>(string __XMLDocument) { List<T> _XMLDocumentToTypedList = new List<T>(); try { XmlSerializer _XMLSerial = new XmlSerializer(typeof(List<T>)); MemoryStream _MemStream = new MemoryStream(); StreamWriter _StreamWriter = new StreamWriter(_MemStream, Encoding.UTF8); _StreamWriter.Write(__XMLDocument); _MemStream.Position = 0; _XMLDocumentToTypedList = (List<T>)_XMLSerial.Deserialize(_MemStream); return _XMLDocumentToTypedList; } catch (Exception _Ex) { new uException(_Ex).GerarLogErro(CtNomeBiblioteca); throw _Ex; } } #endregion

    Read the article

  • Web.Routing for the site root or homepage

    - by Aquinas
    I am doing some work with Web.Routing, using it to have friendly URLs and nice REST-like interfaces to a site that is essentially rendered by a single IHttpHandler. There are no WebForms; the handler generates all the HTML/JSON and writes it as part of ProcessRequest. This works well for things like /Sites/Accounting, for example, but I can't get it to work for the site root, i.e. '/'. I have tried registering a route with an empty string and with 'default.aspx' (which is the empty .aspx file I keep in my root folder to play nice with Cassini and IIS). I set RouteExistingFiles to false explicitly, but whatever I do, when hitting the root URL it still opens default.aspx, which has no code it inherits from and contains a simple h1 tag to show that I've hit it. I don't want to change the default file to redirect to a desired route; I just want the equivalent of a 'default' route that is applied when no other routes are found, similar to MVC. For reference, the previous version of the site didn't use Web.Routing, but had a handler referenced in the web.config that was perfectly capable of intercepting requests for the root or default.aspx. Specs: ASP.NET 3.5 SP1, C#, no WebForms, MVC or OpenRasta - plain old IHttpHandlers.

    Read the article

  • java distributed cache for low latency, high availability

    - by Shahbaz
    I've never used distributed caches/DHTs like memcached, JBoss Cache, Ehcache, etc. I'm wondering which, if any, is appropriate for my use. First, I'm not doing web applications (as most of these projects seem to be geared towards web apps). I write servers (Order Management Systems, actually) for financial trading firms. The servers themselves are not too complicated. They need to receive information (market data, orders, executions, etc.) and route it to its destination while possibly transforming some of these messages. I am looking at these products to solve the following problems:
    * Safe repository of the state of the server. I'd rather build the logic of my application as a bunch of transformers (similar to Apache Camel) and store the state in a 'safe' place.
    * This repository should be distributed: in case one of these data stores crashes, one or two more should be up and I should be able to switch to them seamlessly.
    * This repository should be fast. Single-digit milliseconds count here; in other words, the systems which consume/process this data are automated systems, not humans clicking on links. This system needs to have high throughput and low latency. By sending my data outside the process, I am necessarily slowing performance, but I am trying to balance absolute raw speed and absolute protection of data.
    * This repository should be safe. Similar to the point about several on-line backups, this system needs to write data to disk (potentially more than one disk).
    I'd really like to stop writing my own 'transaction servers.' Am I correct to be looking into projects such as JBoss Cache, Ehcache, etc.? Thanks

    Read the article

  • error in implementing static files in django

    - by POOJA GUPTA
    My settings.py file: STATIC_ROOT = '/home/pooja/Desktop/static/' # URL prefix for static files. STATIC_URL = '/static/' # Additional locations of static files STATICFILES_DIRS = ( '/home/pooja/Desktop/mysite/search/static', ) My urls.py file: from django.conf.urls import patterns, include, url from django.contrib.staticfiles.urls import staticfiles_urlpatterns from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', url(r'^search/$','search.views.front_page'), url(r'^admin/', include(admin.site.urls)), ) urlpatterns += staticfiles_urlpatterns() I have created an app using Django which searches for keywords in 10 XML documents and then returns their frequency counts, displayed as a graphical representation and a list of filenames with their respective counts. Now the list has the filenames hyperlinked; I want to display them on the Django server when the user clicks them, and for that I have used the static files provision in Django. The hyperlinking has been done in this manner: <ul> {% for l in list1 %} <li><a href="{{STATIC_URL}}static/{{l.file_name}}">{{l.file_name}}</a>{{l.frequency_count}}</li> {% endfor %} </ul> Now when I run my app on the server, everything is running fine, but as soon as I click on the filename it gives me this error: Using the URLconf defined in mysite.urls, Django tried these URL patterns, in this order: ^search/$ ^admin/ ^static\/(?P<path>.*)$ The current URL, search/static/books.xml, didn't match any of these. I don't know why this error is coming, because I have followed the steps required to achieve this. I have posted my urls.py file and it is showing an error in that only. I'm new to Django, so please help.

    Read the article

  • xsd.exe - schema to class - for use with WCF

    - by NealWalters
    I have created a schema as an agreed-upon interface between our company and an external company. I am now creating a WCF C# web service to handle the interface. I ran the XSD utility and it created a C# class. The schema was built in BizTalk and references other schemas, so all in all there are over 15 classes being generated. I put the [DataContract] attribute in front of each of the classes. Do I have to put the [DataMember] attribute on every single property? When I generate a test client program, the proxy does not have any code for any of these 15 classes. We used to use this technique with .asmx services, but I'm not sure it will work the same with WCF. If we change the schema, we would want to regenerate the WCF class, and then we would have to redecorate it each time with all the [DataMember] attributes? Is there a newer tool similar to XSD.exe that will work better with WCF? Thanks, Neal Walters SOLUTION (buried in one of Saunders' answers/comments): Add XmlSerializerFormat to the interface definition: [OperationContract] [XmlSerializerFormat] // ADD THIS LINE Transaction SubmitTransaction(Transaction transactionIn); Two notes: 1) After I did this, I saw a lot more .xsds in my proxy (Service Reference) test client program, but I didn't see the new classes in my IntelliSense. 2) For some reason, until I did a build on the project, I didn't get all the classes in IntelliSense (not sure why).

    Read the article

  • JSF Render response programmatically

    - by Shamik
    I have one parent page with a parentManagedBean (attached to session scope). On click of a button on this parent page, a popup opens which has a childManagedBean (attached to request scope). The childManagedBean holds a reference to the parentManagedBean through JSF's managed property facility. In this popup window, the user selects some option which populates a large value object. I use the managed property of childManagedBean to set the values from this large object onto the parentManagedBean. The problem is: the parent page shows a link; clicking it opens a popup; on selection in the popup, the popup disappears and sets the values on the parentManagedBean. So far so good, but the newly set values need to appear on the parent page. This is where I am stuck. How do I programmatically render the parent page when I am in the child managed bean? Is there a way I can get a handle to the parent page and refresh it? Note: I'm using JSF 1.1. EDIT: After following the suggestion of resubmitting the form from JavaScript, I am seeing that the old form is getting resubmitted, which overwrites all of my changed values.

    Read the article

  • Improve speed of own debug visualizer for Delphi 2010

    - by netcodecz
    I wrote a Delphi debug visualizer for TDataSet to display the values of the current row; source + screenshot: http://delphi.netcode.cz/text/tdataset-debug-visualizer.aspx . It works, but it is very slow. I did some optimization (how to get the field names) but it still takes 10 seconds to show only 20 fields - very bad. The main problem seems to be the slow IOTAThread90.Evaluate used by the main code shown below; this call costs most of the time, and the line marked ** takes about 80% of it. FExpression is the name of the TDataSet in the debugged code. procedure TDataSetViewerFrame.mFillData; var iCount: Integer; I: Integer; // sw: TStopwatch; s: string; begin // sw := TStopwatch.StartNew; iCount := StrToIntDef(Evaluate(FExpression+'.Fields.Count'), 0); for I := 0 to iCount - 1 do begin s:= s + Format('%s.Fields[%d].FieldName+'',''+', [FExpression, I]); // FFields.Add(Evaluate(Format('%s.Fields[%d].FieldName', [FExpression, I]))); FValues.Add(Evaluate(Format('%s.Fields[%d].Value', [FExpression, I]))); //** end; if s<> '' then Delete(s, length(s)-4, 5); s := Evaluate(s); s:= Copy(s, 2, Length(s) -2); FFields.CommaText := s; { sw.Stop; s := sw.Elapsed; Application.MessageBox(Pchar(s), '');} end; Now I have no idea how to improve performance.

    Read the article

  • Simple Select Statement on MySQL Database Hanging

    - by AlishahNovin
    I have a very simple SQL select statement on a very large table that is non-normalized. (Not my design at all; I'm just trying to optimize while simultaneously trying to convince the owners of a redesign.) Basically, the statement is like this: SELECT FirstName, LastName, FullName, State FROM Activity Where (FirstName=@name OR LastName=@name OR FullName=@name) AND State=@state; Now, FirstName, LastName, FullName and State are all indexed as BTrees, but without a prefix - the whole column is indexed. The State column is a 2-letter state code. What I'm finding is this:
    * When @name = 'John Smith' and @state = '%', the search is really fast and yields results immediately.
    * When @name = 'John Smith' and @state = 'FL', the search takes 5 minutes (and usually this means the web service times out...).
    * When I remove the FirstName and LastName comparisons and only use the FullName and State, both cases above work very quickly.
    * When I keep the FirstName, LastName, FullName, and State searches but use LIKE for each, it works fast for @name='John Smith%' and @state='%', but slow for @name='John Smith%' and @state='FL'.
    * When I search against 'John Sm%' and @state='FL', the search finds results immediately.
    * When I search against 'John Smi%' and @state='FL', the search takes 5 minutes.
    Now, just to reiterate - the table is not normalized. 'John Smith' appears many, many times, as do many other users, because there is no reference to some form of users/people table. I'm not sure how many times a single user may appear, but the table itself has 90 million records. Again, not my design... What I'm wondering is - though there are many problems with this design, what is causing this specific problem? My guess is that the index trees (FirstName, LastName, FullName) are just so large that it takes a very long time to traverse them. Anyway, I appreciate anyone's help with this. Like I said, I'm working on convincing them of a redesign, but in the meantime, if someone could help me figure out what the exact problem is, that'd be fantastic.

    Read the article

  • How to temporarily replace one primitive type with another when compiling to different targets in c#

    - by Keith
    How can I easily/quickly replace floats with doubles (for example) when compiling to two different targets that use these two particular primitive types? Discussion: I have a large amount of C# code under development that I need to compile to use float, double or decimal alternatively, depending on the use case of the target assembly. Using something like “class MYNumber : Double”, so that it is only necessary to change one line of code, does not work, as Double is sealed, and obviously there is no #define in C#. Peppering the code with #if/#else statements is also not an option; there is just too much supporting math/operator code using these particular primitive types. I am at a loss on how to do this apparently simple task. Thanks! Edit: Just a quick comment in relation to boxing, mentioned in Kyle's reply: unfortunately I need to avoid boxing, mainly since floats are chosen when maximum speed is required and decimals when maximum accuracy is the priority (and taking the 20x+ performance hit is acceptable). Boxing would probably rule out decimals as a valid choice and defeat the purpose somewhat. Edit2: For reference, for those suggesting generics as a possible answer to this question: note that there are many issues which count generics out (at least for our needs). For an overview and further references see Using generics for calculations.
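
    One workaround sometimes suggested for this kind of per-target type switch is a C# using alias directive guarded by a compilation symbol; this is only a sketch, the USE_DECIMAL/USE_DOUBLE symbols are hypothetical names that would have to be defined per build configuration, and its known drawback is that the alias must be repeated at the top of every file that uses it.

      #if USE_DECIMAL
      using Real = System.Decimal;
      #elif USE_DOUBLE
      using Real = System.Double;
      #else
      using Real = System.Single;
      #endif

      // Elsewhere in the same file, code is written once against the alias:
      // Real x = (Real)1.5; Real y = x * 2;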

    Read the article

  • Automatic Adjusting Range Table

    - by Bradford
    I have a table with a start date, an end date, and a few other additional columns. On input of a new record, I want to automatically adjust any overlapping date ranges (shrinking them to allow for the new input). I also want to ensure that no overlapping records can accidentally be inserted into this table. I'm using Oracle and Java for my application code. How should I enforce the prevention of overlapping date ranges and also allow for automatically adjusting overlapping ranges? Should I create an AFTER INSERT trigger, with a dbms_lock to serialize access, to prevent the overlapping data, and then apply the logic in Java to auto-adjust everything? Or should that part be in a PL/SQL stored procedure call? This is something that we need for a couple of other tables, so it'd be nice to abstract. If anyone has something like this already written, please share :) I did find this reference: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:474221407101 Here's an example of how each of the 4 overlapping cases should be handled for adjustment on insert:
      = Example 1 =
      In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y')
      Input (20, 50, 'A') gives (0, 10, 'X') **(20, 50, 'A') **(51, 100, 'Z') (200, 500, 'Y')
      = Example 2 =
      In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y')
      Input (40, 80, 'A') gives (0, 10, 'X') **(30, 39, 'Z') **(40, 80, 'A') **(81, 100, 'Z') (200, 500, 'Y')
      = Example 3 =
      In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y')
      Input (50, 120, 'A') gives (0, 10, 'X') **(30, 49, 'Z') **(50, 120, 'A') (200, 500, 'Y')
      = Example 4 =
      In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y')
      Input (20, 120, 'A') gives (0, 10, 'X') **(20, 120, 'A') (200, 500, 'Y')
    The algorithm is as follows: given range = g; input range = i; output range set = o
      if i.start <= g.start
        if i.end >= g.end: o_1 = i
        else: o_1 = i; o_2 = (i.end + 1, g.end)
      else
        if i.end >= g.end: o_1 = (g.start, i.start - 1); o_2 = i
        else: o_1 = (g.start, i.start - 1); o_2 = i; o_3 = (i.end + 1, g.end)
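
    A rough Java sketch of that splitting rule (Range is a hypothetical value class invented here for illustration; start and end are inclusive): given an existing range g and an overlapping input i, it returns the ranges that should replace g, reproducing the four examples - for Example 2, adjust(new Range(30, 100, "Z"), new Range(40, 80, "A")) yields (30, 39, 'Z'), (40, 80, 'A'), (81, 100, 'Z').

      import java.util.ArrayList;
      import java.util.List;

      public final class RangeAdjuster {
          // Hypothetical value type for illustration; start and end are inclusive.
          public record Range(long start, long end, String value) {}

          // Splits an existing range g around an overlapping input range i,
          // following the four cases described in the question.
          public static List<Range> adjust(Range g, Range i) {
              List<Range> out = new ArrayList<>();
              if (i.start() > g.start()) {
                  out.add(new Range(g.start(), i.start() - 1, g.value())); // keep g's left remainder
              }
              out.add(i);                                                  // the new range itself
              if (i.end() < g.end()) {
                  out.add(new Range(i.end() + 1, g.end(), g.value()));     // keep g's right remainder
              }
              return out;
          }
      }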

    Read the article

  • Call dll - pcshll32.dll using delphi

    - by Davis
    Hi, I need to call the hllapi function of pcshll32.dll from Delphi. It works with IBM Personal Communications. How can I convert the code below to Delphi? Thanks! The EHLLAPI entry point (hllapi) is always called with the following four parameters: EHLLAPI Function Number (input), Data Buffer (input/output), Buffer Length (input/output), Presentation Space Position (input); Return Code (output). The prototype for IBM Standard EHLLAPI is: long hllapi (LPWORD, LPSTR, LPWORD, LPWORD); The prototype for IBM Enhanced EHLLAPI is: long hllapi (LPINT, LPSTR, LPINT, LPINT); Each parameter is passed by reference, not by value. Thus each parameter to the function call must be a pointer to the value, not the value itself. For example, the following is a correct example of calling the EHLLAPI Query Session Status function: #include "hapi_c.h" struct HLDQuerySessionStatus QueryData; int Func, Len; long Rc; memset(&QueryData, 0, sizeof(QueryData)); // Init buffer QueryData.qsst_shortname = 'A'; // Session to query Func = HA_QUERY_SESSION_STATUS; // Function number Len = sizeof(QueryData); // Len of buffer Rc = 0; // Unused on input hllapi(&Func, (char *)&QueryData, &Len, &Rc); // Call EHLLAPI if (Rc != 0) { // Check return code // ...Error handling } All the parameters in the hllapi call are pointers, and the return code of the EHLLAPI function is returned in the value of the 4th parameter, not as the value of the function.
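
    A hedged sketch of what the Delphi import might look like, based only on the Enhanced prototype quoted above (all four parameters passed by reference, the data buffer as a character pointer); the stdcall calling convention and the exported name are assumptions and should be verified against the actual pcshll32.dll.

      // Sketch only - calling convention and export name are assumptions to verify.
      function hllapi(var Func: Integer; Data: PAnsiChar;
                      var Len: Integer; var Rc: Integer): LongInt; stdcall;
        external 'pcshll32.dll' name 'hllapi';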

    Read the article

  • Per-pixel per-component alpha blending in Windows

    - by Crend King
    I have a 24-bit bitmaps with R, G, B color channels and a 24-bit bitmap with R, G, B alpha channels. I want to alpha blend the first bitmap to a HDC in GDI or RenderTarget in Direct2D with the alpha channels respectively. For example, suppose for one pixel, the bitmap color is (192, 192, 192), the HDC color is (0, 255, 255) and the alpha channels are (30, 40, 50). The final HDC color should be (22, 245, 242). I know I can BitBlt the HDC to a memory HDC first, do alpha blending by manually calculating the color of each pixel and finally BitBlt back. I just want to avoid the additional blitting and leave APIs do their job (faster since they are in kernel space). The first idea comes to my mind is to split the source bitmap into 3 red-only, green-only and blue-only 8-bit bitmaps, do normal alpha blending, then composite the 3 output bitmaps into the HDC. But I don't find a way to do the splitting and composition natively in Windows (would Direct2D layer help?). Also, the splitting and compositing may require many additional copying. The performance overhead would be too high. Or maybe do the alpha blending in 3 passes. Each pass apply the blending for one channel, while maintaining the other 2 unchanged. Thanks for any comment. EDIT: I found this question, and the answer should be good reference to this problem. However, besides AC_SRC_OVER, there is no other blending operation supported. Why don't Microsoft improve their API?

    Read the article

  • Spinner in Android crashing when visibility changes while handling OnClick in a button

    - by Dave George
    I have a spinner in the UI which I want to hide when I handle the onClick method for a button, but the application is crashing every time. Is it that I can't use setVisibility(View.GONE) on spinners? (It is not documented anywhere.) If I comment it out, the application runs fine. I am getting a NullPointerException, and I am using a RelativeLayout. Also, can I do this: public void onItemSelected(AdapterView<?> arg0, View view, int pos, long arg3) { // TODO Auto-generated method stub Toast.makeText(SpinnerActivity.this,selected , Toast.LENGTH_SHORT).show(); spinner.setVisibility(View.GONE); } } Spinner code for reference: itemsCity=getResources().getStringArray(R.array.cities_array); ArrayAdapter<CharSequence> adapter = ArrayAdapter.createFromResource( this, R.array.cities_array, android.R.layout.simple_spinner_item); adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); spinnerCity.setAdapter(adapter); And here is the button code: private class BtClickListner implements View.OnClickListener{ @Override public void onClick(View v) { essEditText.setVisibility(View.GONE); spinnerCity.setEnabled(false); // Getting exception here // Also tried spinnerCity.setVisibility(View.GONE); // Exception SameBt.setVisibility(View.GONE); // This is the same button whose event I am handling, but it does let me change its property at run time. }

    Read the article

  • Changing App.config at Runtime

    - by born to hula
    I'm writing a test WinForms / C# / .NET 3.5 application for the system we're developing, and we ran into the need to switch between .config files at runtime, but this is turning out to be a nightmare. Here's the scene: the WinForms application is aimed at testing a web app divided into 5 subsystems. The test process works with messages being sent between the subsystems, and for this process to be successful each subsystem has to have its own .config file. For my test application I wrote 5 separate configuration files. I wish I were able to switch between these 5 files at runtime, but the problem is: I can programmatically edit the application .config file any number of times, but the changes will only take effect once. I've been searching for a long time for a way to address this problem, but I still haven't been successful. I know the problem definition may be a bit confusing, but I would really appreciate it if someone helped me. Thanks in advance! --- UPDATE 01-06-10 --- There's something I didn't mention before. Originally, our system is a web application with WCF calls between each subsystem. For performance testing reasons (we're using ANTS 4), we had to create a local copy of the assemblies and reference them from the test project. It may sound a bit wrong, but we couldn't find a satisfying way to measure the performance of a remote application. --- End Update --- Here's what I'm doing: public void UpdateAppSettings(string key, string value) { XmlDocument xmlDoc = new XmlDocument(); xmlDoc.Load(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile); foreach (XmlElement item in xmlDoc.DocumentElement) { foreach (XmlNode node in item.ChildNodes) { if (node.Name == key) { node.Attributes[0].Value = value; break; } } } xmlDoc.Save(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile); System.Configuration.ConfigurationManager.RefreshSection("section/subSection"); }

    Read the article

  • Silverlight HttpWebRequest.Create hangs inside async block

    - by jack2010
    I am trying to prototype a Rpc Call to a JBOSS webserver from Silverlight (4). I have written the code and it is working in a console application - so I know that Jboss is responding to the web request. Porting it to silverlight 4, is causing issues: let uri = new Uri(queryUrl) // this is the line that hangs let request : HttpWebRequest = downcast WebRequest.Create(uri) request.Method <- httpMethod; request.ContentType <- contentType It may be a sandbox issue, as my silverlight is being served off of my file system and the Uri is a reference to the localhost - though I am not even getting an exception. Thoughts? Thx UPDATE 1 I created a new project and ported my code over and now it is working; something must be unstable w/ regard to the F# Silverlight integration still. Still would appreciate thoughts on debugging the "hanging" web create in the old model... UPDATE 2 let uri = Uri("http://localhost./portal/main?isSecure=IbongAdarnaNiFranciscoBalagtas") // this WebRequest.Create works fine let req : HttpWebRequest = downcast WebRequest.Create(uri) let Login = async { let uri = new Uri("http://localhost/portal/main?isSecure=IbongAdarnaNiFranciscoBalagtas") // code hangs on this WebRequest.Create let request : HttpWebRequest = downcast WebRequest.Create(uri) return request } Login |> Async.RunSynchronously I must be missing something; the Async block works fine in the console app - is it not allowed in the Silverlight App?

    Read the article

  • Doubts About Core Data NSManagedObject Deep Copy

    - by Jigzat
    Hello everyone, I have a situation where I must copy one NSManagedObject from the main context into an editing context. It sounds unnecessary to most people, judging by similar situations described on Stack Overflow, but it looks like I need it. In my app there are many views in a tab bar, and every view handles different information that is related to the other views. I think I need multiple MOCs, since the user may jump from tab to tab and leave unsaved changes in some tab; if data is then saved in some other tab/view, the changes in the rest of the views are saved without the user's consent and, in the worst-case scenario, the app crashes. For adding new information I got by with using an adding MOC and then merging the changes in both MOCs, but for editing it is not that easy. I saw a similar situation here on Stack Overflow, but the app crashes since my data model doesn't seem to use NSMutableSet for the relationships (I don't think I have a many-to-many relationship, just one-to-many). I think it can be modified so I can retrieve the relationships as if they were attributes: for (NSString *attr in relationships) { [cloned setValue:[source valueForKey:attr] forKey:attr]; } but I don't know how to merge the changes of the cloned and original objects. I think I could just delete the object from the main context, then merge both contexts and save the changes in the main context, but I don't know if that is the right way to do it. I'm also concerned about database integrity, since I'm not sure that the inverse relationships will keep the same reference to the cloned object as if it were the original one. Can someone please enlighten me about this?

    Read the article
