Search Results

Search found 6365 results on 255 pages for 'anthony short'.

  • Invalid cast exception

    - by user127147
    I have a simple application to store address details and edit them. I have been away from VB for a few years now and need to refresh my knowledge while working to a tight deadline.

    I have a general Sub responsible for displaying a form where the user can add contact details (by pressing the Add button) and edit them (by pressing the Edit button). This Sub is stored in a class, Contact. The way it is supposed to work is that there is a list with all the contacts, and when a new contact is added a new entry is displayed. If the user wants to edit a given entry, he or she selects it and presses the Edit button.

        Public Sub Display()
            Dim C As New Contact
            C.Cont = InputBox("Enter a title for this contact.")
            C.Fname = frmAddCont.txtFName.Text
            C.Surname = frmAddCont.txtSName.Text
            C.Address = frmAddCont.txtAddress.Text
            frmStart.lstContact.Items.Add(C.Cont.ToString)
        End Sub

    I call it from the form responsible for adding new contacts with

        Dim C As New Contact
        C.Display()

    and it works just fine. However, when I try to do something similar using the Edit button I get the error "Unable to cast object of type 'System.String' to type 'AddressBook.Contact'.":

        Dim C As Contact
        If lstContact.SelectedItem IsNot Nothing Then
            C = lstContact.SelectedItem()
            C.Display()
        End If

    I think it may be something simple; however, I wasn't able to fix it, and given the short time I have, I decided to ask for help here.

    I have updated my class with advice from other members, and here is the final version (there are some problems, however). When I click on the Edit button it displays only the input box for the title of the contact, and actually adds another entry to the list with the previous data for first name, second name, etc.

        Public Class Contact
            Public Contact As String
            Public Fname As String
            Public Surname As String
            Public Address As String
            Private myCont As String

            Public Property Cont()
                Get
                    Return myCont
                End Get
                Set(ByVal value)
                    myCont = Value
                End Set
            End Property

            Public Overrides Function ToString() As String
                Return Me.Cont
            End Function

            Sub NewContact()
                FName = frmAddCont.txtFName.ToString
                frmStart.lstContact.Items.Add(FName)
                frmAddCont.Hide()
            End Sub

            Public Sub Display()
                Dim C As New Contact
                C.Cont = InputBox("Enter a title for this contact.")
                C.Fname = frmAddCont.txtFName.Text
                C.Surname = frmAddCont.txtSName.Text
                C.Address = frmAddCont.txtAddress.Text
                'frmStart.lstContact.Items.Add(C.Cont.ToString)
                frmStart.lstContact.Items.Add(C)
            End Sub
        End Class

    Read the article

  • Separate specific #ifdef branches

    - by detly
    In short: I want to generate two different source trees from the current one, based only on one preprocessor macro being defined and another being undefined, with no other changes to the source. If you are interested, here is my story...

    In the beginning, my code was clean. Then we made a new product, and yea, it was better. But the code saw only the same peripheral devices, so we could keep the same code. Well, almost. There was one little condition that needed to be changed, so I added:

        #if defined(PRODUCT_A)
        condition = checkCat();
        #elif defined(PRODUCT_B)
        condition = checkCat() && checkHat();
        #endif

    ...to one and only one source file. In the general all-source-files-include-this header file, I had:

        #if !(defined(PRODUCT_A)||defined(PRODUCT_B))
        #error "Don't make me replace you with a small shell script. RTFM."
        #endif

    ...so that people couldn't compile it unless they explicitly defined a product type. All was well. Oh... except that modifications were made, components changed, and since the new hardware worked better we could significantly re-write the control systems. Now when I look upon the face of the code, there are more than 60 separate areas delineated by either:

        #ifdef PRODUCT_A
        ...
        #else
        ...
        #endif

    ...or the same, but for PRODUCT_B. Or even:

        #if defined(PRODUCT_A)
        ...
        #elif defined(PRODUCT_B)
        ...
        #endif

    And of course, sometimes sanity took a longer holiday and:

        #ifdef PRODUCT_A
        ...
        #endif
        #ifdef PRODUCT_B
        ...
        #endif

    These conditions wrap anywhere from one to two hundred lines (you'd think that the last one could be done by switching header files, but the function names need to be the same). This is insane. I would be better off maintaining two separate product-based branches in the source repo and porting any common changes. I realise this now.

    Is there something that can generate the two different source trees I need, based only on PRODUCT_A being defined and PRODUCT_B being undefined (and vice-versa), without touching anything else (i.e. no header inclusion, no macro expansion, etc.)?
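
    (One tool built for exactly this kind of partial preprocessing is unifdef: it evaluates only the macros you name and passes every other directive, include and macro through untouched. A minimal sketch, assuming unifdef is installed and the sources live under src/:)

        # Generate the PRODUCT_A tree: PRODUCT_A defined, PRODUCT_B undefined.
        # -m rewrites files in place, so work on a copy of the tree.
        cp -r src src-product-a
        find src-product-a -name '*.[ch]' -exec unifdef -m -DPRODUCT_A -UPRODUCT_B {} +

        # ...and vice-versa for the PRODUCT_B tree.
        cp -r src src-product-b
        find src-product-b -name '*.[ch]' -exec unifdef -m -DPRODUCT_B -UPRODUCT_A {} +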

    Read the article

  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi, I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native winsock and I use glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom.

    My problem is: this setup eats 25% CPU time on a 3GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops down to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25%.

    Here are some code excerpts (sorry for the printf's ;) ):

        /* bind */
        void UDPInterface::bindToPort(unsigned short port)
        {
            struct sockaddr_in target;
            WSADATA wsaData;

            target.sin_family = AF_INET;
            target.sin_port = htons(port);
            target.sin_addr.s_addr = 0;

            if ( WSAStartup ( 0x0202, &wsaData ) )
            {
                printf("WSAStartup failed!\n");
                exit(0); // :)
                WSACleanup();
            }

            sock = socket( AF_INET, SOCK_DGRAM, 0 );
            if (sock == INVALID_SOCKET)
            {
                printf("invalid socket!\n");
                exit(0);
            }

            if (bind(sock, (struct sockaddr*) &target, sizeof(struct sockaddr_in) ) == SOCKET_ERROR)
            {
                printf("failed to bind to port!\n");
                exit(0);
            }

            printf("[UDPInterface::bindToPort] listening on port %i\n", port);
        }

        /* read */
        bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
        {
            recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
            /* process packet... */
        }

        /* glibmm connect */
        Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
        Glib::signal_io().connect( sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN );

    I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into nonblocking mode, and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU effect? It's not clear to me. Whether I use glib to access the socket or call recvfrom() directly doesn't seem to make much difference, since CPU is used up before any packet arrives and the read handler gets invoked. Also, the glibmm docs state that it's OK to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()).

    I've tried compiling the program with -pg and created a per-function CPU usage report with gprof. This wasn't useful, because the time is not spent in my program but in some external glib/glibmm DLL.

    Read the article

  • How does Sentry aggregate errors?

    - by Hugo Rodger-Brown
    I am using Sentry (in a django project), and I'd like to know how I can get the errors to aggregate properly. I am logging certain user actions as errors, so there is no underlying system exception, and am using the culprit attribute to set a friendly error name. The message is templated, and contains a common message ("User 'x' was unable to perform action because 'y'"), but is never exactly the same (different users, different conditions).

    Sentry clearly uses some set of attributes under the hood to determine whether to aggregate errors as the same exception, but despite having looked through the code, I can't work out how. Can anyone short-cut my having to dig further into the code and tell me what properties I need to set in order to manage aggregation as I would like?

    [UPDATE 1: event grouping]

    This line appears in sentry.models.Group:

        class Group(MessageBase):
            """
            Aggregated message which summarizes a set of Events.
            """
            ...
            class Meta:
                unique_together = (('project', 'logger', 'culprit', 'checksum'),)
            ...

    Which makes sense - project, logger and culprit I am setting at the moment - the problem is checksum. I will investigate further; however, 'checksum' suggests binary equivalence, which is never going to work - it must be possible to group instances of the same exception with different attributes?

    [UPDATE 2: event checksums]

    The event checksum comes from the sentry.manager.get_checksum_from_event method:

        def get_checksum_from_event(event):
            for interface in event.interfaces.itervalues():
                result = interface.get_hash()
                if result:
                    hash = hashlib.md5()
                    for r in result:
                        hash.update(to_string(r))
                    return hash.hexdigest()
            return hashlib.md5(to_string(event.message)).hexdigest()

    Next stop - where do the event interfaces come from?

    [UPDATE 3: event interfaces]

    I have worked out that interfaces refer to the standard mechanism for describing data passed into sentry events, and that I am using the standard sentry.interfaces.Message and sentry.interfaces.User interfaces. Both of these will contain different data depending on the exception instance - and so a checksum will never match. Is there any way that I can exclude these from the checksum calculation? (Or at least the User interface value, as that has to be different - the Message interface value I could standardise.)

    [UPDATE 4: solution]

    Here are the two get_hash functions for the Message and User interfaces respectively:

        # sentry.interfaces.Message
        def get_hash(self):
            return [self.message]

        # sentry.interfaces.User
        def get_hash(self):
            return []

    Looking at these two, only the Message.get_hash interface will return a value that is picked up by the get_checksum_for_event method, and so this is the one that will be returned (hashed etc.). The net effect of this is that the checksum is evaluated on the message alone - which in theory means that I can standardise the message and keep the user definition unique.

    I've answered my own question here, but hopefully my investigation is of use to others having the same problem. (As an aside, I've also submitted a pull request against the Sentry documentation as part of this ;-))

    (Note to anyone using / extending Sentry with custom interfaces - if you want to avoid your interface being used to group exceptions, return an empty list.)
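
    (To illustrate that last note, a minimal sketch of a custom interface that opts out of grouping; the class name and fields are hypothetical, and this assumes a Sentry version with the same Interface base class as the code quoted above.)

        from sentry.interfaces import Interface

        class UserAction(Interface):
            """Hypothetical interface describing a user action."""

            def __init__(self, username, action):
                self.username = username
                self.action = action

            def get_hash(self):
                # An empty list contributes nothing to get_checksum_from_event,
                # so this interface never splits aggregation.
                return []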

    Read the article

  • How do I replace values within a data frame with a string in R?

    - by Arturito
    short version: How do I replace values within a data frame with a string found within another data frame?

    longer version: I'm a biologist working with many species of bees. I have a data set with many thousands of bees. Each row has a unique bee ID # along with all the relevant info about that specimen (date of capture, GPS location, etc). The species information for each bee has not been entered because it takes a long time to ID them. When IDing, I end up with boxes of hundreds of bees, all of the same species. I enter these into a separate data frame.

    I am trying to write code that will update the original data file with species information (family, genus, species, sex, etc) as I ID the bees. Currently, in the original data file, the species info is blank and is interpreted as NA within R. I want to have R find all unique bee ID #'s and fill in the species info, but I am having trouble figuring out how to replace the NA values with a string (e.g. "Andrenidae").

    Here is a simple example of what I am trying to do:

        rawData <- data.frame(beeID=c(1:20), family=rep(NA,20))
        speciesInfo <- data.frame(beeID=seq(1,20,3), family=rep("Andrenidae",7))
        rawData[rawData$beeID == 4, "family"] <- speciesInfo[speciesInfo$beeID == 4, "family"]

    So, I am replacing things as I want, but with a number rather than the family name (a string). What I would eventually like to do is write a little loop to add in all the species info, e.g.:

        for (i in speciesInfo$beeID){
          rawData[rawData$beeID == i, "family"] <- speciesInfo[speciesInfo$beeID == i, "family"]
        }

    Thanks in advance for any advice! Cheers, Zak

    EDIT: I just noticed that the first two methods below add a new column each time, which would cause problems if I needed to add species info multiple times (which I typically do). For example:

        rawData <- data.frame(beeID=c(1:20), family=rep(NA,20))
        Andrenidae <- data.frame(beeID=seq(1,20,3), family=rep("Andrenidae",7))
        Halictidae <- data.frame(beeID=seq(1,20,3)+1, family=rep("Halictidae",7))

        # using join
        library(plyr)
        rawData <- join(rawData, Andrenidae, by = "beeID", type = "left")
        rawData <- join(rawData, Halictidae, by = "beeID", type = "left")

        # using merge
        rawData <- merge(x=rawData, y=Andrenidae, by='beeID', all.x=T, all.y=F)
        rawData <- merge(x=rawData, y=Halictidae, by='beeID', all.x=T, all.y=F)

    Is there a way to either collapse the columns so that I have one, unified data frame? Or a way to update the rawData rather than adding a new column each time? Thanks in advance!
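
    (A minimal sketch of an in-place update that adds no columns, assuming beeID is unique within rawData; base R only. The original replacement produced a number because speciesInfo$family is a factor, hence the as.character() below.)

        # Write each identified bee's family into the matching row of rawData.
        update_family <- function(rawData, speciesInfo) {
          idx <- match(speciesInfo$beeID, rawData$beeID)
          rawData$family[idx] <- as.character(speciesInfo$family)
          rawData
        }

        rawData <- update_family(rawData, Andrenidae)
        rawData <- update_family(rawData, Halictidae)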

    Read the article

  • Precise explanation of JavaScript <-> DOM circular reference issue

    - by Joey Adams
    One of the touted advantages of jQuery.data versus raw expando properties (arbitrary attributes you can assign to DOM nodes) is that jQuery.data is "safe from circular references and therefore free from memory leaks". An article from Google titled "Optimizing JavaScript code" goes into more detail:

        The most common memory leaks for web applications involve circular references
        between the JavaScript script engine and the browsers' C++ objects'
        implementing the DOM (e.g. between the JavaScript script engine and Internet
        Explorer's COM infrastructure, or between the JavaScript engine and Firefox
        XPCOM infrastructure).

    It lists two examples of circular reference patterns:

        DOM element → event handler → closure scope → DOM
        DOM element → via expando → intermediary object → DOM element

    However, if a reference cycle between a DOM node and a JavaScript object produces a memory leak, doesn't this mean that any non-trivial event handler (e.g. onclick) will produce such a leak? I don't see how it's even possible for an event handler to avoid a reference cycle, because the way I see it:

        The DOM element references the event handler.
        The event handler references the DOM (either directly or indirectly).

    In any case, it's almost impossible to avoid referencing window in any interesting event handler, short of writing a setInterval loop that reads actions from a global queue.

    Can someone provide a precise explanation of the JavaScript ↔ DOM circular reference problem? Things I'd like clarified:

        What browsers are affected? A comment in the jQuery source specifically mentions IE6-7, but the Google article suggests Firefox is also affected.

        Are expando properties and event handlers somehow different concerning memory leaks? Or are both of these code snippets susceptible to the same kind of memory leak?

            // Create an expando that references its own element.
            var elem = document.getElementById('foo');
            elem.myself = elem;

            // Create an event handler that references its own element.
            var elem = document.getElementById('foo');
            elem.onclick = function() {
                elem.style.display = 'none';
            };

        If a page leaks memory due to a circular reference, does the leak persist until the entire browser application is closed, or is the memory freed when the window/tab is closed?
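
    (For reference, a minimal sketch of the classic leak-avoidance pattern from the IE6 era: the handler reads the element from 'this' instead of closing over it, and the local variable is nulled once wiring is done, so no JS → DOM reference survives in the closure.)

        var elem = document.getElementById('foo');
        elem.onclick = function () {
            // 'this' is the element the handler fired on, so the closure
            // need not capture 'elem'.
            this.style.display = 'none';
        };
        elem = null; // break the JS-side half of the would-be cycle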

    Read the article

  • Nested Transaction issues within custom Windows Service

    - by pdwetz
    I have a custom Windows Service I recently upgraded to use TransactionScope for nested transactions. It worked fine locally on my old dev machine (XP SP3) and on a test server (Server 2003). However, it fails on my new Windows 7 machine as well as on Server 2008. It was targeting the 2.0 framework; I tried targeting 3.5 instead, but it still fails.

    The strange part is really in how it fails; no exception is thrown. The service itself merely times out. I added tracing code, and it fails when opening the connection for database lookup #2 below. I also enabled tracing for System.Transactions; it literally cuts out partway while writing the block for the failed connection. We ran a SQL trace, but only the first lookup shows up. I put in code traces, and it gets to the trace the line before the second lookup, but nothing after.

    I've had the same experience hitting two different SQL Servers (both are SQL 2005 running on Server 2003). The connection string is utilizing a SQL account (not Windows integration). All connections are against the same database in this case, but given the nature of the code it is being escalated to MSDTC.

    Here's the basic code structure:

        TransactionOptions options = new TransactionOptions();
        options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
        using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew, options))
        {
            // Database lookup #1

            TransactionOptions options = new TransactionOptions();
            options.IsolationLevel = Transaction.Current != null
                ? Transaction.Current.IsolationLevel
                : System.Transactions.IsolationLevel.ReadCommitted;
            using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, options))
            {
                // Database lookup #2; fails on connection.Open()
                // Database save (never reached)
                scope.Complete();
            }

            scope.Complete();
        }

    My local firewall is disabled. The service normally runs using Network Service, but I also tried my user account (same results).

    The short of it is that I use the same general technique widely in my web applications and haven't had any issues. I pulled out the code and ran it fine within a local Windows Forms application. If anyone has any additional debugging ideas (or, even better, solutions) I'd love to hear them.

    Read the article

  • Ideas for designing an automated content tagging system needed

    - by Benjamin Smith
    I am currently designing a website that, amongst other things, is required to display and organise small amounts of text content (mainly quotes, article stubs, etc.). I currently have a database with 250,000+ items and need to come up with a method of tagging each item with relevant tags which will eventually allow for easy searching/browsing of the content for users.

    A very simplistic idea I have (and one that I believe is employed by some sites that I have been looking to for inspiration (http://www.brainyquote.com/quotes/topics.html)) is to simply search the database for certain words or phrases and use these words as tags for the content. This can easily be extended so that if, for example, a user wanted to show all items with a theme of love, then I would just return a list of items with words and phrases relating to this theme.

    This would not be hard to implement but does not provide very good results. For example, if I were to search for the month 'May' in the database with the aim of then classifying the items returned as relating to the topic of Spring, then I would get back all occurrences of the word May, regardless of the semantic meaning. Another shortcoming of this method is that I believe it would be quite hard to automate the process at any large scale.

    What I really require is a library that can take an item, break it down and analyse the semantic meaning, and also return a list of tags that would correctly classify the item. I know this is a lot to ask, and I have a feeling I will end up reverting to the aforementioned method, but I just thought I should ask if anyone knew of any pre-existing solution. I think that as the items in the database are short, it is probably quite a hard task to analyse any meaning from them; however, I may be mistaken.

    Another path to possibly go down would be to use something like Amazon Mechanical Turk to outsource the task, which may produce good results but would be expensive. Eventually I would like users to be able to (and want to!) tag content and to vote for the most relevant tags, possibly using a gamification mechanic as motivation; however, this is some way down the line. A temporary fix may be the best thing if this were the route I decided to go down, as I could use the rough results I got as the starting point for a more in-depth solution.

    If you've read this far, thanks for sticking with me. I know I'm spitballing, but any input would be really helpful. Thanks.

    Read the article

  • Runtime.exec causes duplicate JVM to hang indefinitely until killed (Solaris 10)

    - by John
    All,

    We are running a J2EE application on WebLogic Server 9.2 MP2 with a JRockit 64-bit JVM (27.3.1) on Solaris 10. We use Runtime.exec to call an executable called jfmerge to create PDF documents. We have found that in Solaris, when Runtime.exec is called, a duplicate JVM is temporarily spawned to kick off the jfmerge process. While this is inefficient (our JVM is 5 GB, thus the duplicated shell JVM is also 5 GB), the major problem lies in the fact that when there is heavy load on this functionality (PDF generation) in our application, sometimes the duplicated JVM never exits.

    When the JVM hangs, the servers develop large issues (extreme application slowness and terminated user sessions) as the entire duplicate JVM gets all of its 5 GB of process size written to disk swap. We have noted the following hung thread correlated with a hung JVM process until the process is manually killed:

        "[STUCK] ExecuteThread: '17' for queue: 'weblogic.kernel.Default (self-tuning)'" id=3463 idx=0x158 tid=3460 prio=1 alive, in native, daemon
            at jrockit/io/FileNativeIO.readBytesPinned(Ljava/io/FileDescriptor;[BII)I(Native Method)
            at jrockit/io/FileNativeIO.readBytes(FileNativeIO.java:30)
            at java/io/FileInputStream.readBytes([BII)I(FileInputStream.java)
            at java/io/FileInputStream.read(FileInputStream.java:194)
            at java/lang/UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:227)
            at java/io/BufferedInputStream.fill(BufferedInputStream.java:218)
            at java/io/BufferedInputStream.read(BufferedInputStream.java:235)
            ^-- Holding lock: java/io/BufferedInputStream@0xfffffffec6510470[thin lock]
            at gov/v3/common/formgeneration/sessionbean/FormsBean.getProcessStatus(FormsBean.java:809)
            at gov/v3/common/formgeneration/sessionbean/FormsBean.createPDF(FormsBean.java:750)
            at gov/v3/common/formgeneration/sessionbean/FormsBean.getTemplateDetails(FormsBean.java:450)
            at gov/v3/common/formgeneration/sessionbean/FormsBean.generateSinglePDF(FormsBean.java:1371)
            at gov/v3/common/formgeneration/sessionbean/FormsBean.generatePDF(FormsBean.java:263)
            at gov/v3/common/formgeneration/sessionbean/FormsBean.endorseDocument(FormsBean.java:2377)
            at gov/v3/common/formgeneration/sessionbean/Forms_qaco28_EOImpl.endorseDocument(Forms_qaco28_EOImpl.java:214)
            at gov/v3/delegates/common/FormsAndNoticesDelegate.endorseDocument(FormsAndNoticesDelegate.java:128)
            at gov/v3/actions/common/EndorseDocumentAction.executeRequest(EndorseDocumentAction.java:68)
            at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.dispatchToExecuteMethod(V3CommonDispatchAction.java:532)
            at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.executeBaseAction(V3CommonDispatchAction.java:336)
            at gov/v3/fwk/controller/struts/action/V3BaseDispatchAction.execute(V3BaseDispatchAction.java:69)
            at org/apache/struts/action/RequestProcessor.processActionPerform(RequestProcessor.java:484)
            at gov/v3/fwk/controller/struts/requestprocessor/V3TilesRequestProcessor.processActionPerform(V3TilesRequestProcessor.java:384)
            at org/apache/struts/action/RequestProcessor.process(RequestProcessor.java:274)
            at org/apache/struts/action/ActionServlet.process(ActionServlet.java:1482)
            at org/apache/struts/action/ActionServlet.doGet(ActionServlet.java:507)
            at gov/v3/fwk/controller/struts/servlet/V3ControllerServlet.doGet(V3ControllerServlet.java:110)
            at javax/servlet/http/HttpServlet.service(HttpServlet.java:743)
            at javax/servlet/http/HttpServlet.service(HttpServlet.java:856)
            at weblogic/servlet/internal/StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
            at weblogic/servlet/internal/StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
            at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:283)
            at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:175)
            at weblogic/servlet/internal/WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3231)
            at weblogic/security/acl/internal/AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
            at weblogic/security/service/SecurityManager.runAs(SecurityManager.java:121)
            at weblogic/servlet/internal/WebAppServletContext.securedExecute(WebAppServletContext.java:2002)
            at weblogic/servlet/internal/WebAppServletContext.execute(WebAppServletContext.java:1908)
            at weblogic/servlet/internal/ServletRequestImpl.run(ServletRequestImpl.java:1362)
            at weblogic/work/ExecuteThread.execute(ExecuteThread.java:209)
            at weblogic/work/ExecuteThread.run(ExecuteThread.java:181)
            at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)
        -- end of trace

    We would like to do a couple of things:

    1.) Prevent the spawning of a duplicate JVM, as we do not need any of its functions when executing the simple jfmerge executable, and it creates massive overhead.

    2.) In the short term, at least prevent this duplicate JVM from hanging indefinitely.

    Read the article

  • Assembly Load and loading the "sub-modules" dependencies - "cannot find the file specified"

    - by Ted
    There are several questions out there that ask the same question. However, the answers they received I cannot understand, so here goes.

    Similar questions: http://stackoverflow.com/questions/1874277/dynamically-load-assembly-and-manually-force-path-to-get-referenced-assemblies ; http://stackoverflow.com/questions/22012/loading-assemblies-and-its-dependencies-closed

    The question in short: I need to figure out how dependencies, i.e. references in my modules, can be loaded dynamically. Right now I am getting "The system cannot find the file specified" on assemblies referenced in my so-called modules. I cannot really get how to use the AssemblyResolve event...

    The longer version: I have one application, MODULECONTROLLER, that loads separate modules. These "separate modules" are located in well-known subdirectories, like

        appBinDir\Modules\Module1
        appBinDir\Modules\Module2

    Each directory contains all the DLLs that exist in the bin directory of those projects after a build. So the MODULECONTROLLER loads all the DLLs contained in those folders using this code:

        byte[] bytes = File.ReadAllBytes(dllFileFullPath);
        Assembly assembly = null;
        assembly = Assembly.Load(bytes);

    I am, as you can see, loading the byte[] array (so I don't lock the DLL files). Now, in for example MODULE1, I have a static reference called MyGreatXmlProtocol. The MyGreatXmlProtocol.dll then also exists in the directory appBinDir\Modules\Module1 and is loaded using the above code.

    When code in MODULE1 tries to use this MyGreatXmlProtocol, I get:

        Could not load file or assembly 'MyGreatXmlProtocol, Version=1.0.3797.26527, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.

    So, in a post (like this one) they say that:

        To my understanding reflection will load the main assembly and then search the
        GAC for the referenced assemblies; if it cannot find it there, you can then
        incorporate an AssemblyResolve event.

    First: is it really needed to use the AssemblyResolve event to make this work? Shouldn't my different MODULEs themselves load their DLLs, as they are statically referenced?

    Second: if AssemblyResolve is the way to go - how do I use it? I have attached a handler to the event, but I never get anything on MyGreatXmlProtocol...

    === EDIT ===

    Code regarding the AssemblyResolve event handler:

        public GUI()
        {
            InitializeComponent();
            AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);
            ...
        }

        //
        Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
        {
            Console.WriteLine(args.Name);
            return null;
        }

    Hope I wasn't too fuzzy =) Thx
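
    (For reference, a minimal sketch of a resolve handler that returns the dependency instead of null, assuming the module layout described above; the "Modules" path comes from the question, everything else is illustrative. Loading from bytes keeps the DLL files unlocked, matching the module-loading strategy.)

        // requires: using System; using System.IO; using System.Reflection;
        Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
        {
            // args.Name is a full name like
            // "MyGreatXmlProtocol, Version=1.0.3797.26527, Culture=neutral, ..."
            string fileName = new AssemblyName(args.Name).Name + ".dll";
            string modulesDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Modules");

            foreach (string dir in Directory.GetDirectories(modulesDir))
            {
                string candidate = Path.Combine(dir, fileName);
                if (File.Exists(candidate))
                    return Assembly.Load(File.ReadAllBytes(candidate));
            }

            return null; // not found: let the CLR raise the usual error
        }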

    Read the article

  • Instance caching in Objective C

    - by zoul
    Hello! I want to cache the instances of a certain class. The class keeps a dictionary of all its instances, and when somebody requests a new instance, the class tries to satisfy the request from the cache first.

    There is a small problem with memory management, though: the dictionary cache retains the inserted objects, so that they never get deallocated. I do want them to get deallocated, so I had to overload the release method, and when the retain count drops to one, I can remove the instance from the cache and let it get deallocated. This works, but I am not comfortable mucking around with the release method and find the solution overly complicated.

    I thought I could use some hashing class that does not retain the objects it stores. Is there such a class? The idea is that when the last user of a certain instance releases it, the instance would automatically disappear from the cache. NSHashTable seems to be what I am looking for, but the documentation talks about "supporting weak relationships in a garbage-collected environment." Does it also work without garbage collection?

    Clarification: I cannot afford to keep the instances in memory unless somebody really needs them; that is why I want to purge the instance from the cache when the last "real" user releases it.

    Better solution: This was on the iPhone; I wanted to cache some textures, and on the other hand I wanted to free them from memory as soon as the last real holder released them. The easier way to code this is through another class (let's call it TextureManager). This class manages the texture instances and caches them, so that subsequent calls for a texture with the same name are served from the cache. There is no need to purge the cache immediately as the last user releases the texture. We can simply keep the texture cached in memory, and when the device gets short on memory, we receive the low-memory warning and can purge the cache. This is a better solution, because the caching stuff does not pollute the Texture class, we do not have to mess with release, and there is even a higher chance for cache hits. The TextureManager can be abstracted into a ResourceManager, so that it can cache other data, not only textures.

    Read the article

  • Generate lags in R

    - by Btibert3
    Hi All,

    I hope this is basic; I just need a nudge in the right direction. I have read in a database table from MS Access into a data frame using RODBC. Here is the basic structure of what I read in:

        PRODID  PROD  Year  Week  QTY  SALES  INVOICE

    Here is the structure:

        str(data)
        'data.frame':   8270 obs. of 7 variables:
         $ PRODID  : int  20001 20001 20001 100001 100001 100001 100001 100001 100001 100001 ...
         $ PROD    : Factor w/ 1239 levels "1% 20qt Box",..: 335 335 335 128 128 128 128 128 128 128 ...
         $ Year    : int  2010 2010 2010 2009 2009 2009 2009 2009 2009 2010 ...
         $ Week    : int  12 18 19 14 15 16 17 18 19 9 ...
         $ QTY     : num  1 1 0 135 300 270 300 270 315 315 ...
         $ SALES   : num  15.5 0 -13.9 243 540 ...
         $ INVOICES: num  1 1 2 5 11 11 10 11 11 12 ...

    Here are the top few rows:

        head(data, n=10)
           PRODID           PROD Year Week QTY  SALES INVOICES
        1   20001      Dolie 12" 2010   12   1  15.46        1
        2   20001      Dolie 12" 2010   18   1   0.00        1
        3   20001      Dolie 12" 2010   19   0 -13.88        2
        4  100001 Cage Free Eggs 2009   14 135 243.00        5
        5  100001 Cage Free Eggs 2009   15 300 540.00       11
        6  100001 Cage Free Eggs 2009   16 270 486.00       11
        7  100001 Cage Free Eggs 2009   17 300 540.00       10
        8  100001 Cage Free Eggs 2009   18 270 486.00       11
        9  100001 Cage Free Eggs 2009   19 315 567.00       11
        10 100001 Cage Free Eggs 2010    9 315 569.25       12

    I simply want to generate lags for QTY, SALES and INVOICES for each product, but I am not sure where to start. I know R is great with time series, but I am not sure where to start. I have two questions:

    1 - I have the raw invoice data but have aggregated it for reporting purposes. Would it be easier if I didn't aggregate the data?

    2 - Regardless of aggregation or not, what functions will I need to loop over each product and generate the lags as I need them?

    In short, I want to loop over a set of records, calculate lags for a product (if possible), append the lags (as they apply) to the current record for each product, and write the results back to a table in my database for my reporting software to use.

    Any help you can provide will be greatly appreciated! Many thanks in advance,

    Brock
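
    (A minimal sketch of per-product lags in base R, assuming rows are ordered by product and time before the lags are taken; whether a lag of one row really means "previous week" depends on the aggregation, which is worth checking first.)

        # Order by product, then time.
        data <- data[order(data$PRODID, data$Year, data$Week), ]

        # Shift a vector down by one, padding with NA.
        lag1 <- function(x) c(NA, head(x, -1))

        # ave() applies the shift within each PRODID group.
        data$QTY.lag1      <- ave(data$QTY,      data$PRODID, FUN = lag1)
        data$SALES.lag1    <- ave(data$SALES,    data$PRODID, FUN = lag1)
        data$INVOICES.lag1 <- ave(data$INVOICES, data$PRODID, FUN = lag1)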

    Read the article

  • What would you do to make this code more "over-engineered"? [closed]

    - by Mez
    A friend and I got bored, and, long story short, decided to make an over-engineered FizzBuzz in PHP:

        <?php

        interface INumber
        {
            public function go();
            public function setNumber($i);
        }

        class FBNumber implements INumber
        {
            private $value;
            private $fizz;
            private $buzz;

            public function __construct($fizz = 3, $buzz = 5)
            {
                $this->setFizz($fizz);
                $this->setBuzz($buzz);
            }

            public function setNumber($i)
            {
                if (is_int($i)) {
                    $this->value = $i;
                }
            }

            private function setFizz($i)
            {
                if (is_int($i)) {
                    $this->fizz = $i;
                }
            }

            private function setBuzz($i)
            {
                if (is_int($i)) {
                    $this->buzz = $i;
                }
            }

            private function isFizz()
            {
                return ($this->value % $this->fizz == 0);
            }

            private function isBuzz()
            {
                return ($this->value % $this->buzz == 0);
            }

            private function isNeither()
            {
                return (!$this->isBuzz() AND !$this->isFizz());
            }

            private function isFizzBuzz()
            {
                return ($this->isFizz() OR $this->isBuzz());
            }

            private function fizz()
            {
                if ($this->isFizz()) {
                    return "Fizz";
                }
            }

            private function buzz()
            {
                if ($this->isBuzz()) {
                    return "Buzz";
                }
            }

            private function number()
            {
                if ($this->isNeither()) {
                    return $this->value;
                }
            }

            public function go()
            {
                return $this->fizz() . $this->buzz() . $this->number();
            }
        }

        class FizzBuzz
        {
            private $limit;
            private $number_class;
            private $numbers = array();

            function __construct(INumber $number_class, $limit = 100)
            {
                $this->number_class = $number_class;
                $this->limit = $limit;
            }

            private function collectNumbers()
            {
                for ($i = 1; $i <= $this->limit; $i++) {
                    $n = clone($this->number_class);
                    $n->setNumber($i);
                    $this->numbers[$i] = $n->go();
                    unset($n);
                }
            }

            private function printNumbers()
            {
                $return = '';
                foreach ($this->numbers as $number) {
                    $return .= $number . "\n";
                }
                return $return;
            }

            public function go()
            {
                $this->collectNumbers();
                return $this->printNumbers();
            }
        }

        $fb = new FizzBuzz(new FBNumber());
        echo $fb->go();

    In theory, what could we/would you do to make it even more "over-engineered"?

    Read the article

  • In what circumstances are instance variables declared as '_var' in 'use fields' private?

    - by Pedro Silva
    I'm trying to understand the behavior of the fields pragma, which I find poorly documented, regarding fields prefixed with underscores. This is what the documentation has to say about it:

        Field names that start with an underscore character are made private to
        the class and are not visible to subclasses. Inherited fields can be
        overridden but will generate a warning if used together with the -w
        switch.

    This is not consistent with its actual behavior, according to my test below. Not only are _-prefixed fields visible within a subclass, they are visible within foreign classes as well (unless I don't get what 'visible' means). Also, directly accessing the restricted hash works fine. Where can I find more about the behavior of the fields pragma, short of going to the source code?

        {
            package Foo;
            use strict;
            use warnings;
            use fields qw/a _b __c/;

            sub new {
                my ( $class ) = @_;
                my Foo $self = fields::new($class);
                $self->a = 1;
                $self->b = 2;
                $self->c = 3;
                return $self;
            }

            sub a : lvalue { shift->{a} }
            sub b : lvalue { shift->{_b} }
            sub c : lvalue { shift->{__c} }
        }

        {
            package Bar;
            use base 'Foo';
            use strict;
            use warnings;
            use Data::Dumper;

            my $o = Bar->new;
            print Dumper $o;
            ##$VAR1 = bless({'_b' => 2, '__c' => 3, 'a' => 1}, 'Foo');

            $o->a = 4;
            $o->b = 5;
            $o->c = 6;
            print Dumper $o;
            ##$VAR1 = bless({'_b' => 5, '__c' => 6, 'a' => 4}, 'Foo');

            $o->{a} = 7;
            $o->{_b} = 8;
            $o->{__c} = 9;
            print Dumper $o;
            ##$VAR1 = bless({'_b' => 8, '__c' => 9, 'a' => 7}, 'Foo');
        }

    Read the article

  • Trying to run multiple HTTP requests in parallel, but being limited by Windows (registry)

    - by Nailuj
    I'm developing an application (WinForms, C#, .NET 4.0) where I access a lookup functionality from a 3rd party through a simple HTTP request. I call a URL with a parameter, and in return I get a small string with the result of the lookup. Simple enough.

    The challenge is, however, that I have to do lots of these lookups (a couple of thousands), and I would like to limit the time needed. Therefore I would like to run the requests in parallel (say 10-20). I use a ThreadPool to do this, and the short version of my code looks like this:

        public void startAsyncLookup(Action<LookupResult> returnLookupResult)
        {
            this.returnLookupResult = returnLookupResult;

            foreach (string number in numbersToLookup)
            {
                ThreadPool.QueueUserWorkItem(lookupNumber, number);
            }
        }

        public void lookupNumber(Object threadContext)
        {
            string numberToLookup = (string)threadContext;
            string url = @"http://some.url.com/?number=" + numberToLookup;

            WebClient webClient = new WebClient();
            Stream responseData = webClient.OpenRead(url);

            LookupResult lookupResult = parseLookupResult(responseData);
            returnLookupResult(lookupResult);
        }

    I fill up numbersToLookup (a List<String>) from another place, call startAsyncLookup, and provide it with a call-back function returnLookupResult to return each result. This works, but I found that I'm not getting the throughput I want.

    Initially I thought it might be the 3rd party having a poor system on their end, but I excluded this by trying to run the same code from two different machines at the same time. Each of the two took as long as one did alone, so I could rule that one out.

    A colleague then tipped me that this might be a limitation in Windows. I googled a bit and found, amongst others, this post saying that by default Windows limits the number of simultaneous requests to the same web server to 4 for HTTP 1.0 and to 2 for HTTP 1.1 (for HTTP 1.1 this is actually according to the specification (RFC 2068)).

    The same post referred to above also provided a way to increase these limits. By adding two registry values to [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings] (MaxConnectionsPerServer and MaxConnectionsPer1_0Server), I could control this myself.

    So, I tried this (set both to 20), restarted my computer, and tried to run my program again. Sadly though, it didn't seem to help any. I also kept an eye on the Resource Monitor (see screen shot) while running my batch lookup, and I noticed that my application (the one with the title blacked out) still was using only two TCP connections.

    So, the question is, why isn't this working? Is the post I linked to using the wrong registry values? Is this perhaps not possible to "hack" in Windows any longer (I'm on Windows 7)?

    Any ideas would be highly appreciated :) And just in case anyone should wonder, I have also tried different settings for MaxThreads on ThreadPool (everything from 10 to 100), and this didn't seem to affect my throughput at all, so the problem shouldn't be there either.
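
    (A minimal sketch of the .NET-side knob: the registry keys above configure WinInet, i.e. Internet Explorer-style clients, while System.Net enforces its own per-host limit, which defaults to 2 and is set via ServicePointManager. Set it once at startup, before the first request is issued; the value 20 matches the parallelism discussed above.)

        using System.Net;

        static void ConfigureHttpLimits()
        {
            // Raise the per-host connection cap for all subsequent
            // WebClient/HttpWebRequest calls in this process.
            ServicePointManager.DefaultConnectionLimit = 20;
        }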

    Read the article

  • OpenGL Shader Compile Error

    - by Tomas Cokis
    I'm having a bit of a problem with my code for compiling shaders, namely that they both register as failed compiles and no log is received.

    This is the shader compiling code:

        /* Make the shader */
        Uint size;
        GLchar* file;
        loadFileRaw(filePath, file, &size);
        const char * pFile = file;
        const GLint pSize = size;

        newCashe.shader = glCreateShader(shaderType);
        glShaderSource(newCashe.shader, 1, &pFile, &pSize);
        glCompileShader(newCashe.shader);

        GLint shaderCompiled;
        glGetShaderiv(newCashe.shader, GL_COMPILE_STATUS, &shaderCompiled);

        if (shaderCompiled == GL_FALSE)
        {
            ReportFiler->makeReport("ShaderCasher.cpp", "loadShader()", "Shader did not compile",
                "The shader " + filePath + " failed to compile, reporting the error - " +
                OpenGLServices::getShaderLog(newCashe.shader));
        }

    And these are the support functions:

        bool loadFileRaw(string fileName, char* data, Uint* size)
        {
            if (fileName != "") {
                FILE *file = fopen(fileName.c_str(), "rt");

                if (file != NULL) {
                    fseek(file, 0, SEEK_END);
                    *size = ftell(file);
                    rewind(file);

                    if (*size > 0) {
                        data = (char*)malloc(sizeof(char) * (*size + 1));
                        *size = fread(data, sizeof(char), *size, file);
                        data[*size] = '\0';
                    }

                    fclose(file);
                }
            }

            return data;
        }

        string OpenGLServices::getShaderLog(GLuint obj)
        {
            int infologLength = 0;
            int charsWritten = 0;
            char *infoLog;

            glGetShaderiv(obj, GL_INFO_LOG_LENGTH, &infologLength);

            if (infologLength > 0) {
                infoLog = (char *)malloc(infologLength);
                glGetShaderInfoLog(obj, infologLength, &charsWritten, infoLog);
                string log = infoLog;
                free(infoLog);
                return log;
            }

            return "<Blank Log>";
        }

    and the shaders I'm loading:

        void main(void)
        {
            gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
        }

        void main(void)
        {
            gl_Position = ftransform();
        }

    In short, I get

        From: ShaderCasher.cpp, In: loadShader(), Subject: Shader did not compile
        Message: The shader Data/Shaders/Standard/standard.vs failed to compile, reporting the error - <Blank Log>

    for every shader I compile.

    I've tried replacing the file reading with just a hard-coded string, but I get the same error, so there must be something wrong with how I'm compiling them. I have run and compiled example programs with shaders, so I doubt my drivers are the issue, but in any case I'm on an Nvidia 8600M GT.

    Can anyone help?

    Read the article

  • Stored procedure performance randomly plummets; trivial ALTER fixes it. Why?

    - by gWiz
    I have a couple of stored procedures on SQL Server 2005 that I've noticed will suddenly take a significantly long time to complete when invoked from my ASP.NET MVC app running in an IIS6 web farm of four servers. Normal, expected completion time is less than a second; unexpected anomalous completion time is 25-45 seconds. The problem doesn't seem to ever correct itself.

    However, if I ALTER the stored procedure (even if I don't change anything in the procedure, except to perhaps add a space to the script created by the SSMS Modify command), the completion time reverts to the expected completion time.

    IIS and SQL Server are running on separate boxes, both running Windows Server 2003 R2 Enterprise Edition. SQL Server is Standard Edition. All machines have dual Xeon E5450 3GHz CPUs and 4GB RAM. SQL Server is accessed using its TCP/IP protocol over gigabit ethernet (not sure what physical medium).

    The problem is present from all web servers in the web farm. When I invoke the procedure from a query window in SSMS on my development machine, the procedure completes in normal time. This is strange because I was under the impression that SSMS used the same SqlClient driver as in .NET. When I point my development instance of the web app to the production database, I again get the anomalous long completion time. If my SqlCommand Timeout is too short, I get

        System.Data.SqlClient.SqlException: Timeout expired. The timeout period
        elapsed prior to completion of the operation or the server is not responding.

    Question: Why would performing ALTER on the stored procedure, without actually changing anything in it, restore the completion time to less than a second, as expected?

    Edit: To clarify, when the procedure is running slow for the app, it simultaneously runs fine in SSMS with the same parameters. The only difference I can discern is login credentials (next time I notice the behavior, I'll be checking from SSMS with the same creds). The ultimate goal is to get the procs to sustainably run with expected speed without requiring occasional intervention.

    Resolution: I wanted to update this question in case others are experiencing this issue. Following the leads of the answers below, I was able to consistently reproduce this behavior. In order to test, I utilize sp_recompile and pass it one of the susceptible sprocs. I then initiate a website request from my browser that will invoke the sproc with atypical parameters. Lastly, I initiate a website request to a page that invokes the sproc with typical parameters, and observe that the request does not complete because of a SQL timeout on the sproc invocation.

    To resolve this on SQL Server 2005, I've added OPTIMIZE FOR hints to my SELECT. The sprocs that were vulnerable all have the "all-in-one" pattern described in this article. This pattern is certainly not ideal but was a necessary trade-off given the timeframe for the project.
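
    (A minimal sketch of the hint described in the resolution, applied to a hypothetical "all-in-one" search procedure; the table, columns and values are illustrative. The hint pins the cached plan to representative parameter values instead of whatever values the first post-recompile call happens to use.)

        SELECT o.OrderId, o.CustomerId, o.Status
        FROM dbo.Orders AS o
        WHERE (@CustomerId IS NULL OR o.CustomerId = @CustomerId)
          AND (@Status     IS NULL OR o.Status     = @Status)
        OPTION (OPTIMIZE FOR (@CustomerId = NULL, @Status = 1));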

    Read the article

  • How do I find the Next Closest Date to today from a list of dates in a Plist on iOS?

    - by user1173823
    Situation: In short, I have a football schedule. I would like to use a custom cell which provides more info for only the next game date in the schedule.

    Issue: How do I find only the next closest game in the schedule (for iOS)?

    I've watched the WWDC 2013 video for "Solutions to Common Date and Time Issues"; however, this primarily applies to the Mac. I've searched numerous posts here, and some are close, but not what I need to find ONLY the next date from my list of dates in the schedule. From other posts I see how I can compare two specific dates, but this is not what I want to do. I want to find the next closest date that is equal to or after today from a list of dates. This is where I am now:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            //Populate the table from the plist
            NSDictionary *season = _schedContentArray[indexPath.section];
            NSArray *schedule = season[@"Schedule"];
            NSDictionary *game = schedule[indexPath.row];

            //find the closest game date after today's date ??
            NSString *gameDateStr = game[@"GameDate"];
            NSCalendar *calendar = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
            NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
            dateFormatter.calendar = calendar;
            [dateFormatter setDateFormat:@"MM/dd/yy"];
            NSDate *today = [NSDate date];
            NSDate *gameDate = [dateFormatter dateFromString:gameDateStr];
            //NSString *nextGame =
            NSLog(@"game date is %@", gameDate);

    The NSLog returns the game dates (except for the open date):

        2013-11-11 16:10:05.979 Clemson Football[24060:70b] game date is 2013-08-31 04:00:00 +0000
        2013-11-11 16:10:05.982 Clemson Football[24060:70b] game date is 2013-09-07 04:00:00 +0000
        2013-11-11 16:10:05.985 Clemson Football[24060:70b] game date is (null)
        2013-11-11 16:10:05.987 Clemson Football[24060:70b] game date is 2013-09-19 04:00:00 +0000
        2013-11-11 16:10:05.988 Clemson Football[24060:70b] game date is 2013-09-28 04:00:00 +0000
        2013-11-11 16:10:05.990 Clemson Football[24060:70b] game date is 2013-10-05 04:00:00 +0000
        2013-11-11 16:10:05.992 Clemson Football[24060:70b] game date is 2013-10-12 04:00:00 +0000
        2013-11-11 16:10:05.993 Clemson Football[24060:70b] game date is 2013-10-19 04:00:00 +0000
        2013-11-11 16:10:05.995 Clemson Football[24060:70b] game date is 2013-10-26 04:00:00 +0000
        2013-11-11 16:10:05.996 Clemson Football[24060:70b] game date is 2013-11-02 04:00:00 +0000
        2013-11-11 16:10:05.998 Clemson Football[24060:70b] game date is 2013-11-09 05:00:00 +0000
        2013-11-11 16:10:06.000 Clemson Football[24060:70b] game date is 2013-11-14 05:00:00 +0000
        2013-11-11 16:10:06.001 Clemson Football[24060:70b] game date is 2013-11-23 05:00:00 +0000
        2013-11-11 16:10:06.003 Clemson Football[24060:70b] game date is 2013-11-30 05:00:00 +0000
        2013-11-11 16:10:06.005 Clemson Football[24060:70b] game date is 2013-12-07 05:00:00 +0000

    Thanks in advance for any assistance you can provide. This seems like it should be simple but has been fairly frustrating. Let me know if you need additional info.
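
    (A minimal sketch of the lookup itself, kept out of cellForRowAtIndexPath so it runs once rather than once per cell; 'schedule' and the "GameDate" key come from the question, the rest is illustrative. Comparing against the start of today makes a game later today still count as "next".)

        NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
        formatter.dateFormat = @"MM/dd/yy";

        // Midnight today, so today's game still qualifies.
        NSCalendar *cal = [NSCalendar currentCalendar];
        NSDate *startOfToday = nil;
        [cal rangeOfUnit:NSDayCalendarUnit startDate:&startOfToday interval:NULL forDate:[NSDate date]];

        NSMutableArray *upcoming = [NSMutableArray array];
        for (NSDictionary *game in schedule) {
            NSDate *d = [formatter dateFromString:game[@"GameDate"]];   // nil for the open date
            if (d && [d compare:startOfToday] != NSOrderedAscending) {
                [upcoming addObject:d];
            }
        }

        NSDate *nextGameDate = [[upcoming sortedArrayUsingSelector:@selector(compare:)] firstObject];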

    Read the article

  • Marshalling non-Blittable Structs from C# to C++

    - by Greggo
    I'm in the process of rewriting an overengineered and unmaintainable chunk of my company's library code that interfaces between C# and C++. I've started looking into P/Invoke, but it seems like there's not much in the way of accessible help.

    We're passing a struct that contains various parameters and settings down to unmanaged code, so we're defining identical structs. We don't need to change any of those parameters on the C++ side, but we do need to access them after the P/Invoked function has returned.

    My questions are:

        1. What is the best way to pass strings? Some are short (device IDs which can be set by us), and some are file paths (which may contain Asian characters).

        2. Should I pass an IntPtr to the C# struct, or should I just let the marshaller take care of it by putting the struct type in the function signature?

        3. Should I be worried about any non-pointer datatypes like bools or enums (in other, related structs)? We have the treat-warnings-as-errors flag set in C++, so we can't use the Microsoft extension for enums to force a datatype.

        4. Is P/Invoke actually the way to go? There was some Microsoft documentation about Implicit P/Invoke that said it was more type-safe and performant.

    For reference, here is one of the pairs of structs I've written so far:

    C++:

        /** Struct used for marshalling Scan parameters from managed to unmanaged code. */
        struct ScanParameters
        {
            LPSTR deviceID;
            LPSTR spdClock;
            LPSTR spdStartTrigger;
            double spinRpm;
            double startRadius;
            double endRadius;
            double trackSpacing;
            UINT64 numTracks;
            UINT32 nominalSampleCount;
            double gainLimit;
            double sampleRate;
            double scanHeight;
            LPWSTR qmoPath; //includes filename
            LPWSTR qzpPath; //includes filename
        };

    C#:

        /// <summary>
        /// Struct used for marshalling scan parameters between managed and unmanaged code.
        /// </summary>
        [StructLayout(LayoutKind.Sequential)]
        public struct ScanParameters
        {
            [MarshalAs(UnmanagedType.LPStr)]
            public string deviceID;
            [MarshalAs(UnmanagedType.LPStr)]
            public string spdClock;
            [MarshalAs(UnmanagedType.LPStr)]
            public string spdStartTrigger;
            public Double spinRpm;
            public Double startRadius;
            public Double endRadius;
            public Double trackSpacing;
            public UInt64 numTracks;
            public UInt32 nominalSampleCount;
            public Double gainLimit;
            public Double sampleRate;
            public Double scanHeight;
            [MarshalAs(UnmanagedType.LPWStr)]
            public string qmoPath;
            [MarshalAs(UnmanagedType.LPWStr)]
            public string qzpPath;
        }

    Read the article

  • Permanent mutex locking causing deadlock?

    - by Daniel
    I am having a problem with mutexes (pthread_mutex on Linux) where, if a thread locks a mutex again right after unlocking it, another thread is not very successful at getting a lock. I've attached test code where one mutex is created, along with two threads that in an endless loop lock the mutex, sleep for a while, and unlock it again.

    The output I expect to see is "alive" messages from both threads, one from each (e.g. 121212121212). However, what I get is that one thread gets the majority of locks (e.g. 111111222222222111111111, or just 1111111111111...). If I add a usleep(1) after the unlocking, everything works as expected.

    Apparently when the thread goes to SLEEP the other thread gets its lock - however, this is not the way I was expecting it, as the other thread has already called pthread_mutex_lock. I suspect this is the way this is implemented, in that the active thread has priority; however, it causes certain problems in this particular test case. Is there any way to prevent it (short of adding a deliberately large enough delay or some kind of signaling), or where is my error in understanding?

        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/time.h>
        #include <unistd.h>

        pthread_mutex_t mutex;

        void* threadFunction(void *id)
        {
            int count = 0;

            while (true) {
                pthread_mutex_lock(&mutex);
                usleep(50 * 1000);
                pthread_mutex_unlock(&mutex);
                // usleep(1);

                ++count;
                if (count % 10 == 0) {
                    printf("Thread %d alive\n", *(int*)id);
                    count = 0;
                }
            }

            return 0;
        }

        int main()
        {
            // create one mutex
            pthread_mutexattr_t attr;
            pthread_mutexattr_init(&attr);
            pthread_mutex_init(&mutex, &attr);

            // create two threads
            pthread_t thread1;
            pthread_t thread2;
            pthread_attr_t attributes;
            pthread_attr_init(&attributes);

            int id1 = 1, id2 = 2;
            pthread_create(&thread1, &attributes, &threadFunction, &id1);
            pthread_create(&thread2, &attributes, &threadFunction, &id2);
            pthread_attr_destroy(&attributes);

            sleep(1000);
            return 0;
        }
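
    (A minimal sketch of one low-cost variant of the workaround: yielding the processor after the unlock instead of sleeping. Under the default SCHED_OTHER policy, sched_yield only offers the scheduler a chance to run the blocked thread, so this is a fairness nudge rather than a guarantee; a truly fair handoff would need a condition variable or a ticket scheme.)

        #include <sched.h>

        while (true) {
            pthread_mutex_lock(&mutex);
            usleep(50 * 1000);
            pthread_mutex_unlock(&mutex);
            sched_yield();   /* give the waiting thread a chance to acquire */
            /* ... counting and printing as before ... */
        }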

    Read the article

  • Across-process marshalling problem with an array of points

    - by ElMagn
    Hi All,

    We have what we think is a marshalling problem with a renderer object when called across process boundaries. The renderer is an ATL COM server with a COM object that implements the IPoints interface defined below:

        typedef [uuid(B0E01719-005A-427c-B9DD-B42A18E969AE)]
        struct Point {
            double X;
            double Y;
        } Point;

        [
            object,
            uuid(3BFECFE3-B4FB-4f14-8257-6E065D02E3B3),
            helpstring("IPoints Interface"),
            dual,
        ]
        interface IPoints : IDispatch
        {
            HRESULT DrawPolyLine([in] long hDC, [in] short count,
                                 [in, size_is(count)] Point * points );
            // many more like DrawLine
        }

    The count parameter represents the number of points, and the points parameter represents an array of the actual points.

    We have two processes running: a graphical display process (GDP) and a tabular (grid) display process (TDP). A factory in the GDP, written in C#, creates the renderer and the clients of the renderer in the GDP. When the clients call into the renderer, everything displays correctly. The renderer is created at start-up, BTW.

    There is another factory in the TDP, written in VB6, that calls into the factory in the GDP to create the clients. When the clients call into the renderer, only the first point in the array is marshalled correctly; all the other points are garbage. It seems that the rendering works only when the client creation is started from the same process as the renderer.

    Now, I am not sure what the solution to this problem is. It seems that if we can guarantee that the clients are always created from a thread in the same GDP process as the renderer, then the points are marshalled correctly. We tried using a background thread from the Thread Pool in C# and it indeed worked. The problem is that Windows Forms created from the clients stopped working, because accessing a form's controls from a thread other than the thread that created the control is not allowed. We might change the calls to access the forms, but we have quite a few of them, and are trying to look into a different solution that might involve making changes to the renderer.

    The other problem is that the renderer is legacy code and we can't just change the interface. I am wondering what we can do to the renderer's interface that would help with marshalling across process calls.

    Any ideas would be greatly appreciated.

    Regards,
    ElMagn

    Read the article

  • Converting generic type to its base and vice-versa

    - by Pajci
    Can someone help me with the conversions I am facing in the enclosed code? I commented the lines of code where I am having problems. Is this even the right way to achieve what I am trying to do, which is to forward responses of a specified type to the provided callback?

        public class MessageBinder
        {
            private class Subscriber<T> : IEquatable<Subscriber<T>> where T : Response
            {
                ...
            }

            private readonly Dictionary<Type, List<Subscriber<Response>>> bindings;

            public MessageBinder()
            {
                this.bindings = new Dictionary<Type, List<Subscriber<Response>>>();
            }

            public void Bind<TResponse>(
                short shortAddress,
                Action<ZigbeeAsyncResponse<TResponse>> callback)
                where TResponse : Response
            {
                List<Subscriber<TResponse>> subscribers = this.GetSubscribers<TResponse>();

                if (subscribers != null)
                {
                    subscribers.Add(new Subscriber<TResponse>(shortAddress, callback));
                }
                else
                {
                    var subscriber = new Subscriber<TResponse>(shortAddress, callback);

                    // ERROR: cannot convert from 'List<Subscriber<TResponse>>' to
                    // 'List<Subscriber<Response>>' ... tried LINQ Cast operator -
                    // does not work either
                    this.bindings.Add(typeof(TResponse),
                        new List<Subscriber<TResponse>> { subscriber });
                }
            }

            public void Forward<TResponse>(TResponse response) where TResponse : Response
            {
                var subscribers = this.GetSubscribers<TResponse>();

                if (subscribers != null)
                {
                    Subscriber<TResponse> subscriber;
                    Type responseType = typeof (TResponse);

                    if (responseType.IsSubclassOf(typeof (AFResponse)))
                    {
                        // ERROR: Cannot convert type 'TResponse' to 'AFResponse' ...
                        // tried cast to object first, works, but is this the right way?
                        var afResponse = (AFResponse)response;
                        subscriber = subscribers.SingleOrDefault(
                            s => s.ShortAddress == afResponse.ShortAddress);
                    }
                    else
                    {
                        subscriber = subscribers.First();
                    }

                    if (subscriber != null)
                    {
                        subscriber.Forward(response);
                    }
                }
            }

            private List<Subscriber<TResponse>> GetSubscribers<TResponse>() where TResponse : Response
            {
                List<Subscriber<Response>> subscribers;
                this.bindings.TryGetValue(typeof(TResponse), out subscribers);

                // ERROR: How can I cast List<Subscriber<Response>> to List<Subscriber<TResponse>>?
                return subscribers;
            }
        }

    Thank you for any help :)

    Read the article

  • div popup inside td

    - by sims
    I have a table with a bunch of cells. (No way! Amazing! :P) Some of the cells have a small div that, when you put your mouse over it, gets bigger so you can read all the text. This works well and all. However, since HTML elements that come later in the document have a higher z-index, when the div gets bigger it sits underneath the divs in the other cells.

    Some HTML code:

        <table>
          <tr>
            <td>
              limited info
              <div style="position: relative;">
                <div style="position: absolute; top: 0; left: 0; width: 1em; height: 1em;"
                     onmouseover="tooltipshow(this)" onmouseout="tooltiphide(this)">
                  informative long text is here
                </div>
              </div>
            </td>
            <td>
              some short info
              <div style="position: relative;">
                <div style="position: absolute; top: 0; left: 0; width: 1em; height: 1em;"
                     onmouseover="tooltipshow(this)" onmouseout="tooltiphide(this)">
                  longer explanation about what is really going on that covers the div up there ^^^. darn!
                </div>
              </div>
            </td>
          </tr>
        </table>

    Some JS code:

        function tooltipshow(obj) {
            obj.style.width = '30em';
            obj.style.zIndex = '100';
        }

        function tooltiphide(obj) {
            obj.style.width = '1em';
            obj.style.zIndex = '20';
        }

    It doesn't matter if I set z-index dynamically to something higher onmouseover. It's like z-index has no effect. I think it has something to do with the table. I've tested this in FF3. When I'm feeling particularly macho, I'll test it in IE.

    Read the article

  • jQuery reports incorrect element height in Firefox iframe

    - by Augustus
    Here's a short test to demonstrate my problem. I have a page that loads an iframe:

        <html>
        <head>
            <title></title>
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.js"></script>
        </head>
        <body>
            <iframe id="iframe" src="box.html" style="width: 100px; height: 100px"></iframe>
            <script>
                $('#iframe').bind('load', function () {
                    var div = $(this).contents().find('div');
                    alert(div.height());
                    alert(div.innerHeight());
                    alert(div.outerHeight());
                    alert(div.outerHeight(true));
                });
            </script>
        </body>
        </html>

    The iframe (box.html) contains a single styled div:

        <html>
        <head>
            <title></title>
            <style>
                div {
                    height: 50px;
                    width: 50px;
                    margin: 5px;
                    padding: 5px;
                    border: 2px solid #00f;
                    background-color: #f00;
                }
            </style>
        </head>
        <body>
            <div></div>
        </body>
        </html>

    The four alerts should return 50, 60, 64 and 74, respectively. This works as expected in Safari and Chrome. In FF 3.5.1, they all return 64. This is wrong.

    Does anyone know how I can force FF/jQuery to return the correct values?

    Read the article
