Search Results

Search found 13344 results on 534 pages for 'anonymous inner classes'.

Page 454/534 | < Previous Page | 450 451 452 453 454 455 456 457 458 459 460 461  | Next Page >

  • Is there an ORM that supports composition w/o Joins

    - by Ken Downs
    EDIT: Changed title from "inheritance" to "composition". Left body of question unchanged. I'm curious if there is an ORM tool that supports inheritance w/o creating separate tables that have to be joined. Simple example. Assume a table of customers, with a Bill-to address, and a table of vendors, with a remit-to address. Keep it simple and assume one address each, not a child table of addresses for each. These addresses will have a handful of values in common: address 1, address 2, city, state/province, postal code. So let's say I'd have a class "addressBlock" and I want the customers and vendors to inherit from this class, and possibly from other classes. But I do not want separate tables that have to be joined, I want the columns in the customer and vendor tables respectively. Is there an ORM that supports this? The closest question I have found on StackOverflow that might be the same question is linked below, but I can't quite figure if the OP is asking what I am asking. He seems to be asking about foregoing inheritance precisely because there will be multiple tables. I'm looking for the case where you can use inheritance w/o generating the multiple tables. Model inheritance approach with Django's ORM

    Read the article

  • Python - Checking for membership inside nested dict

    - by victorhooi
    heya, This is a follow-up question to this one: http://stackoverflow.com/questions/2901422/python-dictreader-skipping-rows-with-missing-columns Turns out I was being silly, and using the wrong ID field. I'm using Python 3.x here. I have a dict of employees, indexed by a string, "directory_id". Each value is a nested dict with employee attributes (phone number, surname etc.). One of these values is a secondary ID, say "internal_id", and another is their manager, call it "manager_internal_id". The "internal_id" field is non-mandatory, and not every employee has one. (I've simplified the fields a little, both to make it easier to read, and also for privacy/compliance reasons). The issue here is that we index (key) each employee by their directory_id, but when we look up their manager, we need to find managers by their "internal_id". Before, when employees.keys() was a list of internal_ids, I was using a membership check on this. Now, the last part of my if statement won't work, since the internal_ids are part of the dict values, instead of the key itself. def lookup_supervisor(manager_internal_id, employees): if manager_internal_id is not None and manager_internal_id != "" and manager_internal_id in employees.keys(): return (employees[manager_internal_id]['mail'], employees[manager_internal_id]['givenName'], employees[manager_internal_id]['sn']) else: return ('Supervisor Not Found', 'Supervisor Not Found', 'Supervisor Not Found') So the first question is, how do I check whether the manager_internal_id is present in the dict's values? I've tried substituting employees.keys() with employees.values(), but that didn't work. Also, I'm hoping for something a little more efficient; not sure if there's a way to get a subset of the values, specifically, all the entries for employees[directory_id]['internal_id']. Hopefully there's some Pythonic way of doing this, without using a massive heap of nested for/if loops. My second question is, how do I then cleanly return the required employee attributes (mail, givenName, surname etc.)? My for loop is iterating over each employee, and calling lookup_supervisor. I'm feeling a bit stupid/stumped here. def tidy_data(employees): for directory_id, data in employees.items(): # We really shouldn't be passing employees back and forth like this - hmm, classes? data['SupervisorEmail'], data['SupervisorFirstName'], data['SupervisorSurname'] = lookup_supervisor(data['manager_internal_id'], employees) Thanks in advance =), Victor
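
    A possible approach (not from the original post; the function shapes below are a sketch over the field names quoted in the question): build a second dict keyed by internal_id once, so every supervisor lookup becomes a single dict access instead of a scan of employees.values().

        # Sketch: index the employees dict by internal_id once, skipping employees without one.
        def build_internal_id_index(employees):
            return {data['internal_id']: data
                    for data in employees.values()
                    if data.get('internal_id')}

        def lookup_supervisor(manager_internal_id, by_internal_id):
            manager = by_internal_id.get(manager_internal_id)
            if manager is None:
                return ('Supervisor Not Found',) * 3
            return (manager['mail'], manager['givenName'], manager['sn'])

        def tidy_data(employees):
            by_internal_id = build_internal_id_index(employees)
            for directory_id, data in employees.items():
                (data['SupervisorEmail'],
                 data['SupervisorFirstName'],
                 data['SupervisorSurname']) = lookup_supervisor(
                    data.get('manager_internal_id'), by_internal_id)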

    Read the article

  • WCF service reference namespace differs from original

    - by Thorarin
    I'm having a problem regarding namespaces used by my service references. I have a number of WCF services, say with the namespace MyCompany.Services.MyProduct (the actual namespaces are longer). As part of the product, I'm also providing a sample C# .NET website. This web application uses the namespace MyCompany.MyProduct. During initial development, the service was added as a project reference to the website and used directly. I used a factory pattern that returns an object instance that implements MyCompany.Services.MyProduct.IMyService. So far, so good. Now I want to change this to use an actual service reference. After adding the reference and typing MyCompany.Services.MyProduct in the namespace textbox, it generates classes in the namespace MyCompany.MyProduct.MyCompany.Services.MyProduct. BAD! I don't want to have to change using directives in several places just because I'm using a proxy class. So I tried prepending the namespace with global::, but that is not accepted. Note that I hadn't even deleted the original assembly references yet, and "reuse types" is enabled, but no reusing was done, apparently. However, I don't want my sample website to need the assembly references in order to work anyway. The only solution I've come up with so far is setting the default namespace for my web application to MyCompany (because it cannot be empty), and adding the service reference as Services.MyProduct. If a customer uses my sample website as a starting point and changes the default namespace to OtherCompany.Whatever, this will obviously break my workaround. Is there a good solution to this problem? To summarize: I want to generate a service reference proxy in the original namespace, without referencing the assembly. Note: I have seen this question, but there was no solution provided that is acceptable for my use case. Edit: As John Saunders suggested, I've submitted some feedback to Microsoft about this: Feedback item @ Microsoft Connect

    Read the article

  • Java - PowerMock whenNew doesn't seem to work, calls the actual constructor

    - by user1331243
    I have two final classes that are used in my unit test. I am trying to use whenNew on the constructor of a final class, but I see that it calls the actual constructor. The code is @PrepareForTest({A.class, B.class, Provider.class}) @Test public void testGetStatus() throws Exception { B b = mock(B.class); when(b.getStatus()).thenReturn(1); whenNew(B.class).withArguments(anyString()).thenReturn(b); Provider p = new Provider(); int val = p.getStatus(); assertTrue((val == 1)); } public class Provider { public int getStatus() { B b = new B("test"); return b.getStatus(); } } public final class A { private void init() { // ...do something } private static A a; private A() { } public static A getInstance() { if (a == null) { a = new A(); a.init(); } return a; } } public final class B { public B() { } public B(String s) { this(A.getInstance(), s); } public B(A a, String s) { } public int getStatus() { return 0; } } When debugging, I find that it is the actual class B instance that gets created, not the mock, so the assertion fails. Any pointers on how to get this working? Thanks
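
    One thing worth checking (a sketch, not from the original post; the test class name and annotation placement are assumptions): whenNew() only takes effect when the test runs under PowerMockRunner and the class that executes the new B(...) call — Provider here — is prepared for test. Without the runner, the real constructor is called exactly as described.

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.powermock.core.classloader.annotations.PrepareForTest;
        import org.powermock.modules.junit4.PowerMockRunner;

        import static org.junit.Assert.assertEquals;
        import static org.mockito.Matchers.anyString;
        import static org.mockito.Mockito.when;
        import static org.powermock.api.mockito.PowerMockito.mock;
        import static org.powermock.api.mockito.PowerMockito.whenNew;

        @RunWith(PowerMockRunner.class)
        @PrepareForTest(Provider.class)   // prepare the caller of "new B(...)", not just B
        public class ProviderTest {

            @Test
            public void testGetStatus() throws Exception {
                B b = mock(B.class);
                when(b.getStatus()).thenReturn(1);
                whenNew(B.class).withArguments(anyString()).thenReturn(b);

                assertEquals(1, new Provider().getStatus());
            }
        }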

    Read the article

  • Problems with rails and saving to the database.

    - by Grantismo
    I've been having some difficulty in understanding the source of a problem. Below is a listing of the model classes. Essentially the goal is to have the ability to add sentences to the end of the story, or to add stories to an existing sentence_block. Right now, I'm only attempting to allow users to add sentences, and automatically create a new sentence_block for the new sentence. class Story < ActiveRecord::Base has_many :sentence_blocks, :dependent => :destroy has_many :sentences, :through => :sentence_blocks accepts_nested_attributes_for :sentence_blocks end class SentenceBlock < ActiveRecord::Base belongs_to :story has_many :sentences, :dependent => :destroy end class Sentence < ActiveRecord::Base belongs_to :sentence_block def story @sentence_block = SentenceBlock.find(self.sentence_block_id) Story.find(@sentence_block.story_id) end end The problem occurs when using the show method of the Story. The Story show method is as follows, and the associated show method for a sentence is also included. Sentence.show def show @sentence = Sentence.find(params[:id]) respond_to do |format| format.html {redirect_to(@sentence.story)} format.xml { render :xml => @sentence } end end Story.show def show @story = Story.find(params[:id]) @sentence_block = @story.sentence_blocks.build @new_sentence = @sentence_block.sentences.build(params[:sentence]) respond_to do |format| if @new_sentence.content != nil and @new_sentence.sentence_block_id != nil and @sentence_block.save and @new_sentence.save flash[:notice] = 'Sentence was successfully added.' format.html # new.html.erb format.xml { render :xml => @story } else @sentence_block.destroy format.html format.xml { render :xml => @story } end end end I'm getting a "couldn't find Sentence_block without an id" error, so I'm assuming that for some reason the sentence_block isn't getting saved to the database. Can anyone help me understand the behavior and why I'm getting the error? I'm trying to ensure that an unnecessary sentence_block and sentence aren't created every time the show view for a story is rendered, unless someone submits the form, but I'm not really sure how to accomplish this. Any help would be appreciated.

    Read the article

  • How am I able to create a List<T> containing a generic interface?

    - by Conrad Clark
    I have a List which must contain IInteract objects. But IInteract is a generic interface which requires 2 type arguments. My main idea is to iterate through a list of objects and "interact" one with another if they haven't interacted yet. So I have this object List<IObject> WorldObjects = new List<IObject>(); and this one: private List<IInteract> = new List<IInteract>(); Except I can't compile the last line because IInteract requires 2 type arguments. But I don't know what the arguments are until I add them. I could add interactions between objects of type A and A... or objects of type B and C. I want to create "Interaction" classes which do something with the "acting" object and the "target" object, but I want them to be independent from the objects... so I could add an Interaction between, for instance... "SuperUltraClass" and... an "integer". Am I using the wrong approach?
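
    One common way out (a sketch, not from the original post; the member names on the interfaces are invented) is to split the interface: a non-generic IInteract that the list can hold, plus a generic IInteract<TActor, TTarget> for the strongly typed implementations.

        using System.Collections.Generic;

        public class SuperUltraClass { }

        // Non-generic base interface: this is the type the list stores.
        public interface IInteract
        {
            void Interact(object actor, object target);
        }

        // Typed flavour for implementers; it still satisfies the non-generic contract.
        public interface IInteract<TActor, TTarget> : IInteract
        {
            void Interact(TActor actor, TTarget target);
        }

        public class SuperUltraIntInteraction : IInteract<SuperUltraClass, int>
        {
            public void Interact(SuperUltraClass actor, int target)
            {
                // typed interaction logic
            }

            // The explicit non-generic implementation just forwards to the typed overload.
            void IInteract.Interact(object actor, object target)
            {
                Interact((SuperUltraClass)actor, (int)target);
            }
        }

        // Usage: List<IInteract> interactions = new List<IInteract> { new SuperUltraIntInteraction() };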

    Read the article

  • JSF - Creating an overlay for popup panels.

    - by Ben
    Hi, I've created an overlay that will popup whenever someone wants to upload a file to the system. The Gui looks like this (when the overlay is up) I have two problems with this: I attached a a4j:support object that, onclick, makes the overlay disappear. The problem with this is that when I click the upload button on the upload component, support catches the click event and closes the overlay with the upload component before I have the chance to finish the operation. I chose two different style classes. One for the overlay and one for the upload panel. But the styling of the overlay takes over the upload component and it becomes transparent as well. The implementation looks something like this: <h:panelgroup layout="block" styleClass="overlayClass"> <rich:fileUpload styleClass="uploadStyleClass"... /> <a4j:support event="onclick" action="#{mrBean.switchOverlayState}" reRender="..."/> </h:panelGroup> The CSS: .overlayClass { Opacity: 0.5; position: fixed; left: 0; right: 0; top: 0; bottom: 0; background: #000; } .uploadStyleClass { opacity: 1.0; ... } Thanks for the help!
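
    Two notes that may help (not from the original post): CSS opacity on a parent always applies to its children, so opacity: 1 on the upload panel cannot undo the overlay's 0.5 — dimming with a translucent background colour avoids that entirely. And because the a4j:support onclick is attached to the panel that contains the upload control, a click on the upload button bubbles up and closes the overlay; attaching the close behaviour to a separate backdrop element (or checking the event target) avoids the premature close. A CSS sketch of the first point:

        /* Dim with a translucent background instead of opacity, so children stay fully opaque.
           (rgba() is not supported by IE8 and older.) */
        .overlayClass {
            position: fixed;
            left: 0; right: 0; top: 0; bottom: 0;
            background: rgba(0, 0, 0, 0.5);
        }

        .uploadStyleClass {
            position: relative;  /* keep the upload panel above the dimmed backdrop */
            z-index: 10;
        }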

    Read the article

  • How to get application context path in spring-ws?

    - by Dhaliwal
    I am using Spring-WS to create a web service. In my project, I have created a Helper class to read the sample response and request XML files which are located in my /src/main/resources folder. When I am unit-testing my web service application 'locally', I use System.getProperty("user.dir") to get the application context folder. The following is a method that I created in the Helper class to help me retrieve the file that I am interested in from my resources folder. public static File getFileFromResources(String filename) { System.out.println("Getting file from resource folder"); File request = null; String curDir = System.getProperty("user.dir"); String contextpath = "C:\\src\\main\\resources\\"; request = new File(curDir + contextpath + filename); return request; } However, after 'publishing' the compiled WAR file to the ../webapps folder of the Apache Tomcat directory, I realise that System.getProperty("user.dir") no longer returns my application context path. Instead, it is returning the Apache Tomcat root directory, as in C:\Program Files\Apache Software Foundation\Tomcat 6.0\src\main\resources\SampleClientFile I can't seem to find any information about getting the root folder of my web service. I have seen examples in Spring web applications where I can retrieve the context path by using the following: request.getSession().getServletContext().getContextPath() But those are web applications where there is a servlet request to get the context from; with Spring-WS, my entry point is an endpoint. How can I get the context path of my web service application? I am expecting a context path of something like C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps\clientWebService\WEB-INF\classes Could someone suggest a way to achieve this?
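
    One way to sidestep the path question entirely (a sketch, not from the original post; the helper class name is invented): anything under src/main/resources ends up on the webapp classpath (WEB-INF/classes), so it can be loaded as a classpath resource and the answer no longer depends on Tomcat's working directory. Spring's ClassPathResource does this:

        import java.io.File;
        import java.io.IOException;
        import java.io.InputStream;

        import org.springframework.core.io.ClassPathResource;

        public final class TestResources {

            // Works in the IDE and inside the deployed WAR, because the file is found
            // on the classpath (WEB-INF/classes) instead of relative to user.dir.
            public static InputStream openResource(String filename) throws IOException {
                return new ClassPathResource(filename).getInputStream();
            }

            // getFile() only works while the resource is an exploded file on disk,
            // not when it is packed inside a JAR.
            public static File getFileFromResources(String filename) throws IOException {
                return new ClassPathResource(filename).getFile();
            }
        }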

    Read the article

  • .Net Entity Framework & POCO ... querying full table problem

    - by Chris Klepeis
    I'm attempting to implement the repository pattern with my POCO objects auto-generated from my EDMX. In my repository class, I have: IObjectSet<E> _objectSet; private IObjectSet<E> objectSet { get { if (_objectSet == null) { _objectSet = this._context.CreateObjectSet<E>(); } return _objectSet; } } public IQueryable<E> GetQuery(Func<E, bool> where) { return objectSet.Where(where).AsQueryable<E>(); } public IList<E> SelectAll(Func<E, bool> where) { return GetQuery(where).ToList(); } Where E is one of my POCO classes. When I trace the database and run this: IList<Contact> c = contactRepository.SelectAll(r => r.emailAddress == "[email protected]"); it shows up in the SQL trace as a select for everything in my Contact table. Where am I going wrong here? Is there a better way to do this? Does an ObjectSet not lazy load... so it omitted the where clause? This is the article I read which said to use ObjectSets... since with POCO, I do not have EntityObjects to pass in as "E": http://devtalk.dk/CommentView,guid,b5d9cad2-e155-423b-b66f-7ec287c5cb06.aspx
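
    The likely culprit (not from the original post, but a well-known LINQ pitfall): Where(Func<E, bool>) is the LINQ-to-Objects overload, so the ObjectSet is enumerated and the whole table is pulled back before the predicate runs. Taking an Expression<Func<E, bool>> keeps the query as IQueryable and lets EF translate the filter to SQL. A sketch of the repository with the changed signatures (the constructor shape is an assumption):

        using System;
        using System.Collections.Generic;
        using System.Data.Objects;
        using System.Linq;
        using System.Linq.Expressions;

        public class Repository<E> where E : class
        {
            private readonly IObjectSet<E> objectSet;

            public Repository(ObjectContext context)
            {
                objectSet = context.CreateObjectSet<E>();
            }

            // Expression<Func<E, bool>> stays on IQueryable and is translated to SQL;
            // the Func<E, bool> overload would silently filter in memory instead.
            public IQueryable<E> GetQuery(Expression<Func<E, bool>> where)
            {
                return objectSet.Where(where);
            }

            public IList<E> SelectAll(Expression<Func<E, bool>> where)
            {
                return GetQuery(where).ToList();
            }
        }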

    Read the article

  • Constructors and Destructors in C++ [Not a question] [closed]

    - by Jack
    I am using GCC. Please tell me if I am wrong - Let's say I have two classes A & B class A { public: A(){cout<<"A constructor"<<endl;} ~A(){cout<<"A destructor"<<endl;} }; class B:public A { public: B(){cout<<"B constructor"<<endl;} ~B(){cout<<"B destructor"<<endl;} }; 1) The first line in B's constructor should be a call to A's constructor (I assume the compiler automatically inserts it). Also the last line in B's destructor will be a call to A's destructor (the compiler does it again). Why was it built this way? 2) When I say A * a = new B(); the compiler creates a new B object and checks to see if A is a base class of B, and if it is it allows 'a' to point to the newly created object. I guess that is why we don't need any virtual constructors. (with help from @Tyler McHenry and @Konrad Rudolph) 3) When I write delete a the compiler sees that a is an object of type A so it calls A's destructor, leading to a problem which is solved by making A's destructor virtual. As user Little Bobby Tables pointed out to me, all destructors have the same name destroy() in memory so we can implement virtual destructors, and now the call is made to B's destructor and all is well in C++ land. Please comment.

    Read the article

  • UIViewController memory management

    - by jAmi
    Hi, I have a very basic issue of memory management with my UIViewController (or any other object that I create); the problem is that in Instruments my object allocation graph is always rising even though I am calling release on the objects and then assigning them nil. I have 2 UIViewController sub-classes, each initializing with a NIB; I add the first ViewController to the main window like [window addSubview:first.view]; Then in my first ViewController nib file I have a button which loads the second ViewController like: -(IBAction)loadSecondView{ if(second!=nil){ //second is set as an iVar and @property (nonatomic, retain)ViewController2* second; [second release]; second=nil; } second=[[ViewController2 alloc]initWithNibName:@"ViewController2" bundle:nil]; [self.view addSubview:second.view]; } In my (second) ViewController2 I have a button with an action method -(IBAction) removeSecond{ [self.view removeFromSuperview]; } Please let me know if the above scheme works in a managed way for memory...? In Instruments it does not show any allocations being released, and the allocation graph keeps on rising.

    Read the article

  • Hibernate noob fetch join problem

    - by Bruce
    Hi all I have two classes, Test2 and Test3. Test2 has an attribute test3 that is an instance of Test3. In other words, I have a unidirectional OneToOne association, with test2 having a reference to test3. When I select Test2 from the db, I can see that a separate select is being made to get the details of the associated test3 class. This is the famous 1+N selects problem. To fix this to use a single select, I am trying to use the fetch=join annotation, which I understand to be @Fetch(FetchMode.JOIN) However, with fetch set to join, I still see separate selects. Here are the relevant portions of my setup.. hibernate.cfg.xml: <property name="max_fetch_depth">2</property> Test2: public class Test2 { @OneToOne (cascade=CascadeType.ALL , fetch=FetchType.EAGER) @JoinColumn (name="test3_id") @Fetch(FetchMode.JOIN) public Test3 getTest3() { return test3; } NB I set the FetchType to EAGER out of desperation, even though it defaults to EAGER anyway for OneToOne mappings, but it made no difference. Thanks for any help! Edit: I've pretty much given up on trying to use FetchMode.JOIN - can anyone confirm that they have got it to work ie produce a left outer join? In the docs I see that "Usually, the mapping document is not used to customize fetching. Instead, we keep the default behavior, and override it for a particular transaction, using left join fetch in HQL" If I do a left join fetch instead: query = session.createQuery("from Test2 t2 left join fetch t2.test3"); then I do indeed get the results I want - ie a left outer join in the query.
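
    For completeness (not part of the original post): the same per-query override is available through the Criteria API, which also sidesteps the mapping-level @Fetch setting. A sketch using the entity and property names from the question:

        import java.util.List;

        import org.hibernate.FetchMode;
        import org.hibernate.Session;

        public class Test2Queries {

            // One SELECT with a LEFT OUTER JOIN to test3 — equivalent to
            // "from Test2 t2 left join fetch t2.test3" in HQL.
            @SuppressWarnings("unchecked")
            public static List<Test2> loadAllWithTest3(Session session) {
                return session.createCriteria(Test2.class)
                        .setFetchMode("test3", FetchMode.JOIN)
                        .list();
            }
        }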

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and crc32 the outputs). When using the "-server" option with the Sun 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times become constant for each pass starting at the 3rd iteration of the test, but the optimized version still stays 12% slower than in client mode. I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of some bit manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (feeds the stream to the test and excludes disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only that it is 15% instead of 12% there). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation on the measured execution times (using System.nanoTime btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine. Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?

    Read the article

  • Why does GCC need extra declarations in templates when VS does not?

    - by Kyle
    template<typename T> class Base { protected: Base() {} T& get() { return t; } T t; }; template<typename T> class Derived : public Base<T> { public: Base<T>::get; // Line A Base<T>::t; // Line B void foo() { t = 4; get(); } }; int main() { return 0; } If I comment out lines A and B, this code compiles fine under Visual Studio 2008. Yet when I compile under GCC 4.1 with lines A and B commented, I get these errors: In member function ‘void TemplateDerived::foo()’: error: ‘t’ was not declared in this scope error: there are no arguments to ‘get’ that depend on a template parameter, so a declaration of ‘get’ must be available Why would one compiler require lines A and B while the other doesn't? Is there a way to simplify this? In other words, if derived classes use 20 things from the base class, I have to put 20 lines of declarations for every class deriving from Base! Is there a way around this that doesn't require so many declarations?
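
    GCC is applying two-phase name lookup as the standard requires: names that do not depend on the template parameter are not looked up in the dependent base Base<T>, which Visual Studio (non-conformingly) tolerates. Instead of re-declaring every inherited member, the usual fix is to make the names dependent. A sketch (not from the original post):

        template<typename T>
        class Derived : public Base<T> {
        public:
            void foo() {
                this->t = 4;   // "this->" makes the name dependent, so lookup is deferred
                this->get();   // to instantiation time and finds the members of Base<T>
            }
        };

    The per-name alternative is a using-declaration such as using Base<T>::get; — still one line per member, but clearer about intent than the access-declaration form shown in the question.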

    Read the article

  • ASP.NET MVC 2 - How do I use an Interface as the Type for a Strongly Typed View

    - by Rake36
    I'd like to keep my concrete classes separate from my views. Without using strongly typed views, I'm fine. I just use a big parameter list in the controller method signatures and then use my service layer factory methods to create my concrete objects. This is actually just fine with me, but it got me thinking and after a little playing, I realized it was literally impossible for a controller method to accept an interface as a method parameter - because it has no way of instantiating it. Can't create a strongly-typed view using an interface through the IDE either (which makes sense actually). So my question. Is there some way to tell the controller how to instantiate the interface parameter using my service layer factory methods? I'd like to convert from: [Authorize] [AcceptVerbs(HttpVerbs.Post)] [UrlRoute(Path = "Application/Edit/{id}")] public ActionResult Edit(String id, String TypeCode, String TimeCode, String[] SelectedSchoolSystems, String PositionChoice1, String PositionChoice2, String PositionChoice3, String Reason, String LocationPreference, String AvailableDate, String RecipientsNotSelected, String RecipientsSelected) { //New blank app IApplication _application = ApplicationService.GetById(id); to something like [Authorize] [AcceptVerbs(HttpVerbs.Post)] [UrlRoute(Path = "Application/Edit/{id}")] public ActionResult Edit(String id, IApplication app) { //Don't need to do this anymore //IApplication _application = ApplicationService.GetById(id);
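
    One route (a sketch, not from the original post; ApplicationModelBinder is an invented name, while ApplicationService.GetById and IApplication are taken from the question): register a custom model binder for the interface so MVC asks the factory for the concrete instance and then binds the posted values onto it.

        using System;
        using System.Web.Mvc;

        // Resolves IApplication action parameters via the existing factory, then lets the
        // default binder copy the posted form values onto that concrete instance.
        public class ApplicationModelBinder : DefaultModelBinder
        {
            protected override object CreateModel(ControllerContext controllerContext,
                                                  ModelBindingContext bindingContext,
                                                  Type modelType)
            {
                var id = controllerContext.RouteData.Values["id"] as string;
                return ApplicationService.GetById(id);
            }
        }

        // In Global.asax Application_Start:
        // ModelBinders.Binders.Add(typeof(IApplication), new ApplicationModelBinder());

    With that in place the action can be declared as Edit(String id, IApplication app), as in the second snippet.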

    Read the article

  • Sorting out POCO, Repository Pattern, Unit of Work, and ORM

    - by CoffeeAddict
    I'm reading a crapload on all these subjects: POCO, the Repository pattern, Unit of Work, and using an ORM mapper. OK, I see the basic definitions of each in books, etc., but I can't visualize how it all fits together. Meaning an example structure (DL, BL, PL). So what, you have your DL objects that contain your CRUD methods, then your BL objects which are "mapped" using an ORM back to your DL objects? What about DTOs... they're your DL objects, right? I'm confused. Can anyone really explain all this together or send me example code? I'm just trying to put this together. I am determining whether to go LINQ to SQL or EF 4 (not sure about NHibernate yet). Just not getting the concepts as in physical layers and code layers here and what each type of object contains (just properties for DTOs, and CRUD methods for your core DL classes that match the table fields???). I just need some guidance here. I'm reading Fowler's books and starting to read Evans but just not all there yet.

    Read the article

  • Exporting WPF DataGrid to a text file in a nice looking format. Text file must be easy to read.

    - by Andrew
    Hi, Is there a simple way to export a WPF DataGrid to a text file and a .csv file? I've done some searching and can't see that there is a simple DataGrid method to do so. I have got something working (barely) by making use of the ItemsSource of the DataGrid. In my case, this is an array of structs. I do something with StreamWriters and StringBuilders (details available) and basically have: StringBuilder destination = new StringBuilder(); destination.Append(row.CreationDate); destination.Append(seperator); // seperator is '\t' or ',' for csv file destination.Append(row.ProcId); destination.Append(seperator); destination.Append(row.PartNumber); destination.Append(seperator); I do this for each struct in the array (in a loop). This works fine. The problem is that it's not easy to read the text file. The data can be of different lengths within the same column. I'd like to see something like: 2007-03-03 234238423823 823829923 2007-03-03 66 99 And of course, I get something like: 2007-03-03 234238423823 823829923 2007-03-03 66 99 It's not surprising given that I'm using Tab delimiters, but I hope to do better. It certainly is easy to read in the DataGrid! I could of course have some logic to pad short values with spaces, but that seems quite messy. I thought there might be a better way. I also have a feeling that this is a common problem and one that has probably been solved before (hopefully within the .NET classes themselves). Thanks.
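
    There is no built-in DataGrid export, so padding is in fact the usual approach for the .txt output (the .csv can keep the comma-separated version as is). A sketch, not from the original post, with an invented helper that measures each column first and pads with String.PadRight:

        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        public static class TextExport
        {
            // Pads every cell to the widest value in its column, so columns line up
            // even when values differ wildly in length.
            public static string ToAlignedText(IList<string[]> rows)
            {
                int columns = rows[0].Length;
                var widths = new int[columns];
                for (int c = 0; c < columns; c++)
                    widths[c] = rows.Max(r => r[c].Length);

                var sb = new StringBuilder();
                foreach (var row in rows)
                {
                    for (int c = 0; c < columns; c++)
                        sb.Append(row[c].PadRight(widths[c] + 2)); // two spaces between columns
                    sb.AppendLine();
                }
                return sb.ToString();
            }
        }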

    Read the article

  • FF extension: saving a value in preferences and retrieving in the js file

    - by encryptor
    I am making an extension which should take a link as user input only once. Then the entire extension keeps using that link in various functions in the JS file. When the user changes it, the value accessed by the JS file also changes accordingly. I am using the following, but it does not work for me: var pref_manager = Components.classes["@mozilla.org/preferencesservice;1"].getService(Components.interfaces.nsIPrefService) function setInstance(){ if (pref_manager.prefHasUserValue("myvar")) { instance = pref_manager.getString("myvar"); alert(instance); } if(instance == null){ instance = prompt("Please enter webcenter host and port"); // Setting the value pref_manager.setString("myvar", instance); } } instance is the global variable in which I take the user input. The alert(instance) does not show up, which means there is some problem with the way I am saving the pref or retrieving it. Can someone please help me with this? I have never worked with preferences before, so even if there are minor problems I might not be able to figure them out.
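
    Two things stand out (not from the original post): the contract ID should be "@mozilla.org/preferences-service;1" (with a hyphen), and nsIPrefService has no getString/setString — the usual pattern is to get a pref branch and use getCharPref/setCharPref. A sketch, with the branch name "extensions.myextension." invented:

        // Get a preferences branch scoped to this extension (branch name is just an example).
        var prefs = Components.classes["@mozilla.org/preferences-service;1"]
                              .getService(Components.interfaces.nsIPrefService)
                              .getBranch("extensions.myextension.");

        function setInstance() {
            var instance = null;
            if (prefs.prefHasUserValue("host")) {
                instance = prefs.getCharPref("host");
            }
            if (!instance) {
                instance = prompt("Please enter webcenter host and port");
                prefs.setCharPref("host", instance);  // persisted; read back on the next lookup
            }
            return instance;
        }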

    Read the article

  • Does QThread::sleep() require the event loop to be running?

    - by suszterpatt
    I have a simple client-server program written in Qt, where processes communicate using MPI. The basic design I'm trying to implement is the following: The first process (the "server") launches a GUI (derived from QMainWindow), which listens for messages from the clients (using repeat fire QTimers and asynchronous MPI receive calls), updates the GUI depending on what messages it receives, and sends a reply to every message. Every other process (the "clients") runs in an infinite loop, and all they are intended to do is send a message to the server process, receive the reply, go to sleep for a while, then wake up and repeat. Every process instantiates a single object derived from QThread, and calls its start() method. The run() method of these classes all look like this: from foo.cpp: void Foo::run() { while (true) { // Send message to the first process // Wait for a reply // Do uninteresting stuff with the reply sleep(3); // also tried QThread::sleep(3) } } In the client's code, there is no call to exec() anywhere, so no event loop should start. The problem is that the clients never wake up from sleeping (if I surround the sleep() call with two writes to a log file, only the first one is executed, control never reaches the second). Is this because I didn't start the event loop? And if so, what is the simplest way to achieve the desired functionality?

    Read the article

  • Building asynchronous cache pattern with JSP

    - by merweirdo
    I have a JSP that will take some 8 minutes to render. The code logic itself can not be made more efficient (it will update often and be updated by basically a pointy haired boss). I tried wrapping it with a caching layer like <%@ taglib uri="/WEB-INF/classes/oscache.tld" prefix="oscache" %> <oscache:cache time="60"> <div class="pagecontent"> ..... my logic </div> </oscache:cache> This is nice until the 60 seconds is over. The next query after that blocks until the 8 minutes of rendering is done with again. I would need a way to build a pattern something like: If there is no version of the dynamic content in the cache run the actual logic (and populate the cache for subsequent requests) If there is a non-expired version of the dynamic content in the cache serve the output of the JSP logic from the cache If there is an expired version of the dynamic content in the cache serve the output of the JSP logic still from the cache AND run the JSP logic in the background so that the cache gets updated transparently to the user - avoiding the user have to wait for 8 minutes I found out that at least EHCache might be able to do some asynchronous cache updating but it did not sadly seem to apply to the JSP tags... Also I have to take in 10-20 parameters for the actual logic of the JSP and some of them should be used as a key for caching. Code example and/or pointers would be greatly appreciated. I do not frankly care if the solution provided is extremely ugly. I just want a simple 5 minute caching with asynchronous cache update taking into account some parameters as a key.

    Read the article

  • forward invocation, by hand vs magically?

    - by John Smith
    I have the following two classes: //file FruitTree.h @interface FruitTree : NSObject { Fruit * f; Leaf * l; } @end //file FruitTree.m @implementation FruitTree //here I get the number of seeds from the object f @end //file Fruit @interface Fruit : NSObject { int seeds; } -(int) countfruitseeds; @end My question is about how I request the number of seeds from f. I have two choices. Either: Since I know f I can explicitly call it, i.e. I implement the method -(int) countfruitseeds { return [f countfruitseeds]; } Or: I can just use forwardInvocation: - (NSMethodSignature *)methodSignatureForSelector:(SEL)selector { // does the delegate respond to this selector? if ([f respondsToSelector:selector]) return [f methodSignatureForSelector:selector]; else if ([l respondsToSelector:selector]) return [l methodSignatureForSelector:selector]; else return [super methodSignatureForSelector: selector]; } - (void)forwardInvocation:(NSInvocation *)invocation { [invocation invokeWithTarget:f]; } (Note this is only a toy example to ask my question. My real classes have lots of methods, which is why I am asking.) Which is the better/faster method?

    Read the article

  • UIActivityIndicatorView in a class without a view

    - by Structurer
    Hi, I have defined a class that does a lengthy task and I call it from several other classes. Now I want to show an activity indicator while this task is doing its thing, and then remove it once it's done. Since this is just a boring background task, this class doesn't have a view, and I guess that is where I run into my problem. I can't get this thing to show. This is what I have done in my class: UIActivityIndicatorView *activityIndicator = [[UIActivityIndicatorView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 32.0f, 32.0f)]; [activityIndicator setCenter:CGPointMake(160.0f, 208.0f)]; activityIndicator.activityIndicatorViewStyle = UIActivityIndicatorViewStyleWhite; UIView *contentView = [[UIView alloc]initWithFrame:[[UIScreen mainScreen] applicationFrame]]; [contentView addSubview:activityIndicator]; [activityIndicator startAnimating]; // Do the class lengthy task that takes several seconds..... [contentView release]; [activityIndicator release]; I guess I do something wrong when I get the contentView, but how should I get it properly? Thanks for any advice...
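
    Two likely problems (not from the original post): the contentView created here is never added to the window, so nothing it contains can appear on screen, and if the lengthy task runs synchronously on the main thread the spinner never gets a chance to draw anyway. A sketch that attaches the indicator to the key window and pushes the work onto a background thread (doLengthyTask: is an invented selector):

        UIWindow *window = [[UIApplication sharedApplication] keyWindow];

        UIActivityIndicatorView *activityIndicator =
            [[UIActivityIndicatorView alloc]
                initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleWhiteLarge];
        activityIndicator.center = window.center;
        [window addSubview:activityIndicator];
        [activityIndicator startAnimating];

        // Run the lengthy work off the main thread so the spinner can animate.
        [self performSelectorInBackground:@selector(doLengthyTask:) withObject:activityIndicator];

        // ...and at the end of doLengthyTask:, back on the main thread:
        // [activityIndicator performSelectorOnMainThread:@selector(removeFromSuperview)
        //                                      withObject:nil
        //                                   waitUntilDone:NO];
        // [activityIndicator release];  // pre-ARC, matching the question's code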

    Read the article

  • Where do I put Javassist code?

    - by DutrowLLC
    I have an application running on Google App Engine. I'm using Restlet and I have a couple of layers set up, including the Restlet layer, the model layer, the business layer, and the data layer. I'm attempting to use Javassist to modify some classes, but I'm unsure where to actually put the code. I tried to put the code in the static initialization block: public class Person { String firstName; String getFirstName(){return null;} static{ ClassPool pool = ClassPool.getDefault(); try { CtClass CtPerson = pool.get("Person"); CtMethod CtGetFirstName = CtPerson.getDeclaredMethod("GetFirstName"); CtGetFirstName.setBody("return firstName;"); CtPerson.toClass(); } catch (Exception e) { e.printStackTrace(); } } } ...but that resulted in this error: javassist.CannotCompileException:.....attempted duplicate class definition...". I guess it makes sense that I can't edit the class file in the middle of its generation. I know the code works because I was able to run it correctly by simply putting it in a location that would run when I sent the program a command (accessed a Restlet resource). The code ran fine if an instance of the class had not already been instantiated; however, once I instantiated an instance of the affected class, the Javassist code failed. I assume I need to put this code somewhere that it will run only once: after the program starts, directly before the class is instantiated for the first time, or, even better, at compile time.

    Read the article

  • jquery: Writing a method

    - by Mark
    This is the same question as this: http://stackoverflow.com/questions/2890620/jquery-running-a-function-in-a-context-and-adding-to-a-variable But that question was poorly posed. Trying again, I'm trying to do something like this: I have a bunch of elements I want to grab data from; the data is in the form of classes on the elements' children. <div id='container'> <span class='a'></span> <span class='b'></span> <span class='c'></span> </div> <div id='container2'> <span class='1'></span> <span class='2'></span> <span class='3'></span> </div> I have a method like this: jQuery.fn.grabData = function(expr) { return this.each(function() { var self = $(this); self.find("span").each(function (){ var info = $(this).attr('class'); collection += info; }); }); }; I run the method like this: var collection = ''; $('#container').grabData(); $('#container2').grabData(); The calls should add to the same collection so that in the end console.log(collection); gives: abc123 But collection is undefined in the method. How can I let the method know which collection it should be adding to? Thanks.
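
    The plugin body only sees collection if that variable is genuinely global (or otherwise in scope where the plugin is defined); a cleaner shape — a sketch, not from the original post — is to have the plugin return the collected string and concatenate at the call site, at the cost of the usual "return this" chaining convention:

        // Returns the concatenated class names instead of mutating an outer variable.
        jQuery.fn.grabData = function () {
            var collected = '';
            this.find('span').each(function () {
                collected += jQuery(this).attr('class');
            });
            return collected;
        };

        var collection = jQuery('#container').grabData() + jQuery('#container2').grabData();
        console.log(collection);  // "abc" + "123" -> "abc123"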

    Read the article

  • Wordpress/CSS: List Style Problem.

    - by Ben
    Hi all! I am reasonably new to Wordpress custom themes but I do have some knowledge of CSS, although I am terrible at customising other people's code, which is why I have the problem I am about to explain. I have started a website that my girlfriend and I are doing together. It uses the Simplo Wordpress theme, which is pretty cool, but when you add a post that has a list in it, it doesn't show the bullets on each of the items. I have tried adding 'list-style:circle;' to the .postItem class and a couple of other divs and classes that I thought were relevant, but it doesn't work. You can take a look for yourself; it may be easier to understand my problem that way. If you go to: http://brokeandstarving.com/234/bennos-shepherds-pie/ there is a list of ingredients, and the steps under the method are a list as well. My question is, where do I edit the CSS to make the bullets appear next to the list items? I hope that makes sense. Thanks in advance, - Ben
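
    A hedged guess at the cause (not from the original post): WordPress theme resets commonly set list-style: none and zero padding on ul/li, and list-style on the surrounding div has no effect because bullets are controlled on the list elements themselves. Something along these lines in the theme's style.css, assuming .postItem really is the post wrapper:

        /* Target the list elements inside the post, not the wrapper div. */
        .postItem ul {
            margin-left: 1.5em;        /* restore the indent a theme reset usually removes */
        }

        .postItem ul li {
            list-style: disc outside;  /* overrides a reset such as "li { list-style: none; }" */
        }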

    Read the article
