Search Results

Search found 6834 results on 274 pages for 'dojo require'.

  • Creating a 'flexible' XML schema

    - by Fiona Holder
    I need to create a schema for an XML file that is pretty flexible. It has to meet the following requirements:
    1) Validate some elements that we require to be present, and whose exact structure we know
    2) Validate some elements that are optional, and whose exact structure we know
    3) Allow any other elements
    4) Allow them in any order
    Quick example XML:
      <person>
        <age></age>
        <lastname></lastname>
        <height></height>
      </person>
    My attempt at an XSD:
      <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:element name="person">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="firstname" minOccurs="0" type="xs:string"/>
              <xs:element name="lastname" type="xs:string"/>
              <xs:any processContents="lax" minOccurs="0" maxOccurs="unbounded" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:schema>
    My XSD satisfies requirements 1 and 3. It would not be a valid schema, however, if both firstname and lastname were optional, so it doesn't satisfy requirement 2, and the order is fixed, which fails requirement 4. Now all I need is something to validate my XML. I'm open to suggestions on any way of doing this, either programmatically in .NET 3.5, with another type of schema, etc. Can anyone think of a solution that satisfies all 4 requirements?
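
    One route the question leaves open, programmatic validation in .NET 3.5, can combine lax schema validation for the known elements with a hand-rolled presence check for the required ones. The sketch below is only an illustration of that idea, not a confirmed answer: the element names and file paths are placeholders, and if requirement 4 has to hold for the known elements as well, the schema would need to be loosened further and more of the checking moved into code like this.

      using System;
      using System.Linq;
      using System.Xml.Linq;
      using System.Xml.Schema;

      static class FlexibleValidator
      {
          // Elements that must be present (requirement 1); position is not checked (requirement 4).
          static readonly string[] Required = { "lastname" };

          public static bool Validate(string xmlPath, string xsdPath)
          {
              var schemas = new XmlSchemaSet();
              schemas.Add(null, xsdPath);              // the XSD with processContents="lax" for unknown elements

              var doc = XDocument.Load(xmlPath);
              bool ok = true;

              // Structural validation of the elements the schema knows about (requirements 2 and 3).
              doc.Validate(schemas, (sender, e) => { Console.WriteLine(e.Message); ok = false; });

              // Presence check for required elements, independent of position (requirements 1 and 4).
              var present = doc.Root.Elements().Select(e => e.Name.LocalName).ToList();
              foreach (var name in Required.Where(name => !present.Contains(name)))
              {
                  Console.WriteLine("Missing required element: " + name);
                  ok = false;
              }
              return ok;
          }
      }

    The split here is a design choice rather than the only answer: the schema carries the structure of known elements, while the code carries the "must be present, in any order" rule that XSD 1.0 sequences cannot express cleanly.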

    Read the article

  • Recommended/Standard image thumbnails sizes

    - by Rakan
    Hello. Based on my experience in web development, the HTML page designs for a website may have different <img> tags showing the same image in different sizes. For example, the website's main page might show the image at 500*500, the listing page might show it at 200*150, and other sections of the website will use different sizes again. As you can see, the ratios of the images to be displayed to the user are different, which causes problems with the image being stretched, pixelated, or otherwise degraded. How I currently handle this issue is by using a jQuery cropping tool to let the user select the area to crop the thumbnail from. The cropped image is then resized using imagick to fit every position in the page design. However, although I can force the user to crop an image to a pre-defined ratio, the HTML will still contain image placeholders that require a different ratio for their thumbnails. So for every image, I would have to force the user to crop the uploaded image to every possible ratio my site requires. I don't think this is a friendly solution. How would you do it? Is there a standard/recommended ratio for the web? Thanks

    Read the article

  • What's the best way of accessing a DRb object (e.g. Ruby Queue) from Scala (and Java)?

    - by Tom Morris
    I have built a variety of little scripts using Ruby's very simple Queue class, and share the Queue between Ruby and JRuby processes using DRb. It would be nice to be able to access these from Scala (and maybe Java) using JRuby. I've put together something using Scala and the JSR-223 interface to access jruby-complete.jar:
      import javax.script._
      class DRbQueue(host: String, port: Int) {
        private var engine = DRbQueue.factory.getEngineByName("jruby")
        private var invoker = engine.asInstanceOf[Invocable]
        engine.eval("require \"drb\" ")
        private var queue = engine.eval("DRbObject.new(nil, \"druby://" + host + ":" + port.toString + "\")")
        def isEmpty(): Boolean = invoker.invokeMethod(this.queue, "empty?").asInstanceOf[Boolean]
        def size(): Long = invoker.invokeMethod(this.queue, "length").asInstanceOf[Long]
        def threadsWaiting: Long = invoker.invokeMethod(this.queue, "num_waiting").asInstanceOf[Long]
        def offer(obj: Any) = invoker.invokeMethod(this.queue, "push", obj.asInstanceOf[java.lang.Object])
        def poll(): Any = invoker.invokeMethod(this.queue, "pop")
        def clear(): Unit = { invoker.invokeMethod(this.queue, "clear") }
      }
      object DRbQueue {
        var factory = new ScriptEngineManager()
      }
    (It conforms roughly to the java.util.Queue interface, but I haven't declared the interface because it doesn't implement the element and peek methods, since the Ruby class doesn't offer them.) The problem with this is the type conversion. JRuby is fine with Scala's Strings, because they are Java strings. But it falls over if I give it a Scala Int or Long, one of the other Scala types (List, Set, RichString, Array, Symbol), or some other custom type. This seems unnecessarily hacky: surely there has got to be a better way of doing RMI/DRb interop without having to use the JSR-223 API. I could make it so that the offer method serializes the object to, say, a JSON string and accepts a structural type of only objects that have a toJson method. I could then write a Ruby wrapper class (or just monkeypatch Queue) to parse the JSON. Is there any point in carrying on with trying to access DRb from Java/Scala? Might it just be easier to install a real message queue? (If so, any suggestions for a lightweight JVM-based MQ?)

    Read the article

  • Connect to Facebook with Javascript Client Library using Adobe AIR

    - by Chuck Hriczko
    I'm having an issue connecting to Facebook through the JavaScript Client Library in Adobe AIR. It works fine in the browser, but in Adobe AIR it seems unable to find the Facebook functions. Here is some code I copied from the Facebook website (and I have the xd_receiver.htm file in the correct path too):
      <textarea style="width: 1000px; height: 300px;" id="_traceTextBox"> </textarea>
      <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script>
      <script type="text/javascript">
      //<![CDATA[
      var api = FB.Facebook.apiClient;
      // require user to login
      api.requireLogin(function(exception){
        FB.FBDebug.logLevel=1;
        FB.FBDebug.dump("Current user id is " + api.get_session().uid);
        // Get friends list
        // 5-14-09: this code below is broken, correct code follows
        //api.friends_get(null, function(result){
        //  Debug.dump(result, 'friendsResult from non-batch execution ');
        //  });
        api.friends_get(new Array(), function(result, exception){
          FB.FBDebug.dump(result, 'friendsResult from non-batch execution ');
        });
      });
      //]]>
      </script>

    Read the article

  • How to hold a queue of messages and have a group of working threads without polling?

    - by Mark
    I have a workflow that I want to look something like this:
                                               / Worker 1 \
      =Request Channel= - [Holding Queue|||] -   Worker 2   - =Response Channel=
                                               \ Worker 3 /
    That is:
    - Requests come in and enter a FIFO queue
    - Identical workers then pick up tasks from the queue
    - At any given time, any worker may work on only one task
    - When a worker is free and the holding queue is non-empty, the worker should immediately pick up another task
    - When a task is complete, a worker places the result on the Response Channel
    I know there are QueueChannels in Spring Integration, but these channels require polling (which seems suboptimal). In particular, if a worker can be busy, I'd like the worker to be busy. Also, I've considered avoiding the queue altogether and simply letting tasks round-robin to all workers, but it's preferable to have a single waiting line, as some tasks may be completed faster than others. Furthermore, I'd like insight into how many jobs are remaining (which I can get from the queue) and the ability to cancel all or particular jobs. How can I implement this message queuing/work distribution pattern while avoiding polling? Edit: It appears I'm looking for the Message Dispatcher pattern -- how can I implement this using Spring/Spring Integration?
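
    The dispatcher pattern itself is framework-agnostic: a blocking queue feeding a fixed pool of consumers gives exactly the "no polling, idle workers block until work arrives" behaviour described above. The sketch below shows the shape in C# purely as an illustration of the pattern, not as a Spring Integration answer; all type names are made up.

      using System;
      using System.Collections.Concurrent;
      using System.Threading;

      // Minimal dispatcher sketch: a FIFO holding queue and a fixed set of workers.
      // Idle workers block on the queue rather than polling it.
      class Dispatcher
      {
          private readonly BlockingCollection<string> requests = new BlockingCollection<string>();
          private readonly BlockingCollection<string> responses = new BlockingCollection<string>();

          public Dispatcher(int workerCount)
          {
              for (int i = 0; i < workerCount; i++)
              {
                  int id = i;
                  new Thread(() =>
                  {
                      // GetConsumingEnumerable blocks when the queue is empty and
                      // hands each task to exactly one worker.
                      foreach (var task in requests.GetConsumingEnumerable())
                          responses.Add("worker " + id + " handled " + task);
                  }) { IsBackground = true }.Start();
              }
          }

          public void Submit(string task) { requests.Add(task); }    // request channel
          public int Pending { get { return requests.Count; } }      // "jobs remaining" insight
          public string NextResult() { return responses.Take(); }    // response channel
      }

    The same shape -- a subscribable channel plus a bounded pool of consumers rather than a polled QueueChannel -- is what to look for on the Spring Integration side.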

    Read the article

  • Problem with QPropertyAnimation in Qt

    - by belhaouary
    Hi, I have a problem with QPropertyAnimation in Qt. My code:
      QString my_text = "Hello Animation";
      ui->textBrowser->setText((quote_text));
      ui->textBrowser->show();
      QPropertyAnimation animation2(ui->textBrowser,"geometry");
      animation2.setDuration(1000);
      animation2.setStartValue(QRect(10,220/4,1,1));
      animation2.setEndValue(QRect(10,220,201,71));
      animation2.setEasingCurve(QEasingCurve::OutBounce);
      animation2.start();
    Up to this point it seems fine, but the problem is that I can only see the animation when I show a message box after it:
      QMessageBox m;
      m.setGeometry(QRect(100,180,100,50));
      m.setText("close quote");
      m.show();
      m.exec();
    When I remove the code for this message box, I can't see the animation anymore. The functionality of the program doesn't require showing this MessageBox at all. Can anybody help?

    Read the article

  • Is there a IDE/compiler PC benchmark I can use to compare my PCs performance?

    - by RickL
    I'm looking for a benchmark (and results on other PCs) which would give me an idea of the development performance gain I could get by upgrading my PC; the benchmark could also be used to justify the upgrade to my boss. I use Visual Studio 2008 for my development, so I'd like to get an idea of by what factor the build times would be improved, and it would also be good if the benchmark could incorporate IDE performance (i.e. when editing, using IntelliSense, opening code files, etc.) into its result. I currently have an AMD 3800x2, with 2GB RAM on Vista 32. For example, I'd like to know what kind of performance gain I'd see in Visual Studio 2008 with a Q6600 and 4GB RAM on Vista 64. And also with other processors and other RAM sizes... and whether hard disk performance is a big factor. EDIT: I mentioned Vista 64 because I'm aware that Vista 32 can only use 3GB RAM maximum. So I'd presume that wanting to use more RAM would require Vista 64, but perhaps it could still be slower overall if there is a large overhead in running the 32-bit VS 2008 on a 64-bit OS.

    Read the article

  • Where in the standard is forwarding to a base class required in these situations?

    - by pgast
    Maybe even better is: Why does the standard require forwarding to a base class in these situations? (yeah yeah yeah - Why? - Because.)
      class B1 {
      public:
        virtual void f()=0;
      };
      class B2 {
      public:
        virtual void f(){}
      };
      class D : public B1,public B2{
      };
      class D2 : public B1,public B2{
      public:
        using B2::f;
      };
      class D3 : public B1,public B2{
      public:
        void f(){ B2::f(); }
      };
      D d;
      D2 d2;
      D3 d3;
    EDG gives:
      sourceFile.cpp
      sourceFile.cpp(24) : error C2259: 'D' : cannot instantiate abstract class due to following members:
        'void B1::f(void)' : is abstract
        sourceFile.cpp(6) : see declaration of 'B1::f'
      sourceFile.cpp(25) : error C2259: 'D2' : cannot instantiate abstract class due to following members:
        'void B1::f(void)' : is abstract
        sourceFile.cpp(6) : see declaration of 'B
    and similarly for the MS compiler. I might buy the first case, D. But in D2, f is unambiguously defined by the using declaration -- why is that not enough for the compiler to be required to fill out the vtable? Where in the standard is this situation defined?

    Read the article

  • SQL query for an Access database needed

    - by masfenix
    Hey guys, first of all sorry, I can't log in using my Yahoo provider. Anyway, I have this problem; let me explain it to you, and then I'll show you a picture. I have an Access db table. It has 'report id', 'recipient id', 'recipient name' and 'report req'. What the table "means" is: does the user using that report still require it, or can we decommission it? Here is what the data looks like (I've blocked out company userids and usernames): *check the link below, I can't post pictures because the Yahoo OpenID provider isn't working. So basically I need three select queries: 1) Select all the reports where, for each report, ALL the users have said no to 'reportreq'. In plain English, I want a listing of all the reports that we have to decommission because no user wants them. 2) Select all the reports where the report is required and the batchprintcopy is more than 0. This way we can see which reports need to be printed, and save paper instead of printing all the reports. 3) A listing of all the reports where the reportreq field is empty. I think I can figure this one out myself. This is using Access/VBA and the data will be exported to an Excel spreadsheet. I just need a simple query if it exists, OR an algorithm to do it quickly. I just tried making a "matrix" and it took about 2 hours to populate. https://docs.google.com/uc?id=0B2EMqbpeBpQkMTIyMzA5ZjMtMGQ3Zi00NzRmLWEyMDAtODcxYWM0ZTFmMDFk&hl=en_US

    Read the article

  • Design pattern to integrate Rails with a Comet server

    - by empire29
    I have a Ruby on Rails (2.3.5) application and an APE (Ajax Push Engine) server. When records are created within the Rails application, I need to push the new record out on the applicable channels to the APE server. Records can be created in the Rails app by the traditional path through the controller's create action, or by several event machines that are constantly monitoring various input streams and creating records when they see data that meets certain criteria. It seems to me that the best/right place to put the code that pushes the data out to the APE server (which in turn pushes it out to the clients) is in the Model's after_create hook (since not all record creations will flow through the controller's create action). The final caveat is that I want to push a piece of formatted HTML out to the APE server (rather than a JSON representation of the data). The reasons I want to do this are: 1) I already have logic to produce the desired layout in existing partials; 2) I don't want to create a JavaScript implementation of the partials (JavaScript that takes a JSON object and creates all the HTML around it for presentation), which would quickly become a maintenance nightmare. The problem is that this would require "rendering" partials from within the Model (which I'm having trouble doing anyhow, because they don't seem to have access to Helpers when they're rendered in this manner). Anyhow -- just wondering what the right way to go about organizing all of this is. Thanks

    Read the article

  • Restoring dev db from production: Running a set of SQL scripts based on a list stored in a table?

    - by mattley
    I need to restore a backup from a production database and then automatically reapply SQL scripts (e.g. ALTER TABLE, INSERT, etc.) to bring that db schema back to what was under development. There will be lots of scripts, from a handful of different developers, and they won't all be in the same directory. My current plan is to list the scripts with the full filesystem path in a table in a pseudo-system database, then create a stored procedure in this database which will first run RESTORE DATABASE and then run a cursor over the list of scripts, creating a command string for SQLCMD for each script and executing that SQLCMD string for each script using xp_cmdshell. The sequence of cursor-sqlstring-xp_cmdshell-sqlcmd feels clumsy to me. Also, it requires turning on xp_cmdshell. I can't be the only one who has done something like this. Is there a cleaner way to run a set of scripts that are scattered around the filesystem on the server? Especially, a way that doesn't require xp_cmdshell?

    Read the article

  • Extracting word from file using grep or sed

    - by Marco
    Hi, I have a file in the format below:
      File : \\dvtbbnkapp115\nautilus\030db28a-f241-4054-a0e3-9bfa7e002535.dip was processed.
      Entries Found : 0
      Unarchived Documents : 1
      File Size : 1 K
      Error : The following line could not be processed. Bad Document Type.
      Error : Marketing and Contact preference change update||7000003735||078ef1f3-db6b-46a8-bb0d-c40bb2296ab5.pdf
      File : \\dvtbbnkapp115\nautilus\078ef1f3-db6b-46a8-bb0d-c40bb2296ab5.dip was processed.
      Entries Found : 0
      Unarchived Documents : 1
      File Size : 1 K
      Error : The following line could not be processed. Bad Document Type.
      Error : Declined - Bureau Data (process)||7000003723|252204|2f1d71f4-052c-49f1-95cf-9ca9b4268f0c.pdf
      File : \\dvtbbnkapp115\nautilus\2f1d71f4-052c-49f1-95cf-9ca9b4268f0c.dip was processed.
      Entries Found : 0
      Unarchived Documents : 1
      File Size : 1 K
      Error : The following line could not be processed. Bad Document Type.
      Error : Unable to call - please contact|40640510016710|7000003180||3e6a792f-c136-4a4b-a654-37f4476ccef8.pdf
    I need to extract just the pdf file names after the double pipe and write them to a file. I am a novice when it comes to unix/sed/grep commands; I have tried but had no luck. Any ideas or examples I could use to extract the information above? Thanks

    Read the article

  • Doubly Linked Lists Implementation

    - by user552127
    Hi all, I have looked at most threads here about doubly linked lists but I'm still unclear about the following. I am working through the Goodrich and Tamassia book in Java. About doubly linked lists -- please correct me if I am wrong -- a doubly linked list differs from a singly linked list in that a node can be inserted anywhere, not just after the head or before the tail, using both the next and prev references, while in a singly linked list this insertion anywhere in the list is not possible? If one wants to insert a node into a doubly linked list, then the argument should be either the node after the to-be-inserted node or the node before the to-be-inserted node? But if this is so, then I don't understand how to pass the node before or after. Should we be displaying all nodes inserted so far and asking the user to select the node before or after which the new node is to be inserted? My doubt is how to pass this node, because I assume that will require the next and prev nodes of these nodes as well. For example: Head<-A<-B<-C<-D<-E<-tail. If Z is the new node to be inserted after, say, D, then how should node D be passed? I am confused with this though it seems pretty simple to most. But please do explain. Thanks, Sanjay
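
    On the "how should node D be passed" point: the usual answer is that the caller simply holds a reference to the node object (obtained from a search, an iterator, or an earlier insertion), and that object already carries its own prev/next links, so nothing else needs to be passed. Here is a small sketch of that idea -- written in C# rather than the book's Java, but structurally the same, and with all names invented for illustration:

      // Minimal doubly linked list sketch with sentinel head/tail nodes.
      public class Node<T>
      {
          public T Value;
          public Node<T> Prev;
          public Node<T> Next;
          public Node(T value) { Value = value; }
      }

      public class DoublyLinkedList<T>
      {
          private readonly Node<T> head = new Node<T>(default(T));   // sentinel
          private readonly Node<T> tail = new Node<T>(default(T));   // sentinel

          public DoublyLinkedList()
          {
              head.Next = tail;
              tail.Prev = head;
          }

          // The caller passes the existing node itself; its Prev/Next fields
          // are all the information the insertion needs.
          public Node<T> AddAfter(Node<T> position, T value)
          {
              var node = new Node<T>(value) { Prev = position, Next = position.Next };
              position.Next.Prev = node;
              position.Next = node;
              return node;
          }

          // One way the caller gets hold of "node D" in the first place.
          public Node<T> Find(T value)
          {
              for (var n = head.Next; n != tail; n = n.Next)
                  if (Equals(n.Value, value)) return n;
              return null;
          }
      }

    Usage for the example above would be: var d = list.Find("D"); list.AddAfter(d, "Z"); -- no separate prev/next arguments are required.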

    Read the article

  • Algorithm for generating an array of non-equal costs for a transport problem optimization

    - by Carlos
    I have an optimizer that solves a transportation problem, using a cost matrix of all the possible paths. The optimizer works fine, but if two of the costs are equal, the solution contains one more path than the minimum number of paths. (Think of it as load balancing routers: if two routes have the same cost, you'll use them both.) I would like the minimum number of routes, and to do that I need a cost matrix that doesn't have two costs that are equal within a certain tolerance. At the moment, I'm passing the cost matrix through a baking function which tests every entry for equality against each of the other entries, and moves it by a fixed percentage if it matches. However, this approach seems to require N^2 comparisons, and if the starting values are all the same, the last cost will be r^N bigger (r being the arbitrary fixed percentage). There is also the problem that by multiplying by the percentage, you can end up on top of another value. So the problem seems to have an element of recursion, or at least repeated checking, which bloats the code. The current implementation is basically not very good (I won't paste my GOTO-using code here for you all to mock), and I'd like to improve it. Is there a name for what I'm after, and is there a standard implementation? Example: {1,1,2,3,4,5} (tol = 0.05) becomes {1,1.05,2,3,4,5}
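
    One way to avoid both the N^2 comparisons and the risk of landing on another value is to sort the costs once and then sweep through them, pushing each value just above its predecessor whenever it falls inside the tolerance band. The following is only a sketch of that idea in C# (the multiplicative tolerance and non-negative costs are assumptions), not a named standard algorithm:

      using System;
      using System.Linq;

      static class CostSeparator
      {
          // Returns a copy of `costs` in which no two entries are equal within `tol`
          // (multiplicative tolerance), using one sort and one linear sweep.
          // Assumes non-negative costs; with zero or negative costs an additive
          // epsilon would be safer than a percentage.
          public static double[] Separate(double[] costs, double tol)
          {
              var result = (double[])costs.Clone();

              // Indices ordered by cost, so equal or near-equal values become neighbours.
              int[] order = Enumerable.Range(0, result.Length)
                                      .OrderBy(i => result[i])
                                      .ToArray();

              for (int k = 1; k < order.Length; k++)
              {
                  double prev = result[order[k - 1]];
                  double min  = prev * (1.0 + tol);      // smallest allowed next value
                  if (result[order[k]] < min)
                      result[order[k]] = min;            // push just outside the tolerance band
              }
              return result;
          }
      }

      // Example from the question: {1,1,2,3,4,5} with tol = 0.05 -> {1, 1.05, 2, 3, 4, 5}

    Because adjustments are made in sorted order, a bumped value can never collide with one it has already passed, which removes the repeated checking mentioned above; the whole thing is O(N log N) for the sort plus a single pass.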

    Read the article

  • How do I bind to a custom view in Cocoa using Xcode 4?

    - by Newt
    I'm a beginner when it comes to writing Mac apps and working with Cocoa, so please forgive my ignorance. I'm looking to create a custom view, that exposes some properties, which I can then bind to an NSObjectController. Since it's a custom view, the Bindings Inspector obviously doesn't list any of the properties I've added to the view that I can then bind to using Interface Builder. After turning to the Stackoverflow/Google for help, I've stumbled across a couple of possible solutions, but neither seem to be quite right for my situation. The first suggested creating an IBPlugin, which would then mean my bindings would be available in the Bindings Inspector. I could then bind the view to the controller using IB. Apparently IBPlugins aren't supported in Xcode 4, so that one's out the window. I'm also assuming (maybe wrongly) that IBPlugins are no longer supported because there's a better way of doing such things these days? The second option was to bind the controller to the view programmatically. I'm a bit confused as to exactly how I would achieve this. Would it require subclassing NSObjectController so I can add the calls to bind to the view? Would I need to add anything to the view to support this? Some examples I've seen say you'd need to override the bind method, and others say you don't. Also, I've noticed that some example custom views call [self exposeBinding:@"bindingName"] in the initializer. From what I gather from various sources, this is something that's related to IBPlugins and isn't something I need to do if I'm not using them. Is that correct? I've found a post on Stackoverflow here which seems to discuss something very similar to my problem, but there wasn't any clear winner as to the best answer. The last comment by noa on 12th Sept seems interesting, although they mention you should be calling exposeBinding:. Is this comment along the right track? Is the call to exposeBinding really necessary? Apologies for any dumb questions. Any help greatly appreciated.

    Read the article

  • @selector and return value

    - by user320926
    The idea is very simple: I have an HTTP download class, and this class must support HTTP authentication. Since it's basically a background thread, I would like to avoid prompting on screen directly; instead I would like to use a delegate method to request the credentials from outside the class, e.g. from a viewController. But I don't know if this is possible or if I have to use a different syntax. The class uses this delegate protocol:
      //Updater.h
      @protocol Updater <NSObject>
      -(NSDictionary *)authRequired;
      @optional
      -(void)statusUpdate:(NSString *)newStatus;
      -(void)downloadProgress:(int)percentage;
      @end
      @interface Updater : NSThread {
      ...
      }
    This is the call to the delegate method:
      //Updater.m
      // This check always fails :(
      if ([self.delegate respondsToSelector:@selector(authRequired:)]) {
          auth = [delegate authRequired];
      }
    This is the implementation of the delegate method:
      //rootViewController.m
      -(NSDictionary *)authRequired;
      {
          // TODO: some kind of popup or modal view
          NSMutableDictionary *ret=[[NSMutableDictionary alloc] init];
          [ret setObject:@"utente" forKey:@"user"];
          [ret setObject:@"password" forKey:@"pass"];
          return ret;
      }

    Read the article

  • Multithreaded Delegates/Events

    - by Matt
    I am trying to disable parts of the UI in a .NET app based on polling done on a background thread. The background thread checks to see if the global database connection the app uses is still open and operable. What I need to do, and would prefer to do without polling on the UI thread, is to add an event handler that can be raised by the background thread if the connection status changes. That way, any form can have a handler that will disable those parts of the UI that require the connection to function. Attempting to use a straight event declaration in the class that holds the thread sub, and raising the event in the background thread, causes cross-thread execution errors regarding accessing UI controls from other threads. Obviously there is a correct way to do this, but we have limited experience with events (single-threaded apps mainly), and almost none with delegates. I've read through documentation and examples for delegates, and it seems to be closer to what we need, but I'm not sure how to make it work in this instance. The app is written mainly in VB.NET, but an example or help in C# is fine too.
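
    Since a C# example is welcome here: one common shape is to let the monitor class raise a plain event from its background thread and have each form's handler marshal back onto the UI thread itself before touching controls. The sketch below is only an illustration of that pattern; the class, event, and control names are made up, and the connection check is a stub.

      using System;
      using System.Threading;
      using System.Windows.Forms;

      // Monitors the shared connection on a background thread and raises an event on changes.
      class ConnectionMonitor
      {
          public event Action<bool> ConnectionStatusChanged;          // true = connected

          public void Start()
          {
              new Thread(() =>
              {
                  bool last = true;
                  while (true)
                  {
                      bool current = CheckConnection();               // stand-in for the real check
                      if (current != last)
                      {
                          last = current;
                          var handler = ConnectionStatusChanged;
                          if (handler != null) handler(current);      // note: still on the background thread
                      }
                      Thread.Sleep(5000);
                  }
              }) { IsBackground = true }.Start();
          }

          private bool CheckConnection() { return true; }             // stub for illustration
      }

      // Each form's handler hops back onto the UI thread before touching any controls.
      class MainForm : Form
      {
          private readonly Button saveButton = new Button();

          public void Attach(ConnectionMonitor monitor)
          {
              monitor.ConnectionStatusChanged += connected =>
              {
                  if (InvokeRequired)
                      BeginInvoke(new Action(() => saveButton.Enabled = connected));
                  else
                      saveButton.Enabled = connected;
              };
          }
      }

    The same pattern carries over to VB.NET: the event may be raised from any thread, as long as every handler that touches UI controls checks InvokeRequired (or posts via a captured SynchronizationContext) before updating them.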

    Read the article

  • Are application servers necessary? Advantages of using one? (And other JEE questions)

    - by Mike
    Apologies for the long question.. there seems to be other similar questions on here but none really clear up my confusion. I'd be really grateful if someone could confirm or correct my understanding: Java Enterprise Edition is a set of APIs for building enterprise applications, which take away the burden of developing parts of the system that aren't actually features of the application you are trying to build (i.e. messaging, transactions etc). To do this, you can use an application server, which implements these APIs. So you could use JBoss, Glassfish, WebSphere, WebLogic etc which would provide your application with these enterprise services. However, there are many other implementations of these individual services available such as ActiveMQ for messaging, Hibernate for persistence, OpenEJB etc. You can download these implementations as Java libraries and include them in your application, and use the services they provide in a similar way to using the services provided by an application server. So if my understanding is correct, my questions are: I've read a lot of places that application servers are necessary for JEE features like EJB, but can't you just use an implementation such as OpenEJB and not need an application server at all? Are there any features that an application server provides which you cannot get from another source? Why would/wouldn't I choose an application server over a custom stack such as Tomcat, OpenEJB, ActiveMQ, and Hibernate? Is Spring a complete alternative to JEE? Does it ever require an application server or always just a servlet container? Why would someone choose Spring over JEE? Any help would be much appreciated!

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example: 1) The user's browser requests http://www.example.com/files/file1.zip 2) Request goes to server A, based on the DNS A record for example.com. 3) Server A analyzes the request and works out that /files/file1.zip is stored on server B. 4) Server A forwards the request to server B. 5) Server B returns file1.zip directly to the user without going through server A. Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC Address Translation and then pass the request back onto the network and for servers outside the network of server A tunneling will be required. For step 5, I simply need to configure server B, as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4? I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.net. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in a ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it. I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key, so it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that, but permissions should also have prevented the leaking in the first place. It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that they can run the few times a year they need to access the restricted data, so the data can be decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do we may leak future data. I think the big disadvantage is that this would require re-writing the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation. Do you have a better suggestion? Which method would you recommend? More importantly, why?
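
    To make the "encrypt in the web app, decrypt only in an offline fat client" idea concrete, here is a rough C# sketch. It is an illustration only: the key size, XML key format, and field-by-field encryption are assumptions, and for values longer than one RSA/OAEP block you would encrypt a per-record symmetric key rather than the data itself.

      using System;
      using System.Security.Cryptography;
      using System.Text;

      static class RestrictedFieldCrypto
      {
          // Web server side: uses the PUBLIC key only, so a compromised server
          // cannot decrypt anything it has already stored.
          public static byte[] Encrypt(string plaintext, string publicKeyXml)
          {
              using (var rsa = new RSACryptoServiceProvider(2048))
              {
                  rsa.FromXmlString(publicKeyXml);                                  // public parameters only
                  return rsa.Encrypt(Encoding.UTF8.GetBytes(plaintext), true);      // OAEP padding
              }
          }

          // Fat client side: the private key never leaves the offline machine.
          public static string Decrypt(byte[] ciphertext, string privateKeyXml)
          {
              using (var rsa = new RSACryptoServiceProvider(2048))
              {
                  rsa.FromXmlString(privateKeyXml);
                  return Encoding.UTF8.GetString(rsa.Decrypt(ciphertext, true));
              }
          }
      }

    With this split, a server compromise can still capture new submissions as they arrive (the "future data" risk mentioned above), but the stored ciphertext is only as exposed as the offline private key.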

    Read the article

  • Implementations of an Interface with Different Types?

    - by b3njamin
    I've searched as best I could but unfortunately I've learned nothing relevant; basically I'm trying to work around the following problem in C#. I have three possible references (refA, refB, refC) and I need to load the correct one depending on a configuration option. So far, however, I can't see a way of doing it that doesn't require me to use the name of the referenced object all through the code (the referenced objects are provided; I can't change them). I hope the following code makes more sense:
      public ??? LoadedClass;
      public Init()
      {
          /* load the object, according to which version we need... */
          if (Config.Version == "refA")
          {
              Namespace.refA LoadedClass = new refA();
          }
          else if (Config.Version == "refB")
          {
              Namespace.refB LoadedClass = new refB();
          }
          else if (Config.Version == "refC")
          {
              Namespace.refC LoadedClass = new refC();
          }
          Run();
      }
      private void Run()
      {
          LoadedClass.SomeProperty...
          LoadedClass.SomeMethod(){ etc... }
      }
    As you can see, I need the loaded class to be public, so in my limited way I'm trying to change the type 'dynamically' as I load whichever real class I want. Each of refA, refB and refC will implement the same properties and methods but with different names. Again, this is what I'm working with, not my design. All that said, I tried to get my head around interfaces (which sound like they're what I'm after) but I'm looking at them and seeing strict types -- which makes sense to me, even if it's not useful to me. Any and all ideas and opinions are welcome and I'll clarify anything if necessary. Excuse any silly mistakes I've made in the terminology; I'm learning all this for the first time. I'm really enjoying working with an OOP language so far though -- coming from PHP this stuff is blowing my mind :-)
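
    Since refA/refB/refC expose the same capabilities under different member names, one hedged sketch of a solution is a small common interface plus one thin wrapper (adapter) per provided class; the interface, wrapper classes, and member names below are all invented for illustration, and the inner member names (PropertyA, MethodA, ...) stand in for whatever the provided assemblies actually call them.

      using System;

      // One common contract for the rest of the code to program against.
      public interface IProviderAdapter
      {
          string SomeProperty { get; }
          void SomeMethod();
      }

      // One thin wrapper per provided class, translating the shared contract
      // onto that class's own member names (names here are hypothetical).
      public class RefAAdapter : IProviderAdapter
      {
          private readonly Namespace.refA inner = new Namespace.refA();
          public string SomeProperty { get { return inner.PropertyA; } }
          public void SomeMethod() { inner.MethodA(); }
      }

      public class RefBAdapter : IProviderAdapter
      {
          private readonly Namespace.refB inner = new Namespace.refB();
          public string SomeProperty { get { return inner.PropertyB; } }
          public void SomeMethod() { inner.MethodB(); }
      }

      public static class ProviderFactory
      {
          // The configuration option picks the wrapper once; everything else
          // only ever sees IProviderAdapter. RefCAdapter would follow the same shape.
          public static IProviderAdapter Create(string version)
          {
              switch (version)
              {
                  case "refA": return new RefAAdapter();
                  case "refB": return new RefBAdapter();
                  default: throw new ArgumentException("Unknown version: " + version);
              }
          }
      }

    The public field then becomes "public IProviderAdapter LoadedClass;", set once in Init() via ProviderFactory.Create(Config.Version), and Run() can call LoadedClass.SomeMethod() without caring which version was configured.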

    Read the article

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a Java project that will allow users to parse multiple files with potentially thousands of lines. The information parsed will be stored in different objects, which will then be added to a collection. Since the GUI won't need to load ALL these objects at once and keep them in memory, I'm looking for an efficient way to load/unload data from files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now. I've also thought about the case where, after loading a subset of the data into the collection and presenting it in the GUI, I need to reload previously viewed data: is the best way to re-run the parser, populate the collection and populate the GUI again, or to find a way to keep the collection in memory, or to serialize/deserialize the collection itself? I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Let's say that I filter on ID, so my new subset could contain data from two previously analyzed subsets. This would be no problem if I kept a master copy of the whole data in memory. I've read that google-collections are good and efficient when handling big amounts of data, and offer methods that simplify lots of things, so they might offer an alternative that would let me keep the collection in memory. This is just general talk; the question of which collection to use is a separate and complex one. Do you know what the general recommendation is for this type of task? I'd like to hear what you've done in similar scenarios. I can provide more specifics if needed.

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes are especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:
    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable
    - each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving stuff out which it would be nice to include -- the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way
    So, my question is, how should I be doing this properly? The requirements are:
    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad
    My first thought was that this kind of incremental backup was what tar was created for: create a tar file once each month, add incrementally to it, and rsync the results to the remote box. But others probably have better suggestions.

    Read the article

  • Pass an array from IronRuby to C#

    - by cgyDeveloper
    I'm sure this is an easy fix and I just can't find it, but here goes: I have a C# class (let's call it Test) in an assembly (let's say SOTest.dll). Here is something along the lines of what I'm doing:
      private List<string> items;
      public List<string> list_items()
      {
          return this.items;
      }
      public void set_items(List<string> new_items)
      {
          this.items = new_items;
      }
    In the IronRuby interpreter I run:
      >>> require "SOTest.dll"
      true
      >>> include TestNamespace
      Object
      >>> myClass = Test.new
      TestNamespace.Test
      >>> myClass.list_items()
      ['Apples', 'Oranges', 'Pears']
      >>> myClass.set_items ['Peaches', 'Plums']
      TypeError: can't convert Array into System::Collections::Generic::List(string)
    I get a similar error whether I make the argument a List<string>, a List<object> or a string[]. What is the proper syntax? I can't find a documented mapping of types anywhere (probably because it's too complicated to define in certain scenarios, given what Ruby can do).
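
    One hedged workaround, staying on the C# side, is to give Test an overload that accepts a plain IEnumerable, so whatever IronRuby passes for a Ruby Array can be copied into the strongly typed list without the caller constructing a System::Collections::Generic::List itself. Whether the Ruby array actually arrives as an IEnumerable implementation is an assumption worth verifying; the overload below is just a sketch.

      using System.Collections;
      using System.Collections.Generic;
      using System.Linq;

      public class Test
      {
          private List<string> items = new List<string>();

          public List<string> list_items()
          {
              return this.items;
          }

          public void set_items(List<string> new_items)
          {
              this.items = new_items;
          }

          // Overload for dynamic callers: copies any enumerable (e.g. a Ruby Array,
          // assuming it surfaces as IEnumerable) into the strongly typed list.
          public void set_items(IEnumerable new_items)
          {
              this.items = new_items.Cast<object>()
                                    .Select(o => o.ToString())
                                    .ToList();
          }
      }

    If overload resolution still prefers the List<string> version and fails, giving the loose overload a distinct name (e.g. a hypothetical set_items_from) sidesteps the question entirely.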

    Read the article

  • Bash PATH: How long is too long?

    - by ajwood
    Hi, I'm currently designing a software quarantine pattern to use on Ubuntu. I'm not sure how standard "quarantine" is in this context, so here is what I hope to accomplish... Inside a particular quarantine is all of the stuff one needs to run an application (bin, share, lib, etc.). Ideally, the quarantine has no leaks, which means it's not relying on any code outside of itself on the system. A quarantine can be defined as a set of executables (and some environment settings needed to make them run). I think it will be beneficial to separate the built packages enough that upgrading to a newer version of the quarantine won't require rebuilding the whole thing: I'll be able to update just a few packages, and then the new quarantine can use some of the old parts and some of the new parts. One issue I'm wondering about is the environment variables I'll be setting up to use a particular quarantine. Is there a hard limit on how big PATH can be (either in number of characters, or in the number of directories it contains)? Might a path be so long that it affects performance? Thanks very much, Andrew p.s. Any other wisdom that might help my design would be greatly appreciated :)

    Read the article
