Search Results

Search found 27047 results on 1082 pages for 'multiple projects'.

  • Best practices for managing updates to a database with a complex set of changes

    - by Sarge
    I am writing an application where I have some publicly available information in a database which I want the users to be able to edit. The information is not textual like a wiki but is similar in concept because the edits bring the public information increasingly closer to the truth. The changes will affect multiple tables and the update needs to be automatically checked before affecting the public tables. I'm working on the design and I'm wondering if there are any best practices that might help with some particular issues. I want to provide undo capability. I want to show the user the combined result of all their changes. When the user says they're done, I need to check the underlying public data to make sure it hasn't been changed by somebody else. My current plan is to have the user work in a set of tables set up as a private working area. Once they're ready they can kick off a process to check everything and update the public tables. Undo can be recorded using the Command pattern, saving each command to a table. Are there any techniques I might have missed or useful papers or patterns? Thanks in advance!
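
    A minimal sketch of the Command-pattern idea mentioned above, in Python with a hypothetical db handle (its fetch_value/update/insert methods are assumptions, not a real API): each edit records enough state to reverse itself and is persisted to an undo table so the history survives a restart.

        import json

        class UpdateFieldCommand(object):
            """Records enough state to apply and later undo one field change."""
            def __init__(self, table, row_id, column, new_value):
                self.table = table
                self.row_id = row_id
                self.column = column
                self.new_value = new_value
                self.old_value = None

            def execute(self, db):
                # Capture the previous value before overwriting it.
                self.old_value = db.fetch_value(self.table, self.row_id, self.column)
                db.update(self.table, self.row_id, {self.column: self.new_value})
                # Persist the command so undo is possible across sessions.
                db.insert("undo_log", {"payload": json.dumps(vars(self))})

            def undo(self, db):
                db.update(self.table, self.row_id, {self.column: self.old_value})

    The staging tables can also carry a version or timestamp column copied from the public rows; publishing then only proceeds if those versions still match, which covers the "changed by somebody else" check.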

  • Handling file uploads with JavaScript and Google Gears, is there a better solution?

    - by gnarf
    So - I've been using this method of file uploading for a bit, but it seems that Google Gears has poor support for the newer browsers that implement the HTML5 specs. I've heard the word deprecated floating around a few channels, so I'm looking for a replacement that can accomplish the following tasks, and support the new browsers. I can always fall back to Gears / standard file POSTs, but the following items make my process much simpler: Users MUST be able to select multiple files for uploading in the dialog. I MUST be able to receive status updates on the transmission of a file. (progress bars) I would like to be able to use PUT requests instead of POST. I would like to be able to easily attach these events to existing HTML elements using JavaScript, i.e. the file selection should be triggered on a <button> click. I would like to be able to control response/request parameters easily using JavaScript. I'm not sure if the new HTML5 browsers have support for the desktop/request objects Gears uses, or if there is a Flash uploader that has these features that I am missing in my Google searches. An example of uploading code using Gears: // select some files: var desktop = google.gears.factory.create('beta.desktop'); desktop.openFiles(selectFilesCallback); function selectFilesCallback(files) { $.each(files,function(k,file) { // this code actually goes through a queue, and creates some status bars // but it is unimportant to show here... sendFile(file); }); } function sendFile(file) { var request = google.gears.factory.create('beta.httprequest'); request.open('PUT', upl.url); request.setRequestHeader('filename', file.name); request.upload.onprogress = function(e) { // gives me % status updates... allows e.loaded/e.total }; request.onreadystatechange = function() { if (request.readyState == 4) { // completed the upload! } }; request.send(file.blob); return request; } Edit: apparently Flash isn't capable of using PUT requests, so I have changed it to a "like" instead of a "must".

  • Computing complex math equations in python

    - by dassouki
    Are there any libraries or techniques that simplify computing equations? Take the following two examples: F = B * { [ a * b * sumOf (A / B ''' for all i ''' ) ] / [ sumOf(c * d * j) ] } where: F = cost from i to j B, a, b, c, d, j are all vectors in the format [ [zone_i, zone_j, cost_of_i_to_j], [..]] This should produce a vector F [ [1,2, F_1_2], ..., [i,j, F_i_j] ] T_ij = [ P_i * A_i * F_i_j] / [ SumOf [ Aj * F_i_j ] // j = 1 to j = n ] where: n is the number of zones T = vector [ [1, 2, A_1_2, P_1_2], ..., [i, j, A_i_j, P_i_j] ] F = vector [1, 2, F_1_2], ..., [i, j, F_i_j] so P_i would be the sum of all P_i_j for all j and Aj would be sum of all P_j for all i I'm not sure what I'm looking for, but perhaps a parser for these equations or methods to deal with multiple multiplications and products between vectors? To calculate some of the factors, for example A_j, this is what I use: from collections import defaultdict A_j_dict = defaultdict(float) for A_item in TG: A_j_dict[A_item[1]] += A_item[3] Although this works fine, I really feel that it is a brute-force / hacky method and unmaintainable in case we want to add more variables or parameters. Are there any math equation parsers you'd recommend? Side note: these equations are used to model travel. Currently I use Excel to solve a lot of these equations, and I find that process to be daunting. I'd rather move to Python where it pulls the data directly from our database (Postgres) and outputs the results into the database. All that is figured out. I'm just struggling with evaluating the equations themselves. Thanks :)
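
    For the second formula, one way to keep the code close to the math (a sketch only, assuming the trip-distribution reading of the formula where the A in the numerator is meant to be A_j) is to key the vectors by zone rather than carrying [i, j, value] triplets:

        def trip_distribution(P, A, F):
            """P[i]: productions, A[j]: attractions, F[(i, j)]: friction factors."""
            T = {}
            for i in P:
                denom = sum(A[j] * F[(i, j)] for j in A)   # SumOf[ A_j * F_i_j ] over j
                for j in A:
                    T[(i, j)] = P[i] * A[j] * F[(i, j)] / denom
            return T

        # Tiny example with two zones:
        P = {1: 100.0, 2: 50.0}
        A = {1: 80.0, 2: 70.0}
        F = {(1, 1): 1.0, (1, 2): 0.5, (2, 1): 0.4, (2, 2): 1.0}
        print(trip_distribution(P, A, F))

    Building the zone-keyed dicts from the database rows is the same defaultdict trick already shown, just done once up front, which tends to be easier to extend than juggling positional indices.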

  • Refresh decorator

    - by Morgoth
    I'm trying to write a decorator that 'refreshes' after being called, but where the refreshing only occurs once after the last function exits. Here is an example: @auto_refresh def a(): print "In a" @auto_refresh def b(): print "In b" a() If a() is called, I want the refresh function to be run after exiting a(). If b() is called, I want the refresh function to be run after exiting b(), but not after a() when called by b(). Here is an example of a class that does this: class auto_refresh(object): def __init__(self, f): print "Initializing decorator" self.f = f def __call__(self, *args, **kwargs): print "Before function" if 'refresh' in kwargs: refresh = kwargs.pop('refresh') else: refresh = False self.f(*args, **kwargs) print "After function" if refresh: print "Refreshing" With this decorator, if I run b() print '---' b(refresh=True) print '---' b(refresh=False) I get the following output: Initializing decorator Initializing decorator Before function In b Before function In a After function After function --- Before function In b Before function In a After function After function Refreshing --- Before function In b Before function In a After function After function So when written this way, not specifying the refresh argument means that refresh is defaulted to False. Can anyone think of a way to change this so that refresh is True when not specified? Changing the refresh = False to refresh = True in the decorator does not work: Initializing decorator Initializing decorator Before function In b Before function In a After function Refreshing After function Refreshing --- Before function In b Before function In a After function Refreshing After function Refreshing --- Before function In b Before function In a After function Refreshing After function because refresh then gets called multiple times in the first and second case, and once in the last case (when it should be once in the first and second case, and not in the last case).
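
    One way to get refresh-on-by-default while still refreshing only once after the outermost call (a rough sketch, not the only approach) is to keep a nesting counter shared by all decorated functions and refresh only when it drops back to zero:

        class auto_refresh(object):
            _depth = 0  # shared across every decorated function

            def __init__(self, f):
                self.f = f

            def __call__(self, *args, **kwargs):
                refresh = kwargs.pop('refresh', True)   # default is now True
                auto_refresh._depth += 1
                try:
                    result = self.f(*args, **kwargs)
                finally:
                    auto_refresh._depth -= 1
                # Only the outermost call (depth back to 0) triggers the refresh.
                if refresh and auto_refresh._depth == 0:
                    print("Refreshing")
                return result

    With this, b() and b(refresh=True) refresh exactly once after b returns, a() called inside b() never refreshes on its own, and b(refresh=False) skips the refresh entirely.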

  • How can I work around the fact that in C++, sin(M_PI) is not 0?

    - by Adam Doyle
    In C++, const double Pi = 3.14159265; cout << sin(Pi); // displays: 3.58979e-009 it SHOULD display the number zero. I understand this is because Pi is being approximated, but is there any way I can have a value of Pi hardcoded into my program that will return 0 for sin(Pi)? (a different constant maybe?) In case you're wondering what I'm trying to do: I'm converting polar to rectangular, and while there are some printf() tricks I can do to print it as "0.00", it still doesn't consistently return decent values (in some cases I get "-0.00"). The lines that require sine and cosine are: x = r*sin(theta); y = r*cos(theta); BTW: my rectangular-to-polar conversion is working fine... it's just the polar-to-rectangular one. Thanks! Edit: I'm looking for a workaround so that I can print sin(some multiple of Pi) as a nice round number to the console (ideally without a thousand if-statements). Edit: In case anyone's curious, this is what I landed on: double sin2(double theta) // in degrees { double s = sin(toRadians(theta)); if (fabs(s - (int)s) < 0.000001) { return floor(s + 0.5); } return s; } where toRadians() is a macro that converts to radians

  • How to add an XML parameter to a stored procedure in C#?

    - by salvationishere
    I am developing a C# web application in VS 2008 which interacts with my Adventureworks database in my SQL Server 2008. Now I am trying to add new records to one of the tables which has an XML column in it. How do I do this? This is the error I'm getting: System.Data.SqlClient.SqlException was caught Message="XML Validation: Text node is not allowed at this location, the type was defined with element only content or with simple content. Location: /" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=16 LineNumber=22 Number=6909 Procedure="AppendDataC" Server="." State=1 StackTrace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at ADONET_namespace.ADONET_methods.AppendDataC(DataRow d, Hashtable ht) in C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\ADONET methods.cs:line 212 InnerException: And this is a portion of my code in C#: try { SqlConnection conn2 = new SqlConnection(connString); SqlCommand cmd = conn2.CreateCommand(); cmd.CommandText = "dbo.AppendDataC"; cmd.CommandType = CommandType.StoredProcedure; cmd.Connection = conn2; ... sqlParam10.SqlDbType = SqlDbType.VarChar; SqlParameter sqlParam11 = cmd.Parameters.AddWithValue("@" + ht["@col11"], d[10]); sqlParam11.SqlDbType = SqlDbType.VarChar; SqlParameter sqlParam12 = cmd.Parameters.AddWithValue("@" + ht["@col12"], d[11]); sqlParam12.SqlDbType = SqlDbType.Xml; ... conn2.Open(); cmd.ExecuteNonQuery(); //This is the line it fails on and then jumps //to the Catch statement conn2.Close(); errorMsg = "The Person.Contact table was successfully updated!"; } catch (Exception ex) { Right now in my text input MDF file I have the XML parameter as: '<Products><id>3</id><id>6</id><id>15</id></Products>' Is this valid format for XML?

  • EAGLContext, EAGLSharegroups, RenderBuffers, FrameBuffers, oh my!

    - by quixoto
    Hi all, I'm trying to wrap my head around the OpenGL object model on iPhone OS. I'm currently rendering into a few different UIViews (built on CAEAGLLayers) on the screen. I currently have each of these using a separate EAGLContext, each of which has a color renderbuffer and a framebuffer. I'm rendering similar things in them, and I'd like to share textures between these instances to save memory overhead. My current understanding is that I could use the same setup (some number of contexts, each with an FBO/RBO), but if I spawn the later ones using the EAGLSharegroup of the first one, then I can simply use the texture names (GLuints) from the first one in the later ones. Is this accurate? If this is the case, I guess the followup question is: what's the benefit to having it be a "sharegroup"? Could I just reuse the same context, and attach multiple FBOs/RBOs to that context? I think I'm struggling with the abstraction layer of a sharegroup, which seems to share "objects" (textures and other named things) but not "state" (matrices, enabled/disabled states), which is owned by the context. What's the best way to think of this? Thanks for any enlightenment!

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like: public double[] runMyAlgorithm(double[] inputData); or alternatively a reference could be passed to the array to store the output data: public void runMyAlgorithm(double[] inputData, double[] outputData); Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are: Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but will produce a lot of garbage. Pre-allocate temporary arrays and store them as final fields in the algorithm object - big downside would be that this would mean that only one thread could run the algorithm at any one time. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space. Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array so that different threads will use a different area of the array independently. Will obviously require some code to manage the offsets and allocation of the array ranges. Any thoughts on which approach would be best (and why)?

  • Visual Studio soft-crashing when encountering XAML errors during InitializeComponent()

    - by Aren
    I've been having some serious issues with Visual Studio 2010 as of late. It's been crashing in a peculiar way when I encounter certain types of XAML errors during the InitializeComponent() of a control/window. The program breaks and Visual Studio gears up like it's catching an exception (because it is) and then stops midway, displaying a broken highlight in my XAML file with no details as to what is wrong. Example: there are no pop-outs or details anywhere about what is wrong, only a callstack that points to my InitializeComponent() call. Now normally I'd just do some trial and error to fix this problem, and find out where I messed up, but the real problem isn't my code. Visual Studio is rendered completely useless at this point. It reports my application still in "Running" mode. The Stop/Break/Restart buttons on the toolbar or in the menus don't do anything (but grey out). Closing the application does not stop this behaviour; closing Visual Studio gets it stuck in a massive loop where it yells at me complaining every open file is not in the debug project, then repeats this process when I have exhausted every open file. I have to force-close devenv.exe, and after this happening 3-4 times in a row it's a lot of wasted time (as my projects are usually pretty big and Studio can be quite slow at loading). To the point: Has anyone else experienced this? How can I stop Studio from locking up? Can I at LEAST get information out of this beast another way so I can fix my XAML error sooner rather than after 3-4 trial-and-error compiles yielding the same crash? Any & all help would be appreciated. Visual Studio 2010 version: 10.0.30319.1RTM Edit & Update: FWIW, mostly the errors that cause this are XamlParseExceptions (I figured this out after I found what was wrong with my XAML). I think I need to be clearer though: I'm not looking for the solution to my code problem, as these are usually typos / small things; I'm looking for a solution to Visual Studio getting all buggered up as a result. The particular error in the above image that 100% for sure caused this was a XamlParseException caused by forgetting a Value attribute on a data trigger. I've fixed that part, but it still doesn't tell me why my Studio becomes a lump of neutered program when a perfectly normal exception is thrown in the parsing of the XAML. Code that will cause this issue (at least for me): This is the base template WPF Application, with the following Window.xaml code. The problem is a missing Value="True" on the <DataTrigger ...> in the template. It generates a XamlParseException and Visual Studio crashes as described above when debugging it. Final Notes: The following solutions did not help me: Restarting Visual Studio Rebooting Reinstalling Visual Studio

  • alignment and granularity of mmap

    - by OwnWaterloo
    I am confused by the specification of mmap. Let pa be the return address of mmap (the same as in the specification): pa = mmap(addr, len, prot, flags, fildes, off); In my opinion, after the function call succeeds the following range is valid: [ pa, pa+len ) My question is whether the following range is also valid: [ round_down(pa, pagesize) , round_up(pa+len, pagesize) ) [ base, base + size ] for short That is to say: is the base always aligned on the page boundary? is the size always a multiple of pagesize (the granularity is pagesize in other words)? Thanks for your help. I think it is implied in this paragraph: The off argument is constrained to be aligned and sized according to the value returned by sysconf() when passed _SC_PAGESIZE or _SC_PAGE_SIZE. When MAP_FIXED is specified, the application shall ensure that the argument addr also meets these constraints. The implementation performs mapping operations over whole pages. Thus, while the argument len need not meet a size or alignment constraint, the implementation shall include, in any mapping operation, any partial page specified by the range [pa,pa+len). But I'm not sure and I do not have much experience with POSIX. Please show me some more explicit and more definitive evidence, or show me at least one system which supports POSIX and has different behavior. Thanks again.

  • Search book by title, and author

    - by Swoosh
    I have a table with columns: author firstname, author lastname, and booktitle. Multiple users are inserting into the database, through an import, and I'd like to avoid duplicates. So I'm trying to do something like this: I have a record in the db: First Name: "Isaac" Last Name: "Assimov" Title: "I, Robot" If the user tries to add it again, it would be basically a non-split text (would not be split up into author firstname, author lastname, and booktitle). So it would basically look like this: "Isaac Asimov - I Robot" or "Asimov, Isaac - I Robot" or "I Robot by Isaac Asimov" You see what I am getting at? (I cannot force the user to split up all the books into author firstname, author lastname, and booktitle, and I don't even like the idea of forcing the user, because it's not too user-friendly.) What is the best way (in SQL) to compare all these possible book-data scenarios to what I have in the database, so as not to add the same book twice? I was thinking about a possibility of suggesting to the user: "is THIS the book you are trying to add?" (imagine a list instead of the THIS word, just like on Stack Overflow - Ask Question - Related Questions). I was thinking about SOUNDEX and maybe even the LIKE operator, but so far I didn't get the results I was hoping for.
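
    Not SQL, but a sketch of the matching idea behind the "is THIS the book?" suggestion: reduce any author/title string to a canonical token set (lower-case, punctuation stripped, a few stop words dropped - the stop-word list here is only an example) and compare it against candidate rows pulled from the table:

        import re

        STOP_WORDS = {"by", "the", "a", "an"}

        def canonical_key(text):
            words = re.findall(r"[a-z0-9]+", text.lower())
            return frozenset(w for w in words if w not in STOP_WORDS)

        existing = canonical_key("Isaac Asimov I, Robot")
        candidates = ["Isaac Asimov - I Robot",
                      "Asimov, Isaac - I Robot",
                      "I Robot by Isaac Asimov"]
        for candidate in candidates:
            key = canonical_key(candidate)
            overlap = len(key & existing) / float(len(key | existing))
            print(candidate, overlap)   # high overlap -> offer it as a suggestion

    SOUNDEX applied to the individual tokens (rather than the whole string) can then catch the "Assimov"/"Asimov" style misspellings that exact token matching misses.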

  • Python: Networked IDLE?

    - by Rosarch
    Is there any existing web app that lets multiple users work with an interactive IDLE-type session at once? Something like: IDLE 2.6.4 Morgan: >>> letters = list("abcdefg") Morgan: >>> # now, how would you iterate over letters? Jack: >>> for char in letters: print "char %s" % char char a char b char c char d char e char f char g Morgan: >>> # nice nice If not, I would like to create one. Is there some module I can use that simulates an interactive session? I'd want an interface like this: class InteractiveSession(): ''' An interactive Python session ''' def putLine(line): ''' Evaluates line ''' pass def outputLines(): ''' A list of all lines that have been output by the session ''' pass def currentVars(): ''' A dictionary of currently defined variables and their values ''' pass (Although that last function would be more of an extra feature.) To formulate my problem another way: I'd like to create a new front end for IDLE. How can I do this?
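
    The standard library's code module can provide the session-simulation part. Below is a rough sketch of the requested interface built on code.InteractiveInterpreter; it captures stdout only, ignores the "more input needed" flag that runsource() returns for multi-line blocks, and lets tracebacks go to stderr - all simplifications:

        import code
        import sys
        from io import StringIO

        class InteractiveSession(object):
            ''' An interactive Python session backed by code.InteractiveInterpreter. '''

            def __init__(self):
                self.vars = {}
                self.interp = code.InteractiveInterpreter(self.vars)
                self.lines = []

            def putLine(self, line):
                ''' Evaluates line, recording the prompt and any printed output. '''
                self.lines.append(">>> " + line)
                buffer = StringIO()
                old_stdout, sys.stdout = sys.stdout, buffer
                try:
                    self.interp.runsource(line)
                finally:
                    sys.stdout = old_stdout
                if buffer.getvalue():
                    self.lines.append(buffer.getvalue().rstrip())

            def outputLines(self):
                ''' A list of all lines that have been output by the session. '''
                return list(self.lines)

            def currentVars(self):
                ''' A dictionary of currently defined variables and their values. '''
                return dict(self.vars)

    Each user's input could then be pushed through a shared InteractiveSession instance by the web layer, which is the part left entirely open here.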

  • rewrite not a member of LiftRules

    - by José Leal
    Hi guys, I was following the http://www.assembla.com/wiki/show/liftweb/URL_Rewriting tutorial for URL rewriting in Liftweb, but I get this error: error: value rewrite is not a member of object net.liftweb.http.LiftRules .. it is really odd, and the documentation says that it exists. I'm using the IDEA IDE, and I've done everything from scratch, using the Lift Maven blank archetype. Some more info: [INFO] ------------------------------------------------------------------------ [INFO] Building Joseph3 [INFO] task-segment: [tomcat:run] [INFO] ------------------------------------------------------------------------ [INFO] Preparing tomcat:run [INFO] [resources:resources {execution: default-resources}] [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] Copying 0 resource [INFO] [yuicompressor:compress {execution: default}] [INFO] nb warnings: 0, nb errors: 0 [INFO] artifact org.mortbay.jetty:jetty: checking for updates from scala-tools.org [INFO] artifact org.mortbay.jetty:jetty: checking for updates from central [INFO] [compiler:compile {execution: default-compile}] [INFO] Nothing to compile - all classes are up to date [INFO] [scala:compile {execution: default}] [INFO] Checking for multiple versions of scala [INFO] /home/dpz/Scala/Doit/Joseph3/src/main/scala:-1: info: compiling [INFO] Compiling 2 source files to /home/dpz/Scala/Doit/Joseph3/target/classes at 1274922123910 [ERROR] /home/dpz/Scala/Doit/Joseph3/src/main/scala/bootstrap/liftweb/Boot.scala:16: error: value rewrite is not a member of object net.liftweb.http.LiftRules [INFO] LiftRules.rewrite.prepend(NamedPF("ProductExampleRewrite") { [INFO] ^ [ERROR] one error found [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1(Exit value: 1) [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 19 seconds [INFO] Finished at: Thu May 27 03:02:07 CEST 2010 [INFO] Final Memory: 20M/175M [INFO] ------------------------------------------------------------------------ Process finished with exit code 1

  • Is It Incorrect to Make Domain Objects Aware of The Data Access Layer?

    - by Noah Goodrich
    I am currently working on rewriting an application to use Data Mappers that completely abstract the database from the Domain layer. However, I am now wondering which is the better approach to handling relationships between Domain objects: (1) call the necessary find() method from the related data mapper directly within the domain object, or (2) write the relationship logic into the native data mapper (which is what the examples tend to do in PoEAA) and then call the native data mapper function within the domain object. Either way, it seems to me that in order to preserve the 'Fat Model, Skinny Controller' mantra, the domain objects have to be aware of the data mappers (whether it be their own or that they have access to the other mappers in the system). Additionally, it seems that Option 2 unnecessarily complicates the data access layer, as it creates table access logic across multiple data mappers instead of confining it to a single data mapper. So, is it incorrect to make the domain objects aware of the related data mappers and to call data mapper functions directly from the domain objects? Update: These are the only two solutions that I can envision to handle the issue of relations between domain objects. Any example showing a better method would be welcome.

  • Wpf binding with nested properties

    - by byte
    ViewModel: I have a property of type Member called KeyMember. The 'Member' type has an ObservableCollection called Addresses. The Address is composed of two strings - street and postcode. View: I have a ListBox whose item source needs to be set to the ViewModel's KeyMember property, and it should display the Street of all the Past Addresses in the collection. Question: My ViewModel and View relationship is established properly. I am able to write a data template for the above simple case as below <ListBox ItemsSource="{Binding KeyMember.Addresses}"> <ListBox.ItemTemplate> <DataTemplate DataType="Address"> <TextBlock Text="{Binding Street}"/> </DataTemplate> </ListBox.ItemTemplate> </ListBox> How would I write the DataTemplate if I change KeyMember from type Member to ObservableCollection<Member>, assuming that the collection has only one element? PS: I know that for multiple elements in the collection, I will have to implement the Master-Detail pattern/scenario.

  • Read HTML file into email

    - by cast01
    I've written a script to send HTML emails and all works well. I have the email stored in a separate HTML file which is then read in using a while loop and fgets(). However, I want to be able to pass variables into the HTML. For example, in an HTML file I may have something like.. <body> Dear Name <br/> Thank you for your recent purchase </body> and I read this into a string like so: $file = fopen($filename, "r"); while(!feof($file)) { $html.= fgets($file, 4096); } fclose ($file); I want to be able to replace "Name" in the HTML file with a variable, and I'm not entirely sure of the best way to do this. I could always make my own tag and then use regex to replace that with the name once I've read the file into the string, but I'm wondering if there is a better/easier method to do this. (On a side note, if anyone knows whether it's better to use file_get_contents instead of multiple calls to fgets, I'd be interested to know.)

  • TeamCity + HG. Only pull (push?) passing builds

    - by ColoradoMatt
    Feels like with the popularity of continuous integration this one should be a piece of cake but I am stumped. I am setting up TeamCity with HG. I want to be able to push changesets up to a repository that TeamCity watches and runs builds on changes. That's easy. Next, if a build passes, I want that changeset to be pulled into a "clean" repository... one that contains only passing changesets. Should be easy but... TeamCity 6 supports multiple build steps and if any step fails, the rest don't run. My thought was to put a build step at the end that does a pull (or optionally a push?) to get the passing changeset into the clean repository. I am trying to use PsExec to run hg on the box with the repositories. If I try to run just a plain 'hg pull' it can't find the hg.exe even though it is set in the path and I have used the -w flag. I have tried putting a .bat file in the clean repository that takes a revision parameter and it works fine... locally. When I try to run the .bat file remotely (using PsExec) it runs everything fine but it tries to run it on the build agent. Even if I set the -w argument it runs the .bat file there but tries to run the contents on the build agent box. Am I just WAY off in my approach? Seems like this is a pretty obvious thing to do so either my Google skills are waning or no one thinks this is worthy of writing about. Either way, I am stuck in SVN land trying to get out so I would appreciate some help!

  • Communication between EJB3 Instances (JEE inter-bean communication) possible?

    - by Hank
    I'm designing a part of a JEE6 application, consisting of EJB3 beans. Part of the requirements are multiple parallel (say a few hundred) long-running (over days) database hunts. Individual hunts have different search parameters (start time, end time, query filter). Parameters may get changed over time. Currently I'm thinking of the following: SearchController (Stateless Session Bean) formulates a set of search parameters, sends it off to a SearchListener via JMS SearchListener (Message Driven Bean) receives search parameters, instantiates a SearchWorker with the parameters SearchWorker (SLSB) hunts repeatedly through the database; when it finds something, the result is sent off via JMS, and the search continues; when the given 'end-time' has been reached, it ends What I'm wondering now: Is there a problem with EJB3 instances running for days? (Other than that I need to be able to deal with container restarts...) How do I know how many and which EJB instances of SearchWorker are currently running? Is it possible to communicate with them individually (similar to sending a System V signal to a Unix process), e.g. to send new parameters, to end an instance, etc.?

  • [Java] Cluster Shared Cache

    - by GuiSim
    Hi everyone. I am searching for a Java framework that would allow me to share a cache between multiple JVMs. What I would need is something like Hazelcast but without the "distributed" part. I want to be able to add an item to the cache and have it automatically synced to the other "group member" caches. If possible, I'd like the cache to be synced via reliable multicast (or something similar). I've looked at Shoal, but sadly the "Distributed State Cache" seems like an insufficient implementation for my needs. I've looked at JBoss Cache, but it seems a little overkill for what I need to do. I've looked at JGroups, which seems to be the most promising tool for what I need to do. Does anyone have experience with JGroups? Preferably if it was used as a shared cache? Any other suggestions? Thanks! EDIT: We're starting tests to help us decide between Hazelcast and Infinispan; I'll accept an answer soon. EDIT: Due to a sudden requirements change, we don't need a distributed map anymore. We'll be using JGroups as a low-level signaling framework. Thanks everyone for your help.

  • What's the best Scala build system?

    - by gatoatigrado
    I've seen questions about IDE's here -- Which is the best IDE for Scala development? and What is the current state of tooling for Scala?, but I've had mixed experiences with IDEs. Right now, I'm using the Eclipse IDE with the automatic workspace refresh option, and KDE 4's Kate as my text editor. Here are some of the problems I'd like to solve: use my own editor IDEs are really geared at everyone using their components. I like Kate better, but the refresh system is very annoying (it doesn't use inotify, rather, maybe a 10s polling interval). The reason I don't use the built-in text editor is because broken auto-complete functionalities cause the IDE to hang for maybe 10s. rebuild only modified files The Eclipse build system is broken. It doesn't know when to rebuild classes. I find myself almost half of the time going to project-clean. Worse, it seems even after it has finished building my project, a few minutes later it will pop up with some bizarre error (edit - these errors appear to be things that were previously solved with a project clean, but then come back up...). Finally, setting "Preferences / Continue launch if project contains errors" to "prompt" seems to have no effect for Scala projects (i.e. it always launches even if there are errors). build customization I can use the "nightly" release, but I'll want to modify and use my own Scala builds, not the compiler that's built into the IDE's plugin. It would also be nice to pass [e.g.] -Xprint:jvm to the compiler (to print out lowered code). fast compiling Though Eclipse doesn't always build right, it does seem snappy -- even more so than fsc. I looked at Ant and Maven, though haven't employed either yet (I'll also need to spend time solving #3 and #4). I wanted to see if anyone has other suggestions before I spend time getting a suboptimal build system working. Thanks in advance! UPDATE - I'm now using Maven, passing a project as a compiler plugin to it. It seems fast enough; I'm not sure what kind of jar caching Maven does. A current repository for Scala 2.8.0 is available [link]. The archetypes are very cool, and cross-platform support seems very good. However, about compile issues, I'm not sure if fsc is actually fixed, or my project is stable enough (e.g. class names aren't changing) -- running it manually doesn't bother me as much. If you'd like to see an example, feel free to browse the pom.xml files I'm using [github]. UPDATE 2 - from benchmarks I've seen, Daniel Spiewak is right that buildr's faster than Maven (and, if one is doing incremental changes, Maven's 10 second latency gets annoying), so if one can craft a compatible build file, then it's probably worth it...

  • Efficient method to calculate the rank vector of a list in Python

    - by Tamás
    I'm looking for an efficient way to calculate the rank vector of a list in Python, similar to R's rank function. In a simple list with no ties between the elements, element i of the rank vector of a list l should be x if and only if l[i] is the x-th element in the sorted list. This is simple so far; the following code snippet does the trick: def rank_simple(vector): return [rank for rank in sorted(range(len(vector)), key=vector.__getitem__)] Things get complicated, however, if the original list has ties (i.e. multiple elements with the same value). In that case, all the elements having the same value should have the same rank, which is the average of their ranks obtained using the naive method above. So, for instance, if I have [1, 2, 3, 3, 3, 4, 5], the naive ranking gives me [0, 1, 2, 3, 4, 5, 6], but what I would like to have is [0, 1, 3, 3, 3, 5, 6]. What would be the most efficient way to do this in Python? Footnote: I don't know if NumPy already has a method to achieve this or not; if it does, please let me know, but I would be interested in a pure Python solution anyway as I'm developing a tool which should work without NumPy as well.
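
    A pure-Python sketch of the tie-averaging step (SciPy's rankdata does the same thing with 1-based ranks, if that dependency is ever acceptable): sort once, then walk the sorted order in runs of equal values and give every member of a run the average position:

        def rank_with_ties(vector):
            order = sorted(range(len(vector)), key=vector.__getitem__)
            ranks = [0.0] * len(vector)
            i = 0
            while i < len(order):
                j = i
                # Advance j to the end of the run of equal values.
                while j < len(order) and vector[order[j]] == vector[order[i]]:
                    j += 1
                average = (i + j - 1) / 2.0     # mean of positions i .. j-1
                for k in range(i, j):
                    ranks[order[k]] = average
                i = j
            return ranks

        print(rank_with_ties([1, 2, 3, 3, 3, 4, 5]))   # [0.0, 1.0, 3.0, 3.0, 3.0, 5.0, 6.0]

    The sort dominates, so this stays O(n log n) without NumPy.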

  • C# reference collection for storing reference types

    - by ivo s
    I'd like to implement a collection (something like List<T>) which would hold all my objects that I have created in the entire life span of my application, as if it's an array of pointers in C++. The idea is that when my process starts I can use a central factory to create all objects and then periodically validate/invalidate their state. Basically I want to make sure that my process only deals with valid instances and I don't re-fetch information I already fetched from the database. So all my objects will basically be in one place - my collection. A cool thing I can do with this is avoid database calls to get data from the database if I already got it (even if I updated it after retrieval it's still up-to-date - if, of course, some other process didn't update it, but that's a different concern). I don't want to be calling new Customer("James Thomas"); again if I initted James Thomas already sometime in the past. Currently I will end up with multiple copies of the same object across the appdomain - some out of sync, others in sync - and even though I deal with this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my appdomain (per process would be even better, if possible). I can't use regular collections like List or ArrayList, for example, because I cannot pass parameters by their real local reference to their existing Add() methods where I'm creating them using ref, so that's not too good I think. So how can this be implemented / can it be implemented at all? A 'linked list' type of class with all methods working with ref & out params is what I'm thinking of now, but it may get ugly pretty quickly. Is there another way to implement such a collection, like RefList<T>.Add(ref T obj)? So the bottom line is: I don't want to re-create an object if I've already created it before during the entire application life, unless I decide to re-create it explicitly (maybe it's out-of-date or something, so I have to fetch it again from the db). Are there alternatives, maybe?

  • Filtering records in app-engine (Java)

    - by Manjoor
    I have the following code running perfectly. It filters records based on a single parameter. public List<Orders> GetOrders(String email) { PersistenceManager pm = PMF.get().getPersistenceManager(); Query query = pm.newQuery(Orders.class); query.setFilter("Email == pEmail"); query.setOrdering("Id desc"); query.declareParameters("String pEmail"); query.setRange(0,50); return (List<Orders>) query.execute(email); } Now I want to filter on multiple parameters. sdate and edate are the start date and end date. In the datastore they are saved as Date (not String). public List<Orders> GetOrders(String email,String icode,String sdate, String edate) { PersistenceManager pm = PMF.get().getPersistenceManager(); Query query = pm.newQuery(Orders.class); query.setFilter("Email == pEmail"); query.setFilter("ItemCode == pItemCode"); query.declareParameters("String pEmail"); query.declareParameters("String pItemCode"); .....//Set filter and declare other 2 parameters .....// ...... query.setRange(0,50); query.setOrdering("Id desc"); return (List<Orders>) query.execute(email,icode,sdate,edate); } Any clue?

  • Recursive N-way merge/diff algorithm for directory trees?

    - by BobMcGee
    What algorithms or Java libraries are available to do N-way, recursive diff/merge of directories? I need to be able to generate a list of folder trees that have many identical files, and have subdirectories with many similar files. I want to be able to use 2-way merge operations to quickly remove as much redundancy as possible. Goals: Find pairs of directories that have many similar files between them. Generate a short list of directory pairs that can be synchronized with a 2-way merge to eliminate duplicates Should operate recursively (there may be nested duplicates of higher-level directories) Run time and storage should be O(n log n) in the number of directories and files Should be able to use an embedded DB or page to disk for processing more files than fit in memory (100,000+). Optional: generate an ancestry and change-set between folders Optional: sort the merge operations by how many duplicates they can eliminate I know how to use hashes to find duplicate files in roughly O(n) space, but I'm at a loss for how to go from this to finding partially overlapping sets between folders and their children. EDIT: some clarification. The tricky part is the difference between "exact same" contents (otherwise hashing file hashes would work) and "similar" (which will not). Basically, I want to feed this algorithm a set of directories and have it return a set of 2-way merge operations I can perform in order to reduce duplicates as much as possible with as few conflicts as possible. It's effectively constructing an ancestry tree showing which folders are derived from each other. The end goal is to let me incorporate a bunch of different folders into one common tree. For example, I may have a folder holding programming projects, and then copy some of its contents to another computer to work on it. Then I might back up an intermediate version to a flash drive. Except I may have 8 or 10 different versions, with slightly different organizational structures or folder names. I need to be able to merge them one step at a time, so I can choose how to incorporate changes at each step of the way. This is actually more or less what I intend to do with my utility (bring together a bunch of scattered backups from different points in time). I figure if I can do it right I may as well release it as a small open source util. I think the same tricks might be useful for comparing XML trees though.
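
    This is not the full ancestry algorithm the question asks for, only a sketch (in Python) of the first step: hash every file once, then score top-level directory pairs by Jaccard overlap of their content hashes, to show where candidate 2-way merges could come from.

        import hashlib
        import itertools
        import os
        from collections import defaultdict

        def file_hash(path, chunk_size=1 << 20):
            digest = hashlib.sha1()
            with open(path, "rb") as handle:
                for block in iter(lambda: handle.read(chunk_size), b""):
                    digest.update(block)
            return digest.hexdigest()

        def directory_signatures(roots):
            ''' Map each root directory to the set of content hashes found under it. '''
            signatures = defaultdict(set)
            for root in roots:
                for dirpath, _, filenames in os.walk(root):
                    for name in filenames:
                        signatures[root].add(file_hash(os.path.join(dirpath, name)))
            return signatures

        def similar_pairs(signatures, threshold=0.5):
            ''' Directory pairs whose contents overlap enough to be merge candidates. '''
            pairs = []
            for a, b in itertools.combinations(signatures, 2):
                overlap = len(signatures[a] & signatures[b]) / float(len(signatures[a] | signatures[b]))
                if overlap >= threshold:
                    pairs.append((overlap, a, b))
            return sorted(pairs, reverse=True)

    Recursing into subdirectories and spilling the hash sets to an embedded store instead of memory is where the real O(n log n) work described in the goals would begin.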

  • Database design for a media server containing movies, music, tv and everything in between?

    - by user364114
    In the near future I am attempting to design a media server as a personal project. My first consideration to get the project underway is architecture; it will certainly be web based, but more specifically than that I am looking for suggestions on the database design. So far I am considering something like the following, where I am using [] to represent a table, the first text is the table name to give an idea of purpose, and the items within {} would be fields of the table. Also note, fid is a functional id referencing some other table. [Item {id, value/name, description, link, type}] - this could be any entity, single song or whole music album, game, movie - almost see this as a recursive relation, i.e. a song is an item but an album that song is part of is also an item, or for example a TV season is an item, with multiple items being TV episodes [Type {id, fid, mime type, etc}] - file type specific information - could identify how code handles streaming/sending this item to a user [Location {id, fid, path to file?}] [Users {id, username, email, password, ...? }] - user account information [UAC {id, fid, access level}] - I almost feel it's more flexible to separate access control permissions from the user accounts themselves [ItemLog {id, fid, fid2, timestamp}] - fid for user id, and fid2 for item id - this way we know which user accessed what, and when [UserLog {id, fid, timestamp}] - both are logs for access, whether login or last item access [Quota {id, fid, cap}] - some sort of way to throttle users from queuing up the entire site and letting it download ... Suggestions or comments are welcome, as the hope is that this project will be an open source project once some code is laid out.
