Search Results

Search found 13430 results on 538 pages for 'self updating'.

  • Hierarchical/Nested Database Structure for Comments

    - by Stephen Melrose
    Hi, I'm trying to figure out the best approach for a database schema for comments. The problem I'm having is that the comments system will need to allow nested/hierarchical comments, and I'm not sure how to design this properly. My requirements are:

    - Comments can be made on comments, so I need to store the tree hierarchy.
    - I need to be able to query the comments in tree-hierarchy order, but efficiently, preferably in a fast single query, though I don't know if that is possible.
    - I'd need to make some weird queries, e.g. pull out the latest 5 root comments, and a maximum of 3 children for each one of those.

    I read an article on the MySQL website on this very subject: http://dev.mysql.com/tech-resources/articles/hierarchical-data.html. The "Nested Set Model" in theory sounds like it will do what I need, except I'm worried about querying it, and also about inserting. If this is the right approach, how would I do my third requirement above? And if I have 2000 comments and I add a new sub-comment on the first comment, that will be a LOT of updating to do. This doesn't seem right to me. Or is there a better approach for the type of data I'm wanting to store and query? Thank you
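
    For comparison, here is a minimal sketch of the other model the MySQL article describes, the adjacency list, using a hypothetical comments table. Inserting a sub-comment touches exactly one row, and the third requirement becomes one small query per root:

        -- Hypothetical adjacency-list schema: each comment stores only its parent.
        CREATE TABLE comments (
            id         INT AUTO_INCREMENT PRIMARY KEY,
            parent_id  INT NULL,                -- NULL marks a root comment
            body       TEXT NOT NULL,
            created_at DATETIME NOT NULL,
            FOREIGN KEY (parent_id) REFERENCES comments(id),
            INDEX idx_parent (parent_id, created_at)
        );

        -- Latest 5 root comments:
        SELECT * FROM comments
        WHERE parent_id IS NULL
        ORDER BY created_at DESC
        LIMIT 5;

        -- Up to 3 children of one root (run once per root from the first query):
        SELECT * FROM comments
        WHERE parent_id = 42
        ORDER BY created_at DESC
        LIMIT 3;

    The trade-off, as the article notes, is that reading an arbitrarily deep tree takes either recursive application code or one query per level, whereas the nested set reads a whole tree in one query at the cost of expensive inserts.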

  • Why doesn't Default route work using Html.ActionLink in this case?

    - by StuperUser
    I have a rather peculiar issue with routing. Coming back to routing after not having to worry about its configuration for a year, I am using the default route and the ignore route for resources:

        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
        routes.MapRoute(
            "Default", // Route name
            "{controller}/{action}/{id}", // URL with parameters
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });

    I have a RulesController with actions for Index and Lorem, and an Index.aspx and Lorem.aspx in the Views/Rules directory. I have an ActionLink aimed at Rules/Index on the master page:

        <li><div><%: Html.ActionLink("linkText", "Index", "Rules")%></div></li>

    The link is rendered as http://localhost:12345/Rules/ and I am getting a 404. When I type Index into the URL, the application routes it to the action. When I change the default route action from "Index" to "Lorem", the ActionLink is rendered as http://localhost:12345/Rules/Index (adding the Index since it's no longer on the default route) and the application routes to the Index action correctly. I have used Phil Haack's Routing Debugger, but entering the URL http://localhost:12345/Rules/ causes a 404 using that too. I think I've covered all of the rookie mistakes, the relevant SO questions and the basic RTFMs. I'm assuming that "Rules" isn't any sort of reserved word in routing. Other than updating the routes and debugging them, what can I look at?
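
    One diagnostic worth trying, purely as a sketch (the "Rules" route name here is hypothetical, not a confirmed fix): register an explicit route for the controller ahead of the default, and see whether /Rules/ starts resolving. If it does, something is interfering with default-value matching on the Default route rather than with the controller itself:

        routes.MapRoute(
            "Rules",                      // hypothetical route name
            "Rules/{action}/{id}",
            new { controller = "Rules", action = "Index", id = UrlParameter.Optional });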

  • [WEB] Local/Dev/Live deployment - best workflow

    - by Adam Kiss
    Hello,

    Situation: we are a little company of 3 people, each with a localhost webserver, and most projects (previous and current) live on one network-shared disk. We have a virtual server hosting some of our clients' sites and our own site. Our standard workflow is:

        Coder PC -> Programmer localhost -> dev domain (client.company.com) -> live version (client.com)

    It often happens that two or three guys are working on the same project at the same time: one is on the dev version, two are on localhost. When finished, we try to synchronize the files onto the dev version and ideally not mess up (thanks ILMV :]) any files, which *knock knock* doesn't happen often. And then one of us deploys the dev version on the live webserver.

    Question: we are looking for a way to simplify this workflow while updating websites, ideally some sort of diff uploader or a VCS (Git/SVN/...), but we are not completely sure where to begin or which way would be ideal. Therefore I ask you, fellow stackoverflowers, for your experience with website/application deployment and your recommended workflow. We will probably also need to use a Mac in the process, so if that won't be a problem, that would be even better. Thank you

  • Is Git ready to be recommended to my boss?

    - by Mike Weller
    I want to recommend Git to my boss as a new source control system, since we're stuck in the 90s with VSS (ouch), but are the tools and 3rd-party support good enough yet? Specifically I'm talking about GUI front-ends similar to TortoiseSVN, decent visual diff/merge support, as well as stuff like email commit notifications and general support from 3rd parties like IDEs and build systems. Even though this will be used by programmers, we really need this kind of stuff in our team. I don't want to leave everyone stuck with a new tool, and even a new source control paradigm (distributed), with nothing but a command-line app and some online tutorials. That would be a step backwards. So what do you think: is Git ready? What decent tools exist for Git, and what third-party development apps support it?

    EDIT: My original question was pretty vague, so I'm updating it to specifically ask for a list of available tools and 3rd-party support for Git. Maybe we can get a community wiki post with a list of stuff. I also do not consider "use Subversion" to be an adequate answer. There are reasons to use a distributed source control system other than offline editing, private and cheap branches being one of them.

  • Building asynchronous cache pattern with JSP

    - by merweirdo
    I have a JSP that takes some 8 minutes to render. The code logic itself cannot be made more efficient (it will be updated often, and by, basically, a pointy-haired boss). I tried wrapping it with a caching layer like:

        <%@ taglib uri="/WEB-INF/classes/oscache.tld" prefix="oscache" %>
        <oscache:cache time="60">
        <div class="pagecontent">
            ..... my logic
        </div>
        </oscache:cache>

    This is nice until the 60 seconds are over; the next request after that blocks until the 8 minutes of rendering are done again. I would need a way to build a pattern something like:

    - If there is no version of the dynamic content in the cache, run the actual logic (and populate the cache for subsequent requests).
    - If there is a non-expired version of the dynamic content in the cache, serve the output of the JSP logic from the cache.
    - If there is an expired version of the dynamic content in the cache, serve the output of the JSP logic still from the cache AND run the JSP logic in the background so that the cache gets updated transparently, sparing the user the 8-minute wait.

    I found out that EHCache might at least be able to do some asynchronous cache updating, but sadly it did not seem to apply to the JSP tags. Also, I have to take in 10-20 parameters for the actual logic of the JSP, and some of them should be used as the caching key. A code example and/or pointers would be greatly appreciated. I frankly do not care if the solution provided is extremely ugly; I just want simple 5-minute caching with asynchronous cache updates, taking some parameters into account as a key.
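
    In case it helps to see the shape of it, here is a minimal stale-while-revalidate sketch in plain Java (hypothetical class and method names, not an OSCache or EHCache API). The JSP would build a key from the relevant parameters and call get(key) instead of running the slow logic inline:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.function.Function;

        public class StaleWhileRevalidateCache {
            private static final class Entry {
                final String html;
                final long expiresAtMillis;
                Entry(String html, long expiresAtMillis) {
                    this.html = html;
                    this.expiresAtMillis = expiresAtMillis;
                }
            }

            private final ConcurrentMap<String, Entry> cache = new ConcurrentHashMap<>();
            private final ConcurrentMap<String, Boolean> refreshing = new ConcurrentHashMap<>();
            private final ExecutorService pool = Executors.newFixedThreadPool(2);
            private final long ttlMillis;
            private final Function<String, String> renderer; // the slow 8-minute logic

            public StaleWhileRevalidateCache(long ttlMillis, Function<String, String> renderer) {
                this.ttlMillis = ttlMillis;
                this.renderer = renderer;
            }

            public String get(String key) {
                Entry entry = cache.get(key);
                if (entry == null) {
                    return renderAndStore(key); // first request ever: caller waits once
                }
                if (System.currentTimeMillis() > entry.expiresAtMillis
                        && refreshing.putIfAbsent(key, Boolean.TRUE) == null) {
                    pool.submit(() -> {           // expired: refresh in the background,
                        try {                     // but serve the stale copy right away
                            renderAndStore(key);
                        } finally {
                            refreshing.remove(key);
                        }
                    });
                }
                return entry.html;
            }

            private String renderAndStore(String key) {
                String html = renderer.apply(key);
                cache.put(key, new Entry(html, System.currentTimeMillis() + ttlMillis));
                return html;
            }
        }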

  • Selecting data in clustered index order without ORDER BY

    - by kcrumley
    I know there is no guarantee without an ORDER BY clause, but are there any techniques to tune SQL Server tables so they're more likely to return rows in clustered index order, without having to specify ORDER BY every single time I want to run a super-quick ad hoc query? For example, would rebuilding my clustered index or updating statistics help? I'm aware that I can't count on a query like:

        select * from AuditLog where UserId = 992

    to return records in the order of the clustered index, so I would never build code into an application based on this assumption. But for simple ad hoc queries, on almost all of my tables, the data consistently comes out in clustered index order, and I've gotten used to being able to expect the most recent results to be at the bottom. Out of all the many tables we use, I've only noticed two ever giving me results in an unpredicted order. This is really just an annoyance, but it would be nice to be able to minimize it.

    In case this is relevant because of page boundary issues or something like that, I should mention that one of the tables that has inconsistent ordering, the AuditLog table, is the longest table we have that has a clustered index on an identity column. Also, this database has recently been moved from SQL 2005 to SQL 2008, and we've seen no noticeable change in this behavior.

  • SQLite Databases and Grid Hosting

    - by jocull
    I'm considering moving my site from a GoDaddy shared hosting account to a Media Temple grid hosting account in anticipation of traffic. However, I first have some concerns with the grid hosting setup.

    My site stores a large personal set of data on a per-user basis (possibly 3-4MB per user). At this rate I was worried about blowing past a 1GB MySQL limit in no time, so to deal with this I created distributed SQLite databases per user to store the large data objects. It has worked wonderfully so far; SQLite is super fast and simple.

    I know that reading from and writing to files is different in a grid hosting environment, so I need to know if this setup is going to cause serious problems. These databases are not (and will not be) highly trafficked. They are personal to the user and will only be touched from maybe two locations at the same time (one updating the data hourly at the most, and one or more reading on demand). I'd like to keep this setup, as getting additional space (beyond 4GB) on a MySQL database seems to be a real trouble point. Will grid hosting cause me serious problems? Thanks.

  • How to generate lots of redundant ajax elements like checkboxes and pulldowns in Django?

    - by iJames
    Hello folks. I've been getting lots of answers from stackoverflow now that I'm in Django, just by searching. Now I hope my question will also create some value for everybody.

    In choosing Django, I was hoping there was some mechanism similar to the way you can do partials in ROR. This was going to help me in two ways: one was in generating repeating indexed forms or form elements, and the other in rendering only a piece of the page on the round trip. I've done a little bit of that by using taconite with a simple URL click, but now I'm trying to get more advanced. This question focuses on the form issue, which boils down to how to iterate over a secondary object.

    If I have a list of photo instances, each of which has a couple of parameters, let's say a size and a quantity, I want to generate form elements for each photo instance separately. But then I have two lists I want to iterate over at the same time. Context:

        photos: Photo.objects.all()

    and

        forms = {}
        for photo in photos:
            forms[photo.id] = PhotoForm()

    In other words, I've got a list of photo objects and a dict of forms keyed by photo.id. Here's an abstraction of the template:

        {% for photo in photos %}
            {% include "photoview.html" %}
            {% comment %}
                So here I want to use the photo.id as an index to get the correct form,
                so that each photo has its own form. I would want to have a different
                action, and each form field would be unique. Is that possible?
                How can I iterate on that? Thanks!
            {% endcomment %}
            Quantity: {{ oi.quantity }} {{ form.quantity }}
            Dimensions: {{ oi.size }} {{ form.size }}
        {% endfor %}

    What can I do about this simple case? And how can I make it so that every control automatically updates the server instead of using a form at all? Thanks! James
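
    On the form question, a minimal sketch of one common workaround, reusing the Photo and PhotoForm names from above (the view name is hypothetical): pair each photo with its own form in the view, giving each form a prefix so its field names stay unique, and the template then iterates over a single list:

        from django.shortcuts import render

        # views.py (sketch)
        def photo_list(request):
            photos = Photo.objects.all()
            # prefix=photo.id makes each form render unique field names like "3-quantity"
            photo_forms = [(photo, PhotoForm(prefix=str(photo.id))) for photo in photos]
            return render(request, "photoview.html", {"photo_forms": photo_forms})

    and in the template the pair unpacks directly:

        {% for photo, form in photo_forms %}
            Quantity: {{ photo.quantity }} {{ form.quantity }}
            Dimensions: {{ photo.size }} {{ form.size }}
        {% endfor %}

    For the no-form variant, each rendered field can be posted individually to a small update view via Ajax; the prefix baked into the field name tells the server which photo the value belongs to.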

  • Synchronizing in SQL Replication works when manually syncing, but not automatically

    - by Dominic Zukiewicz
    I'm using SQL Server 2005 to create a replicated copy of the main databases, so that the reports can point to the replicated copy instead of locking out our main databases. I have set up the 3 databases as publications and then 3 subscribers, moving the transactions over to the subscribers (instantaneously, I hope!).

    What seems to be happening is that, when using the "Insert Tracer" function, replication from publisher to distributor takes less than 2 seconds, but replicating to the subscribers can take over 7 minutes (and these are local databases on a SAN). This could be for 2 reasons:

    - The SQL statements used to query the database are obtaining locks which are stopping the transactions updating the subscribers.
    - The subscribers are just too busy for the replication to apply the changes.

    What troubles me more is that although the Replication Monitor / Insert Tracer show these statistics, if you use "View Subscription Details" and then click Start, it will sync within seconds. My goal would be to have the data syncing (ideally) continuously, or every minute; perhaps I should reduce the batch size of the transactions? What am I doing wrong? (Note that the -Continuous flag is set!)

  • What for Visual Studio?! ml, cl, and link executables would suffice

    - by AntonIO
    It says in /library article /9s7c9wdw: "You can start this tool [cl.exe] only from the Visual Studio command prompt. You cannot start it from a system command prompt or from Windows Explorer." The corresponding (v=VS.80) page, geared towards Visual Studio 2005, makes no such mention. Moreover, there is this Q&A.

    Thing is: why should anybody spend anything on VS? ml is provided free of charge, necessarily so, since it poses no value addition. The combined size of the other two is 895kb, uncompressed. The GUI is a disservice; I myself have found half a dozen bugs. However, if the above is true, you'd need the IDE. MSFT fanboys, please step up.

    Background is that I have the 2008 Pro edition. The official Firefox builds use VS 2005, which I have on another system. To me no redundancy is acceptable. That's when I started pondering boiling down VS and merely copying over the essential binaries, and then extended the thought to synthetically updating V$.

  • Exception calling remote SOAP call from thread

    - by Duncan
    This is an extension / next step of this question I asked a few minutes ago. I have a Delphi application with a main form and a thread. Every X seconds the thread makes a web services request for a remote object. It then posts back to the main form, which handles updating the UI with the new information.

    I was previously using a TTimer object in my thread, and when the TTimer callback function ran, it ran in the context of the main thread (but the remote web services request did work). This rather defeated the purpose of the separate thread, and so I now have a simple loop-and-sleep routine in my thread's Execute function. The problem is, an exception is thrown when returning from GetIMySOAPService().

        procedure TPollingThread.Execute;
        var
          SystemStatus : TCWRSystemStatus;
        begin
          while not Terminated do
          begin
            sleep(5000);
            try
              SystemStatus := GetIMySOAPService().GetSystemStatus;
              PostMessage( ParentHandle, Integer(apiSystemStatus), Integer(SystemStatus), 0 );
              SystemStatus.DataContext := nil;
              LParam(SystemStatus) := 0;
            except
            end;
          end;
        end;

    Can anyone advise as to why this exception is being thrown when calling this function from the thread? I'm sure I'm overlooking something fundamental and simple. Thanks, Duncan
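
    One guess worth ruling out, offered only as a sketch: Delphi's SOAP transport sits on COM-based code, and COM must be initialized separately in every thread that uses it. The main thread gets that initialization for free, which would explain why the TTimer version (running in the main thread's context) worked while the real worker thread fails:

        // Add ActiveX to the unit's uses clause for CoInitialize/CoUninitialize.
        procedure TPollingThread.Execute;
        begin
          CoInitialize(nil);  // per-thread COM initialization
          try
            while not Terminated do
            begin
              Sleep(5000);
              // ... the existing try/except polling block from above ...
            end;
          finally
            CoUninitialize;
          end;
        end;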

  • Why do Scala maps have poor performance relative to Java?

    - by Mike Hanafey
    I am working on a Scala app that consumes large amounts of CPU time, so performance matters. The prototype of the system was written in Python, and performance was unacceptable. The application does a lot with inserting and manipulating data in maps.

    Rex Kerr's Thyme was used to look at the performance of updating and retrieving data from maps. Basically, "n" random Ints were stored in maps, and retrieved from the maps, with the time relative to java.util.HashMap used as a reference. The full results for a range of "n" are here. Sample (n=100,000) performance relative to Java, smaller is worse:

                   Update   Read
        Mutable    16.06%   76.51%
        Immutable  31.30%   20.68%

    I do not understand why the Scala immutable map beats the Scala mutable map in update performance. Using the sizeHint on the mutable map does not help (it appears to be ignored in the tested implementation, 2.10.3). Even more surprisingly, the immutable read performance is worse than the mutable read performance, more significantly so with larger maps. The update performance of the Scala mutable map is surprisingly bad, relative to both Scala immutable and plain Java. What is the explanation?
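
    For readers who want to reproduce the shape of the measurement without Thyme, a crude sketch follows (none of Thyme's statistical care and no JIT warm-up, so use a real harness for trustworthy numbers). It times one update pass over the same random keys for each map variant:

        import scala.util.Random

        object MapBench {
          def time[A](body: => A): Long = {
            val t0 = System.nanoTime()
            body
            System.nanoTime() - t0
          }

          def main(args: Array[String]): Unit = {
            val n = 100000
            val keys = Array.fill(n)(Random.nextInt())

            val tJava = time {
              val m = new java.util.HashMap[Int, Int]()
              keys.foreach(k => m.put(k, k))
            }
            val tMutable = time {
              val m = scala.collection.mutable.HashMap.empty[Int, Int]
              keys.foreach(k => m(k) = k)
            }
            val tImmutable = time {
              var m = Map.empty[Int, Int]
              keys.foreach(k => m = m.updated(k, k))
            }
            println(s"java=$tJava ns, mutable=$tMutable ns, immutable=$tImmutable ns")
          }
        }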

  • Facebook IFrame Application issues for certain users

    - by Kon
    We have a strange issue with running a Facebook IFrame application (using MVC 2). When I run my app and log into Facebook, I get to the application just fine. But when my coworker does it, she gets the following error:

        API Error Code: 100
        API Error Description: Invalid parameter
        Error Message: Requires valid next URL.

    Typically this error is resolved by updating the "New Data Permissions" setting of the Facebook application; however, in this case it doesn't help. We've also tried logging in with our accounts from different computers, and it seems that neither the computer nor which one the MVC ASP.NET app is running from matters. The only difference is who is logged into Facebook. We've looked at our Facebook account settings but couldn't find any obvious differences. We both have Developer access to the FB application and we both can edit its settings. However, only one of us can actually run the application without getting the above-mentioned error message. Any idea what could be happening here?

  • Programmatically rebuild .exd files when loading VBA

    - by aspartame
    Hi,

    After updating Microsoft Office 2007 to Office 2010, some custom VBA scripts embedded in our software failed to compile with the following error message:

        Object library invalid or contains references to object definitions that could not be found.

    As far as I know, this error is the result of a security update from Microsoft (Microsoft Security Advisory 960715). When ActiveX controls are added to VBA scripts, information about the controls is stored in cache files on the local hard drive (.exd files). The security update modified some of these controls, but the .exd files were not automatically updated. When the VBA scripts try to load the old versions of the controls stored in the cached files, the error occurs. These cache files must be removed from the hard drive in order for the controls to load successfully (which will automatically create new, updated .exd files).

    What I would like to do is programmatically (using Visual C++) remove the outdated .exd files when our software loads. When opening a VBA project using CApcProject::ApcProject.Open I set the axProjectThrowAwayCompiledState flag:

        TestHR(ApcProject.Open(pHost, (MSAPC::AxProjectFlag) (MSAPC::axProjectNormal | MSAPC::axProjectThrowAwayCompiledState)));

    According to the documentation, this flag should cause the VBA project to be recompiled and the temporary files to be deleted and rebuilt. I've also tried updating the checksum of the host application type library, which should have the same effect. However, none of these fixes seem to do the job, and I'm running out of ideas. Help is very much appreciated!
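
    If the APC flags keep failing, the blunt fallback is to delete the caches directly before loading the project. A minimal Win32 sketch; the folder names are assumptions based on the usual per-user temp locations for Office 2007-era installs, so verify them for the controls involved:

        #include <windows.h>
        #include <string>

        // Delete cached ActiveX type libraries (*.exd) so VBA rebuilds them on load.
        static void DeleteExdFiles(const std::wstring& dir)
        {
            WIN32_FIND_DATAW fd;
            HANDLE h = FindFirstFileW((dir + L"\\*.exd").c_str(), &fd);
            if (h == INVALID_HANDLE_VALUE)
                return; // folder missing or no .exd files: nothing to do
            do {
                DeleteFileW((dir + L"\\" + fd.cFileName).c_str());
            } while (FindNextFileW(h, &fd));
            FindClose(h);
        }

        void PurgeExdCaches()
        {
            wchar_t temp[MAX_PATH];
            if (GetTempPathW(MAX_PATH, temp) == 0)
                return;
            const std::wstring base(temp); // GetTempPathW ends with a backslash
            DeleteExdFiles(base + L"Excel8.0");  // assumed cache folders
            DeleteExdFiles(base + L"VBE");
            DeleteExdFiles(base + L"Word8.0");
        }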

  • Class design when working with dataset

    - by MC
    If you have to retrieve data from a database, bring this dataset to the client, and then allow the user to manipulate the data in various ways before updating the database again, what is a good class design for this if the data tables will not have a 1:1 relationship with the class objects? Here are some options I came up with:

    1. Just manipulate the DataSet itself on the client and then send it back to the database as is. This will work, though obviously the code will be very dirty and not well-structured.
    2. Same as #1, but wrap the dataset code in classes. What I mean is that you may have a class that takes a dataset or a datatable in its constructor, and then provides public methods and properties to simplify the code. Inside these methods and properties it reads or manipulates the dataset. Updating the database afterwards will be easy because you already have the updated dataset.
    3. Get rid of the dataset entirely on the client, convert to objects, then convert back to a dataset when you need to update the database.

    Are there any good resources where I can find information on this?
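
    A minimal sketch of option 2, with hypothetical Order names: the wrapper owns the DataTable, exposes typed operations, and leaves ADO.NET's row-state tracking intact, so the same DataSet can go straight back to a DataAdapter.Update call:

        using System.Data;

        public class OrderSet
        {
            private readonly DataTable _table;

            public OrderSet(DataSet ds)
            {
                _table = ds.Tables["Orders"]; // assumed table name
            }

            public decimal TotalFor(int orderId)
            {
                DataRow[] rows = _table.Select("OrderId = " + orderId);
                return rows.Length == 1 ? (decimal)rows[0]["Total"] : 0m;
            }

            public void ApplyDiscount(int orderId, decimal factor)
            {
                foreach (DataRow row in _table.Select("OrderId = " + orderId))
                    row["Total"] = (decimal)row["Total"] * factor;
                // The rows are now Modified, so DataAdapter.Update on the
                // owning DataSet picks these changes up with no extra work.
            }
        }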

  • Which ORM to use?

    - by Paja
    I'm developing an application which will have these classes:

        class Shortcut
        {
            public string Name { get; }
            public IList<Trigger> Triggers { get; }
            public IList<Action> Actions { get; }
        }

        class Trigger
        {
            public string Name { get; }
        }

        class Action
        {
            public string Name { get; }
        }

    And I will have 20+ more classes which will derive from Trigger or Action, so in the end I will have one Shortcut class, 15 Action-derived classes and 5 Trigger-derived classes.

    My question is, which ORM will best suit this application? EF, NH, SubSonic, or maybe something else (Linq2SQL)? I will be periodically releasing new application versions, adding more triggers and actions (or changing current triggers/actions), so I will have to update the database schema as well. I don't know if EF or NH provides any good methods to easily update the schema; or if they do, is there any tutorial on how to do that? I've already found this article about NH schema updating, quoting:

        Fortunately NHibernate provides us the possibility to update an existing schema, that is NHibernate creates an update script which can then be applied to the database.

    I've never found how to actually generate the update script, so I can't tell NH to update the schema. Maybe I've misread something; I just didn't find it.

    Last note: if you suggest EF, will EF 1.0 be suitable as well? I would rather use some older .NET than 4.0.
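
    On the NHibernate point specifically, the update script the quoted article alludes to comes from the SchemaUpdate tool in NHibernate.Tool.hbm2ddl. A minimal sketch, assuming a Configuration already set up from hibernate.cfg.xml and the mappings:

        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;

        var cfg = new Configuration().Configure(); // reads hibernate.cfg.xml

        // Print the generated DDL to stdout without touching the database:
        new SchemaUpdate(cfg).Execute(true, false);

        // Or apply the schema changes directly:
        new SchemaUpdate(cfg).Execute(false, true);

    Note that SchemaUpdate only adds missing tables and columns; it will not drop or alter existing ones, so destructive changes to triggers/actions still need hand-written migration scripts.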

  • I wrote my own web framework: now, how do I keep it in sync with applications? Must I use versions?

    - by Daniel Koch
    ... and I built the first web application using it; now I'm going to create the second. In the first web application I enhanced the framework's core library with new things and promptly updated the framework branch. I'm using bazaar to keep the framework and the web application committed. The application was, in the beginning, a full branch of the framework source tree; now I'm updating the framework manually at every change to the core files (copying changed files from the web app to the framework's branch).

    With this second web application that I'm going to create, I need to know which version (or revision) of the framework the application is based on. If I find a bug in that version I can fix it and then sync the files with the first web application without worrying: the functions will be the same for that application. If I'm going to make changes to the core (new behavior, new functions in the library, or something new in the source tree), it must be named as a "new version".

    What's the best way to do this? Because I'm using a distributed version control system (bazaar), I'm not dealing with VERSIONS, but with revision numbers that change every time. Please refresh my mind with new ideas.
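
    Since everything is already in bazaar, one lightweight option is tags, which put stable names on top of the shifting revision numbers. A small sketch (the tag names are made up):

        # In the framework branch: name the revision an application was built against.
        bzr tag framework-1.1

        # Later: list the named versions, or see what changed between two of them.
        bzr tags
        bzr diff -r tag:framework-1.0..tag:framework-1.1

    Each application's docs (or a file in its tree) can then record the framework tag it targets, which answers "which version is this app based on" without freezing the framework branch itself.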

  • How to (unit-)test a data-intensive PL/SQL application

    - by doom2.wad
    Our team is willing to unit-test new code written under a running project extending an existing, huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is no exception: most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little logic added before returning it), or transform one complicated data structure into another (complicated in a different way).

    What is the best approach to unit-test such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" stuff and necessary data transformations) are deployed via a channel other than version control, and there is no way to change this within our project's scope. Maintaining a testing data set seems to be impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there).

    I'd be glad for any suggestion or reference to help us. Some team members tend to be tired of figuring out how to even start, for our experience with unit-testing does not cover PL/SQL data-intensive legacy systems (only those "from-the-book" greenfield Java projects).
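
    One pattern that sidesteps the unmaintainable-fixture problem, sketched here with hypothetical table and function names: each test inserts exactly the rows it needs, asserts, and rolls back, so nothing depends on a curated data set that weekly production changes could rot:

        -- Self-contained test sketch: arrange, act, assert, roll back.
        CREATE OR REPLACE PROCEDURE test_get_order_total IS
          v_total NUMBER;
        BEGIN
          -- Arrange: negative keys keep test rows clear of real data.
          INSERT INTO orders (order_id, customer_id) VALUES (-1, -1);
          INSERT INTO order_lines (order_id, amount) VALUES (-1, 40);
          INSERT INTO order_lines (order_id, amount) VALUES (-1, 2);

          -- Act: call the function under test.
          v_total := get_order_total(p_order_id => -1);

          -- Assert.
          IF v_total = 42 THEN
            DBMS_OUTPUT.PUT_LINE('PASS test_get_order_total');
          ELSE
            DBMS_OUTPUT.PUT_LINE('FAIL test_get_order_total: got ' || v_total);
          END IF;

          -- Leave no trace.
          ROLLBACK;
        END;
        /

    A dedicated framework (utPLSQL, for instance) adds runners and assertions on top, but the transaction-scoped arrange/act/rollback core is the part that survives schema churn.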

  • Application Design: Single vs. Multiple Hits to the DB

    - by shyneman
    I'm building a service that performs a set of configured activities based on the type of request it receives. Each activity involves going to the database and retrieving or updating some kind of information. The logic for each activity can be generalized and reused across different request types, and the activities may need to participate in a transaction for the duration of servicing the request.

    One option I'm considering is having each activity maintain its own access to the DAL/database. This fully encapsulates the activity into a stand-alone reusable piece, but hitting the database multiple times for one request doesn't seem like a viable option, and I don't really know how to easily implement the concept of a transaction across the multiple activities here either. The second option is to encapsulate ALL the activities into one big activity and hit the database once, but this does not allow reuse and configuration of these activities for different requests.

    Does anyone have any suggestions or input about the best way to approach my problem? Thanks for any help.
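
    On the transaction half of the question, a minimal .NET sketch with hypothetical activity types: System.Transactions lets each activity keep its own data access while one ambient TransactionScope spans them all, so the activities never need to know about each other:

        using System.Transactions;

        interface IActivity { void Execute(); }   // hypothetical activity contract

        static class RequestProcessor
        {
            // Run the activities configured for one request inside one transaction.
            public static void Run(IActivity[] activities)
            {
                using (var scope = new TransactionScope())
                {
                    foreach (var activity in activities)
                        activity.Execute(); // each may open its own connection;
                                            // all enlist in the ambient transaction
                    scope.Complete();       // commit only if every activity succeeded
                }
            }
        }

    The multiple round-trips remain, but they commit or roll back as a unit; whether the extra hits are actually a problem then becomes a measurable question rather than a design blocker.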

  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index: the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data using fields other than the timestamp. Doing the query on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week.

    My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are: is there another way to do the queries? And if not, how do I update the data in the indexed table efficiently? So far I have found two approaches for updating aux_table:

    1. Truncate aux_table and insert the desired data from the original table. Not very efficient, because all the indexes must be re-created.
    2. Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table, occasionally dropping older entries. Copying only entries with a strictly greater timestamp leads to dropped entries (because of entries with the same timestamp that were inserted into the original table after the last update).
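
    A sketch of the second approach with the equal-timestamp hole patched (hypothetical column names): re-copy the boundary timestamp instead of trying to copy strictly past it, deleting that slice from aux_table first so nothing is duplicated:

        -- Incremental refresh of aux_table (MySQL sketch).
        -- First run: seed aux_table with a full one-week copy instead.
        SET @boundary := (SELECT MAX(ts) FROM aux_table);

        -- Drop the possibly-incomplete newest slice, then re-copy from it onward;
        -- late rows sharing the boundary timestamp are picked up this time.
        DELETE FROM aux_table WHERE ts >= @boundary;
        INSERT INTO aux_table
        SELECT * FROM original_table WHERE ts >= @boundary;

        -- Trim the window to one week.
        DELETE FROM aux_table WHERE ts < NOW() - INTERVAL 7 DAY;

    Deleting and re-inserting one timestamp's worth of rows is cheap next to a truncate, and the weekly trim keeps the indexes small.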

  • SQL Server race condition issue with range lock

    - by Freek
    I'm implementing a queue in SQL Server (please, no discussions about this) and am running into a race condition issue. The T-SQL of interest is the following:

        set transaction isolation level serializable
        begin tran

        declare @RecordId int
        declare @CurrentTS datetime2
        set @CurrentTS = CURRENT_TIMESTAMP

        select top 1 @RecordId = Id
        from QueuedImportJobs with (updlock)
        where Status = @Status
          and (LeaseTimeout is null or @CurrentTS > LeaseTimeout)
        order by Id asc

        if @@ROWCOUNT > 0
        begin
            update QueuedImportJobs
            set LeaseTimeout = DATEADD(mi, 5, @CurrentTS), LeaseTicket = newid()
            where Id = @RecordId

            select * from QueuedImportJobs where Id = @RecordId
        end

        commit tran

    RecordId is the PK, and there is also an index on Status, LeaseTimeout. What I'm basically doing is selecting a record whose lease happens to be expired, while simultaneously updating the lease time with 5 minutes and setting a new lease ticket.

    The problem is that I'm getting deadlocks when I run this code in parallel using a couple of threads. I've debugged it up to the point where I found out that the update statement sometimes gets executed twice for the same record. Now, I was under the impression that the with (updlock) should prevent this (it also happens with xlock, by the way, though not with tablockx). So it actually looks like there is a RangeS-U and a RangeX-X lock on the same range of records, which ought to be impossible.

    So what am I missing? I'm thinking it might have something to do with the top 1 clause, or that SQL Server does not know that where Id = @RecordId is actually in the locked range? (Deadlock graph and simplified table schema omitted.)
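
    For comparison, the dequeue pattern most often recommended for SQL Server queues avoids the serializable range locks entirely; a sketch against the same table (not a drop-in fix, and @Status stays a parameter): READPAST lets concurrent workers skip rows another transaction has locked, and doing the claim as a single UPDATE removes the select-then-update window:

        set transaction isolation level read committed;

        with next_job as (
            select top (1) *
            from QueuedImportJobs with (rowlock, updlock, readpast)
            where Status = @Status
              and (LeaseTimeout is null or CURRENT_TIMESTAMP > LeaseTimeout)
            order by Id asc
        )
        update next_job
        set LeaseTimeout = DATEADD(mi, 5, CURRENT_TIMESTAMP),
            LeaseTicket  = newid()
        output inserted.*;

    Because the claim is one atomic statement, two workers can no longer update the same record, and contended rows are skipped instead of deadlocked over.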

  • Problems with ObjectDataSource, giving attributes like delete, insert and update

    - by kamal
    After going through the process of adding the various attributes like insert, delete and update, when I run it through the browser, editing works but updating and deleting don't. My friends think I need to write code to repair the problem; can you help me please? It shows this:

        Server Error in '/WebSite3' Application.

        ObjectDataSource 'ObjectDataSource1' could not find a non-generic method 'Update' that has parameters: First_name, Surname, Original_author_id, First name, original_author id.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.InvalidOperationException: ObjectDataSource 'ObjectDataSource1' could not find a non-generic method 'Update' that has parameters: First_name, Surname, Original_author_id, First name, original_author id.

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [InvalidOperationException: ObjectDataSource 'ObjectDataSource1' could not find a non-generic method 'Update' that has parameters: First_name, Surname, Original_author_id, First name, original_author id.]
        System.Web.UI.WebControls.ObjectDataSourceView.GetResolvedMethodData(Type type, String methodName, IDictionary allParameters, DataSourceOperation operation) +1119426
        System.Web.UI.WebControls.ObjectDataSourceView.ExecuteUpdate(IDictionary keys, IDictionary values, IDictionary oldValues) +1008
        System.Web.UI.DataSourceView.Update(IDictionary keys, IDictionary values, IDictionary oldValues, DataSourceViewOperationCallback callback) +92
        System.Web.UI.WebControls.GridView.HandleUpdate(GridViewRow row, Int32 rowIndex, Boolean causesValidation) +907
        System.Web.UI.WebControls.GridView.HandleEvent(EventArgs e, Boolean causesValidation, String validationGroup) +704
        System.Web.UI.WebControls.GridView.OnBubbleEvent(Object source, EventArgs e) +95
        System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +37
        System.Web.UI.WebControls.GridViewRow.OnBubbleEvent(Object source, EventArgs e) +123
        System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +37
        System.Web.UI.WebControls.LinkButton.OnCommand(CommandEventArgs e) +118
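
    Reading the error, the control is sending five parameters, two of which ("First name", "original_author id") duplicate others but contain spaces; that usually points at doubled bound fields or DataKeyNames in the GridView markup, which would need to be removed. The non-generic Update method then has to match the remaining names exactly. A sketch, assuming those three columns:

        // The business object ObjectDataSource1 points at must expose an Update
        // whose parameter names match the bound field names (spaces are never valid).
        public void Update(string first_name, string surname, int original_author_id)
        {
            // ... issue the UPDATE against the data layer here ...
        }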

  • Parallel tasking concurrency with dependencies in Python, like GNU Make

    - by Brian Bruggeman
    I'm looking for a method, or possibly a philosophical approach, for how to do something like GNU Make within Python. Currently, we utilize makefiles to execute processing, because makefiles are extremely good at parallel runs controlled by changing a single option: -j x. In addition, GNU Make already has the dependency stacks built into it, so adding a secondary processor, or the ability to process more threads, just means updating that single option. I want that same power and flexibility in Python, but I don't see it.

    As an example:

        all: dependency_a dependency_b dependency_c

        dependency_a: dependency_d
            stuff
        dependency_b: dependency_d
            stuff
        dependency_c: dependency_e
            stuff
        dependency_d: dependency_f
            stuff
        dependency_e:
            stuff
        dependency_f:
            stuff

    If we do a standard single-threaded run (-j 1), the order of operation might be:

        dependency_f -> dependency_d -> dependency_a -> dependency_b -> dependency_e -> dependency_c

    For two threads (-j 2), we might see:

        1: dependency_f -> dependency_d -> dependency_a -> dependency_b
        2: dependency_e -> dependency_c

    Does anyone have any suggestions on either a package already built or an approach? I'm totally open, provided it's a pythonic solution/approach. Please and thanks in advance!
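
    Short of a ready-made package, here is a minimal sketch of the core mechanism in plain Python: a dependency dict plus a worker pool, where a task is submitted the moment everything it depends on has finished and -j becomes max_workers. It assumes an acyclic graph and uses illustrative task names:

        from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

        def run_graph(deps, actions, max_workers=2):
            """deps: {task: set of prerequisites}; actions: {task: callable}."""
            prereqs = {t: set(d) for t, d in deps.items()}
            done, futures = set(), {}
            with ThreadPoolExecutor(max_workers=max_workers) as pool:
                while len(done) < len(prereqs):
                    # Submit every task whose prerequisites are all satisfied.
                    for task, needs in prereqs.items():
                        if task not in done and task not in futures and needs <= done:
                            futures[task] = pool.submit(actions[task])
                    finished, _ = wait(futures.values(), return_when=FIRST_COMPLETED)
                    for task, fut in list(futures.items()):
                        if fut in finished:
                            fut.result()  # re-raise any failure, like make stopping
                            done.add(task)
                            del futures[task]

        # Mirrors the makefile above; "stuff" is just a print here.
        deps = {
            "f": set(), "e": set(), "d": {"f"},
            "a": {"d"}, "b": {"d"}, "c": {"e"},
        }
        actions = {t: (lambda t=t: print("building", t)) for t in deps}
        run_graph(deps, actions, max_workers=2)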

  • How to report progress of a JavaScript function?

    - by LambyPie
    I have a JavaScript function which is quite long and performs a number of tasks. I would like to report progress to the user by updating the contents of a SPAN element with a message as I go. I tried adding

        document.getElementById('spnProgress').innerText = ...

    statements throughout the function code. However, while the function is executing the UI will not update, so you only ever see the last message written to the SPAN, which is not very helpful.

    My current solution is to break the task up into a number of functions; at the end of each I set the SPAN message and then "trigger" the next one with a window.setTimeout call with a very short delay (say 10ms). This yields control and allows the browser to repaint the SPAN with the updated message before starting the next step. However, I find this very messy, and it makes the code difficult to follow; I'm thinking there must be a better way. Does anyone have any suggestions? Is there any way to force the SPAN to repaint without having to leave the context of the function? Thanks
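
    Short of leaving the function, there is no reliable cross-browser repaint trigger, but the chaining can at least be made generic. A sketch (the steps and their work functions are hypothetical): queue message/work pairs and let one small runner do the setTimeout plumbing:

        // Run a list of steps, letting the browser repaint between them.
        function runSteps(steps, index) {
            index = index || 0;
            if (index >= steps.length) return;

            var step = steps[index];
            document.getElementById('spnProgress').innerText = step.message;

            // A short timeout returns control to the browser so the new
            // message is painted before the next chunk of work starts.
            setTimeout(function () {
                step.work();
                runSteps(steps, index + 1);
            }, 10);
        }

        // Hypothetical usage:
        runSteps([
            { message: 'Loading data...',   work: loadData },
            { message: 'Crunching rows...', work: crunchRows },
            { message: 'Done.',             work: function () {} }
        ]);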

  • Using Subsonic 3.0 Advanced Templates

    - by umit
    Hi all,

    I've been trying to use SubSonic Advanced Templates in a project for a while, but most of the time I find myself writing a stored procedure instead, as I can't find a proper way of doing it in code. SubSonic created corresponding objects for my DB tables, and for foreign keys it created IQueryable fields inside each object. These fields are not loaded by default, and a new SQL query is executed when you access them.

    1. Is there a way to get all the data in one query (deep load)? Also, these fields cannot be assigned, so when I want to create an object in a maintenance page, I can't put all the data into the object before saving it in the DB:

        Post post = new Post();
        //get photos for this post
        IList<PostPhoto> postPhotos = GetPostPhotos();
        post.PostPhotos = postPhotos;

    2. Is it possible to have one Post object with all fields set from user input? Think of the Post object above and assume I've successfully assigned its fields. Now I need to save it to the DB.

    3. Is using BatchQuery the only way to do it in one query? If I have 4 photos in the PostPhotos field, 2 of them previously saved and 2 of them new, can I use the Update method to handle both adding and updating these photos?

    Any ideas or links are appreciated. Cheers...
