Search Results

Search found 49170 results on 1967 pages for 'running objects'.


  • Making a concurrent AJAX WCF Web Service request during an Async Postback

    - by nekno
    I want to provide status updates during a long-running task on an ASP.NET WebForms page with AJAX. Is there a way to get the ScriptManager to execute and process a script for a web service request concurrently with an async postback?

    I have a script on the page that makes a web service request. It runs on page load and periodically via setInterval(). It runs correctly before the async postback is initiated, but it stops running during the async postback and doesn't run again until the async postback completes.

    I have an UpdatePanel with a button that triggers the async postback, which executes the long-running task. I also have an instance of an AJAX WCF web service that correctly fetches data and presents it on the page, but, as I said, it doesn't fetch and present the data until after the async postback completes. During the async postback, the long-running task sends updates from the page to the web service. The problem is that I can debug and step through the web service and see that the status updates are set correctly, but the updates aren't retrieved by the client script until the async postback completes. It seems the ScriptManager is busy executing the async postback, so it doesn't run my other JavaScript via setInterval() until the postback completes.

    Is there a way to get the ScriptManager, or anything else, to run the script that fetches data from the WCF web service during the async postback? I've tried various ways of using the PageRequestManager to run the script in the client-side BeginRequest event for the async postback, but it runs the script once, then stops processing the code that should be running via setInterval() while the page request executes.
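    Not part of the original question, but one direction worth sketching: the MS AJAX client stack serializes partial postbacks through PageRequestManager, while a raw XMLHttpRequest is issued outside that queue, so polling that bypasses the generated proxy can keep running while the UpdatePanel request is in flight. A minimal sketch, assuming a hypothetical WCF AJAX endpoint StatusService.svc with a GetStatus operation returning JSON; if updates still stall, server-side ASP.NET session locking (which serializes requests in the same session) is the other usual suspect:

    ```javascript
    // Hypothetical endpoint and element names -- adjust to the real page.
    var STATUS_URL = "StatusService.svc/GetStatus";

    function pollStatus() {
        var xhr = new XMLHttpRequest();
        xhr.open("POST", STATUS_URL, true); // async, outside PageRequestManager's queue
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                // WCF AJAX endpoints typically wrap the result in a "d" property.
                var status = JSON.parse(xhr.responseText).d;
                document.getElementById("statusLabel").innerHTML = status;
            }
        };
        xhr.send("{}");
    }

    // Keep polling before, during, and after the async postback.
    window.setInterval(pollStatus, 2000);
    ```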

  • SPPersistedObject and List<T>

    - by Sam
    Hi, I want SharePoint to "persist" a list of objects. I wrote a class SMSAlert which inherits from SPPersistedObject:

    ```csharp
    public class SMSAlert : SPPersistedObject
    {
        [Persisted]
        private DateTime _scheduledTime;

        [Persisted]
        private Guid _listId;

        [Persisted]
        private Guid _siteID;
    }
    ```

    Then I wrote a class which inherits from SPJobDefinition and adds a list of the previous objects:

    ```csharp
    public sealed class MyCustomJob : SPJobDefinition
    {
        [Persisted]
        private List<SMSAlert> _SMSAlerts;
    }
    ```

    The problem is: when I call the Update method of my MyCustomJob (myCustomJob.Update();), it throws an exception.

    Message: An object in the SharePoint administrative framework depends on other objects which do not exist. Ensure that all of the object's dependencies are created and retry this operation.

    Stack:

    ```
    at Microsoft.SharePoint.Administration.SPConfigurationDatabase.StoreObject(SPPersistedObject obj, Boolean storeClassIfNecessary, Boolean ensure)
    at Microsoft.SharePoint.Administration.SPConfigurationDatabase.PutObject(SPPersistedObject obj, Boolean ensure)
    at Microsoft.SharePoint.Administration.SPPersistedObject.Update()
    at Microsoft.SharePoint.Administration.SPJobDefinition.Update()
    at Sigi.Common.AlertBySMS.SmsAlertHandler.ScheduleJob(SPWeb web, SPAlertHandlerParams ahp)
    ```

    Inner exception: An object in the SharePoint administrative framework depends on other objects which do not exist. The INSERT statement conflicted with the FOREIGN KEY constraint "FK_Dependencies1_Objects". The conflict occurred in database "SharePoint_Config", table "dbo.Objects", column 'Id'. The statement has been terminated.

    Can anyone help me with that?
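    A sketch of one commonly suggested fix, not taken from the original post: every SPPersistedObject is stored as its own row in the configuration database, so a [Persisted] list of them only stores references, and the parent's Update() fails when the children were never saved themselves. If the alerts don't need their own identity in the config database, deriving them from SPAutoSerializingObject (which serializes its [Persisted] fields inline with the owning object) sidesteps the dependency problem:

    ```csharp
    // Sketch: SMSAlert is serialized inline with MyCustomJob instead of being
    // stored as a separate configuration-database object.
    public class SMSAlert : SPAutoSerializingObject
    {
        [Persisted]
        private DateTime _scheduledTime;

        [Persisted]
        private Guid _listId;

        [Persisted]
        private Guid _siteID;
    }
    ```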


  • Database structure and source control - best practice

    - by Paddy
    Background: I came from several years working in a company where all the database objects were stored in source control, one file per object. We had a list of all the objects, maintained as new items were added (allowing us to run scripts in order and handle dependencies), and a VB script that built one big script for running against the database. All the tables were 'create if not exists', and all the SPs etc. were drop-and-recreate.

    Up to the present: I am now working in a place where the database is the master and there is no source control for DB objects, but we do use Red Gate's SQL Compare for updating our production database, which is very handy and requires little work.

    Question: How do you handle your DB objects? I like to have them under source control (and, as we're using Git, I'd like to be able to handle merge conflicts in the scripts, rather than in the DB), but I'm going to be hard pressed to get past the ease of using SQL Compare to update the database. I don't really want us updating scripts in Git and then using SQL Compare to update the production database from our dev DB, as I'd rather have 'one version of the truth', but I don't really want to get into writing a custom bit of software to bundle the whole lot of scripts together. I think Visual Studio Database Edition may do something similar, but I'm not sure we'll have the budget for it.

    I'm sure this has been asked to death, but I can't find anything that quite has the answer I'm looking for. Similar to this, but not quite the same: http://stackoverflow.com/questions/340614/what-are-the-best-practices-for-database-scripts-under-code-control
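    For illustration only (a sketch, not the poster's actual scripts), the per-object idempotent pattern described above looks roughly like this in T-SQL -- 'create if not exists' for tables, drop-and-recreate for procedures -- so the concatenated script can be run against any environment repeatedly. Object names here are hypothetical:

    ```sql
    -- Table: create only if missing.
    IF OBJECT_ID(N'dbo.Customer', N'U') IS NULL
    BEGIN
        CREATE TABLE dbo.Customer
        (
            CustomerId INT IDENTITY(1, 1) PRIMARY KEY,
            Name       NVARCHAR(200) NOT NULL
        );
    END
    GO

    -- Stored procedure: drop and recreate so the script is always re-runnable.
    IF OBJECT_ID(N'dbo.GetCustomer', N'P') IS NOT NULL
        DROP PROCEDURE dbo.GetCustomer;
    GO

    CREATE PROCEDURE dbo.GetCustomer
        @CustomerId INT
    AS
    BEGIN
        SELECT CustomerId, Name
        FROM dbo.Customer
        WHERE CustomerId = @CustomerId;
    END
    GO
    ```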


  • Why is the ExecuteFunction method only available through base.ExecuteFunction in a child class of ObjectContext?

    - by Matt
    I'm trying to call ObjectContext.ExecuteFunction from my ObjectContext object in the repository of my site. The repository is generic, so all I have is an ObjectContext reference, rather than one that actually represents my specific context from the Entity Framework. Here's an example of generated code that uses the ExecuteFunction method:

    ```csharp
    [global::System.CodeDom.Compiler.GeneratedCode("System.Data.Entity.Design.EntityClassGenerator", "4.0.0.0")]
    public global::System.Data.Objects.ObjectResult<ArtistSearchVariation> FindSearchVariation(string source)
    {
        global::System.Data.Objects.ObjectParameter sourceParameter;
        if ((source != null))
        {
            sourceParameter = new global::System.Data.Objects.ObjectParameter("Source", source);
        }
        else
        {
            sourceParameter = new global::System.Data.Objects.ObjectParameter("Source", typeof(string));
        }
        return base.ExecuteFunction<ArtistSearchVariation>("FindSearchVariation", sourceParameter);
    }
    ```

    But what I would like to do is something like this:

    ```csharp
    public class Repository<E, C> : IRepository<E, C>, IDisposable
        where E : EntityObject
        where C : ObjectContext
    {
        private readonly C _ctx;

        // ...

        public ObjectResult<E> ExecuteFunction(string functionName, params ObjectParameter[] parameters)
        {
            // Create object parameters...
            return _ctx.ExecuteFunction<E>(functionName, parameters);
        }
    }
    ```

    Anyone know why I have to call ExecuteFunction from base instead of _ctx? Also, is there any way to do something like I've written out? I would really like to keep my repository generic, but with having to execute stored procedures it's looking more and more difficult... Thanks, Matt


  • Best practices for class-mapping with SoapClient

    - by Foofy
    I'm using SoapClient's class-mapping feature and it's pretty sweet. Unfortunately, the SOAP service we're using has a bunch of read-only properties on some of the objects and will throw faults if those properties are passed back as anything but null. I need to filter out the properties before they're used in the SOAP call and am looking for advice on the best way to do it. So far the options are:

    1. Stick to a convention where I use getter and setter functions to manipulate the properties, and use property overloading to filter access, since only SoapClient would read the properties directly. E.g., developers would access properties like this: $obj->getAccountNumber(), while SoapClient would access them like this: $obj->accountNumber. I don't like this because the properties are still exposed and things could go wrong if developers don't stick to the convention.

    2. Have a wrapper for SoapClient that sets a public property the mapped objects can check to see whether the property is being accessed by SoapClient. I already have a wrapper that assigns a reference to itself to all the mapped objects:

    ```php
    class SoapClientWrapper
    {
        public function __soapCall($method, $args)
        {
            $this->setSoapMode(true);
            $this->_soapClient->__soapCall($method, $args);
            $this->setSoapMode(false);
        }
    }

    class Invoice
    {
        function __get($val)
        {
            if ($this->_soapClient->getSoapMode()) {
                return null;
            } else {
                return $this->$val;
            }
        }
    }
    ```

    This works, but it doesn't feel right and seems a bit clunky.

    3. Do the mapping manually, and don't use SoapClient's mapping features. I'd just have a function on all the mapped objects that returns the safe-to-send properties. Also, nobody would have access to properties they shouldn't, since I could enforce getters and setters. A lot more work, though.


  • Update table columns bound to NSArrayController

    - by Loz
    Hi, I'm fairly new to the world of bindings in Cocoa, and I'm having some trouble (perhaps/probably due to a misunderstanding).

    I have a singleton that contains an NSMutableArray called plugins, holding objects of class Plugin. It has a method called loadPlugins, which adds objects to the plugins array and may be called at any point. The singleton has been added as an instance in Interface Builder. Also in IB is an NSObjectController whose content outlet is connected to the singleton, and an NSArrayController whose contentArray is bound to the NSObjectController (controller key 'selection', model key path 'plugins', object class name 'Plugin'). Finally, I have a table view with two columns whose values are bound to the NSArrayController's arrangedObjects, using keys of properties in the Plugin class. So far so standard (as far as I can tell from tutorials, at least).

    My trouble is that when the loadPlugins method is called on the singleton and objects are added to the plugins array, the table doesn't update to show the objects (unless loadPlugins is called before the nib is loaded). Calling -reloadData on the table view doesn't do anything. Is there a way to tell the NSArrayController that the referenced array has been updated? I understand there is the -add: method on NSArrayController, which could be used in loadPlugins, but this isn't desirable, as I want to keep the singleton totally separate from the display aspect.

    This seems related to: http://stackoverflow.com/questions/1623396/refresh-cocoa-binding-nsarraycontroller-combobox. The line "editing the array behind the controller's back" seems to pinpoint the problem, but I would hope it's possible for the singleton not to know about the controller. Thanks in advance.
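    A sketch of the usual remedy, not from the original post: Cocoa bindings observe changes through KVO, so mutations have to go through KVC-compliant accessors; calling addObject: directly on the ivar is exactly the "editing the array behind the controller's back" case. Mutating through mutableArrayValueForKey: fires the notifications the NSArrayController listens for, and the singleton still knows nothing about any controller (the helper method name below is hypothetical):

    ```objc
    // Inside the singleton; @"plugins" is the KVC key for the NSMutableArray ivar.
    - (void)loadPlugins
    {
        NSArray *discovered = [self findPluginsOnDisk]; // hypothetical helper
        // The KVC proxy array emits the KVO change notifications that the
        // NSArrayController's contentArray binding observes.
        [[self mutableArrayValueForKey:@"plugins"] addObjectsFromArray:discovered];
    }
    ```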


  • What does this svn2git error mean?

    - by Hisham
    I am trying to import my repository from svn to git using svn2git, but it seems to fail when it hits a branch. What's the problem?

    ```
    Found possible branch point: https://s.aaa.com/repo/trunk/project => https://s.aaa.com/repo/branches/project-beta1.0, 128
    Use of uninitialized value in substitution (s///) at /opt/local/libexec/git-core/git-svn line 1728.
    Use of uninitialized value in concatenation (.) or string at /opt/local/libexec/git-core/git-svn line 1728.
    refs/remotes/trunk: 'https://s.aaa.com/repo' not found in ''
    Running command: git branch -l --no-color
    * master
    Running command: git branch -r --no-color
      trunk
    Running command: git checkout trunk
    Note: checking out 'trunk'.

    You are in 'detached HEAD' state. You can look around, make experimental
    changes and commit them, and you can discard any commits you make in this
    state without impacting any branches by performing another checkout.

    If you want to create a new branch to retain commits you create, you may
    do so (now or later) by using -b with the checkout command again. Example:

      git checkout -b new_branch_name

    HEAD is now at f4e6268... Changing svn repository in cap files
    Running command: git branch -D master
    Deleted branch master (was f4e6268).
    Running command: git checkout -f -b master
    Switched to a new branch 'master'
    Running command: git gc
    Counting objects: 450, done.
    Delta compression using up to 2 threads.
    Compressing objects: 100% (368/368), done.
    Writing objects: 100% (450/450), done.
    Total 450 (delta 63), reused 450 (delta 63)
    ```
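    A hedged pointer, not from the post: the "refs/remotes/trunk: ... not found in ''" line usually means git-svn wasn't told where trunk and branches live. The nirvdrum svn2git gem accepts explicit layout flags; a sketch of the invocation follows (flag names as documented in the gem's README, so verify against the installed version):

    ```sh
    # Point svn2git at the repository's standard layout explicitly.
    svn2git https://s.aaa.com/repo \
      --trunk trunk \
      --branches branches \
      --tags tags \
      --verbose
    ```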


  • Stopping the service and the babysat application before uninstalling

    - by Viv Coco
    Hi all, I have a service, MyService.exe, that is babysitting my application, MyApp.exe, meaning it restarts the application when it crashes. Basically, when the service is stopped the application is stopped (by the service), and when the service is started the application is started by the service. In order to stop my service, and thereby my application, when uninstalling, I'm doing:

    ```xml
    <ServiceControl Id='MyServiceControl' Name='MyServiceForTest' Start='install' Stop='uninstall' Remove='uninstall'/>
    ```

    But when I want to uninstall everything I get the error message: "The setup must update files or services that cannot be updated while the system is running. If you choose to continue, a reboot will be required to complete the setup." If I manually stop the service before running the uninstaller, I don't get this message, as neither my service nor my application is running anymore. In the log file I noticed that this happens in InstallValidate, and I get this message because MyApp.exe is running.

    I think what happens is: the uninstaller checks the running applications, notices that MyService.exe and MyApp.exe are both running, detects (probably) that MyService.exe will be stopped by the uninstaller itself as instructed, but doesn't know that MyApp.exe will also be terminated once the service is stopped, so it shows the reboot message. I can't just close MyApp.exe from the uninstaller, because the service will restart it again.

    How can I solve this problem so that the user won't need to reboot or manually stop the service before an uninstall/upgrade? Note that I can't change the MyService and MyApp code anymore, so I will have to do this from the (un)installer only. TIA, Viv
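    One direction sometimes suggested for this (a sketch, not a confirmed fix): WixUtilExtension's util:CloseApplication element tells Windows Installer how to shut the application down, which can keep the files-in-use check from demanding a reboot. Whether it fires early enough relative to InstallValidate depends on the setup, so treat this as something to try rather than a guaranteed answer; attribute values here are assumptions:

    ```xml
    <!-- Requires xmlns:util="http://schemas.microsoft.com/wix/UtilExtension" -->
    <util:CloseApplication Id="CloseMyApp"
                           Target="MyApp.exe"
                           CloseMessage="yes"
                           RebootPrompt="no" />
    ```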


  • How is jQuery so fast?

    - by ClarkeyBoy
    Hey, I have a rather large application which, on the admin frontend, takes a few seconds to load a page because of all the pageviews it has to load into objects before displaying anything. It's a bit complex to explain how the system works, but a few of my other questions explain it in great detail. The main difference between what they say and the current system is that the customer frontend no longer loads all the pageviews into objects when a customer first views the page; it simply adds the pageview to the database and creates an object in an unsynchronised list.

    I have been working on some admin tools on the customer frontend recently, so if an administrator clicks the description of an item in the catalogue, the right-hand column displays statistics and available actions for the selected item. To do this, the page which gets loaded (through $('action-container').load(bla bla bla);) into the right-hand column has to loop through ALL the pageviews; this ultimately means that ALL the pageviews are loaded into objects if they haven't been already. For some reason this loads really, REALLY fast. The difference in speed is only about a second on my dev site, but the live site has thousands of pageviews, so the difference is quite big.

    So my question is: why does the admin frontend load so slowly while using $(bla).load(bla); is so fast? I mean, whatever method jQuery uses, can't browsers use this method too and load pages super-fast? Obviously not, as someone would've done that by now, but I am interested to know just why the difference is so big. Is it just my system, or is there a major difference in speed between the browser getting a page and jQuery getting a page? Do other people experience the same kind of differences? Thanks in advance, Regards, Richard


  • Oracle sqlldr: column not allowed here

    - by Wade Williams
    Can anyone spot the error in this attempted data load? The '\\N' is because this is an import of an OUTFILE dump from MySQL, which writes \N for NULL fields. The decode is to catch cases where the field might be an empty string or might be \N. Using Oracle 10g on Linux.

    ```
    load data
    infile objects.txt
    discardfile objects.dsc
    truncate
    into table objects
    fields terminated by x'1F' optionally enclosed by '"'
    (
      ID INTEGER EXTERNAL NULLIF (ID='\\N'),
      TITLE CHAR(128) NULLIF (TITLE='\\N'),
      PRIORITY CHAR(16) "decode(:PRIORITY, BLANKS, NULL, '\\N', NULL)",
      STATUS CHAR(64) "decode(:STATUS, BLANKS, NULL, '\\N', NULL)",
      ORIG_DATE DATE "YYYY-MM-DD HH:MM:SS" NULLIF (ORIG_DATE='\\N'),
      LASTMOD DATE "YYYY-MM-DD HH:MM:SS" NULLIF (LASTMOD='\\N'),
      SUBMITTER CHAR(128) NULLIF (SUBMITTER='\\N'),
      DEVELOPER CHAR(128) NULLIF (DEVELOPER='\\N'),
      ARCHIVE CHAR(4000) NULLIF (ARCHIVE='\\N'),
      SEVERITY CHAR(64) "decode(:SEVERITY, BLANKS, NULL, '\\N', NULL)",
      VALUED CHAR(4000) NULLIF (VALUED='\\N'),
      SRD DATE "YYYY-MM-DD" NULLIF (SRD='\\N'),
      TAG CHAR(64) NULLIF (TAG='\\N')
    )
    ```

    Sample data (record 1); the ^_ represents the unprintable 0x1F delimiter:

    ```
    1987^_Component 1987^_\N^_Done^_2002-10-16 01:51:44^_2002-10-16 01:51:44^_import^_badger^_N^_^_N^_0000-00-00^_none
    ```

    Error:

    ```
    Record 1: Rejected - Error on table objects, column SEVERITY.
    ORA-00984: column not allowed here
    ```
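    A hedged reading of the error, not from the post: BLANKS is a SQL*Loader keyword only inside NULLIF/DEFAULTIF clauses; inside a double-quoted SQL expression it is passed to Oracle verbatim and parsed as a column name, which is exactly what ORA-00984 "column not allowed here" complains about. Rewriting the expression in plain SQL avoids the keyword. Separately, Oracle date masks spell minutes as MI and 24-hour time as HH24, so "YYYY-MM-DD HH24:MI:SS" is probably what the date fields want. A sketch of one rewritten column:

    ```sql
    -- Sketch: express "blank or \N means NULL" without the BLANKS keyword.
    -- decode() treats two NULLs as equal, and trim('') yields NULL.
    SEVERITY CHAR(64) "decode(trim(:SEVERITY), NULL, NULL, '\\N', NULL, :SEVERITY)"
    ```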


  • Persisting Joda DateTime instead of Java Date in hibernate

    - by Tauren
    My entities currently contain Java Date properties. I'm starting to use Joda Time for date manipulation and calculations quite frequently. This means that I'm constantly having to convert my Dates into Joda DateTime objects and back again. So I was wondering: is there any reason I shouldn't just change my entities to store Joda DateTime objects instead of Java Date objects? Please note that these entities are persisted via Hibernate.

    I found the jodatime-hibernate project, but I was also reading on the Joda mailing list that it isn't compatible with newer versions of Hibernate. And it seems like it isn't very well maintained. So I'm wondering if it would be best to just continue converting between Date and DateTime, or if it would be wise to start persisting DateTime objects. My concern is being reliant on a poorly maintained library.

    Edit: Note that one of my objectives is to be better able to store timezone information. Storing just a Date appears to save the date in the local timezone. As my application can be used globally, I need to know the timezone as well. Joda Time Hibernate seems to address this as well in the user guide.
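    For concreteness, a sketch (not the poster's code) of how the joda-time-hibernate project plugs in: it supplies Hibernate custom types, so a property is mapped with an @Type annotation naming the contributed type class. The maintenance worry above applies precisely here, since this type class is what has to keep pace with Hibernate releases; the entity and column names below are hypothetical:

    ```java
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    import org.hibernate.annotations.Type;
    import org.joda.time.DateTime;

    @Entity
    public class Meeting {

        @Id
        @GeneratedValue
        private Long id;

        // PersistentDateTime ships with joda-time-hibernate
        // (org.joda.time.contrib.hibernate).
        @Type(type = "org.joda.time.contrib.hibernate.PersistentDateTime")
        @Column(name = "starts_at")
        private DateTime startsAt;
    }
    ```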


  • Can nginx be used as a reverse proxy for a backend websocket server?

    - by John Reilly
    We're working on a Ruby on Rails app that needs to take advantage of HTML5 WebSockets. At the moment, we have two separate "servers", so to speak: our main app running on nginx + Passenger, and a separate server using Pratik Naik's Cramp framework (running on Thin) to handle the WebSocket connections.

    Ideally, when it comes time for deployment, we'd have the Rails app running on nginx + Passenger, and the WebSocket server proxied behind nginx, so we wouldn't need the WebSocket server running on a different port. The problem is that in this setup nginx seems to close the connections to Thin too early. The connection is successfully established to the Thin server, then immediately closed with a 200 response code. Our guess is that nginx doesn't realize the client is trying to establish a long-running connection for WebSocket traffic.

    Admittedly, I'm not all that savvy with nginx config, so: is it even possible to configure nginx to act as a reverse proxy for a WebSocket server? Or do I have to wait for nginx to offer support for the new WebSocket handshake? Assuming that having both the app server and the WebSocket server listening on port 80 is a requirement, might that mean I have to have Thin running on a separate server without nginx in front for now? Thanks in advance for any advice or suggestions. :) -John
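    For the record (this capability postdates the question): nginx gained first-class WebSocket proxying in 1.3.13, and the configuration is the part people usually miss -- the proxied connection must be HTTP/1.1 with the Upgrade and Connection headers forwarded, or nginx closes the long-lived connection exactly as described above. A sketch with hypothetical upstream names and ports:

    ```nginx
    # Hypothetical upstream: the Cramp/Thin websocket server.
    upstream websocket_backend {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;

        location /websocket/ {
            proxy_pass http://websocket_backend;
            # HTTP/1.1 plus the Upgrade/Connection headers are what keep nginx
            # from tearing the connection down after the handshake.
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 3600s;  # don't time idle sockets out after 60s
        }
    }
    ```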


  • Java Collections and Garbage Collector

    - by Anth0
    A little question regarding performance in a Java web app. Let's assume I have a List<Rubrique> listRubriques with ten Rubrique objects. A Rubrique contains one list of products (List<Product> listProducts) and one list of clients (List<Client> listClients). What exactly happens in memory if I do this?

    ```java
    listRubriques.clear();
    listRubriques = null;
    ```

    My point of view would be that, since listRubriques is empty, all the objects previously referenced by this list (including listProducts and listClients) will be garbage collected pretty soon. But since collections in Java are a little bit tricky, and since I have real performance issues with my app, I'm asking the question. :)

    Edit: let's assume now that my Client object contains a List<Client>. Therefore, I have a kind of circular reference between my objects. What would happen then if my listRubriques is set to null? This time, my point of view would be that my Client objects will become "unreachable" and might create a memory leak?
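    On the circular-reference worry in the edit, a small sketch (not from the original post): the JVM collects by tracing reachability from GC roots, so a cycle that is no longer reachable is reclaimed as a unit. Cycles leak under reference counting, not under HotSpot's tracing collectors:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class CycleDemo {

        static class Client {
            final List<Client> friends = new ArrayList<Client>();
        }

        public static void main(String[] args) {
            Client a = new Client();
            Client b = new Client();
            a.friends.add(b);
            b.friends.add(a); // circular reference, as in the question's edit

            a = null;
            b = null;
            // The a<->b cycle is now unreachable from every GC root, so it is
            // eligible for collection despite the cycle. No memory leak.
            System.gc(); // a hint only; collection timing is up to the JVM
        }
    }
    ```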


  • Django LFS - custom views

    - by owca
    For all those Lightning Fast Shop users: I'm trying to implement my own front-page view that will list all products from the shop (under the '/' address). So I have a template:

    ```html
    {% extends "lfs/shop/shop_base.html" %}
    {% block content %}
    <div id="najnowsze_produkty">
        <ul>
        {% for obj in objects %}
            <li>{{ obj.name }}</li>
        {% endfor %}
        </ul>
    </div>
    {% endblock %}
    ```

    and then I've edited the main shop view:

    ```python
    from lfs.catalog.models import Category
    from lfs.catalog.models import Product

    def shop_view(request, template_name="lfs/shop/shop.html"):
        products = Product.objects.all()
        shop = lfs_get_object_or_404(Shop, pk=1)
        return render_to_response(template_name, RequestContext(request, {
            "shop": shop,
            "products": products
        }))
    ```

    but it just shows nothing. When I do the Product.objects.all() query in the shell, I get results. Any ideas what could cause the problem? Maybe I should filter for products with 'active' status only? But I'm not sure that would influence all the objects in any way.
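    One mismatch is visible above (an observation, offered as a sketch): the template loops over objects while the view puts the queryset into the context under products, so the loop has nothing to iterate regardless of LFS itself. Aligning the names, and filtering to active products as guessed above (assuming Product has an active flag), would look roughly like this, with the template's loop changed to {% for product in products %}:

    ```python
    # views.py -- sketch; the "active" field name is an assumption.
    from django.shortcuts import render_to_response
    from django.template import RequestContext

    from lfs.catalog.models import Product

    def shop_view(request, template_name="lfs/shop/shop.html"):
        products = Product.objects.filter(active=True)
        return render_to_response(template_name,
                                  RequestContext(request, {"products": products}))
    ```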


  • hudson.util.ProcessTreeTest test error

    - by senzacionale
    Error:

    ```
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.011 sec
    Running hudson.util.ProcessTreeTest
    Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.181 sec <<< FAILURE!
    Running hudson.model.LoadStatisticsTest
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.089 sec
    Running hudson.util.ArgumentListBuilderTest
    Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 sec
    Running hudson.util.RobustReflectionConverterTest
    Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 sec
    Running hudson.util.VersionNumberTest
    Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec
    Running hudson.util.CyclicGraphDetectorTest
    Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.038 sec

    Results :

    Tests in error:
      testRemoting(hudson.util.ProcessTreeTest)

    Tests run: 102, Failures: 0, Errors: 1, Skipped: 0

    [INFO] ------------------------------------------------------------------------
    [ERROR] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] There are test failures.
    Please refer to D:\PROJEKTI\Maven\hudson\main\core\target\surefire-reports for the individual test results.
    [INFO] ------------------------------------------------------------------------
    [INFO] For more information, run Maven with the -e switch
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 17 minutes 58 seconds
    [INFO] Finished at: Fri Jun 11 21:04:46 CEST 2010
    [INFO] Final Memory: 85M/152M
    [INFO] ------------------------------------------------------------------------
    ```

    Error log:

    ```
    -------------------------------------------------------------------------------
    Test set: hudson.util.ProcessTreeTest
    -------------------------------------------------------------------------------
    Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.181 sec <<< FAILURE!
    testRemoting(hudson.util.ProcessTreeTest)  Time elapsed: 0.169 sec  <<< ERROR!
    org.jvnet.winp.WinpException: Failed to read environment variable table error=299 at .\envvar-cmdline.cpp:114
        at org.jvnet.winp.Native.getCmdLineAndEnvVars(Native Method)
        at org.jvnet.winp.WinProcess.parseCmdLineAndEnvVars(WinProcess.java:114)
        at org.jvnet.winp.WinProcess.getEnvironmentVariables(WinProcess.java:109)
        at hudson.util.ProcessTree$Windows$1.getEnvironmentVariables(ProcessTree.java:419)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:274)
        at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:255)
        at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:215)
        at hudson.remoting.UserRequest.perform(UserRequest.java:114)
        at hudson.remoting.UserRequest.perform(UserRequest.java:48)
        at hudson.remoting.Request$2.run(Request.java:270)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
    ```

    Does anyone have any idea what could be wrong in this test? Regards


  • How will Arel affect Rails' includes() capabilities?

    - by Tim Snowhite
    I've looked over the Arel sources and some of the ActiveRecord sources for Rails 3.0, but I can't seem to glean a good answer for myself as to whether Arel will change our ability to use includes(), when constructing queries, for the better.

    There are instances when one might want to modify the conditions on an ActiveRecord :include query in 2.3.5 and earlier for the association records that would be returned. But as far as I know, this is not programmatically tenable for all :include queries. (I know some AR find-includes make t#{n}.c#{m} renames for all the attributes, and one could conceivably add conditions to these queries to limit the joined sets' results; but others run n_joins + 1 queries over the id sets iteratively, and I'm not sure how one might hack AR to edit those iterated queries.)

    Will Arel allow us to construct ActiveRecord queries which specify the resulting associated model objects when using includes()? Ex: User has_many :posts (which has_many :comments):

    ```ruby
    User.all(:include => :posts)
    # Say I wanted the post objects to have their comment counts loaded
    # without adding a comment_count column to `posts`.

    # At the post level, one could do so like this (I believe):
    posts_with_counts = Post.all(
      :select   => 'posts.*, count(comments.id) as comment_count',
      :joins    => 'left outer join comments on comments.post_id = posts.id',
      :group_by => 'posts.id')

    # But it seems impossible to do so while linking these post objects to each
    # user as well, without running User.all() and then zippering the objects
    # into some other collection (ugly), or running posts.group_by(&:user)
    # (even uglier, with the n user queries).
    ```
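    For what it's worth, a one-line sketch against the Rails 3 relation API (an illustration, not a claim about the final behavior): Arel-backed relations compose lazily, so a condition that references the eager-loaded table can be attached before the query runs, and ActiveRecord switches to a JOIN-based strategy to honor it:

    ```ruby
    # Sketch: a condition on the included table forces the joined eager load.
    users = User.includes(:posts).where("posts.created_at > ?", 1.week.ago)
    ```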


  • Subversion svn:externals - What's wrong here?

    - by Brandon Montgomery
    I first want to say I've read the Subversion manual. I've read this question. I've also read this question. Here's my dilemma. Let's say I have 3 repositories laid out like this:

    ```
    DataAccessObject/
        branches/
        tags/
        trunk/
            DataAccessObject/
            DataAccessObjectTests/
    PlanObject/
        branches/
        tags/
        trunk/
            PlanObject/
            PlanObjectTests/
    WinFormsPlanViewer/
        branches/
        tags/
        trunk/
            WinFormsPlanViewer/
    ```

    The PlanObject and DataAccessObject repositories contain shared projects. They are used by the WinFormsPlanViewer, but also by several other projects in several other repositories. Bear with me here. I put an svn:externals definition on the WinFormsPlanViewer/trunk folder like this:

    ```
    https://server/svn/PlanObject/trunk Objects
    https://server/svn/DataAccessObject/trunk Objects
    ```

    And here's what I see after I do an svn update:

    ```
    WinFormsPlanViewer/
        branches/
        tags/
        trunk/
            WinFormsPlanViewer/
            Objects/
                DataAccessObject/
                DataAccessObjectTests/
    ```

    The PlanObject stuff doesn't even come down in the update! I don't know if this has anything to do with it, but there's also an externals definition on the PlanObject/trunk folder:

    ```
    https://server/svn/DataAccessObject/trunk Objects
    ```

    What's going on here? What am I doing wrong? Are there bad consequences of referencing the PlanObject and the DataAccessObject from the WinFormsPlanViewer using svn:externals when the PlanObject references the DataAccessObject using svn:externals also?
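    A hedged observation, not from the post: both externals are mapped onto the same local directory, Objects, and Subversion can only check one external out into a given path, so the PlanObject definition silently loses. Giving each external its own target directory is the usual fix, sketched here:

    ```
    https://server/svn/PlanObject/trunk Objects/PlanObject
    https://server/svn/DataAccessObject/trunk Objects/DataAccessObject
    ```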


  • Help me understand entity framework 4 caching for lazy loading

    - by Chris
    I am getting some unexpected behaviour with Entity Framework 4.0, and I am hoping someone can help me understand it. I am using the Northwind database for the purposes of this question. I am also using the default code generator (not POCO or self-tracking). I expect the framework to make a round trip only when I query for objects I have not already fetched. I do get this behaviour if I turn off lazy loading. Currently in my application I briefly turn on lazy loading and then turn it back off so I can get the desired behaviour. That pretty much sucks, so please help. Here is a good code example that demonstrates the problem:

    ```vbnet
    Public Sub ManyRoundTrips()
        context.ContextOptions.LazyLoadingEnabled = True

        Dim employees As List(Of Employee) =
            context.Employees.Execute(System.Data.Objects.MergeOption.AppendOnly).ToList()

        ' Makes an unnecessary round trip to the database; I just loaded the employees.
        MessageBox.Show(context.Employees.Where(Function(x) x.EmployeeID < 10).ToList().Count)

        context.Orders.Execute(System.Data.Objects.MergeOption.AppendOnly)

        For Each emp As Employee In employees
            ' Makes an unnecessary trip to the database every time,
            ' despite the orders being preloaded.
            Dim i As Integer = emp.Orders.Count
        Next
    End Sub

    Public Sub OneRoundTrip()
        context.ContextOptions.LazyLoadingEnabled = True

        Dim employees As List(Of Employee) =
            context.Employees.Include("Orders").Execute(System.Data.Objects.MergeOption.AppendOnly).ToList()

        MessageBox.Show(employees.Where(Function(x) x.EmployeeID < 10).ToList().Count)

        For Each emp As Employee In employees
            Dim i As Integer = emp.Orders.Count
        Next
    End Sub
    ```

    Why is the first block of code making unnecessary round trips?
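    A hedged note to frame the question (not an official statement): with the default generated context, LINQ queries always go to the store -- the context reuses already-tracked entities when materializing rows, but it does not answer new queries from its cache -- and lazy loading fires on first access to a navigation property without checking whether matching entities are already attached, unless the collection was filled via Include() or flagged as loaded. A sketch of guarding the lazy load explicitly:

    ```vbnet
    ' Sketch: EntityCollection.IsLoaded reports whether EF has already
    ' fetched this employee's orders, so the lazy query can be avoided.
    For Each emp As Employee In employees
        If Not emp.Orders.IsLoaded Then
            emp.Orders.Load() ' one explicit query instead of a silent lazy load
        End If
        Dim i As Integer = emp.Orders.Count
    Next
    ```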


  • From ASPX to WCF

    - by Barguast
    I'm hoping someone can advise me on how to solve my networking scenario. Both the client and server are to be C#/.NET based.

    I basically want to invoke some kind of web service from my client in order to retrieve both binary data (e.g. files) and serialised objects and lists of objects (e.g. database query results). At the moment, I'm using ASPX pages, using the query string to provide parameters, and I get back either the binary data or the binary data of the serialised messages. This affords me a lot of flexibility: I can choose how to transmit the data, perform simultaneous requests, cancel ongoing requests, etc. Since I control the serialised format, I can also deserialise lists of objects as they are received, which is crucial.

    My problem isn't a problem as such, but this feels a little hackish, and I can't help but wonder if there are better ways to go about it. I'm considering moving to WCF, or perhaps another technology, to see if it helps. However, I need to know if it helps with my scenarios above. That is: can a WCF method return a list of objects, and can the client receive the items of this list as they arrive, as opposed to getting the entire list on completion (i.e. streaming)? Does anyone know of any examples of this? Am I likely to get any performance benefits from this? I don't know how well ASPX pages are tuned for this, as it surely isn't their primary purpose. Are there any other approaches I should consider?

    Thanks for your time spent reading this. I hope you can help.
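    On the streaming question, a sketch (not from the original post): WCF's built-in streaming is transport-level -- an operation returns a System.IO.Stream and the binding runs with transferMode="Streamed" -- so items can be consumed as they arrive only if the client deserializes them off the stream itself, which is essentially the custom-format approach already described above. Service and operation names here are hypothetical:

    ```csharp
    using System.IO;
    using System.ServiceModel;

    [ServiceContract]
    public interface ICatalogService
    {
        // The client reads the stream incrementally and deserializes each
        // object as its bytes arrive, instead of waiting for a whole list.
        [OperationContract]
        Stream GetItems(string query);
    }
    ```

    The corresponding binding would set transferMode="Streamed" (for example on basicHttpBinding); plain list-returning operations, by contrast, are buffered and arrive only on completion.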


  • Using Pylint with Django

    - by rcreswick
    I would very much like to integrate pylint into the build process for my Python projects, but I have run into one show-stopper: one of the error types that I find extremely useful, E1101: %s %r has no %r member, constantly reports errors when using common Django fields. For example:

    ```
    E1101:125:get_user_tags: Class 'Tag' has no 'objects' member
    ```

    which is caused by this code:

    ```python
    def get_user_tags(username):
        """Gets all the tags that username has used. Returns a query set."""
        return Tag.objects.filter(  # This line triggers the error.
            tagownership__users__username__exact=username).distinct()

    # Here is the Tag class; models.Model is provided by Django:
    class Tag(models.Model):
        """Model for user-defined strings that help categorize Events
        on a per-user basis."""
        name = models.CharField(max_length=500, null=False, unique=True)

        def __unicode__(self):
            return self.name
    ```

    How can I tune pylint to properly take fields such as objects into account? (I've also looked into the Django source, and I have been unable to find the implementation of objects, so I suspect it is not "just" a class field. On the other hand, I'm fairly new to Python, so I may very well have overlooked something.)

    Edit: The only way I've found to tell pylint not to warn about these is by blocking all errors of the type (E1101), which is not an acceptable solution, since that is (in my opinion) an extremely useful error. If there is another way, without augmenting the pylint source, please point me to the specifics. :)

    See here for a summary of the problems I've had with pychecker and pyflakes; they've proven to be far too unstable for general use. (In pychecker's case, the crashes originated in the pychecker code, not the source it was loading/invoking.)
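    One targeted knob, not mentioned in the question itself: pylint's checker configuration has a generated-members option for precisely this situation -- attributes created at runtime that static analysis cannot see -- which suppresses E1101 for the listed names only, instead of disabling the error class outright. A sketch of a .pylintrc fragment (the member list is an assumption to adapt):

    ```ini
    [TYPECHECK]
    # Attributes Django adds to model classes at runtime; pylint will skip
    # E1101 ("no member") checks for these names only.
    generated-members=objects,DoesNotExist,id
    ```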


  • I'm trying to install psycopg2 onto Mac OS 10.6.3; it claims it can't find "stdarg.h" but I can see it

    - by cojadate
    I'm desperately trying to successfully install psycopg2 but keep running into errors. The latest one seems to involve it not being able to find stdarg.h (see output below). However, I can see with my own eyes that a file called stdarg.h exists at /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h (where it claims it can't find anything), so I've no idea what to do about it. I'm running Mac OS 10.6.3, and within the last few days I've made sure I have all the latest OS developer tools. I have Python 2.6.2 and PostgreSQL 8.4, if that makes any difference.

    ```
    python setup.py install
    running install
    running build
    running build_py
    running build_ext
    building 'psycopg2._psycopg' extension
    creating build/temp.macosx-10.3-fat-2.6
    creating build/temp.macosx-10.3-fat-2.6/psycopg
    gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.2.1 (dt dec ext pq3)" -DPG_VERSION_HEX=0x080404 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -DHAVE_PQPROTOCOL3=1 -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -I. -I/opt/local/include/postgresql84 -I/opt/local/include/postgresql84/server -c psycopg/psycopgmodule.c -o build/temp.macosx-10.3-fat-2.6/psycopg/psycopgmodule.o
    In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4,
                     from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85,
                     from psycopg/psycopgmodule.c:27:
    /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory
    In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4,
                     from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85,
                     from psycopg/psycopgmodule.c:27:
    /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory
    lipo: can't figure out the architecture type of: /var/folders/MQ/MQ-tWOWWG+izzuZCrAJpzk+++TI/-Tmp-//ccakFhRS.out
    error: command 'gcc' failed with exit status
    ```
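    A hedged pointer, not from the post: the log shows gcc building for ppc and i386 against the 10.4u SDK, a combination Snow Leopard's toolchain handles poorly; the lipo failure is the giveaway that one architecture's compile produced nothing. Constraining the build to the architectures the installed toolchain actually supports is the usual workaround:

    ```sh
    # Sketch: build only the native architectures on Snow Leopard.
    ARCHFLAGS="-arch i386 -arch x86_64" python setup.py install
    ```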


  • Reasons for & against a Database

    - by dbemerlin
    Hi, I had a discussion with a coworker about the architecture of a program I'm writing, and I'd like some more opinions.

    The situation: the program should update at near-realtime (+/- 1 minute). It involves the movement of objects on a coordinate system. Some events occur at regular intervals (i.e. creation of the objects). Movements can change at any time through user input.

    My solution was: build a server that runs continuously and stores the data internally. The server dumps a state-of-the-program snapshot at regular intervals to protect against power failures and/or crashes.

    He argued that the program requires a database and that I should use cron jobs to update the data. I can store movement information as start point, end point, and speed, and update the position in the cron job (calculating collisions with other objects there as well) from direction and speed.

    His reasons:
    - It requires more CPU and memory because it runs constantly.
    - Power failures/crashes might destroy data.
    - Databases are faster.

    My reasons against this are mostly:
    - Not very precise, as events can only occur on full minutes (wouldn't be that bad, though).
    - Requires a (possibly costly) transformation of data on every run, from relational data to objects.
    - An RDBMS is a general solution for a specialized problem, so a specialized solution should be more efficient.
    - Power failures (or other crashes) can leave the data in an undefined state, with only partially updated data, unless (possibly costly) precautions (like transactions) are taken.

    What are your opinions about that? Which arguments can you add for either side?


  • Problem with duplicates in a SQL Join

    - by Chris Ballance
    I have the following result set from a join of three tables: an articles table, a products table, and an articles-to-products mapping table. I would like to have the results with duplicates removed, similar to a SELECT DISTINCT on content id.

    Current result set:

    ```
    [ContentId] [Title]        [ProductId]
    1           article one    2
    1           article one    3
    1           article one    9
    4           article four   1
    4           article four   10
    4           article four   14
    5           article five   1
    6           article six    8
    6           article six    10
    6           article six    11
    6           article six    13
    7           article seven  14
    ```

    Desired result set:

    ```
    [ContentId] [Title]        [ProductId]
    1           article one    *
    4           article four   *
    5           article five   *
    6           article six    *
    7           article seven  *
    ```

    Here is a condensed example of the relevant SQL:

    ```sql
    IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'tempdb.dbo.products') AND type = (N'U'))
        DROP TABLE tempdb.dbo.products
    GO
    CREATE TABLE tempdb.dbo.products
    (
        productid   int,
        productname varchar(255)
    )
    GO

    IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'articles') AND type = (N'U'))
        DROP TABLE tempdb.dbo.articles
    GO
    CREATE TABLE tempdb.dbo.articles
    (
        contentid int,
        title     varchar(255)
    )
    GO

    IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'articleproducts') AND type = (N'U'))
        DROP TABLE tempdb.dbo.articleproducts
    GO
    CREATE TABLE tempdb.dbo.articleproducts
    (
        contentid int,
        productid int
    )
    GO

    INSERT INTO tempdb.dbo.products VALUES
        (1,'product one'), (2,'product two'), (3,'product three'), (4,'product four'),
        (5,'product five'), (6,'product six'), (7,'product seven'), (8,'product eigth'),
        (9,'product nine'), (10,'product ten'), (11,'product eleven'), (12,'product twelve'),
        (13,'product thirteen'), (14,'product fourteen')

    INSERT INTO tempdb.dbo.articles VALUES
        (1,'article one'), (2,'article two'), (3,'article three'), (4,'article four'),
        (5,'article five'), (6,'article six'), (7,'article seven'), (8,'article eight'),
        (9,'article nine'), (10,'article ten')

    INSERT INTO tempdb.dbo.articleproducts VALUES
        (1,2), (1,3), (1,9), (4,1), (4,10), (4,14), (5,1), (6,8), (6,10), (6,11), (6,13), (7,14)
    GO

    SELECT DISTINCT(a.contentid), a.title, p.productid
    FROM articles a
    JOIN articleproducts ap ON a.contentid = ap.contentid
    JOIN products p ON a.contentid = ap.contentid AND p.productid = ap.productid
    ORDER BY a.contentid
    ```
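    A sketch of one way to get the desired shape (not the poster's query): DISTINCT cannot help while productid stays in the select list, since each row really is distinct; if any product id per article will do, grouping on the article columns collapses the duplicates, with an aggregate standing in for the * column:

    ```sql
    SELECT a.contentid, a.title, MIN(p.productid) AS productid
    FROM articles a
    JOIN articleproducts ap ON a.contentid = ap.contentid
    JOIN products p ON p.productid = ap.productid
    GROUP BY a.contentid, a.title
    ORDER BY a.contentid
    ```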


  • Aggregate Pattern and Performance Issues

    - by Mosh
    Hello, I have read about the Aggregate pattern, but I'm confused about something here. The pattern states that all the objects belonging to the aggregate should be accessed via the aggregate root, and not directly. And I'm assuming that is the reason why they say you should have a single repository per aggregate.

    But I think this adds a noticeable overhead to the application. For example, in a typical web-based application, what if I want to get an object belonging to an aggregate (which is NOT the aggregate root)? I'll have to call Repository.GetAggregateRootObject(), which loads the aggregate root and all its child objects, and then iterate through the child objects to find the one I'm looking for. In other words, I'm loading lots of data and throwing it all out except the particular object I'm looking for. Is there something I'm missing here?

    PS: I know some of you may suggest improving performance with lazy loading, but that's not what I'm asking here. The Aggregate pattern requires that all objects belonging to the aggregate be loaded together, so we can enforce business rules.

