Search Results

Search found 15041 results on 602 pages for 'breaking changes'.

Page 179/602

  • how to create a mootools 1.1 function with array

    - by liz
    I have a series of select lists that I am using to populate text boxes with IDs: you click a select option and another text box is filled with its ID. With just one select/ID pair this works fine, but I have multiples, and the only thing that changes is the ID of the select and input. In fact just the ending changes: the inputs all start with featuredproductid and the select IDs all start with recipesproduct, and both end with the category. I know that listing this over and over for each category is not the way to do it. I think I need to make an array of the categories, var cats = ['olive oil', "grains", "pasta"], and then use a forEach function? Maybe? Here is the clunky code:

        window.addEvent('domready', function() {
            $('recipesproductoliveoil').addEvent('change', function(e){
                pidselected = this.options[this.selectedIndex].getProperty('value');
                $("featuredproductidoliveoil").setProperties({ value: pidselected });
            });
            $('recipesproductgrains').addEvent('change', function(e){
                pidselected = this.options[this.selectedIndex].getProperty('value');
                $("featuredproductidgrains").setProperties({ value: pidselected });
            });
            $('recipesproductpasta').addEvent('change', function(e){
                pidselected = this.options[this.selectedIndex].getProperty('value');
                $("featuredproductidpasta").setProperties({ value: pidselected });
            });
            $('recipesproductpantry').addEvent('change', function(e){
                pidselected = this.options[this.selectedIndex].getProperty('value');
                $("featuredproductidpantry").setProperties({ value: pidselected });
            });
        });

    Keep in mind this is MooTools 1.1 (no, I can't update it, sorry). I am sure this is kind of basic, something I seem to have trouble wrapping my brain around, but I am quite sure doing it as above is not really good...
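
    A minimal sketch of the array-plus-loop idea described above, assuming the category suffixes match the element IDs exactly and that Array's each() is available in MooTools 1.1 (if not, a plain for loop over cats does the same job):

        window.addEvent('domready', function() {
            var cats = ['oliveoil', 'grains', 'pasta', 'pantry'];
            cats.each(function(cat) {
                $('recipesproduct' + cat).addEvent('change', function() {
                    // copy the selected option's value into the matching text box
                    var pidselected = this.options[this.selectedIndex].getProperty('value');
                    $('featuredproductid' + cat).setProperties({ value: pidselected });
                });
            });
        });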

    Read the article

  • Stored Procedures In Source Control - Automate Build/Deployment Process

    - by Alex
    My company provides a large .NET service-oriented solution. The services layer interacts with a T-SQL back-end consisting of hundreds of tables and stored procedures. Our C# code is in version control (SVN) but our stored procedures and schema are not. After much lobbying of expedient upper management, I was allowed to review our (non-existent) build/deployment process to accomplish the following goals: place schema and stored procedures under source control, and automate the build/deployment process. I would like to proceed per the accepted answer's strategy in this post, but I have additional questions:

    1. I would like to use Hudson as my build server. Is this a reasonable choice for a C#/SQL solution? What better alternatives should I explore?
    2. Assuming I have all triggers, stored procedures, schema, etc. under source control, and that they are scripted to individual files, how do I generate a build script which will take into account dependencies/references between these items? (SQL Server does this automatically, but it generates one giant script.)
    3. What does the workflow of performing an update at the client look like? i.e. I have to keep existing table data. How do I roll back schema changes?
    4. I am the only programmer. Several other pseudo-technical staff like to make changes directly inside SQL Management Studio. Is it realistic to expect others to adhere to this solution, and how can I enforce it?

    Thank you in advance for your help.

    Read the article

  • Encapsulating user input of data for a class (C++)

    - by Dr. Monkey
    For an assignment I've made a simple C++ program that uses a superclass (Student) and two subclasses (CourseStudent and ResearchStudent) to store a list of students and print out their details, with different details shown for the two different types of students (using overriding of the display() method from Student). My question is about how the program collects input from the user of things like the student name, ID number, unit and fee information (for a course student) and research information (for research students): My implementation has the prompting for user input and the collecting of that input handled within the classes themselves. The reasoning behind this was that each class knows what kind of input it needs, so it makes sense to me to have it know how to ask for it (given an ostream through which to ask and an istream to collect the input from). My lecturer says that the prompting and input should all be handled in the main program, which seems to me somewhat messier, and would make it trickier to extend the program to handle different types of students. I am considering, as a compromise, to make a helper class that handles the prompting and collection of user input for each type of Student, which could then be called on by the main program. The advantage of this would be that the student classes don't have as much in them (so they're cleaner), but also they can be bundled with the helper classes if the input functionality is required. This also means more classes of Student could be added without having to make major changes to the main program, as long as helper classes are provided for these new classes. Also the helper class could be swapped for an alternative language version without having to make any changes to the class itself. What are the major advantages and disadvantages of the three different options for user input (fully encapsulated, helper class or in the main program)?
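
    One way to picture the "helper class" compromise described above is a small reader per student type, so prompting lives outside both the Student classes and main(). A rough sketch (class and member names here are illustrative, not from the assignment):

        #include <iostream>
        #include <string>

        class Student {
        public:
            virtual ~Student() {}
            virtual void display(std::ostream& out) const = 0;
        };

        class CourseStudent : public Student {
        public:
            CourseStudent(const std::string& name, long id) : name(name), id(id) {}
            void display(std::ostream& out) const {
                out << "Course student " << name << " (ID " << id << ")\n";
            }
        private:
            std::string name;
            long id;
        };

        // Helper: knows how to prompt for and build a CourseStudent, so the
        // entity class stays clean and main() only has to call read().
        class CourseStudentReader {
        public:
            static CourseStudent read(std::ostream& prompt, std::istream& in) {
                std::string name;
                long id = 0;
                prompt << "Name: ";
                in >> name;
                prompt << "ID number: ";
                in >> id;
                return CourseStudent(name, id);
            }
        };

    Adding a new student type then means adding a class plus a matching reader, while main() stays a thin loop over the readers.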

    Read the article

  • What can i use to journal writes to file system

    - by Dmitry
    Hello all, I need to track all writes to files in order to keep a synchronized copy of them somewhere else (a server, or just another directory; it doesn't matter which). Assume the following:

    - all files are located in the same directory
    - it is fine to create some system files (e.g. SomeFileName.Ext~temp-data)
    - no one has concurrent access to the synced directory; nobody touches our meta-files or changes the real files before we flush the postponed writes (like commits)
    - there is no need to recover "local" changes after a crash; the system can simply be rolled back to the "server" state by copying from it
    - it is important that the whole thing is transparent to use (the programmer just calls ordinary fopen(), read(), write())

    It must be guaranteed that the copy of the files held by the "server" is consistent, i.e. the full set of files as it existed at some moment in time. It may be somewhat outdated, but it must be a fair snapshot of all files at a single point in time. As I understand it, I should wrap the writing logic to collect data in order to send changes to the "server", for example by writing to a temporary File~tmp, and I also have to wrap reads so the program can still see the current contents of a file. It would be great if you could suggest an existing library (Java or C++, it doesn't matter) or solution (customizing a VCS?), or give hints on how I should write it myself.

    Edit: after some reading I have more precise requirements: I need a COW (copy-on-write) wrapper for fopen(), fwrite(), ..., or an interceptor (hook) for WriteFile() and the other file-system API calls. A log-structured file system in userspace would be an alternative too.
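
    To make the fopen()-level idea above concrete, here is a very rough copy-on-write sketch in C. The shadow-file naming and the cow_fopen name are only illustrative, and a real wrapper would also copy the original into the shadow before opening it in "r+" or "a" modes:

        #include <stdio.h>
        #include <string.h>

        static void shadow_name(const char *path, char *buf, size_t len) {
            /* pending local changes live next to the original as File~tmp */
            snprintf(buf, len, "%s~tmp", path);
        }

        FILE *cow_fopen(const char *path, const char *mode) {
            char shadow[4096];
            shadow_name(path, shadow, sizeof shadow);

            if (strpbrk(mode, "wa+") != NULL) {
                /* any write is redirected to the shadow copy; the original
                   stays untouched until the postponed "commit" copies the
                   shadow over it (and ships it to the server) */
                return fopen(shadow, mode);
            }

            /* plain read: prefer pending local changes, else the original */
            FILE *f = fopen(shadow, mode);
            return f ? f : fopen(path, mode);
        }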

    Read the article

  • How can I get the type I want?

    - by Danny Chen
    There are a lot of classes like this in my project (very old and stable code; I can't make many changes to them, though slight changes are OK):

        public class MyEntity
        {
            public long ID { get; set; }
            public string Name { get; set; }
            public decimal Salary { get; set; }

            public static MyEntity GetMyEntity(long ID)
            {
                MyEntity e = new MyEntity();
                // load data from DB and bind to this instance
                return e;
            }
        }

    For some reason, I now need to do this:

        Type t = Type.GetType("XXX"); // XXX is one of the above classes' names
        MethodInfo staticM = t.GetMethods(BindingFlags.Public | BindingFlags.Static)
                              .FirstOrDefault(); // I'm sure I can get the correct one
        var o = staticM.Invoke(...); // returns an object, but I want the type above!

    If I pass "MyEntity" at the beginning, I hope I can get o back as a MyEntity. Please note that I know only the name of the class, so MyEntity e = staticM.Invoke(...) as MyEntity; can't be used here.

    Read the article

  • PC -> TV HDMI suddenly doesn't work, any other option does

    - by XSlicer
    So my parents wanted to connect their PC to their TV, so they bought a 10m (35 ft) cable. It all worked perfectly. Then my mom tripped over the cable, breaking it. She somehow got a free replacement, and at the same time a more expensive one from an uncle. Now the problem is that this suddenly doesn't work anymore (the TV says "Format not supported"). I've tried several other things to pinpoint the actual problem, but none of them was conclusive:

    - PC (HDMI) to TV doesn't work (both cables)
    - PC (HDMI) to TV doesn't work (short cable)
    - Laptop (HDMI) to TV works (same cables)
    - Blu-ray (HDMI) to TV works (same cables)
    - PC (HDMI) to monitor works (same cables)
    - PC (DVI with HDMI adapter) to TV works (same cables)

    I've tried different ports, tried Linux, and reinstalled drivers (the graphics card is integrated, an i5 Sandy Bridge; normally it runs Windows 7 Home Premium, and the TV is a Philips 37PFL7605H). So basically, everything works except the one thing we actually want. The only things I can think of are that the sound is interfering or that the HDMI output is somehow broken. Or is it anything else? I'm kind of lost.

    Read the article

  • How do you organise multiple git repositories?

    - by dbr
    With SVN, I had a single big repository I kept on a server and checked out on a few machines. This was a pretty good backup system, and it allowed me to easily work on any of the machines. I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing. Now I have a bunch of git repositories for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command. Basically, I like having all my code (not just projects, but random snippets and scripts, things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory sticks/hard drives as backup. The problem is that it's a private repository, and git doesn't allow checking out a specific folder (which I could otherwise push to github as a separate project, while having the changes appear in both the master repo and the sub-repos). I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories and don't really contain the actual code, so they're useless for backup). Currently I have a folder of git repos (for example ~/code_projects/proj1/.git/, ~/code_projects/proj2/.git/), and after making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos), then do git push backupdrive1, git push mymemorystick, etc. So, the question: how do you organise your personal code and projects with git repositories, and keep them synced and backed up?

    Read the article

  • Search engine recommendation for 100 sites of about 4000 pages

    - by fwkb
    I am looking for a search engine that can regularly (daily-ish) scan about 100 pages for changes and index an associated site if changes since the last scan are found. It should be able to handle about 100 sites, each averaging 4000 pages of about 5k average size, each on a different server (but only the one centralized search engine). Each of these sites will have a search form that gets submitted to this search engine. The results that are returned must be specific to the site that submitted them. I create the templates for the external sites, so I can give the search form a hidden field that specifies which site the form is submitted from. What would you recommend I look into? I would love to use a Python-based system for this, if feasible. I am currently using something called iSearch2. It doesn't seem very stable at this scale, the description of the product states it is not really intended to do multiple sites, is in PHP (which is less comfortable to me than Python), and has a few other shortcomings for my specific situation.

    Read the article

  • DDD and MVC Models hold ID of separate entity or the entity itself?

    - by Dr. Zim
    If you have an Order that references a customer, does the model include the ID of the customer or a copy of the customer object, like a value object (thinking DDD)? I would like to do this:

        public class Order
        {
            public int ID { get; set; }
            public Customer customer { get; set; }
            ...
        }

    Right now I do this:

        public class Order
        {
            public int ID { get; set; }
            public int customerID { get; set; }
            ...
        }

    It would be more convenient to include the complete customer object, rather than an ID, in the view model passed to the form. Otherwise, I need to figure out how to get the vendor information to the view that the order references by ID. This also implies that the repository understands how to deal with the customer object it finds within the order object when they call save (if we select the first option). If we select the second option, we will need to know where in the view model to put it. It is certain they will select an existing customer. However, it is also certain they may want to change the information in place on the display form. One could argue to have the controller extract the customer object, submit customer changes separately to the repository, then submit changes to the order, keeping the customerID in the order.

    Read the article

  • Developing on a windows machine that interacts with a linux system

    - by Jamie
    Sorry for the bad title (couldn't think of a better way to describe it) I have a windows machine which I do development on. However, I have a new project which needs to interact with a linux system (executing linux commands etc.). So, obviously I can't do development on my windows machine..and I don't wish to code on the dev machine, svn commit and then svn update it on the linux machine. Is there a way where any changes I make on my dev machine will be quickly mirrored to the linux machine? SVN is not a very quick alternative and of course some changes will be very minor. Any ideas? A network share I guess....but that's not very pretty (bit slow too). As fellow developers I would like to know if you've been in a similar situation and how you've resolved it. On a furthernote, I can't just install Ubuntu as my development machine and mirror the commands, applications etc. from the linux machine because it's a cluster 'master' machine and so therefore it has quite a special configuration. Thanks guys! EDIT: I've also thought about having web services on the linux machine and then just calling them from code thus seperating platform development dependency. What do you think about that too? thanks

    Read the article

  • Committed JDO writes do not apply on local GAE HRD, or possibly reused transaction

    - by eeeeaaii
    I'm using JDO 2.3 on App Engine. I was using the Master/Slave datastore for local testing and recently switched over to using the HRD datastore for local testing, and parts of my app are breaking (which is to be expected). One part of the app that's breaking is where it sends a lot of writes quickly: because of the 1-second limit, it's failing with a concurrent modification exception. Okay, so that's also to be expected, so I have the browser retry the writes again later when they fail (maybe not the best hack but I'm just trying to get it working quickly). But a weird thing is happening. Some of the writes which should be succeeding (the ones that DON'T get the concurrent modification exception) are also failing, even though the commit phase completes and the request returns my success code. I can see from the log that the retried requests are working okay, but these other requests that seem to have committed on the first try are, I guess, never "applied." But from what I read about the Apply phase, writing again to that same entity should force the apply... but it doesn't. Code follows. Some things to note:

    - I am attempting to use automatic JDO caching, i.e. where JDO uses memcache under the covers. This doesn't actually work unless you wrap everything in a transaction.
    - All the requests are doing is reading a string out of an entity, modifying part of the string, and saving that string back to the entity. If these requests weren't in transactions, you'd of course have the "dirty read" problem. But with transactions, isolation is supposed to be at the level of "serializable", so I don't see what's happening here.
    - The entity being modified is a root entity (not in a group).
    - I have cross-group transactions enabled.

    Another weird thing is happening. If the concurrent modification thing happens, and I subsequently edit more than 5 more entities (this is the max for cross-group transactions), then nothing happens right away, but when I stop and restart the server I get "IllegalArgumentException: operating on too many entity groups in a single transaction". Could it be possible that the PMF is returning the same PersistenceManager every time, or the PM is reusing the same transaction every time? I don't see how I could possibly get the above error otherwise. The code inside the transaction just edits one root entity. I can't think of any other way that GAE would give me the "too many entity groups" error.
    The relevant code (this is a simplified version):

        PersistenceManager pm = PMF.getManager();
        Transaction tx = pm.currentTransaction();
        String responsetext = "";
        try {
            tx.begin();
            // I have extra calls to "makePersistent" because I found that relying
            // on pm.close didn't always write the objects to cache; maybe that
            // was only a DataNucleus 1.x issue though
            Key userkey = obtainUserKeyFromCookie();
            User u = pm.getObjectById(User.class, userkey);
            pm.makePersistent(u); // to make sure it gets cached for next time
            Key mapkey = obtainMapKeyFromQueryString();
            // this is NOT a java.util.Map, just FYI
            Map currentmap = pm.getObjectById(Map.class, mapkey);
            Text mapData = currentmap.getMapData(); // mapData is JSON stored in the entity
            Text newMapData = parseModifyAndReturn(mapData); // transform the map
            currentmap.setMapData(newMapData); // mutate the Map object
            pm.makePersistent(currentmap); // make sure to persist so there is a cache hit
            tx.commit();
            responsetext = "OK";
        } catch (JDOCanRetryException jdoe) {
            // log jdoe
            responsetext = "RETRY";
        } catch (Exception e) {
            // log e
            responsetext = "ERROR";
        } finally {
            if (tx.isActive()) {
                tx.rollback();
            }
            pm.close();
        }
        resp.getWriter().println(responsetext);

    EDIT: so I have verified that it fails after exactly 5 transactions. Here's what I do: I create a Foo (root entity), do a bunch of concurrent operations on that Foo, and some fail and get retried, and some commit but don't apply (as described above). Then I start creating more Foos, and do a few operations on those new Foos. If I only create four Foos, stopping and restarting App Engine does NOT give me the IllegalArgumentException. However, if I create five Foos (which is the limit for cross-group transactions), then when I stop and restart App Engine, I do get the exception. So it seems that somehow these new Foos I am creating are counting toward the limit of 5 max entities per transaction, even though they are supposed to be handled by separate transactions. It's as if a transaction is still open and is being reused by the servlet when it handles the new requests for the 2nd through 5th Foos.

    EDIT2: it looks like the IllegalArgument thing is independent of the other bug. In other words, it always happens when I create five Foos, even if I don't get the concurrent modification exception. I don't know if it's a symptom of the same problem or if it's unrelated.

    EDIT3: I found out what was causing the (unrelated) IllegalArgumentException; it was a dumb mistake on my part. But the other issue is still happening.

    EDIT4: added pseudocode for the datastore access.

    EDIT5: I am pretty sure I know why this is happening, but I will still award the bounty to anyone who can confirm it. Basically, I think the problem is that transactions are not really implemented in the local version of the datastore. References:

    https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/gVMS1dFSpcU
    https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/deGasFdIO-M
    https://groups.google.com/forum/?hl=en&fromgroups=#!msg/google-appengine-java/4YuNb6TVD6I/gSttMmHYwo0J

    Because transactions are not implemented, rollback is essentially a no-op. Therefore, I get a dirty read when two transactions try to modify the record at the same time. In other words, A reads the data and B reads the data at the same time. A attempts to modify the data, and B attempts to modify a different part of the data. A writes to the datastore, then B writes, obliterating A's changes.
Then B is "rolled back" by app engine, but since rollbacks are a no-op when running on the local datastore, B's changes stay, and A's do not. Meanwhile, since B is the thread that threw the exception, the client retries B, but does not retry A (since A was supposedly the transaction that succeeded).

    Read the article

  • How to reduce the need for IISRESET for developing ASP.NET web app in IIS 5.1

    - by John Galt
    I have a web application project on my dev PC running WinXP and hence IIS 5.1. The changes I'm making to this site seem to "take effect" only after I do IISRESET. That is, I make a source change, Rebuild the project and then Start without Debugging (or with debugging). The newly changed code is not "visible" or in effect unless I intervene with an IISRESET. BTW, the "web" tab on the Properties display for the web app project is configured to use the Local IIS web server at project Url: http://localhost/myVirtualDirectory ... but I've noticed the same issue when using the VStudio Dev Server (i.e. I have to stop it by visiting the taskbar tray area in order to see my source changes take effect). Is this something I can change? EDIT UPDATE: Just wanting to clear this up if possible. Two answers diverge below; not sure how to move forward. One states this is to be expected (weakness of IIS 5.1 which in turn is the best WinXP can provide). Another states this is not expected behavior (and I tend to agree since this is the first I've encounted this on the same old WinXP dev platform I've had a long time). I suspect it may be something "deep inside" the Visual Studio 2008 web app which was upgraded to this new IDE from VStudio 2002 (ASP.NET 1.1). I've tried to add comment/questions down each answer path. Thanks.

    Read the article

  • routes as explained in RoR tutorial 2nd Ed?

    - by 7stud
    The author, Michael Hartl, says: Here the rule get "static_pages/home" maps requests for the URI /static_pages/home to the home action in the StaticPages controller. How? The type of request is given, the URL is given, but where is the mapping to a controller and action? My tests all pass, though. I also tried deleting all the actions in the StaticPagesController, which just looks like this:

        class StaticPagesController < ApplicationController
          def home
          end

          def about
          end

          def help
          end

          def contact
          end
        end

    ...and my tests still pass, which is puzzling. The 2nd edition of the book (online) is really frustrating. Specifically, the section about making changes to the Guardfile is impossible to follow. For instance, if I instruct you to edit this file:

        blah blah blah
        dog dog dog
        beetle beetle beetle
        jump jump jump

    and make these changes:

        blah blah blah
        .
        .
        .
        go go go
        .
        .
        .
        jump jump jump

    ...would you have any idea where the line 'go go go' should be in the code? And the hint for exercise 3.5-1 is flat out wrong. If the author would put up a comment section at the end of every chapter, the Rails community could self-edit the book.
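
    For reference on the first question: when a route like that has no explicit :to option, Rails reads the controller and action out of the path string itself, so "static_pages/home" is shorthand for :to => "static_pages#home". A sketch of what the tutorial's routes file is effectively saying (file contents assumed here, not quoted from the book):

        # config/routes.rb
        SampleApp::Application.routes.draw do
          get "static_pages/home"   # implicitly :to => "static_pages#home"
          get "static_pages/help"   # implicitly :to => "static_pages#help"
        end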

    Read the article

  • SQL trigger to delete rows from database

    - by wpearse
    I have an industrial system that logs alarms to a remotely hosted MySQL database. The industrial system inserts a new row into a table named 'alarms' whenever a property of the alarm changes (such as the time the alarm was activated, acknowledged or switched off). I don't want multiple records for each alarm, so I have set up two database triggers. The first trigger mirrors each new record to a second table, creating/updating rows as required:

        CREATE TRIGGER `mirror_alarms` BEFORE INSERT ON `alarms`
        FOR EACH ROW
        INSERT INTO `alarm_display` (Tag,...,OffTime)
        VALUES (new.Tag,...,new.OffTime)
        ON DUPLICATE KEY UPDATE OnDate=new.OnDate,...,OffTime=new.OffTime

    The second trigger should execute after the first and (ideally) delete all rows from the alarms table. (I used the Tag property of the alarm because the Tag property never changes, although I suspect I could just use a 'DELETE FROM alarms WHERE 1' statement to the same effect.)

        CREATE TRIGGER `remove_alarms` AFTER INSERT ON `alarms`
        FOR EACH ROW
        DELETE FROM alarms WHERE Tag=new.Tag

    My problem is that the second trigger doesn't appear to run, or if it does, it doesn't delete any rows from the database. So here's the question: why does my second trigger not do what I expect it to do?
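
    For context, MySQL does not let a trigger modify the table the trigger is defined on (the triggering statement fails with error 1442), which would explain the second trigger appearing to do nothing. A hedged sketch of the usual workaround, running the cleanup outside the trigger with the event scheduler (event name and interval are illustrative):

        -- requires the event scheduler to be enabled:
        --   SET GLOBAL event_scheduler = ON;
        CREATE EVENT purge_mirrored_alarms
        ON SCHEDULE EVERY 5 MINUTE
        DO
          DELETE FROM alarms;  -- alarm_display keeps the consolidated copy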

    Read the article

  • ASP.NET Web Application: use 1 or multiple virtual directories

    - by tster
    I am working on a (largish) internal web application which has multiple modules (security, execution, features, reports, etc.). All the pages in the app share navigation, CSS, JS, controls, etc. I want to make a single "Web Application" project, which includes all the pages for the app and then references various projects containing the database and business logic. However, some of the people on the project want separate projects for the pages of each module. To make this clearer, this is the project layout I'm advocating:

        /WebInterface*
        /SecurityLib
        /ExecutionLib
        etc...

    And here is what they are advocating:

        /SecurityInterface*
        /SecurityLib
        /ExecutionInterface*
        /ExecutionLib
        etc...

    (* project will be published to a virtual directory of IIS)

    Basically, what I'm looking for is the advantages of both approaches. Here is what I can think of so far:

    Single virtual directory pros:

    - Modules can share a single MasterPage
    - Modules can share UserControls (this will be common)
    - Links to other modules are within the same virtual directory, and thus don't need to be fully qualified
    - Less chance of having incompatible module versions together

    Multiple virtual directories pros:

    - Can publish a new version of a single module without disrupting other modules
    - Modules are more compartmentalized; changes are less likely to break other modules

    I don't buy those arguments, though. First, using load-balanced servers (which we will have), we should be able to publish new versions of the project with zero downtime, assuming there are no breaking database changes. Second, if something "breaks" another module, then either there is an improper dependency or the break will show up eventually in the other module anyway, when the developers copy over the latest version of the UserControl, MasterPage or DLL. As a point of reference, there are about 10 developers on the project at about 50% of their time, and the initial development will take about 9 months.

    Read the article

  • git commit best practices

    - by Ivan Z. Siu
    I am using git to manage a C++ project. When I am working on the project, I find it hard to organize changes into commits when a change touches many places. For example, I may change a class interface in a .h file, which will affect the corresponding .cpp file and also other files using it. I am not sure whether it is reasonable to put all of that into one big commit. Intuitively, I think commits should be modular, each corresponding to a functional update/change, so that collaborators can pick things accordingly. But it seems that sometimes it is inevitable to include lots of files and changes to make a functional change actually work. Searching did not yield any good suggestions or tips, so I wonder if anyone could give me some best practices for making commits. Thanks! PS. I've been using git for a while and I know how to interactively add/rebase/split/amend/... What I am asking about is the PHILOSOPHY part.

    Read the article

  • Git and cloning

    - by jriff
    Hi all! I have built an app for a client called 'A' (not really). I have found that it is very cool and I want to sell it to other clients as well. The directory 'A' is a Git repository. I think I have a problem with cloning it. As far as I can see, I need to make a copy of the dir 'A' and call it 'Generic_A', then delete the dir 'A' and do a "git clone Generic_A A". Then I could start changing the 'Generic_A' repo towards a generic design with all client references removed. But that is kind of the wrong way around: I should have started with the generic design and then cloned the repo to change it to the client-specific design. Can I:

    1. make a new branch
    2. do all the changes to make the design generic
    3. create a patch that reflects the changes between the two
    4. remove the client-specific branch
    5. rename the directory to 'Generic_A'
    6. clone the repo to a new dir 'A'
    7. apply the patch to get the client-specific stuff back

    And if yes, how do I make the patch and apply it? Regards, Jacob
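
    For the last question, one possible command sequence (branch and file names here are only illustrative) is to let git diff capture the client-specific changes and git apply replay them later:

        # on the repo that still has both versions: capture what makes the
        # design client-specific relative to the generic branch
        git diff generic client_a > client_a.patch

        # later, inside the freshly cloned dir 'A': re-apply the client bits
        git apply client_a.patch

    (git format-patch and git am would work too, if you want to keep the individual commits rather than one flat diff.)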

    Read the article

  • How to get at specific HTML elements of a document using C# and Hide them/Show them etc.

    - by LaserBeak
    Basically, I want to load an HTML document and use controls such as multiple check boxes to hide, delete or show HTML elements with certain IDs. So I am thinking I would have to set an inline CSS visibility property to false on the ones I want to hide, or delete them altogether when necessary. I need this so I don't have to edit my eBay HTML templates in Dreamweaver all the time, where I usually have to scroll around messy code and manually delete or add tags and their respective content. Instead, I just want to create one master template in Dreamweaver which has all the variations my products have (since they are all of the same genre with slight changes here and there), enable and disable the visibility of these variants as required, and copy + paste the final HTML. I haven't used Windows Forms before, but I tried doing this in WebForms, which I do know a bit. I am able to get the result I want by wrapping any HTML elements in an <asp:PlaceHolder></asp:PlaceHolder> and setting that placeholder's visibility to false after the associated checkbox is checked and a postback occurs; finally, I add a checkbox/button control that removes all the checkboxes, including itself, for the final HTML. But this method seems like too much pain, as I have to add the placeholder tags around everything I need control over (ordinary HTML elements do not run at server), and WebForms also injects a bunch of JavaScript and ViewState data, so I don't get clean HTML that I can just copy after viewing the page source. Any tips/code you can suggest to achieve the desired effect with the least changes required to existing HTML documents? Ideally I would want to load the HTML document in, have a live design preview of it, and underneath have a bunch of well-labelled checkboxes programmed to hide, delete or show elements with certain IDs. Thanks...

    Read the article

  • Is it possible to replace values in a queryset before sending it to your template?

    - by Issy
    Hi guys, I'm wondering if it's possible to change a value returned from a queryset before sending it off to the template. Say, for example, you have a bunch of records:

        Date       | Time  | Description
        10/05/2010 | 13:30 | Testing...
        etc...

    However, based on the day of the week the time may change, and this is static: for example, on a Monday the time is ALWAYS 15:00. You could add another table to configure special cases, but to me that seems overkill, as this is a rule. How would you replace that value before sending it to the template? I thought about using the new if tags (if day=1), but this is business logic rather than presentation. I tested this in a custom template tag:

        def render(self, context):
            result = self.model._default_manager.filter(from_date__lte=self.now).filter(to_date__gte=self.now)
            if self.day == 4:
                result = result.exclude(type__exact=2).order_by('time')
            else:
                result = result.order_by('type')
            result[0].time = '23:23:23'
            context[self.varname] = result
            return ''

    However it still displays the results from the DB. Is this somehow related to 'lazy' evaluation of templates? Thanks!

    Update, responding to comments below: the value is not stored wrong in the DB, it's stored correctly; there is just a small side case where it needs to change. For example, I have a from date and a to date, and my query checks whether today's date is between them. With this, a from date / to date can be set up for an entire year and the special cases (Mondays, for example) are taken care of. If you wanted to store it in the DB you would have to capture several more records to cater for the side case, i.e. you would be capturing the same information just to cater for the one day when the time changes. (And the time always changes on the same day, and is always the same.)
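
    For what it's worth, result[0] on an unevaluated queryset runs its own query and returns a fresh object, so the edit is lost when the template iterates the queryset again. A hedged sketch of one common workaround, materialising the queryset into a list before touching it (the '15:00' Monday override is illustrative):

        def render(self, context):
            qs = self.model._default_manager.filter(
                from_date__lte=self.now, to_date__gte=self.now)
            if self.day == 4:
                qs = qs.exclude(type__exact=2).order_by('time')
            else:
                qs = qs.order_by('type')

            # Evaluate once; the template will iterate this list, so in-place
            # edits survive instead of being re-read from the database.
            rows = list(qs)
            if self.day == 1 and rows:
                rows[0].time = '15:00'
            context[self.varname] = rows
            return ''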

    Read the article

  • git rebase onto remote updates

    - by Blake Chambers
    I work with a small team that uses git for source code management. Recently, we have been using topic branches to keep track of features, then merging them into master locally, then pushing them to a central git repository on a remote server. This works great when no changes have been made in master: I create my topic branch, commit to it, merge it into master, then push. Hooray. However, if someone has pushed to origin before I do, my commits are not fast-forward, so a merge commit ensues. This also happens when a topic branch needs to merge with master locally to ensure my changes work with the code as of now. So we end up with merge commits everywhere and a git log rivaling a friendship bracelet. Rebasing is the obvious choice. What I would like is to:

    - create topic branches holding several commits
    - check out master and pull (fast-forward, because I haven't committed to master)
    - rebase topic branches onto the new head of master
    - rebase topics against master (so the topics start at master's head), bringing master up to my topic head

    My current way of doing this is listed below:

        git checkout master
        git rebase master topic_1
        git rebase topic_1 topic_2
        git checkout master
        git rebase topic_2
        git branch -d topic_1 topic_2

    Is there a faster way to do this?

    Read the article

  • How to compare two file structures in PHP?

    - by OM The Eternity
    I have a function which gives me the complete file structure down to n levels:

        function getDirectory($path = '.', $ignore = '') {
            $dirTree = array();
            $dirTreeTemp = array();
            $ignore[] = '.';
            $ignore[] = '..';
            $dh = @opendir($path);
            while (false !== ($file = readdir($dh))) {
                if (!in_array($file, $ignore)) {
                    if (!is_dir("$path/$file")) {
                        // display of file and directory name with their modification time
                        $stat = stat("$path/$file");
                        $statdir = stat("$path");
                        $dirTree["$path"][] = $file . " === " . date('Y-m-d H:i:s', $stat['mtime'])
                            . " Directory == " . $path . " === " . date('Y-m-d H:i:s', $statdir['mtime']);
                    } else {
                        $dirTreeTemp = getDirectory("$path/$file", $ignore);
                        if (is_array($dirTreeTemp)) $dirTree = array_merge($dirTree, $dirTreeTemp);
                    }
                }
            }
            closedir($dh);
            return $dirTree;
        }

        $ignore = array('.htaccess', 'error_log', 'cgi-bin', 'php.ini', '.ftpquota');
        // function call
        $dirTree = getDirectory('.', $ignore);
        // file structure array print
        print_r($dirTree);

    Now here is my requirement. I have two sites:

    1. The development/test site, where I test all the changes
    2. The production site, where I finally post all the changes once they pass testing on the development site

    For example, I have tested an image upload on the development/test site and found it appropriate to publish on the production site, so I transfer the development/test DB details to the production DB. Now I also want to compare the file structures, so the corresponding image file gets transferred to the production folder. There could be a situation where I have updated an image by editing it and uploading it with the same name; in that case the image file would already be present, which rules out a simple "file_exists" check. For these kinds of situations, how can I compare the two file structures so the synchronization is done as required?
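
    A rough sketch of one way to do the comparison (the paths are illustrative, and it assumes both trees are scanned from their own document root so the relative paths line up): build a "relative path => mtime" map per site and diff the maps, so files that are new or whose modification time changed on the dev site fall out as the ones to copy across.

        function fileMap($root, $dir = '') {
            $map = array();
            foreach (scandir($root . '/' . $dir) as $file) {
                if ($file === '.' || $file === '..') continue;
                $rel = ltrim($dir . '/' . $file, '/');
                if (is_dir($root . '/' . $rel)) {
                    $map += fileMap($root, $rel);   // recurse into subdirectories
                } else {
                    $map[$rel] = filemtime($root . '/' . $rel);
                }
            }
            return $map;
        }

        $dev  = fileMap('/path/to/dev');    // illustrative roots
        $prod = fileMap('/path/to/prod');

        // new on dev, or same name but different mtime: candidates to publish
        $toCopy = array_diff_assoc($dev, $prod);
        print_r(array_keys($toCopy));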

    Read the article

  • Observer pattern and violation of Single Responsibility Principle

    - by Devil Jin
    I have an applet which repaints itself once the text has changed.

    Design 1:

        // MyApplet.java
        public class MyApplet extends Applet implements Listener {
            private DynamicText text = null;

            public void init() {
                text = new DynamicText("Welcome");
            }

            public void paint(Graphics g) {
                g.drawString(text.getText(), 50, 30);
            }

            // implement Listener update() method
            public void update() {
                repaint();
            }
        }

        // DynamicText.java
        public class DynamicText implements Publisher {
            // implements Publisher interface methods
            // notify listeners whenever text changes
        }

    Isn't this a violation of the Single Responsibility Principle, where my Applet not only acts as an Applet but also has to do the Listener's job? In the same way, the DynamicText class not only generates the dynamic text but also updates the registered listeners.

    Design 2:

        // MyApplet.java
        public class MyApplet extends Applet {
            private AppletListener appLstnr = null;

            public void init() {
                appLstnr = new AppletListener(this);
                // applet stuff
            }
        }

        // AppletListener.java
        public class AppletListener implements Listener {
            private Applet applet = null;

            public AppletListener(Applet applet) {
                this.applet = applet;
            }

            public void update() {
                this.applet.repaint();
            }
        }

        // DynamicText.java
        public class DynamicText {
            private TextPublisher textPblshr = null;

            public DynamicText(TextPublisher txtPblshr) {
                this.textPblshr = txtPblshr;
            }

            // call textPblshr.notifyListeners whenever text changes
        }

        // TextPublisher.java
        public class TextPublisher implements Publisher {
            // implements Publisher interface methods
        }

    Q1. Is design 1 an SRP violation?
    Q2. Is composition a better choice here to remove the SRP violation, as in design 2?

    Read the article

  • Possible disk IO issue

    - by Tim Meers
    I've been trying to figure out what the IOPS are on my DB server array and see if it's just too much. The array is four 72.6 GB 15k rpm drives in RAID 5. To calculate IOPS for RAID 5 the following formula is used: (reads + (4 * writes)) / number of disks = total IOPS. The formula is from MSDN. I also want to calculate the average queue length; I'm not sure where they get the formula from, but I think it reads on that page as avg queue length / number of disks = actual queue. To populate those formulas I used perfmon to gather the needed information, and came up with this under normal production load:

        (873.982 + (4 * 28.999)) / 4 = 247.495 IOPS
        14.454 / 4 = 3.614 disk queue length

    So, to the question: am I wrong in thinking this array has very high disk IO?

    Edit: I got the chance to review it again this morning under normal/high load, this time with even bigger numbers and IOPS in excess of 600 for about 5 minutes before it died down again. But I also took a look at Avg. Disk sec/Transfer, % Disk Time, and % Idle Time. These numbers were taken when the reads/writes per sec were only 332.997/17.999 respectively:

        % Disk Time:            219.436
        % Idle Time:            0.300
        Avg. Disk Queue Length: 2.194
        Avg. Disk sec/Transfer: 0.006
        Pages/sec:              2927.802
        % Processor Time:       21.877

    Edit (again): looks like I have the issue solved, thanks for the help. Also, for a pretty slick parser I found this: http://pal.codeplex.com/ It works pretty well for breaking down the data into something usable.

    Read the article

  • how to switch views according to command

    - by Veer
    I have a main view and many UserControls. The main view contains a two-column grid: the first column is filled with a ListBox whose DataTemplate consists of a UserControl, and the second column is filled with another UserControl. These two UserControls have the same DataContext.

    MainView:

        <Grid>
            <!-- column defs ... -->
            <ListView Grid.Column="0" ItemSource="{Binding Foo}">
                ...
                <DataTemplate>
                    <Views:FooView1 />
                </DataTemplate>
            </ListView>
            <TextBlock Text="{Binding Foo.Count}" />
            <StackPanel Grid.Column="1">
                <Views:FooView2 />
            </StackPanel>
        </Grid>

    FooView1:

        <UserControl>
            <TextBlock Text="{Binding Foo.Title}" />
        </UserControl>

    FooView2:

        <UserControl>
            <TextBlock Text="{Binding Foo.Detail1}" />
            <TextBlock Text="{Binding Foo.Detail2}" />
            <TextBlock Text="{Binding Foo.Detail3}" />
            <TextBlock Text="{Binding Foo.Detail4}" />
        </UserControl>

    (I have no IDE here, so excuse any syntax errors.) When the user clicks on a button, these two UserControls have to be replaced by another two UserControls, so the DataContext changes while the main UI stays the same, i.e. FooView1 is replaced by BarView1 and FooView2 by BarView2. In short, I want to bind this view change in the main view to my command (the command from the Button). How can I do this? Also tell me whether I could merge the UserControl pairs, so that only one view exists for each viewmodel, i.e. FooView1 and FooView2 into FooView, and so on...

    Read the article

  • MySQL help, counting information on last records

    - by ee12csvt
    I need some advice. I have two tables: one holds the unique serial numbers of items (items), and the other holds status changes and other information for these items (details). The tables are set up as follows:

        Item:    itemID, itemName, itemDate
        details: detID, itemID, modlvl, status, detDate

    All items have at least one record in the details table, but over time the status or the modification level has changed (both of these are identified by numbers which are held in other appropriate tables), and a new record is created each time the status/modlvl changes. I want to display a table on my web page using PHP that identifies the different mod levels of the items and shows a count of each of the current statuses of the items.

    EDIT: Hi Ronnis, this is an example of the data in the tables and what I want to achieve. The current mod levels range from 1 to 3. The status representations are:

        1 In Use
        2 In Store
        3 Being repaired
        4 In Transit
        5 For Disposal
        6 Disposed
        7 Lost

    Item:

        itemID  OrigMod  created
        1000    1        2009-10-01 22:12:12
        1001    1        2009-10-01 22:12:12
        1002    1        2009-10-01 22:12:12
        1003    1        2009-10-01 22:12:12
        1004    1        2009-10-01 22:12:12
        1005    1        2009-10-01 22:12:12
        1006    1        2009-10-01 22:12:12
        1007    1        2009-10-01 22:12:12
        1008    1        2009-10-01 22:12:12
        1009    1        2009-10-01 22:12:12
        1010    1        2009-10-01 22:12:12

    Details:

        detID  itemID  modlvl  detDate     status
        1      1000    1       2009-10-01  1
        2      1001    1       2009-10-01  1
        3      1002    1       2009-10-01  1
        4      1003    1       2009-10-01  1
        5      1004    1       2009-10-01  1
        6      1005    1       2009-10-01  1
        7      1006    1       2009-10-01  1
        8      1007    1       2009-10-01  1
        9      1008    1       2009-10-01  1
        10     1009    1       2009-10-01  1
        11     1010    1       2009-10-01  1
        12     1001    1       2010-02-01  2
        13     1001    1       2010-02-03  4
        14     1001    1       2010-03-01  3
        15     1000    1       2010-03-14  2
        16     1001    2       2010-04-01  4
        17     1006    1       2010-04-01  2
        18     1001    2       2010-04-03  2
        19     1006    1       2010-04-14  4
        20     1006    1       2010-05-01  5
        21     1002    1       2010-05-02  2
        22     1003    1       2010-05-10  2
        23     1010    1       2010-06-01  2
        24     1006    1       2010-06-18  6
        25     1010    1       2010-07-01  7
        26     1007    1       2010-07-02  2
        27     1007    1       2010-07-04  4
        28     1003    1       2010-07-10  2
        29     1007    1       2010-07-11  3
        30     1007    2       2010-07-12  4
        31     1007    2       2010-07-15  2
        32     1001    2       2010-08-31  1
        33     1001    2       2010-09-10  2
        34     1001    2       2010-10-01  4
        35     1008    1       2010-10-01  2
        36     1001    2       2010-10-05  3
        37     1008    1       2010-10-05  4
        38     1008    1       2010-10-10  3
        39     1001    3       2010-10-20  4
        40     1001    3       2010-10-25  2

    Using the tables above, I want to get this result:

        ModLvl  Use  Store  Repd  Transit  Displ  Dispd  Lost  Total
        1       3    3      1     0        0      1      1     9
        2       0    1      0     0        0      0      0     1
        3       0    1      0     0        0      0      0     1
        Total   3    5      1     0        0      1      1     11
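
    One possible shape for that query (a hedged sketch: it assumes detID increases with each status change, so the highest detID per item is its current state; the column aliases are illustrative):

        SELECT d.modlvl           AS ModLvl,
               SUM(d.status = 1)  AS `Use`,
               SUM(d.status = 2)  AS Store,
               SUM(d.status = 3)  AS Repd,
               SUM(d.status = 4)  AS Transit,
               SUM(d.status = 5)  AS Displ,
               SUM(d.status = 6)  AS Dispd,
               SUM(d.status = 7)  AS Lost,
               COUNT(*)           AS Total
        FROM details d
        JOIN (SELECT itemID, MAX(detID) AS lastDet
              FROM details
              GROUP BY itemID) latest ON latest.lastDet = d.detID
        GROUP BY d.modlvl;

    The grand-total row would come from a second query (or GROUP BY ... WITH ROLLUP), and the number-to-label mapping can stay in PHP.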

    Read the article
