Search Results

Search found 9371 results on 375 pages for 'existing'.


  • 'Stack level too deep' error in engine-like plugin with globalize

    - by nutsmuggler
    Hello folks. I have built an engine-like plugin thanks to the new features of Rails 2.3. It's a 'Product' module for a CMS, extrapolated from a previously existing (and working) model/controller. The plugin relies on easy_fckeditor and on globalize (the description and title fields are localised), and I suspect that globalize could be the culprit here... Everything works fine, except for the update action. I get the following error message (posting just the first lines; the whole message is about attribute_methods): stack level too deep /Library/Ruby/Gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:64:in `generated_methods?' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:241:in `method_missing' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:249:in `method_missing' For reference, the full error stack is here: http://pastie.org/596546 I've tried to debug by eliminating all the input fields, one by one, but I keep getting the error. fckeditor doesn't seem to be the culprit (the error occurs even without fckeditor). This is the action: def update params[:product][:term_ids] ||= [] @product = Product.find(params[:id]) respond_to do |format| if @product.update_attributes(params[:product]) flash[:notice] = t(:Product_was_successfully_updated) format.html { redirect_to products_path } format.xml { head :ok } else format.html { render :action => "edit" } format.xml { render :xml => @product.errors, :status => :unprocessable_entity } end end end As you see it's quite straightforward. Of course I am not expecting someone to solve this question straight away; I'd just like a heads-up, a suggestion about where to look to solve this issue. Thanks in advance, Davide

    Read the article

  • Exception from HRESULT: 0x80020009 (DISP_E_EXCEPTION) in SharePoint Part 2

    - by BeraCim
    Hi all: Following this post which I posted some time ago, I now get the same error every time I try to rewire two webs' URLs. Basically, this is the code. It runs in a LongRunningOperationJob: SPWeb existingWeb = null; using (existingWeb = site.OpenWeb(wedId)) { SPWeb destinationWeb = createNewSite(existingWeb); existingWeb.AllowUnsafeUpdates = true; existingWeb.Name = existingWeb.Name + "_old"; existingWeb.Title = existingWeb.Title + "_old"; existingWeb.Description = existingWeb.Description + "_old"; existingWeb.Update(); existingWeb.AllowUnsafeUpdates = false; destinationWeb.AllowUnsafeUpdates = true; destinationWeb.Name = existingWeb.Name; destinationWeb.Title = existingWeb.Title; destinationWeb.Description = existingWeb.Description; destinationWeb.Update(); destinationWeb.AllowUnsafeUpdates = false; // null this for what it's worth existingWeb = null; destinationWeb = null; } // <---- Exception raised here Basically, the code is trying to rename the existing site's URL to something else, and have the destination web's URL point to the old site's URL. When I ran this for the first time, I received the Exception mentioned in the subject. However, every run after, I do not see the exception anymore. The webs DO get rewired... but at the cost of the app dying an unnecessary and terrible death. I'm at a complete loss as to what is going on and need urgent help. Does SharePoint keep any hidden table from me, or does the logic above have fatal problems? Thanks.
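
    A minimal sketch of the same swap, for comparison. It assumes createNewSite is the poster's own helper returning a fresh SPWeb and that the web id is a Guid (neither is confirmed by the question). It caches the original name, title and description before appending "_old", since the code above copies the already renamed values onto the destination web, which does not match the stated goal of putting the new web at the old URL, and it lets using dispose both webs instead of nulling the references.

        using System;
        using Microsoft.SharePoint;

        // Sketch only: illustrative names, not the poster's production code.
        static void SwapWebNames(SPSite site, Guid webId, Func<SPWeb, SPWeb> createNewSite)
        {
            using (SPWeb existingWeb = site.OpenWeb(webId))
            using (SPWeb destinationWeb = createNewSite(existingWeb))
            {
                // cache the originals before renaming
                string name = existingWeb.Name;
                string title = existingWeb.Title;
                string description = existingWeb.Description;

                existingWeb.AllowUnsafeUpdates = true;
                existingWeb.Name = name + "_old";
                existingWeb.Title = title + "_old";
                existingWeb.Description = description + "_old";
                existingWeb.Update();
                existingWeb.AllowUnsafeUpdates = false;

                destinationWeb.AllowUnsafeUpdates = true;
                destinationWeb.Name = name;            // original, not "_old", values
                destinationWeb.Title = title;
                destinationWeb.Description = description;
                destinationWeb.Update();
                destinationWeb.AllowUnsafeUpdates = false;
            }
        }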

    Read the article

  • Routing problem with calling a new method without an ID

    - by alkaloids
    I'm trying to put together a form_tag that edits several Shift objects. I have the form built properly, and it's passing on the correct parameters. I have verified that the parameters work with updating the objects correctly in the console. However, when I click the submit button, I get the error: ActiveRecord::RecordNotFound in ShiftsController#update_individual Couldn't find Shift without an ID My route for the controller it is calling looks like this: map.resources :shifts, :collection => { :update_individual => :put } The method in ShiftsController is this: def update_individual Shift.update(params[:shifts].keys, params[:shifts].values) flash[:notice] = "Schedule saved" end The relevant form parts are these: <% form_tag( update_individual_shifts_path ) do %> ... (fields for...) <%= submit_tag "Save" %> <% end %> Why is this not working? If I browse to the URL "http://localhost:3000/shifts/update_individual/5" (or any number that corresponds to an existing shift), I get the proper error about having no parameters set, but when I pass parameters without an ID of some sort, it errors out. How do I make it stop looking for an ID at the end of the URL?

    Read the article

  • Qt and variadic functions

    - by Noah Roberts
    OK, before lecturing me on the use of C-style variadic functions in C++...everything else has turned out to require nothing short of rewriting the Qt MOC. What I'd like to know is whether or not you can have a "slot" in a Qt object that takes an arbitrary amount/type of arguments. The thing is that I really want to be able to generate Qt objects that have slots of an arbitrary signature. Since the MOC is incompatible with standard preprocessing and with templates, it's not possible to do so with either direct approach. I just came up with another idea: struct funky_base : QObject { Q_OBJECT funky_base(QObject * o = 0); public slots: virtual void the_slot(...) = 0; }; If this is possible then, because you can make a template that is a subclass of a QObject derived object so long as you don't declare new Qt stuff in it, I should be able to implement a derived templated type that takes the ... stuff and turns it into the appropriate, expected types. If it is, how would I connect to it? Would this work? connect(x, SIGNAL(someSignal(int)), y, SLOT(the_slot(...))); If nobody's tried anything this insane and doesn't know off hand, yes I'll eventually try it myself...but I am hoping someone already has existing knowledge I can tap before possibly wasting my time on it.

    Read the article

  • Recommended approach to port to ASP.NET MVC

    - by tshao
    I think many of us have faced the same question: what are the best practices for porting an existing web forms app to MVC? The situation for me is that we'll support both web forms and MVC at the same time. That means we create new features in MVC while maintaining legacy pages in web forms, and they're all in the same project. The point is: we want to keep the DRY (don't repeat yourself) principle and reduce duplicate code as much as possible. The ASPX pages are not a problem as we only create new features in MVC, but there are still some shared components we want to re-use in both new and legacy pages: the master page and user controls. The question here is: is it possible to create a common master page / usercontrol that could be used in both web forms and MVC? I know that ViewMasterPage inherits from MasterPage and ViewUserControl inherits from UserControl, so it may be OK to let both web forms and MVC ASPX pages refer to the MVC version. I did some testing and found that it sometimes generates errors during the rendering of usercontrols. Any ideas / experience you can share with me? I'd really appreciate it.
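
    For the user-control half of the question, one workaround people use (a sketch only, and it assumes the control is display-only, with no postback events or view state) is an HtmlHelper extension that loads the classic .ascx onto a throwaway Page and captures its output, so the same control can be rendered from MVC views:

        using System.IO;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.UI;

        // Sketch only: renders a Web Forms user control to a string from an MVC view.
        // Controls that rely on postbacks or view state will not behave the same way.
        public static class UserControlExtensions
        {
            public static string RenderUserControl(this HtmlHelper helper, string virtualPath)
            {
                var page = new Page();
                Control control = page.LoadControl(virtualPath);
                page.Controls.Add(control);

                var writer = new StringWriter();
                HttpContext.Current.Server.Execute(page, writer, false);
                return writer.ToString();
            }
        }

        // usage in a view, with a hypothetical control path:
        // <%= Html.RenderUserControl("~/Controls/News.ascx") %>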

    Read the article

  • PHP: tips/resources/patterns for learning to implement a basic ORM

    - by BoltClock
    I've seen various MVC frameworks as well as standalone ORM frameworks for PHP, as well as other ORM questions here; however, most of the questions ask for existing frameworks to get started with, which is not what I'm looking for. (I have also read this SO question but I'm not sure what to make of it, and the answers are vague.) Instead, I figured I'd learn best by getting my hands dirty and actually writing my own ORM, even a simple one. Except I don't really know how to get started, especially since the code I see in other ORMs is so complicated. With my PHP 5.2.x (this is important) MVC framework I have a basic custom database abstraction layer, that has: Very simple methods like connect($host, $user, $pass, $base), query($sql, $binds), etc Subclasses for each DBMS that it supports A class (and respective subclasses) to represent SQL result sets But does not have: Active Record functionality, which I assume is an ORM thing (correct me if I'm wrong) I've read up a little about ORM, and from my understanding they provide a means to further abstract data models from the database itself by representing data as nothing more than PHP-based classes/objects; again, correct me if I am wrong or have missed out in any way. Still, I'd like some simple tips from anyone else who's dabbled more or less with ORM frameworks. Is there anything else I need to take note of, simple example code for me to refer to, or resources I can read? Thanks a lot in advance!

    Read the article

  • Architecture for new ASP.NET web application

    - by Anders Abel
    I'm maintaining an application which currently is just a web service (built with WCF) and a database backend. The web service is built in layers, with a linq-to-sql data access part and the core functionality in its own assembly, and on top of that the web service assembly which contains the WCF code. The core assembly also handles all business logic rules (very few actually). The customer now wants a web interface for the application instead of just accessing it through other applications which consume the web service. I'm quite lost on modern web application design, so I would like some advice on what architecture and frameworks to use for the web application. The web application will be using the same core assembly with business rules and the linq-to-sql data access layer as the web service. Some concepts I've thought about are: ASP.NET MVC; Webforms; AJAX controls, possibly letting the AJAX controls access the existing web service through JSON. Are there any more concepts I should look into? Which one is the best for a fresh project? The development tools are Visual Studio 2008 Team Edition for Developers targeting .NET 3.5. An upgrade to Visual Studio 2010 Premium (or maybe even Ultimate) is possible if it gives any benefits.
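
    Whichever UI framework is chosen, the existing core assembly can sit behind it unchanged. A rough ASP.NET MVC sketch (all type names here are hypothetical stand-ins, not taken from the question): the controller calls the same business classes the WCF service already uses, and can return JSON for AJAX callers as well.

        using System.Web.Mvc;

        // Hypothetical stand-in for a class in the existing core assembly.
        public class OrderService
        {
            public string[] GetAll() { return new[] { "order-1", "order-2" }; }
        }

        // Sketch only: a thin MVC controller over the core assembly.
        public class OrdersController : Controller
        {
            private readonly OrderService _orders = new OrderService();

            public ActionResult Index()
            {
                return View(_orders.GetAll());   // strongly typed view over core data
            }

            public ActionResult List()
            {
                // same data for AJAX callers; on MVC 2 a GET request needs
                // Json(_orders.GetAll(), JsonRequestBehavior.AllowGet)
                return Json(_orders.GetAll());
            }
        }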

    Read the article

  • git-svn cannot create a branch to follow SVN branching

    - by Serhiy Yakovyn
    Hello everybody, I'm struggling with the following issue. When I continue fetching revisions from SVN with git svn fetch I'm getting the following error (removed https to be able to post the question): *Found possible branch point: somecompany.com/product/trunk = somecompany.com/product/branches/deep/branches/product-001, 72666 Found branch parent: (refs/remotes/deep/branches/product-001) b685b7b92813885fdf6b8e2663daf884bf504b14 Following parent with do_switch Successfully followed parent error: 'refs/remotes/deep' exists; cannot create 'refs/remotes/deep/branches/product-001' fatal: Cannot lock the ref 'refs/remotes/deep/branches/product-001'. update-ref -m r72667 refs/remotes/deep/branches/product-001 df51920e8f0a53f26507c2679eb6a9dbad91e0d6: command returned error: 128* This happened because I was fetching revisions using the default filter for SVN branches: [svn-remote "svn"] url = https://somecompany.com/someproduct fetch = trunk:refs/remotes/trunk branches = branches/*:refs/remotes/* tags = tags/*:refs/remotes/tags/* Now, I have the line below added, but it's too late: branches = branches/deep/branches/*:refs/remotes/deep/branches/* I have tried to fix this by using git reset to remove all the commits. Actually I can see from the error message that git is trying the right thing, but cannot because the branch remotes/deep already exists. I have tried to search for 2 possible solutions: 1. Remove that branch (remotes/deep), but as it is tracked by git as a remote, I was not able to find any solution for that. 2. Remove the whole history related to that branch. No success there either :( Does anybody know how to deal with my issue? Thank you in advance, Serhiy Y

    Read the article

  • phpUnit - mock php extended exception object

    - by awongh
    I'm testing some legacy code that extends the default php exception object. This code prints out a custom HTML error message. I would like to mock this exception object in such a way that when the tested code generates an exception it will just echo the basic message instead of giving me the whole HTML message. I cannot figure out a way to do this. It seems like you can test for explicit exceptions, but you can't change the behavior of an exception in a general way, and you also can't mock up an object that extends default PHP functionality. (I can't think of another example of this beyond exceptions... but it would seem to be the case.) I guess the problem is: where would you attach the mocked object? It seems like you can't interfere with 'throw new', and that is the place where the object's method is called.... Or if you could somehow use the existing phpunit exception functionality to change the exception behavior the way you want, in a general way for all your code... but this seems like it would be hacky and bad....

    Read the article

  • Bilinear interpolation - DirectX vs. GDI+

    - by holtavolt
    I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code. A simple test case that renders to a viewport that matches the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff) - but as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1 (occasionally 2)/256. I suspect this is due to interpolation differences. Question: For a DX9 ortho projection (and identity world space), with a camera perpendicular and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting? For this (magnification) case, the DX9 code is using this (MinFilter,MipFilter set similarly): Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear); and the GDI+ path is using: g.InterpolationMode = InterpolationMode.Bilinear; I thought that "Bilinear Interpolation" was a fairly specific filter definition, but then I noticed that there is another option in GDI+ for "HighQualityBilinear" (which I've tried, with no difference - which makes sense given the description of "added prefiltering for shrinking") Followup Question: Is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not? Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.) - and this question generally applies to comparing the output of "equivalent" bilinear interpolated output images across any two of these. Thanks!
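
    Not an answer to the equivalence question, but when chasing differences of 1/256 it can help to make every GDI+ state that affects where samples are taken explicit, not just the interpolation mode. A sketch of the settings worth holding constant while diffing against the D3D9 output (PixelOffsetMode in particular changes the sample positions and is the closest GDI+ analogue to D3D9's half-texel convention; whether it makes the two pipelines match is exactly the open question):

        using System.Drawing;
        using System.Drawing.Drawing2D;

        // Sketch only: pin down the GDI+ sampling state before comparing outputs.
        static void ConfigureSampling(Graphics g, TextureBrush brush)
        {
            g.InterpolationMode = InterpolationMode.Bilinear;   // or HighQualityBilinear
            g.PixelOffsetMode = PixelOffsetMode.Half;           // sample at pixel centres
            brush.WrapMode = WrapMode.Clamp;                    // compare with the D3D address mode in use
        }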

    Read the article

  • CATransaction: Layer Changes But Does Not Animate

    - by macinjosh
    I'm trying to animate part of UI in an iPad app when the user taps a button. I have this code in my action method. It works in the sense that the UI changes how I expect but it does not animate the changes. It simply immediately changes. I must be missing something: - (IBAction)someAction:(id)sender { UIViewController *aViewController = <# Get an existing UIViewController #>; UIView *viewToAnimate = aViewController.view; CALayer *layerToAnimate = viewToAnimate.layer; [CATransaction begin]; [CATransaction setAnimationDuration:1.0f]; CATransform3D rotateTransform = CATransform3DMakeRotation(0.3, 0, 0, 1); CATransform3D scaleTransform = CATransform3DMakeScale(0.10, 0.10, 0.10); CATransform3D positionTransform = CATransform3DMakeTranslation(24, 423, 0); CATransform3D combinedTransform = CATransform3DConcat(rotateTransform, scaleTransform); combinedTransform = CATransform3DConcat(combinedTransform, positionTransform); layerToAnimate.transform = combinedTransform; [CATransaction commit]; // rest of method... } I've tried simplifying the animation to just change the opacity (for example) and it still will not animate. The opacity just changes instantly. That leads me to believe something is not setup properly. Any clues would be helpful!

    Read the article

  • JSP: How can I still get the code on my error page to run, even if I can't display it?

    - by Josh Hinman
    I've defined an error-page in my web.xml: <error-page> <exception-type>java.lang.Exception</exception-type> <location>/error.jsp</location> </error-page> In that error page, I have a custom tag that I created. The tag handler for this tag e-mails me the stacktrace of whatever error occurred. For the most part this works great. Where it doesn't work great is if the output has already begun being sent to the client at the time the error occurs. In that case, we get this: SEVERE: Exception Processing ErrorPage[exceptionType=java.lang.Exception, location=/error.jsp] java.lang.IllegalStateException I believe this error happens because we can't redirect a request to the error page after output has already started. The work-around I've used is to increase the buffer size on particularly large JSP pages. But I'm trying to write a generic error handler that I can apply to existing applications, and I'm not sure it's feasible to go through hundreds of JSP pages making sure their buffers are big enough. Is there a way to still allow my stack trace e-mail code to execute in this case, even if I can't actually display the error page to the client?

    Read the article

  • git-svn question about creating local branches

    - by leeed25d
    Is there a way to create a local branch, or modify an existing local branch, in such a way that it cannot be dcommit'ed to the svn repo? Here's a description of the scenario. git checkout -b local.farBranch remotes/farBranch git checkout -b patched.local.farBranch git merge local.patches <work on patched branch && test> <do not commit onto patched.local.farBranch> git checkout local.farBranch git commit -am "some changes" git rebase local.farBranch patched.local.farBranch <another work test cycle> git checkout local.farBranch git commit -am "last changes" git svn dcommit Now, I never want to dcommit patched.local.farBranch (which is tracking remotes/farBranch) because that would put my local patches into the SVN repository. This is not a fatal problem but it is a pain in the keester because the patch has to be removed when the SVN farBranch is eventually (SVN) merged onto the trunk. So what I am looking for is a way to prevent this: git checkout patched.local.farBranch git svn dcommit <<== ERROR git checkout local.farBranch git svn dcommit <<== OK

    Read the article

  • AJAX Autosave

    - by antony.trupe
    What's the best javascript library, or plugin or extension to a library, that has implemented autosaving functionality? The specific need is to be able to 'save' a data grid. Think gmail and Google Documents' autosave. I don't want to reinvent the wheel if it's already been invented. I'm looking for an existing implementation of the magical autoSave() function. Auto-saving: pushing to server code that saves to persistent storage, usually a DB. The server code framework is outside the scope of this question. Note that I'm not looking for an Ajax library, but a library/framework a level higher: one that interacts with the form itself. daemach introduced an implementation on top of jQuery @ http://ideamill.synaptrixgroup.com/?p=3. I'm not convinced it meets the lightweight and well engineered criteria though. Criteria: stable, lightweight, well engineered; saves onChange and/or onBlur; saves no more frequently than a given number of milliseconds; handles multiple updates happening at the same time; doesn't save if no change has occurred since the last save; saves to different urls per input class. Updates: I've stabilized a solution. See my answer below for links.

    Read the article

  • StructureMap and injecting IEnumerable<T>

    - by GiddyUpHorsey
    I'm new to StructureMap and have some existing code that I'm working with that uses StructureMap 2.5.4. There is a class that is constructed using StructureMap that has a constructor that takes IEnumerable<TCar> as a parameter. The registry has the following code. Scan(x => { x.TheCallingAssembly(); x.WithDefaultConventions(); x.AddAllTypesOf<ICar>(); } ); ForRequestedType<IEnumerable<ICar>>().TheDefault.Is.ConstructedBy( x => ObjectFactory.GetAllInstances<ICar>()); I'm writing a unit test and have obtained a nested container off the ObjectFactory and have injected an instance using the Inject method. One of the instances of ICar should receive the injected type in its constructor. However it wasn't working and I tracked that down to the ObjectFactory.GetAllInstances() call which doesn't use my nested container. How can I get this to work? I also read about StructureMap autowiring arrays and IEnumerable instances but I couldn't get it to work. Is there a better way to rewrite the above registry code so that an instance of IEnumerable<TCar> will be created and use the injected type from my nested container?

    Read the article

  • Rhino Mocks Partial Mock

    - by dotnet crazy kid
    I am trying to test the logic from some existing classes. It is not possible to re-factor the classes at present as they are very complex and in production. What I want to do is create a mock object and test a method that internally calls another method that is very hard to mock. So I want to just set a behaviour for the secondary method call. But when I set up the behaviour for the method, the code of the method is invoked and fails. Am I missing something or is this just not possible to test without re-factoring the class? I have tried all the different mock types (Strict, Stub, Dynamic, Partial, etc.) but they all end up calling the method when I try to set up the behaviour. using System; using MbUnit.Framework; using Rhino.Mocks; namespace MMBusinessObjects.Tests { [TestFixture] public class PartialMockExampleFixture { [Test] public void Simple_Partial_Mock_Test() { const string param = "anything"; //setup mocks MockRepository mocks = new MockRepository(); var mockTestClass = mocks.StrictMock<TestClass>(); //record behaviour *** actually calls into the real method stub *** Expect.Call(mockTestClass.MethodToMock(param)).Return(true); //never get to here mocks.ReplayAll(); //this is what i want to test Assert.IsTrue(mockTestClass.MethodIWantToTest(param)); } public class TestClass { public bool MethodToMock(string param) { //some logic that is very hard to mock throw new NotImplementedException(); } public bool MethodIWantToTest(string param) { //this method calls the hard-to-mock method if( MethodToMock(param) ) { //some logic i want to test } return true; } } } }
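
    For what it's worth, Rhino Mocks builds its proxies by subclassing, so it can only intercept virtual members; with a non-virtual MethodToMock, Expect.Call has no choice but to run the real body. A sketch of how the fixture could look if making that one method virtual is acceptable and a partial mock is used (an assumption, since the question says the class cannot be refactored much):

        using System;
        using MbUnit.Framework;
        using Rhino.Mocks;

        // Sketch only: MethodToMock is made virtual so the proxy can intercept it.
        // The partial mock runs MethodIWantToTest for real while the hard call is stubbed.
        public class TestClass
        {
            public virtual bool MethodToMock(string param)
            {
                throw new NotImplementedException();   // the logic that is hard to mock
            }

            public bool MethodIWantToTest(string param)
            {
                if (MethodToMock(param))
                {
                    // logic under test would go here
                }
                return true;
            }
        }

        [TestFixture]
        public class PartialMockExampleFixture
        {
            [Test]
            public void Partial_Mock_Stubs_Only_The_Hard_Call()
            {
                const string param = "anything";
                MockRepository mocks = new MockRepository();
                TestClass testClass = mocks.PartialMock<TestClass>();

                Expect.Call(testClass.MethodToMock(param)).Return(true);
                mocks.ReplayAll();

                Assert.IsTrue(testClass.MethodIWantToTest(param));
                mocks.VerifyAll();
            }
        }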

    Read the article

  • Batch faulting in a to-many relationship for a collection of objects

    - by indragie
    Scenario: Let's say I have an entity called Author that has a to-many relationship called books to the Book entity (inverse relationship author). If I have an existing collection of Author objects, I want to fault in the books relationship for all of them in a single fetch request. Code This is what I've tried so far: NSArray *authors = ... // array of `Author` objects NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Book"]; fetchRequest.returnsObjectsAsFaults = NO; fetchRequest.predicate = [NSPredicate predicateWithFormat:@"author IN %@", authors]; Executing this fetch request does not result in the books relationship of the objects in the authors array being faulted in (inspected via logging). I've also tried doing the fetch request the other way around: NSArray *authors = ... // array of `Author` objects NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Author"]; fetchRequest.returnsObjectsAsFaults = NO; fetchRequest.predicate = [NSPredicate predicateWithFormat:@"SELF IN %@", authors]; fetchRequest.relationshipKeyPathsForPrefetching = @[@"books"]; This doesn't fire the faults either. What's the appropriate way of going about doing this?

    Read the article

  • Java application design question

    - by ring bearer
    I have a hobby project, which is basically to maintain 'todo' tasks in the way I like. One task can be described as: public class TodoItem { private String subject; private Date dueBy; private Date startBy; private Priority priority; private String category; private Status status; private String notes; } As you can imagine, I would have 1000s of todo items at a given time. What is the best strategy to store a todo item (currently an XML file) such that all the items are loaded quickly on app start up (the application shows a kind of dashboard of all the items at start up)? What is the best way to design its back-end so that it can be ported to Android or a J2ME based phone? Currently this is done using Java Swing. What should I concentrate on so that it works efficiently on a device where memory is limited? The application throws open a form to enter a new todo task. For now, I would like to save the newly added task to my-todos.xml once the user presses the "save" button. What are the common ways to append such a change to an existing XML file? (Note that I don't want to read the whole file again and then persist.)

    Read the article

  • SQLite for personal use

    - by ALife
    What are the applications for your personal use that need a small database like SQLite? I am thinking of trying a few popular databases, and SQLite is surely the first one I am planning to try since I know barely anything about databases except some simple programming years ago. I learned that SQLite is good for personal use. But embarrassingly I do not see any application except maybe managing my list of phone numbers/contact info, which has probably a few hundred items. What's your experience? FYI, I use EndNote for my references and softcopies of books, and I feel iTunes' music/media management is ok since I am not a frequent user anyway. And others? I do lots of coding, but I just use some simple etags tools for that. And I pretty much use .txt files (sometimes in the asciidoc style) for my notes. I have quite a bunch of notes, but not that many either. So, really, what are your personal applications that need a small database instead of existing tools and plain text files?

    Read the article

  • C# WinForm Drawing - how to clear and redraw

    - by StoneHeart
    Here is a screenshot of my game. On the left is my problem: it seems the "old draw" still exists. On the right is what it should be. http://img682.imageshack.us/img682/1058/38995989.jpg The drawing code: Graphics g = e.Graphics; for (int i = 1; i < 27; i += 1) { for (int j = 0; j < 18; j += 1) { ZPoint zp = zpoints[i, j]; if (zp != null) { g.DrawImage(zp.sprite_index, new Point(zp.x, zp.y)); Image arrow; if (zp.sprite_index == spr_green_zpoint) { arrow = spr_green_arrows[zp.image_index]; } else if (zp.sprite_index == spr_red_zpoint) { arrow = spr_red_arrows[zp.image_index]; } else { arrow = spr_grey_arrows[zp.image_index]; } g.DrawImage(arrow, new Point(zp.x - 4, zp.y - 4)); } } } if (latest_p1 != -1 && latest_p2 != -1) { ZPoint zp = zpoints[latest_p1, latest_p2]; if (zp != null) { g.DrawImage(spr_focus, new Point(zp.x - 6, zp.y - 6)); } }
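
    A rough sketch of the usual fix (names are illustrative, not from the question): clear the surface at the start of each paint pass so the previous frame cannot show through, enable double buffering to avoid flicker, and invalidate the whole drawing area whenever the game state changes.

        using System.Drawing;
        using System.Windows.Forms;

        // Sketch only: GameForm is a hypothetical name; the existing drawing loop
        // over zpoints would go where the comment indicates.
        public class GameForm : Form
        {
            public GameForm()
            {
                DoubleBuffered = true;      // draw off-screen, then blit: no flicker
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                Graphics g = e.Graphics;
                g.Clear(BackColor);         // wipe the previous frame's "old draw"

                // ... the existing loop over zpoints goes here unchanged ...
            }

            public void RedrawBoard()
            {
                Invalidate();               // repaint everything, not just a sub-rectangle
            }
        }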

    Read the article

  • Javascript Rich Display WYSIWYG Component/Methodology

    - by Laramie
    quick back story-- I am working on an ASP.Net based template editor that lets authors create text templates using Javascript-inserted placeholder tags that will be filled in with dynamic text when the templates are used to display the final results. For example the author might create a template like The word [%12#add] was generated dynamically. The application would eventually replace the tag with a dynamic word down the road (though it's not specifically relevant to this post) The word foo was generated dynamically. Depending on the circumstances, the template may be created in a text input, textarea or a modified version of the Ajax Control Toolkit HTML Editor. There might be 40 or more of these editable elements on the page, so using lots of stripped down or modified HTML editors would probably bog the page down too much. The problem is that tags such as [%12#add] are displayed inline with the user text and the result is confusing and aesthetically gross. The goal is to parse the contents of the source element and, when tags such as [%12#add] are encountered, display something prettier and less cryptic to the user, such as a stylable element or image, wherever tags such as [%12#add] occur. The application still needs the template text with the tags on postback. So the user might see The word tag placeholder was generated dynamically. but the original template would still be the value of the text input box The word [%12#add] was generated dynamically. It seems HTML editors like the ACT version and FckEditor accomplish this by rendering their output in an IFrame, but rather than kill myself trying to roll a lighter specialized version myself, I thought I'd ask if anyone knows of an existing free component or approach that has already tackled this. With good reason, I don't think S.O. allows HTML formatting, but the bold "tag placeholder" above would ideally be something like tag placeholder.

    Read the article

  • Dealing with expired session for a partially filled form?

    - by aaronls
    I have a large webform, and would like to prompt the user to log in if their session expires, or have them log in when they submit the form. It seems that having them log in when they submit the form creates a lot of challenges, because they get redirected to the login page and then the postback data for the original form submission is lost. So I'm thinking about how to prompt them to log in asynchronously when the session expires, so that they stay on the original form page, a panel appears telling them the session has expired and they need to log in, it submits the login asynchronously, the login panel disappears, and the user is still on the original partially filled form and can submit it. Is this easily doable using the existing ASP.NET Membership controls? When they submit the form will I need to worry about the session key? I mean, I am wondering if the session key the form submits will be the original one from before the session expired, which won't match the new one generated after logging in again asynchronously (I still do not understand the details of how ASP.NET tracks authentication/session IDs). Edit: Yes, I am actually concerned about authentication expiration. The user must be authenticated for the submitted data to be considered valid.
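
    One small building block for the asynchronous prompt, sketched with illustrative names: if the page is protected by forms authentication, it is the authentication ticket rather than the session that decides whether the save will be accepted, so the page can expose a tiny method the client calls before submitting to decide whether to show the login panel. This assumes ASP.NET AJAX page methods are available (a ScriptManager with EnablePageMethods="true" on the page); the client-side call and panel toggling are omitted.

        using System.Web;
        using System.Web.Services;
        using System.Web.UI;

        // Sketch only: LongFormPage is a hypothetical code-behind class.
        public partial class LongFormPage : Page
        {
            [WebMethod]
            public static bool IsStillAuthenticated()
            {
                HttpContext ctx = HttpContext.Current;
                return ctx.User != null && ctx.User.Identity.IsAuthenticated;
            }
        }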

    Read the article

  • ASP.NET MVC2 and Browser Caching

    - by Dan
    Hi I have a web application that fetches a lot of content via ajax. For example when a user edits some data, the browser will send the changes using an ajax post and then do an ajax get to get fresh content and replace an existing div on the page with that content. This was working just fine with MVC1, but in MVC2 I would get inconsistent results. I've found that MVC1 by default included an Expires item in the response headers set to the current time, but in MVC2 the Expires header is missing. This is a problem with some browsers (IE8) actually using the cached version of the ajax get instead of the fresh version. To deal with the problem I created a simple ActionFilterAttribute that sets the response cache to NoCache (see below), which works, but it seems kind of silly to decorate every controller with this attribute. Is there a global way to set this for every controller? Is this a bug in MVC2, and should it really be setting the Expires header on every ActionResult/view/page? Don't most MVC programs deal with data entry where stale data is a very bad thing? Thanks Dan public class ResponseNoCachingAttribute : ActionFilterAttribute { public override void OnResultExecuted(ResultExecutedContext filterContext) { base.OnResultExecuted(filterContext); filterContext.HttpContext.Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache); } }
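
    Short of touching every controller, the closest MVC 2 gets to a global setting (global filter registration only arrived in MVC 3) is a common base controller carrying the attribute; filters applied at the class level cover all of a controller's actions and are inherited by derived controllers. A sketch using the attribute from the question, with illustrative controller names:

        using System.Web.Mvc;

        // Sketch only: every controller that derives from NoCacheController
        // gets the no-cache behaviour without repeating the attribute.
        [ResponseNoCaching]
        public abstract class NoCacheController : Controller
        {
        }

        public class OrdersController : NoCacheController
        {
            public ActionResult Index()
            {
                return View();   // response now carries Cache-Control: no-cache
            }
        }

    Alternatively, the base controller could override OnResultExecuted itself and set the cacheability there, dropping the attribute entirely.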

    Read the article

  • Ajax heavy JS apps using excessive amounts of memory over time.

    - by Shane Reustle
    I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approx 40kb of JSON from the server, and draws a table on the page using it. It is cheaper to draw the table over because the data is usually always new. I am attaching a few events to the table, approx 5 per line, 30 lines in the table. I used jQuery's .html() method to put the new html into the container and overwrite the existing. I do this specifically so that jQuery's special cleanup functions go in and attempt to detach all events on the elements in the element that it is overwriting. I then also delete the large variables of html once they are sent to the DOM using delete my_var. I have checked for circular references and attached events that are never cleared a few times, but never REALLY dug into it. I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance Javascript" by Nicholas Zakas, but didn't have much time to get into it yet. To give an idea on how much memory this is using, after 4~ hours, it is using about 420,000k on chrome, and much more on Firefox or IE. Thanks!

    Read the article

  • how to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that are different from their default constructed value. So, sparse_vector would store the index-value pairs for all indices whose value is not T(). I am basing my implementation on existing sparse vectors in numeric libraries-- though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and eigen::SparseVector. Both store: size_t* indices_; // a dynamic array T* values_; // a dynamic array int size_; int capacity_; Why don't they simply use vector<pair<size_t, T>> data_; My main question is what are the pros and cons of both systems, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you, and simplifies the accompanying iterator classes; it also has one memory block instead of two, so it incurs half the reallocations, and might have better locality of reference. The other solution might search more quickly since the cache lines fill up with only index data during a search. There might also be some alignment advantages if T is an 8-byte type? It seems to me that vector of pairs is the better solution, yet both containers chose the other solution. Why?

    Read the article
