Search Results

Search found 10328 results on 414 pages for 'behavior tree'.


  • jQuery replacement for javascript confirm

    - by dcp
    Let's say I want to prompt the user before allowing them to save a record, and assume I have the following button defined in the markup:

        <asp:Button ID="btnSave" runat="server" OnClick="btnSave_Click"></asp:Button>

    To force a prompt with plain JavaScript, I could wire up the button's onclick attribute (for example, in Page_Load):

        btnSave.Attributes.Add("onclick", "return confirm('are you sure you want to save?');");

    The confirm call blocks until the user presses one of the Yes/No buttons, which is the behavior I want. For the equivalent jQuery dialog, I tried something like the code below. The problem is that, unlike JavaScript's confirm(), execution runs all the way through displayYesNoAlert and then proceeds into my btnSave_Click method on the C# side. I need a way to make it "block" until the user presses the Yes or No button, and then return true or false so that btnSave_Click is or is not called depending on the user's answer. For now, I just gave up and went with JavaScript's confirm; I just wondered if there was a way to do it.

        function displayYesNoAlert(msg, closeFunction) {
            dialogResult = false;
            // create the dialog if it hasn't been instantiated
            if ($("#dialog-modal").length === 0) {
                // add a div to the DOM that will store our message
                $("<div id=\"dialog-modal\" style='text-align: left;' title='Alert!'>").appendTo("body");
                $("#dialog-modal").html(msg).dialog({
                    resizable: true,
                    modal: true,
                    position: [300, 200],
                    buttons: {
                        'Yes': function () {
                            dialogResult = true;
                            $(this).dialog("close");
                        },
                        'No': function () {
                            dialogResult = false;
                            $(this).dialog("close");
                        }
                    },
                    close: function () {
                        if (closeFunction !== undefined) {
                            closeFunction();
                        }
                    }
                });
            }
            $("#dialog-modal").html(msg).dialog('open');
        }


  • bool as object vs string as object testing equality

    - by Ray Pendergraph
    I am relatively new to C# and I noticed something interesting today that I guess I have never noticed, or perhaps I am missing something. Here is an NUnit test to give an example:

        object boolean1 = false;
        object boolean2 = false;
        Assert.That(boolean1 == boolean2);

    This unit test fails, but this one passes:

        object string1 = "string";
        object string2 = "string";
        Assert.That(string1 == string2);

    I'm not that surprised, in and of itself, that the first one fails, seeing as boolean1 and boolean2 are different references. But it is troubling to me that the first one fails while the second one passes. I read (on MSDN somewhere) that some magic was done to the String class to facilitate this. I think my question really is: why wasn't this behavior replicated for bool? As a note, if boolean1 and boolean2 are declared as bool then there is no problem. Does anyone know the reason for these differences, or why it was implemented that way? Can anyone think of a situation where you would want to reference a bool object for anything except its value?
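
    For reference, a minimal C# sketch (not part of the original question) showing the boxing and interning at work; when the operands are statically typed as object, == is plain reference equality:

        using System;

        class BoxedEqualityDemo
        {
            static void Main()
            {
                object b1 = false;   // each assignment boxes a fresh object
                object b2 = false;
                Console.WriteLine(b1 == b2);      // False: two distinct boxes
                Console.WriteLine(b1.Equals(b2)); // True: Boolean.Equals compares values

                object s1 = "string"; // identical string literals are interned,
                object s2 = "string"; // so both variables point at the same object
                Console.WriteLine(s1 == s2);                // True: same reference
                Console.WriteLine(ReferenceEquals(s1, s2)); // True for interned literals
            }
        }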


  • Struts 2 discard cache header

    - by Dewfy
    I'm seeing strange header-discarding behavior from Struts 2 while setting cache options for my image. I'm trying to get an image, served from the database, cached on the client side. To render the image I use the approach from http://struts.apache.org/2.x/docs/how-can-we-display-dynamic-or-static-images-that-can-be-provided-as-an-array-of-bytes.html , where a special result type renders as follows:

        public void execute(ActionInvocation invocation) throws Exception {
            // ...some preparation
            HttpServletResponse response = ServletActionContext.getResponse();
            HttpServletRequest request = ServletActionContext.getRequest();
            ServletOutputStream os = response.getOutputStream();
            try {
                byte[] imageBytes = action.getImage();
                response.setContentType("image/gif");
                response.setContentLength(imageBytes.length);
                // I want caching for up to 10 minutes
                Date future = new Date(new Date().getTime() + 1000 * 10 * 60L);
                response.addDateHeader("Expires", future.getTime());
                response.setHeader("Cache-Control", "max-age=" + 10 * 60);
                response.addHeader("cache-Control", "public");
                response.setHeader("ETag", request.getRequestURI());
                os.write(imageBytes);
            } catch (Exception e) {
                response.sendError(HttpServletResponse.SC_NOT_FOUND);
            }
            os.flush();
            os.close();
        }

    But when the image is embedded in a page it is always reloaded (Firebug shows code 200), and neither Expires nor max-age is present in the headers:

        Host             localhost:9090
        Accept           image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language  en-us,en;q=0.5
        Accept-Encoding  gzip,deflate
        Accept-Charset   ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive       300
        Connection       keep-alive
        Referer          http://localhost:9090/web/result?matchId=1
        Cookie           JSESSIONID=4156BEED69CAB0B84D950932AB9EA1AC;
        If-None-Match    /web/_srv/teamcolor
        Cache-Control    max-age=0

    I have no idea why they disappear. Maybe the problem is in the URL? It is formed with a parameter: http://localhost:9090/web/_srv/teamcolor?loginId=3


  • WCF Custom SOAP Header Issues

    - by WayneC
    I'm trying to implement an endpoint behavior which injects a custom SOAP header into all messages to and from a service. I've gotten pretty close by implementing the approach from the accepted answer of this question: http://stackoverflow.com/questions/986455/wcf-wsdl-soap-header-on-all-operations/995951#995951

    After implementing that solution, my custom SOAP header does indeed show up in the WSDL; however, when I try to call the methods on my service, I get the following exception/fault:

        <ExceptionDetail xmlns="http://schemas.datacontract.org/2004/07/System.ServiceModel"
                         xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
          <HelpLink i:nil="true" />
          <InnerException i:nil="true" />
          <Message>Index was outside the bounds of the array.</Message>
          <StackTrace>
            at System.ServiceModel.Dispatcher.DataContractSerializerOperationFormatter.AddHeadersToMessage(Message message, MessageDescription messageDescription, Object[] parameters, Boolean isRequest)
            at System.ServiceModel.Dispatcher.OperationFormatter.SerializeReply(MessageVersion messageVersion, Object[] parameters, Object result)
            at System.ServiceModel.Dispatcher.DispatchOperationRuntime.SerializeOutputs(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc&amp; rpc)
            at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
          </StackTrace>
          <Type>System.IndexOutOfRangeException</Type>
        </ExceptionDetail>

    Looking in Reflector at the DataContractSerializerOperationFormatter.AddHeadersToMessage method that's throwing the exception leads me to believe the following snippet is causing the problem, but I'm not sure why:

        MessageHeaderDescription description = (MessageHeaderDescription) headerPart.Description;
        object parameterValue = parameters[description.Index];

    I think the last line above is throwing the exception. The parameters variable comes from IDispatchFormatter.SerializeReply. What's going on?!?!! Any help would be greatly appreciated...
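
    For comparison (not the poster's code), custom headers are often injected with a message inspector instead of a contract-level header, which sidesteps the operation formatter entirely. A minimal client-side sketch, with the header name and namespace invented for illustration:

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;

        // Adds a custom header to every outgoing request.
        public class CustomHeaderInspector : IClientMessageInspector
        {
            public object BeforeSendRequest(ref Message request, IClientChannel channel)
            {
                var header = MessageHeader.CreateHeader(
                    "MyHeader",            // element name (assumption)
                    "urn:example:headers", // namespace (assumption)
                    "some-value");         // payload
                request.Headers.Add(header);
                return null; // no correlation state needed
            }

            public void AfterReceiveReply(ref Message reply, object correlationState) { }
        }

    The inspector would be attached through an IEndpointBehavior's ApplyClientBehavior, and its IDispatchMessageInspector counterpart covers the service side.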


  • What happens to class members when malloc is used instead of new?

    - by Felix
    I'm studying for a final exam and I stumbled upon a curious question that was part of the exam our teacher gave last year to some poor souls. The question goes something like this: Is the following program correct, or not? If it is, write down what the program outputs. If it's not, write down why. The program:

        #include <iostream.h>

        class cls {
            int x;
        public:
            cls() { x = 23; }
            int get_x() { return x; }
        };

        int main() {
            cls *p1, *p2;
            p1 = new cls;
            p2 = (cls*) malloc(sizeof(cls));
            int x = p1->get_x() + p2->get_x();
            cout << x;
            return 0;
        }

    My first instinct was to answer "the program is not correct, as new should be used instead of malloc". However, after compiling the program and seeing it output 23, I realize that answer might not be correct. The problem is that I was expecting p2->get_x() to return some arbitrary number (whatever happened to be in that spot of memory when malloc was called). However, it returned 0. I'm not sure whether this is a coincidence or whether class members are initialized to 0 when the object is malloc-ed. Is this behavior (p2->x being 0 after malloc) the default? Should I have expected this? What would your answer to my teacher's question be? (Besides forgetting to #include <stdlib.h> for malloc :P)


  • How to reduce the need for IISRESET for developing ASP.NET web app in IIS 5.1

    - by John Galt
    I have a web application project on my dev PC running WinXP and hence IIS 5.1. The changes I make to this site seem to "take effect" only after I do an IISRESET. That is, I make a source change, Rebuild the project, and then Start Without Debugging (or with debugging). The newly changed code is not "visible" or in effect unless I intervene with an IISRESET. BTW, the "Web" tab on the Properties display for the web app project is configured to use the local IIS web server at project URL http://localhost/myVirtualDirectory, but I've noticed the same issue when using the Visual Studio Development Server (i.e. I have to stop it from the taskbar tray in order to see my source changes take effect). Is this something I can change?

    EDIT/UPDATE: Just wanting to clear this up if possible. Two answers diverge below, and I'm not sure how to move forward. One states this is to be expected (a weakness of IIS 5.1, which in turn is the best WinXP can provide). The other states this is not expected behavior (and I tend to agree, since this is the first time I've encountered it on the same old WinXP dev platform I've had for a long time). I suspect it may be something "deep inside" the Visual Studio 2008 web app project, which was upgraded to this new IDE from VStudio 2002 (ASP.NET 1.1). I've tried to add comments/questions down each answer path. Thanks.


  • How to display a busy message over a wpf screen

    - by dave
    Hey, I have a WPF application based on Prism 4. When performing slow operations, I want to show a busy screen. I will have a large number of screens, so I'm trying to build a single solution into the framework rather than adding a busy indicator to each screen.

    These long-running operations run in a background thread. This allows the UI to be updated (good) but does not stop the user from using the UI (bad). What I'd like to do is overlay a control with a spinning dial sort of thing and have that control cover the entire screen (the old HTML trick with DIVs). When the app is busy, the control would be displayed, thus blocking any further interaction as well as showing the spinny thing.

    To set this up, I thought I could just put my app screen in a canvas along with the spinny thing (with a greater ZIndex) and then make the spinny thing visible as required. This, however, is getting hard. Canvases do not seem well suited to this, and I think I might be barking up the wrong tree. I would appreciate any help. Thanks.
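
    For what it's worth, the usual WPF approach is a Grid rather than a Canvas: children placed in the same Grid cell stack on top of one another, so a semi-transparent overlay added last both covers the content and swallows input while visible. A minimal sketch (names are illustrative, not Prism API):

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;

        // Wraps arbitrary content in a Grid whose last child is a busy overlay.
        public static class BusyOverlayFactory
        {
            public static Grid Wrap(UIElement content, out Border overlay)
            {
                var grid = new Grid();
                grid.Children.Add(content);

                overlay = new Border
                {
                    // A non-null background makes the overlay hit-testable,
                    // so clicks never reach the controls underneath.
                    Background = new SolidColorBrush(Color.FromArgb(0x80, 0x00, 0x00, 0x00)),
                    Visibility = Visibility.Collapsed, // hidden until the app is busy
                    Child = new ProgressBar
                    {
                        IsIndeterminate = true,
                        Width = 200,
                        Height = 20,
                        HorizontalAlignment = HorizontalAlignment.Center,
                        VerticalAlignment = VerticalAlignment.Center
                    }
                };
                grid.Children.Add(overlay); // added last => rendered on top

                return grid;
            }
        }

    Toggling overlay.Visibility to Visible while busy then blocks interaction without freezing the UI thread.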


  • Do ORMs normally allow circular relations? If so, how would they handle it?

    - by SeanJA
    I was hacking around trying to make a basic ORM that has support for one-to-one and one-to-many relationships. I think I succeeded somewhat, but I am curious about how to handle circular relationships. Say you had something like this:

        user::hasOne('car');
        car::hasMany('wheels');
        car::property('type');
        wheel::hasOne('car');

    You could then do this (theoretically):

        $u = new user();
        echo $u->car->wheels[0]->car->wheels[1]->car->wheels[2]->car->wheels[3]->type;
        #=> "monster truck"

    Now, I am not sure why you would want to do this. It seems like it wastes a whole pile of memory and time just to get to something that could have been reached in a much shorter way. In my small ORM, I now have 4 copies of the wheel class and 4 copies of the car class in memory, which causes a problem: if I update one of them and save it back to the database, the rest get out of date and could overwrite the changes that were already made. How do other ORMs handle circular references? Do they even allow it? Do they go back up the tree and create a pointer to one of the parents? Or do they let coders shoot themselves in the foot if they are silly enough to go around in circles?
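
    Most mature ORMs address exactly this with an identity map: the session caches one instance per (type, primary key), so walking back up a circular relation hands back the same object rather than a fresh copy, and updates cannot diverge. A minimal sketch of the idea (illustrative C#, not any particular ORM's API):

        using System;
        using System.Collections.Generic;

        // One instance per (type, id) within a session: circular navigation
        // keeps returning the same reference, so there is nothing to get stale.
        public class IdentityMap
        {
            private readonly Dictionary<(Type, object), object> cache =
                new Dictionary<(Type, object), object>();

            public T GetOrLoad<T>(object id, Func<object, T> loadFromDb) where T : class
            {
                var key = (typeof(T), id);
                if (cache.TryGetValue(key, out var existing))
                    return (T)existing;          // same object as last time

                T loaded = loadFromDb(id);       // hit the database only once
                cache[key] = loaded;
                return loaded;
            }
        }

    With that in place, car->wheels[0]->car is the very same car object, not a fourth copy.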


  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete control like the one you find in Visual Studio. It's all fine up until the list contains roughly 2000+ items. I'm using a simple LINQ statement for the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is virtualised as well, so there will only be at most 7-8 visual elements in memory and in the visual tree. However, the caveat right now is that when the user types extremely fast in the RichTextBox, and on every key-up I execute the filtering + binding, there's a semi-race condition, or out-of-sync filtering: the first keystroke's filtering could still be doing its filtering or binding work while the fourth keystroke's is doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve seamless filtering much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation; or perhaps my list of strings could be organised into some kind of index that could speed up searching. Any suggestions or code samples are much welcomed. Thanks.
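
    One common pattern (a sketch, not from the post) is to cancel the in-flight filter whenever a new keystroke arrives, so only the most recent filter ever publishes results to the ListBox:

        using System;
        using System.Linq;
        using System.Threading;
        using System.Threading.Tasks;

        public class AutocompleteFilter
        {
            private CancellationTokenSource cts;

            // Call on every key-up; any earlier filter still running is abandoned.
            public void OnKeystroke(string currentWord, string[] candidates,
                                    Action<string[]> publish)
            {
                cts?.Cancel();
                cts = new CancellationTokenSource();
                CancellationToken token = cts.Token;

                Task.Run(() =>
                {
                    var results = candidates
                        .Where(s => s.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0)
                        .OrderBy(s => s)
                        .ToArray();
                    token.ThrowIfCancellationRequested(); // stale result: never publish
                    publish(results); // marshal back to the UI thread in real code
                }, token);
            }
        }

    A sorted candidate list or a prefix index (or a simple debounce) can then be layered on top if IndexOf itself proves to be the bottleneck.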


  • Setting Nullable Integer to String Containing Nothing yields 0

    - by Brian MacKay
    I've been pulling my hair out over some unexpected behavior from nullable integers. If I set an Integer? to Nothing, it becomes Nothing as expected. If I set an Integer? to a String that is Nothing, it becomes 0! I get this whether I explicitly cast the String to Integer? or not. I realize I could work around this pretty easily, but I want to know what I'm missing.

        Dim NullString As String = Nothing
        Dim NullableInt As Integer? = CType(NullString, Integer?)
        'Expected NullableInt to be Nothing, but it's 0!

        NullableInt = Nothing
        'This works -- NullableInt now contains Nothing.

    How is this?

    EDIT: Previously I had my code up here without the explicit conversion to Integer?, and everyone seemed to fixate on that. I want to be clear that this is not an issue that would have been caught by Option Strict On -- check out the accepted answer. This is a quirk of the string-to-integer conversion rules, which predate nullable types but still impact them.


  • Unity: Replace registered type with another type at runtime

    - by gehho
    We have a scenario where the user can choose between different hardware at runtime. In the background we have several hardware classes which all implement an IHardware interface. We would like to use Unity to register the currently selected hardware instance for this interface. However, when the user selects other hardware, this requires us to replace the registration at runtime. The following example might make this clearer:

        public interface IHardware {
            // some methods...
        }

        public class HardwareA : IHardware {
            // ...
        }

        public class HardwareB : IHardware {
            // ...
        }

        container.RegisterInstance<IHardware>(new HardwareA());

        // user selects new hardware somewhere in the configuration...
        // the following is invalid code, but can it be achieved another way?
        container.ReplaceInstance<IHardware>(new HardwareB());

    Can this behavior be achieved somehow? BTW: I am completely aware that instances which have already been resolved from the container will not be replaced with the new instances, of course. We would take care of that ourselves by forcing them to resolve the instance once again.
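
    In Unity, re-registering under the same interface generally overwrites the previous default mapping, and routing resolution through a factory delegate makes the swap explicit; a sketch continuing the fragment above (exact behavior can vary by Unity version):

        // using Microsoft.Practices.Unity;

        // Re-registering replaces the default mapping for IHardware:
        container.RegisterInstance<IHardware>(new HardwareA());
        container.RegisterInstance<IHardware>(new HardwareB()); // later resolves get HardwareB

        // Alternative: resolve through a delegate, so swapping is a field assignment.
        IHardware current = new HardwareA();
        container.RegisterType<IHardware>(
            new InjectionFactory(c => current)); // InjectionFactory: Unity 2.x API
        current = new HardwareB();               // subsequent resolves now get HardwareB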


  • how to set style through javascript in IE immediately

    - by rezna
    Hi, recently I've encountered a problem with IE. I have a function:

        function () {
            ShowProgress();
            DoSomeWork();
            HideProgress();
        }

    where ShowProgress and HideProgress just manipulate the display CSS style using jQuery's css() method. In FF everything is OK: the moment I change the display property to block, the progress bar appears. But not in IE. In IE the style is applied only once I leave the function, which means the progress bar is never shown, because at the end of the function I hide it again. (If I remove the HideProgress line, the progress bar appears right after the function finishes executing -- more precisely, when the calling function ends and nothing else is going on in IE.) Has anybody encountered this behavior? Is there a way to get IE to apply the style immediately? I've thought of a solution, but it would take me some time to implement: my DoSomeWork() method is doing some AJAX calls, and these are currently synchronous. I assume that making them asynchronous would more or less solve the problem, but I'd have to redesign the code a bit, so finding a way to apply the style immediately would be much simpler. Thanks, rezna


  • Project builds skipped with Any CPU build platform

    - by JMarsch
    All: We are using Visual Studio 2010, and we have recently upgraded our workstations to Windows 7/64-bit. I have a question: when I create a new solution, it seems to want to use the x86 platform. If I change the solution to "Any CPU" and then add a new project to the solution, the project will not have an "Any CPU" build option, and it will be deselected from building (in Configuration Manager). Something seems wrong here. Here's what I want to have (assuming it is supported): I want my solutions' platforms to default to "Any CPU" (I believe that means that at JIT time the assembly runs as either x86 or 64-bit, based on the machine that loaded it). When I add a new project to the solution, I want it to have an "Any CPU" platform, and I want that project to build by default -- basically, the same behavior we had in VS 2008 on 32-bit workstations. How do I do that? Is there some additional thing I need to know now that I am using a 64-bit workstation?


  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches:

    1. The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser, which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space.

    2. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine, like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers.

    3. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++, which doesn't have it.

    4. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial?

    Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
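
    To illustrate the pull model (approach 3), a minimal sketch of a lexer exposed as a lazy token stream via C#'s yield, so the parser pulls one token at a time and nothing beyond the current token is buffered:

        using System;
        using System.Collections.Generic;

        public enum TokenKind { Number, Plus, End }

        public struct Token
        {
            public TokenKind Kind;
            public string Text;
            public Token(TokenKind kind, string text) { Kind = kind; Text = text; }
        }

        public static class Lexer
        {
            // Lazily yields tokens on demand; lexer state lives in the iterator.
            public static IEnumerable<Token> Tokenize(string input)
            {
                int i = 0;
                while (i < input.Length)
                {
                    char c = input[i];
                    if (char.IsWhiteSpace(c)) { i++; }
                    else if (char.IsDigit(c))
                    {
                        int start = i;
                        while (i < input.Length && char.IsDigit(input[i])) i++;
                        yield return new Token(TokenKind.Number, input.Substring(start, i - start));
                    }
                    else if (c == '+') { i++; yield return new Token(TokenKind.Plus, "+"); }
                    else throw new Exception("Unexpected character: " + c);
                }
                yield return new Token(TokenKind.End, "");
            }
        }

    A recursive descent parser then holds an IEnumerator<Token> and calls MoveNext() exactly when it needs the next token.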


  • I am confused about how to use @SessionAttributes

    - by yusaku
    I am trying to understand the architecture of Spring MVC. However, I am completely confused by the behavior of @SessionAttributes. Please look at the SampleController below; it handles the POST method with a SuperForm parameter. In fact, only the field of the SuperForm class is bound, as I expected. However, after I put @SessionAttributes on the controller, the handler method binds as SubAForm. Can anybody explain to me what happens in this binding?

        @Controller
        @SessionAttributes("form")
        @RequestMapping(value = "/sample")
        public class SampleController {

            @RequestMapping(method = RequestMethod.GET)
            public String getCreateForm(Model model) {
                model.addAttribute("form", new SubAForm());
                return "sample/input";
            }

            @RequestMapping(method = RequestMethod.POST)
            public String register(@ModelAttribute("form") SuperForm form, Model model) {
                return "sample/input";
            }
        }

        public class SuperForm {
            private Long superId;

            public Long getSuperId() { return superId; }
            public void setSuperId(Long superId) { this.superId = superId; }
        }

        public class SubAForm extends SuperForm {
            private Long subAId;

            public Long getSubAId() { return subAId; }
            public void setSubAId(Long subAId) { this.subAId = subAId; }
        }

        <form:form modelAttribute="form" method="post">
            <fieldset>
                <legend>SUPER FIELD</legend>
                <p>SUPER ID:<form:input path="superId" /></p>
            </fieldset>
            <fieldset>
                <legend>SUB A FIELD</legend>
                <p>SUB A ID:<form:input path="subAId" /></p>
            </fieldset>
            <p><input type="submit" value="register" /></p>
        </form:form>


  • How might one cope with the ambiguous value produced by GetDllDirectory?

    - by Integer Poet
    GetDllDirectory produces an ambiguous value. When the string this call produces is empty, it means one of the following:

    - nobody has called SetDllDirectory
    - somebody passed NULL to SetDllDirectory
    - somebody passed an empty string to SetDllDirectory

    The first two cases are equivalent for my purposes, but the third case is a problem. If I want to write save/restore code (call GetDllDirectory to save the "old" value, SetDllDirectory to set a "new" value temporarily, and later SetDllDirectory again to restore the "old" value), I run the risk of reversing some other programmer's intent. If the other programmer intended for the current working directory to be in the DLL search order (in other words, one of the first two bullets is true), and I pass an empty string to SetDllDirectory, I will be taking the current working directory out of the DLL search order, reversing the other programmer's intent. Can anyone suggest an approach to eliminate or work around this ambiguity?

    P.S. I know having the current working directory in the DLL search order could be interpreted as a security hole. Nevertheless, it is the default behavior, and my code is not in a position to undo that; my code needs to be compatible with the expectations of all potential callers, many of which are large and old and beyond my control.
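
    One workaround is to resolve the ambiguity by policy rather than detection: treat an empty saved value as "restore the default search order by passing NULL", accepting that this may reverse a deliberate SetDllDirectory(""). A save/restore sketch (shown in C# via P/Invoke for consistency with the other examples here; the same logic applies in C++):

        using System;
        using System.Runtime.InteropServices;
        using System.Text;

        static class DllDirectoryScope
        {
            [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern bool SetDllDirectory(string lpPathName);

            [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern uint GetDllDirectory(uint nBufferLength, StringBuilder lpBuffer);

            // Temporarily redirects the DLL search directory, then restores it.
            public static void WithDllDirectory(string path, Action body)
            {
                var buffer = new StringBuilder(260);
                uint length = GetDllDirectory((uint)buffer.Capacity, buffer);
                // Policy choice: an empty result is restored as NULL (default
                // search order), which may reverse an explicit SetDllDirectory("").
                string saved = length > 0 ? buffer.ToString() : null;

                SetDllDirectory(path);
                try { body(); }
                finally { SetDllDirectory(saved); }
            }
        }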


  • Forwarding keypresses in GTK

    - by dguaraglia
    I'm writing a bit of code for a Gedit plugin. I'm using Python and the interface (obviously) is GTK. The issue I'm having is quite simple: I have a search box (a gtk.Entry) and right below it a results box (a gtk.TreeView). Right after you type something in the search box you are presented with a bunch of results, and I would like the user to be able to press the Up/Down keys to select one, press Enter to choose it, and be done. The thing is, I can't seem to find a way to forward the Up/Down keypress to the TreeView. Currently I have this piece of code:

        def __onSearchKeyPress(self, widget, event):
            """ Forward up and down keys to the tree. """
            if event.keyval in [gtk.keysyms.Up, gtk.keysyms.Down]:
                print "pressed up or down"
                e = gtk.gdk.Event(gtk.gdk.KEY_PRESS)
                e.keyval = event.keyval
                e.window = self.browser.window
                e.send_event = True
                self.browser.emit("key-press-event", e)
                return True

    I can clearly see I'm receiving the right kind of event, but the event I'm sending gets ignored by the TreeView. Any ideas? Thanks in advance, people.


  • MS Office Excel Ribbon - Cannot change/hide Editing group in Home tab

    - by A9S6
    I have a .NET add-in for Excel. The add-in creates the Ribbon UI for Excel 2007 and repurposes some existing commands such as Cut, Copy, Paste, and Sort. For Cut, Copy and Paste I am just overriding their OnAction value to call my own procedure when the buttons are clicked. But for the Sort, Sort Ascending and Sort Descending commands the case is a little different: when any of them is clicked, I want to get notified and then invoke the default functionality. This was possible with Excel 2003 command bars by calling the Execute() method on the CommandBarControl. In Excel 2007 there is an ExecuteMso() method to programmatically click a ribbon element, but when OnAction is overridden, ExecuteMso() just executes my own procedure and not the button's default functionality. So I thought I would HIDE the Sort buttons in the "Editing" group on the Home tab and add my own Sort buttons to it, which would call into my procedure first, from where I would call the default behavior. Now the problem is that I am unable to change or hide the Editing group (idMso="GroupEditing"). Is this built-in group not editable? I can, however, hide the Clipboard and other groups (but I can't add buttons to them).

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui">
            <ribbon>
                <tabs>
                    <tab idMso="TabHome">
                        <group idMso="GroupEditing" visible="false" />
                    </tab>
                </tabs>
            </ribbon>
        </customUI>


  • How to force inclusion of an object file in a static library when linking into executable?

    - by Brian Bassett
    I have a C++ project that, due to its directory structure, is set up as a static library A, which is linked into shared library B, which is linked into executable C. (This is a cross-platform project using CMake, so on Windows we get A.lib, B.dll, and C.exe, and on Linux we get libA.a, libB.so, and C.) Library A has an init function (A_init, defined in A/initA.cpp) that is called from library B's init function (B_init, defined in B/initB.cpp), which is called from C's main. Thus, when linking B, A_init (and all symbols defined in initA.cpp) is linked into B, which is our desired behavior.

    The problem is that library A also defines a function (Af, defined in A/Afort.f) that is intended to be loaded dynamically (i.e. LoadLibrary/GetProcAddress on Windows and dlopen/dlsym on Linux). Since there are no references to Af from library B, the symbols from A/Afort.o are not included in B. On Windows we can artificially create a reference by using the pragma:

        #pragma comment (linker, "/export:_Af")

    Since this is a pragma, it only works on Windows (using Visual Studio 2008). To get it working on Linux, we've tried adding the following to A/initA.cpp:

        extern void Af(void);
        static void (*Af_fp)(void) = &Af;

    This does not cause the symbol Af to be included in the final link of B. How can we force the symbol Af to be linked into B?


  • How to use boost::transform_iterator to iterate over modifed std::map values?

    - by Frank
    I have a std::map, and I would like to define an iterator that returns modified values. Typically, a std::map<int,double>::iterator iterates over std::pair<int,double>, and I would like the same behavior, except with the double value multiplied by a constant. I tried it with boost::transform_iterator, but it doesn't compile:

        #include <map>
        #include <boost/iterator/transform_iterator.hpp>
        #include <boost/functional.hpp>

        typedef std::map<int,double> Map;
        Map m;
        m[100] = 2.24;

        typedef boost::binder2nd< std::multiplies<double> > Function;
        typedef boost::transform_iterator<Function, Map::value_type*> MultiplyIter;

        MultiplyIter begin =
            boost::make_transform_iterator(m.begin(),
                                           Function(std::multiplies<double>(), 4));
        // now want to similarly create an end iterator
        // and then iterate over the modified map

    The error is:

        error: conversion from
        'boost::transform_iterator<boost::binder2nd<multiplies<double> >,
            gen_map<int, double>::iterator, boost::use_default, boost::use_default>'
        to non-scalar type
        'boost::transform_iterator<boost::binder2nd<multiplies<double> >,
            pair<const int, double>*, boost::use_default, boost::use_default>'
        requested

    What is gen_map and do I really need it? I adapted the transform_iterator tutorial code from here to write this code...


  • Using tarantula to test a Rails app

    - by Benjamin Oakes
    I'm using Tarantula to test a Rails app I'm developing. It works pretty well, but I'm getting some strange 404s. After looking into it, Tarantula follows DELETE requests (destroy actions on controllers) throughout my app when it tests. Since Tarantula fetches the index action first (and seems to keep a list of unvisited URLs), it eventually tries to follow a link to a resource which it had already deleted... and gets a 404. Tarantula is right that the URL doesn't exist anymore (because it deleted the resource itself), but it flags this as an error -- hardly the behavior I would expect. I'm basically just using the Rails scaffolding and this problem is happening. How do I prevent Tarantula from doing this? (Or is there a better way of specifying the links?)

    Updates: Still searching, but I found a relevant thread here: http://github.com/relevance/tarantula/issues#issue/3 -- the problem seems to come from relying on JS too much, in a way (see also http://thelucid.com/2010/03/15/rails-can-we-please-have-a-delete-action-by-default/).


  • What is the purpose of the Html "no-js" class?

    - by Swader
    I notice that in a lot of template engines, in the HTML5 Boilerplate, in various frameworks, and in plain PHP sites, the no-js class is added to the html element. Why is this done? Is there some sort of default browser behavior that reacts to this class? Why include it always? Doesn't that render the class itself obsolete, if there is no "no-no-js" case and html can be addressed directly? Here is an example from the HTML5 Boilerplate index.html:

        <!--[if lt IE 7 ]> <html lang="en" class="no-js ie6"> <![endif]-->
        <!--[if IE 7 ]>    <html lang="en" class="no-js ie7"> <![endif]-->
        <!--[if IE 8 ]>    <html lang="en" class="no-js ie8"> <![endif]-->
        <!--[if IE 9 ]>    <html lang="en" class="no-js ie9"> <![endif]-->
        <!--[if (gt IE 9)|!(IE)]><!--> <html lang="en" class="no-js"> <!--<![endif]-->

    As you can see, the html element will always have this class. Can someone explain why this is done so often?


  • ImageViews sometimes not displaying in FrameLayout activity

    - by Ken
    The top-level layout in my activity is a FrameLayout. I have completed, debugged and tested this app, and it works exactly as it should in all respects on my G1 and on various emulators. But on 3.7-inch displays running 2.1+, some ImageViews packed in a LinearLayout are periodically not visible. I know they are there because you can touch and drag them with effect in the app, so I assume they have somehow gotten under the SurfaceView that is the main component of the app. This is apparently so even though the SurfaceView is declared in the XML prior to the LinearLayout; however, the ImageViews IN the LinearLayout are added programmatically towards the end of onCreate(). A FrameLayout stacks everything that is added to it, one on top of the other -- the only way you will see more than one child of a FrameLayout is if they are smaller than the screen and placed apart from each other. Oddly, sometimes the ImageViews ARE visible -- it is random. Anyway, I've been trying to combat this with framelayout.bringChildToFront(View v) on the LinearLayout, without success. I wonder if anyone has any insight into how this behavior could be random, how I should code these ImageViews to keep it from happening, and why the problem appears only on 3.7-inch versus 3.2-inch screens (as it happens, the two 3.2-inch screens were both HTC, so vendor might be a factor too).

    [edit] Actually, I've determined that this is a 2.2 issue, not a screen-size (or even vendor) issue. I can't ensure that ImageViews added to a FrameLayout with a SurfaceView in it will appear on top of the SurfaceView. I ran some tests in the respective onDraw() methods and the ImageViews are 'visible' (0), and nothing touches the alpha of the drawables, which are there as well at onDraw(). [/edit]

    Any insight would be welcomed. Ken T.


  • changing the serialization procedure for a graph of objects (.net framework)

    - by pierusch
    Hello, I'm developing a scientific application using the .NET framework. The application depends heavily upon a large tree-like data structure that has been serialized using a standard BinaryFormatter object. The graph structure looks like this:

        <Serializable()> Public Class BigObjet
            Inherits List(Of smallObject)
        End Class

        <Serializable()> Public Class smallObject
            Inherits List(Of otherSmallerObjects)
        End Class
        ...

    The BinaryFormatter object does a nice job, but it's not optimized at all, and the entire data structure reaches around 100 MB on my filesystem. Deserialization works too, but it's pretty slow (around 30 seconds on my quad core). I found a nice .dll on CodeProject (see "Optimizing Serialization..."), so I wrote a modified version of the classes above overriding the default serialization/deserialization procedure, with very good results. The problem is this: I can't lose the data previously serialized with the old version, and I'd like to be able to use the new serialization/deserialization method. I have some ideas, but I'm pretty sure someone will be able to give me proper and better advice:

    - Use a "helper" graph of objects which takes care of the entire serialization/deserialization procedure, reading data in the old format and converting it into the classes I need. This could work, but the BinaryFormatter "needs" to know the types being serialized, so........ :(
    - Modify the "old" graph to include a modified version of the serialization procedure, so I'll be able to deserialize old files and save them in the new format...... this doesn't sound too good, IMHO.

    Well, any help will be highly appreciated :)
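
    A common migration pattern for this (a sketch with hypothetical reader/writer names, not the CodeProject library's API) is to branch on a small format marker at the front of the file: old files, which lack the marker, go through BinaryFormatter once more, while all new saves use the optimized format:

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        public static class GraphStore
        {
            // Arbitrary marker byte; BinaryFormatter streams begin with 0x00,
            // so a legacy file can never start with this value.
            private const byte NewFormatMarker = 0xF5;

            public static BigObjet Load(string path)
            {
                using (var fs = File.OpenRead(path))
                {
                    if (fs.ReadByte() == NewFormatMarker)
                        return LoadFast(fs); // hypothetical optimized reader

                    fs.Position = 0;         // legacy file: rewind, use the old path
                    return (BigObjet)new BinaryFormatter().Deserialize(fs);
                }
            }

            public static void Save(BigObjet graph, string path)
            {
                using (var fs = File.Create(path))
                {
                    fs.WriteByte(NewFormatMarker); // always write the new format
                    SaveFast(fs, graph);           // hypothetical optimized writer
                }
            }

            private static BigObjet LoadFast(Stream s) { /* optimized deserializer */ return null; }
            private static void SaveFast(Stream s, BigObjet g) { /* optimized serializer */ }
        }

    Loading a legacy graph and immediately re-saving it then migrates files one at a time, with no need for the helper graph in the first bullet.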


  • Help improve this Javascript code?

    - by Galilyou
    Hello SO, in short: I'm dealing with Telerik's RadTreeView, and I want to check all the child nodes when the user checks a parent node. Simple enough! Here's my code that handles the TreeView's OnClientNodeChecked event:

        function UpdateAllChildren(nodes, checked) {
            var i;
            for (i = 0; i < nodes.get_count(); i++) {
                var currentNode = nodes.getNode(i);
                currentNode.set_checked(checked);
                alert("now processing: " + currentNode.get_text());
                if (currentNode.get_nodes().get_count() > 0) {
                    UpdateAllChildren(currentNode.get_nodes(), checked);
                }
            }
        }

        function ClientNodeChecked(sender, eventArgs) {
            var node = eventArgs.get_node();
            UpdateAllChildren(node.get_nodes(), node.get_checked());
        }

    And here's the TreeView's markup:

        <telerik:RadTreeView ID="RadTreeView1" runat="server" CheckBoxes="True"
            OnClientNodeChecked="ClientNodeChecked"></telerik:RadTreeView>

    The tree contains quite a lot of nodes, and this is causing my targeted browser (ehm, that's IE7) to really slow down while running it. Furthermore, IE7 displays an error message asking me to stop the page from running scripts, as it might make my computer unresponsive (yeah, scary enough). So what do you guys propose to optimize this code? Thanks in advance.

