Search Results

Search found 37094 results on 1484 pages for 'mathieu page'.

  • Dynamic Spacer in ReportLab

    - by ptikobj
    I'm automatically generating a PDF file with Platypus, and its content is dynamic, so the length of the text (which sits at the very bottom of the PDF) may vary. When the content is too long, an unwanted page break occurs, because I use a "static" spacer:

        s = Spacer(width=0, height=23.5*cm)

    Since I always want exactly one page, I somehow need to set the Spacer's height dynamically, so that it takes the "rest" of the space on the page as its height. Now, how do I get the "rest" of the height that is left on my page?
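
    A minimal sketch of one approach, assuming a custom flowable is acceptable: Platypus passes the remaining page height into every flowable's wrap() call, so a spacer can simply claim whatever is left. The class name FillerSpacer is mine, not ReportLab's.

        from reportlab.platypus import Flowable

        class FillerSpacer(Flowable):
            """A spacer that grows to fill the vertical space left on the page."""
            def wrap(self, availWidth, availHeight):
                # availHeight is the space remaining on the current page; claim a
                # hair less than all of it so we never force a page break ourselves
                self.width = availWidth
                self.height = max(availHeight - 1, 0)
                return (self.width, self.height)

            def draw(self):
                pass  # nothing to render; it only occupies space

    Dropping an instance of this in where the static Spacer was should push the content that follows to the bottom of the same page.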

  • ASP.NET: IHttpModule + m_context.Server.Transfer = session state error

    - by tinky05
    I have an IHttpModule that implements IRequiresSessionState. Session state is set to "on" in the page directive, and I have also enabled it in web.config. In the module's OnBeginRequest method I call Server.Transfer, but I get the error: "Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive." When I access the page directly, or via Response.Redirect, there is no error. Any idea?
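
    For context, session state is only attached to the request during the AcquireRequestState pipeline stage, which runs after BeginRequest, so Session is unavailable there regardless of IRequiresSessionState. A minimal sketch of moving the transfer later in the pipeline, where the session exists (the predicate and target page are hypothetical):

        public class TransferModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // PreRequestHandlerExecute fires after AcquireRequestState,
                // so HttpContext.Session is populated by the time this runs.
                app.PreRequestHandlerExecute += (sender, e) =>
                {
                    var context = ((HttpApplication)sender).Context;
                    if (ShouldTransfer(context))                  // hypothetical check
                        context.Server.Transfer("~/Target.aspx"); // hypothetical page
                };
            }

            private static bool ShouldTransfer(HttpContext context)
            {
                return false; // placeholder condition
            }

            public void Dispose() { }
        }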

  • ASP.NET OutPutCache VaryByParam and VaryByHeader with AJAX

    - by DennyDotNet
    I'm trying to do some caching using VaryByParam AND VaryByHeader. When an AJAX request comes in, I return a partial XHTML page; when a regular request comes in, I send the partial XHTML page wrapped with the header/footer. I tried to cache the page with:

        [OutputCache( Duration = 5, VaryByParam = "nickname,page", VaryByHeader = "X-Requested-With" )]

    However, this doesn't work: if I make a regular request first and then run the AJAX call, I get the full cached page instead of the partial, and vice versa. It seems VaryByHeader is being ignored. Is it because X-Requested-With is omitted on normal requests? Or perhaps it's doing VaryByParam OR VaryByHeader? The obvious way around this is for AJAX requests to call a different method which only returns partial pages, but I'd like to avoid that if possible. I'm using ASP.NET MVC 1.0 with the OutputCacheAttribute.
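
    One workaround worth sketching, assuming the header really is absent on normal requests: route the distinction through VaryByCustom instead, whose Global.asax hook lets you normalize the missing-header case into an explicit cache key. The "ajax" key name is arbitrary; pair it with VaryByCustom = "ajax" on the attribute.

        // Global.asax.cs
        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            if (custom == "ajax")
            {
                // Absent and present headers become two distinct cache keys.
                string header = context.Request.Headers["X-Requested-With"];
                return string.IsNullOrEmpty(header) ? "full" : "partial";
            }
            return base.GetVaryByCustomString(context, custom);
        }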

  • Cannot get new product attribute in grid display

    - by russjman
    I added a new attribute to my products (a boolean "yes/no" field). It is a flag to enable/disable the price display on the product detail page and in grid view. I managed to get it to work on the product info page, but on the product grid page I can't seem to access that attribute. Specifically, the template I am working with is catalog/product/price.phtml. From what I can tell, the price is displayed by the same group of if-statements on both the product detail page and the grid page. This has me confused, because I can't find any code in that template to handle multiple products, just a bunch of nested if-statements. This is how I'm attempting to access the new attribute, around line 36 of catalog/product/price.phtml:

        <?php $_product = $this->getProduct(); ?>
        <?php $_id = $_product->getId() ?>
        <?php $_displayPrice = $_product->getDisplayPrice() ? "Yes" : "No"; echo $_displayPrice; ?>

    What has me further confused is that when I dump $_product->getData(), my new attribute isn't anywhere in that data. Thanks in advance.
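
    If the attribute is missing from getData() on the listing page, the usual Magento explanation is that list and grid pages load a lean product collection which only selects attributes flagged "Used in Product Listing" (or added explicitly). A hedged sketch, assuming the attribute code is display_price:

        <?php
        // Either set "Used in Product Listing" to Yes on the attribute in the
        // admin, or add it to the collection explicitly before it loads:
        $collection = Mage::getModel('catalog/product')->getCollection()
            ->addAttributeToSelect('display_price'); // hypothetical attribute code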

  • Android WebView seems to ignore "viewport" information on web pages

    - by Evan
    I have a website that uses the viewport META tag to tell mobile browsers how to display content. Viewing the page in the Android browser looks correct (and on iPhone, etc.). But when I load the page into a WebView component in an Android application, the WebView ignores the viewport tag and renders the page at "full" resolution, which is zoomed in in this case.
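
    For reference, a short sketch of the WebView settings that usually make it honor the viewport tag (the view id is hypothetical):

        WebView webView = (WebView) findViewById(R.id.webview); // hypothetical id
        WebSettings settings = webView.getSettings();
        settings.setUseWideViewPort(true);      // respect the viewport meta tag
        settings.setLoadWithOverviewMode(true); // start zoomed out to fit the content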

  • Which .NET class represents the main controller class for WebForms?

    - by Renato Gama
    Hey guys, lately I was studying a bit of Java, when I was taught a way to implement a controller class whose responsibility is to redirect the request to an Action, which performs a specified piece of work. This is the way I learned it:

        @Override
        protected void service(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            try {
                String clazz = req.getRequestURI()
                                  .replaceAll(req.getContextPath() + "/", "")
                                  .replaceAll(".java", "");
                ((Action) Class.forName("com.myProject.actions." + clazz)
                               .newInstance()).execute(req, res);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    I know that WebForms also works with handlers, which are a kind of action: each .aspx page inherits from a Page object, which is the handler for that specific page. What I couldn't figure out is: which class gets the request first and translates it into the specified action (page handler)? Is it a WebForms feature (implementation), or is it an IIS responsibility? So, which class represents the main controller for WebForms? Thank you very much.
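
    For what it's worth, the closest WebForms analogue to that front controller is System.Web.UI.PageHandlerFactory: an IHttpHandlerFactory registered in the <httpHandlers> configuration, which the ASP.NET pipeline (not IIS itself) asks to map *.aspx URLs to compiled Page instances. A custom factory in the same spirit might look like this sketch; the extension and namespace convention are made up:

        public class ActionHandlerFactory : IHttpHandlerFactory
        {
            public IHttpHandler GetHandler(HttpContext context, string requestType,
                                           string url, string pathTranslated)
            {
                // Map /SomeAction.act to MyProject.Actions.SomeAction by name.
                string name = VirtualPathUtility.GetFileName(url).Replace(".act", "");
                Type type = Type.GetType("MyProject.Actions." + name); // hypothetical namespace
                return (IHttpHandler)Activator.CreateInstance(type);
            }

            public void ReleaseHandler(IHttpHandler handler) { }
        }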

  • Populating fields, input types etc using JSON

    - by Franco
    I have a form that works as follows: a server request builds XML of the data on the server side and sends the XML; an XSL stylesheet then transforms the XML data into a plain HTML page, distributing the data to the relevant/desired form locations on the page. The person can then view the page, edit the populated form, and submit it back to the DB. From what I have read, I think JSON is more suitable for this. The form itself is split into 3 areas; for me, that is 3 maps/associative arrays, each with a name related to the id of an input element. The problem for me comes once the JSON is sent to the page: what should I do with it next in order to achieve the same result I currently get with XML and XSL? Thanks.
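
    As a starting point, a minimal sketch of the receiving side, assuming the server sends one object per form area and the keys inside each object match input element ids:

        function populateForm(data) {
            // data = { areaOne: {...}, areaTwo: {...}, areaThree: {...} }
            for (var area in data) {
                var fields = data[area];
                for (var id in fields) {
                    var el = document.getElementById(id);
                    if (el) el.value = fields[id]; // assumes ids mirror the map keys
                }
            }
        }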

  • How should the View pull on the Presenter in the MVP pattern

    - by John Leidegren
    I have an ASP.NET Web Forms application, and I'm using some dynamic controls in the view which depend on data that the presenter exposes. Is it okay for the view in this case to pull on the presenter for that data? Is there anything I should be extra careful about when considering testability and a loosely coupled design? The page in this case has its own page life cycle, and the presenter doesn't know about this. However, the page life cycle dictates that some things must occur at specific moments. This smells like trouble... any known pitfalls?
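
    One way to keep such a pull testable, sketched with hypothetical names: hide the data behind an interface the view depends on, and do the pull at a fixed life-cycle point. Dynamic controls generally need to be recreated no later than OnInit for view state and events to rebind, which is exactly the kind of timing the presenter cannot know about.

        public interface IDynamicControlSource
        {
            IEnumerable<string> GetControlIds(); // hypothetical member
        }

        // In the page (view), pull at a well-defined life-cycle moment:
        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            foreach (var id in _presenter.GetControlIds())
                placeholder.Controls.Add(new TextBox { ID = id });
        }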

  • MOSS workflow approval using a minimal site definition

    - by 78lro
    Hi, when using the minimal site definition for MOSS from CodePlex (http://www.codeplex.com/features), after adding the features required for workflow, workflow approval becomes available for the Pages library. But when I submit a page for approval, the Approve and Reject buttons do not appear on the page editing toolbar. I can go and view the workflow approval and approve/reject it there, but the normal buttons on the page editing toolbar do not appear. Any ideas greatly appreciated.

  • Is this valid CSS?

    - by Pandiya Chendur
    I have a pager on my page with anchors in it, and I use the following CSS:

        .page-numbers a { color:#808185; cursor:pointer; text-decoration:none; outline:none; }
        .page-numbers a:hover { text-decoration:underline; }
        .page-numbers a:visited { color:#808185; outline:none; }

    But my anchor tags don't seem to pick up the CSS above; instead they use the CSS below, which I have declared at the top of my stylesheet:

        a { color:#0077CC; cursor:pointer; text-decoration:none; outline:none; }
        a:hover { text-decoration:underline; }
        a:visited { color:#4A6B82; outline:none; }

    Any suggestions?

  • What techniques can be used to detect so-called "black holes" (spider traps) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different, but add no value, as they are specifically created to fool crawlers. An example: we tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links:

        evil.com/somePageOne
        evil.com/somePageTwo
        evil.com/somePageThree

    The crawler will add these to its buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs:

        evil.com/someSubPageOne
        evil.com/someSubPageTwo

    These appear to be unique, and so they are: the returned content is different from previous pages, and the URLs are new to the crawler. However, this is only because the developer has built a "loop trap" or "black hole". The crawler will add the new sub page, that sub page will have another sub page, which will also be added, and the process can go on infinitely. The content of each page is unique but totally useless (randomly generated text, or text pulled from a random source). Our crawler keeps finding new pages that we are not actually interested in. These loop traps are very difficult to detect, and if your crawler has nothing in place to prevent them, it will get stuck on a certain domain forever. My question is: what techniques can be used to detect so-called black holes? One of the most common answers I have heard is to impose a limit on the number of pages to be crawled, but I cannot see how that can be a reliable technique when you do not know what kind of site is being crawled. A legitimate site like Wikipedia can have hundreds of thousands of pages, so such a limit could return a false positive for those kinds of sites. Any feedback is appreciated. Thanks.
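
    To make the page-limit idea less blunt, here is a sketch of two cheap heuristics with arbitrary thresholds: cap the URL path depth, and cap pages per URL "shape" (the path with numbers collapsed) rather than per domain, so a large legitimate site with many distinct shapes is not cut off:

        import re
        from collections import Counter
        from urllib.parse import urlparse

        MAX_DEPTH = 8          # hypothetical threshold
        MAX_PER_PATTERN = 500  # hypothetical threshold

        pattern_counts = Counter()

        def looks_like_trap(url):
            path = urlparse(url).path
            if path.count('/') > MAX_DEPTH:      # suspiciously deep nesting
                return True
            pattern = re.sub(r'\d+', 'N', path)  # /page/123 -> /page/N
            pattern_counts[pattern] += 1
            return pattern_counts[pattern] > MAX_PER_PATTERN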

  • How can I convert this to a factory/abstract factory?

    - by Amitd
    I'm using MigraDoc to create a PDF document. I have business entities similar to those used in MigraDoc:

        public class Page {
            public List<PageContent> Content { get; set; }
        }

        public abstract class PageContent {
            public int Width { get; set; }
            public int Height { get; set; }
            public Margin Margin { get; set; }
        }

        public class Paragraph : PageContent {
            public string Text { get; set; }
        }

        public class Table : PageContent {
            public int Rows { get; set; }
            public int Columns { get; set; }
            //.... more
        }

    In my business logic, there are rendering classes for each type:

        public interface IPdfRenderer<T> {
            T Render(MigraDoc.DocumentObjectModel.Section s);
        }

        class ParagraphRenderer : IPdfRenderer<MigraDoc.DocumentObjectModel.Paragraph> {
            BusinessEntities.PDF.Paragraph paragraph;

            public ParagraphRenderer(BusinessEntities.PDF.Paragraph p) {
                paragraph = p;
            }

            public MigraDoc.DocumentObjectModel.Paragraph Render(MigraDoc.DocumentObjectModel.Section s) {
                var paragraph = s.AddParagraph();
                // add text from paragraph etc.
                return paragraph;
            }
        }

        public class TableRenderer : IPdfRenderer<MigraDoc.DocumentObjectModel.Tables.Table> {
            BusinessEntities.PDF.Table table;

            public TableRenderer(BusinessEntities.PDF.Table t) {
                table = t;
            }

            public MigraDoc.DocumentObjectModel.Tables.Table Render(Section obj) {
                var table = obj.AddTable();
                // fill the table based on the business table
                return table;
            }
        }

    I want to create a PDF page like this:

        var document = new Document();
        var section = document.AddSection(); // a section is a page in the PDF
        var page = GetPage(1);               // get a page from the business classes

        foreach (var content in page.Content) {
            // var renderer = CreateRenderer(content); // get a renderer based on the business type??
            // renderer.Render(section);
        }

    For CreateRenderer() I can use a switch case/dictionary on the type. How can I get/create the renderer generically based on type? How can I use a factory or abstract factory here? Or which design pattern better suits this problem?
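
    One hedged sketch of the factory: register a creation delegate per business type in a dictionary, which replaces the switch and keeps the registrations in one place. It assumes a non-generic IPdfRenderer base interface (with a Render(Section) method) that the generic interface extends, since a dictionary cannot hold the open generic type:

        public static class RendererFactory
        {
            private static readonly Dictionary<Type, Func<PageContent, IPdfRenderer>> creators =
                new Dictionary<Type, Func<PageContent, IPdfRenderer>>
                {
                    { typeof(Paragraph), c => new ParagraphRenderer((Paragraph)c) },
                    { typeof(Table),     c => new TableRenderer((Table)c) },
                };

            public static IPdfRenderer Create(PageContent content)
            {
                return creators[content.GetType()](content);
            }
        }

    The loop then becomes: foreach (var content in page.Content) RendererFactory.Create(content).Render(section);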

  • Dynamic Frame Creation

    - by piluso
    Hi, I have a normal web page, no fuss or weird stuff. I want to trap that page inside an HTML frame when a user clicks on a given link. The catch is that I don't want to reload the page. Some kind of dynamic DOM trickery seems to be the way to go, but my tests have gotten nowhere. If anyone has any ideas, that would be great! Thanks

  • PayPal IPN confirmation screen immediately after redirect, without reload

    - by Email
    Hi, I made a script for IPN which works great, but how can I immediately notify the user? PayPal redirects the customer to a custom page I can define, and simultaneously my ipn.php checks the payment status. But how can I immediately tell the customer, on this custom page, whether the payment was successful or not? The custom page somehow has to know that this is customer xyz who made the IPN-verified payment xyz, but how? I also think this custom page should wait about 5 seconds before deciding, since PHP only processes requests as they load, so after 5 seconds the IPN script has surely completed. Sorry if this question is too newbie, but I don't know how to notify the customer about the (IPN-verified!) payment status immediately. How do you do this? Thanks so much.
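
    A common pattern, sketched with hypothetical table and field names: pass your own order id in PayPal's custom field, have ipn.php record the verified payment against it, and let the return page poll that record until it appears, rather than guessing at a fixed delay. Check what your return URL actually receives; the parameter name here is an assumption:

        <?php
        // Return-page poller: look up the payment ipn.php should have recorded.
        $orderId = isset($_GET['cm']) ? $_GET['cm'] : ''; // 'cm' assumed to echo 'custom'
        $stmt = $pdo->prepare('SELECT status FROM payments WHERE order_id = ?');
        $stmt->execute(array($orderId));
        $status = $stmt->fetchColumn();
        echo ($status === 'verified') ? 'Payment confirmed!' : 'Still processing...';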

  • Visual Studio 2010: very slow web application debugging!

    - by micha12
    I recently installed Visual Studio 2010 (Ultimate edition, final version released in April), and found that debugging a web application has become very slow (2-3 times slower than in Visual Studio 2008)! I took the same web application and compared how long one of its pages takes to load in VS 2008 and VS 2010, using two approaches: (1) debugging under the ASP.NET Development Server (by pressing the "Start" button), and (2) using the ASP.NET Development Server without debugging (via the "View in Browser" menu command). I got the following results:

    1) ASP.NET Development Server without debugging ("View in Browser"): the page loads at the same speed in VS 2008 and VS 2010.

    2) Debugging under the ASP.NET Development Server ("Start" button): in VS 2010 the page takes longer to load than in VS 2008; VS 2010 debugging is 2-3 times slower.

    3) At the same time, when debugging a web application in VS 2008, the page takes the same time to load as with the "View in Browser" command alone. That is, VS 2008 debugging introduces no overhead to page loading in the browser.

    I wanted to make sure other people see the same problem with slow debugging of web applications in VS 2010. Can this issue be solved by any means? BTW, I am using Windows XP SP3. Thank you.

  • Problem with the jQuery Notify Bar on form submit using PHP or ZF

    - by user1400
    Hi guys, in my application on Zend Framework I use the 'jQuery Notify Bar' plugin to display messages. I'd like to show a message when my form is submitted: the next page opens, and the notify bar should stay open until that page has completely loaded, and even a few seconds more. The problem is that the notify bar shows only for a short moment; when the next page begins to load, the bar closes, and the delay property has no effect:

        $('#myForm').submit(function(){
            $.notifyBar({
                html: "Thank you, your settings were updated!",
                delay: 20000,
                animationSpeed: "normal"
            });
        });

    How can I show the notify bar on the next page too? Thanks
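
    The bar disappears because navigating away unloads the DOM it lives in; no plugin option can make it outlive the page. A hedged sketch of one workaround, deferring the real submit briefly (the 2-second pause is arbitrary); the cleaner alternative is to have the next page render its own bar from a session flash message:

        $('#myForm').submit(function (e) {
            e.preventDefault(); // hold the navigation back for a moment
            var form = this;
            $.notifyBar({
                html: "Thank you, your settings were updated!",
                animationSpeed: "normal"
            });
            // Native submit() bypasses this jQuery handler, so no infinite loop.
            setTimeout(function () { form.submit(); }, 2000); // hypothetical 2s pause
        });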

  • Watir not working in Windows 7

    - by Ben Mills
    I recently did a fresh install of Windows 7. I installed Ruby 1.8.6 and Watir via RubyGems. When I try to run a Watir script, IE opens and the first page is called, but the problem seems to be that the script doesn't wait for the page to finish loading (which it has always done in the past). Subsequent lines in the script try to access page elements that haven't loaded yet. Is anyone else having this problem?

  • JavaScript function

    - by user295189
    Can someone explain what this function is doing?

        var page = new Object();

        page.testSearch.btnSearch.setState = function() {
            this.disable(!(page.testSearch.searchString.value.trim().length > 1));
        }

    Thanks

  • onclick event not working after IE7 reload

    - by Charles
    I am using JavaScript to dynamically create part of my page content. A routine that generates a set of img tags is called from the window.onload event. Those img tags are assigned attributes, including an onclick event. The img tags host thumbnail images that, when clicked, change the src property of the image in the main view div. Everything works properly in FF 3.5: I can reload the page and the dynamically generated onclick events continue to fire as expected. In IE7 everything works normally until I reload the page. At that point, events that were hard-coded into the XHTML continue to work as expected, and the dynamically generated img tags are shown on the page, but their onclick events fail to fire. How can I get IE7 to honor the dynamically generated click events on reload?

  • Calling Web Services Asynchronously in Page_Load Event

    - by Umar Siddique
    I'm working on a web application using VB.NET. In the Page_Load event I call a remote web service which takes time to return the data. While it runs, none of the other content on the page is rendered. I want to call this remote web service asynchronously, so that the rest of the page is displayed and the web service data appears when it becomes available.
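
    Since the response is not sent until the server-side work finishes, the usual way to get this effect is to render the page immediately and fetch the slow data from the browser afterwards. A hedged sketch using an ASP.NET page method (requires a ScriptManager with EnablePageMethods="true"; the service proxy name is hypothetical):

        ' Code-behind: expose the slow call so the browser can fetch it after render.
        <System.Web.Services.WebMethod()> Public Shared Function GetRemoteData() As String
            Return New RemoteService().GetData() ' hypothetical web service proxy
        End Function

    On the client, PageMethods.GetRemoteData(function (result) { ... }) can then fill a placeholder once the data arrives, while the rest of the page has already rendered.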

  • clear cookie container in WebRequest

    - by Jeremy
    I'm using the WebRequest object to post data to a login page, then post data to a separate page on the same site. I instantiate a CookieContainer and assign it to the WebRequest object so that the cookies are handled. The problem is that I do not want to retain the cookie after I post data to the other page. How can I delete that cookie?
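
    A hedged sketch of the two simplest options, assuming HttpWebRequest underneath: stop sharing the container for the second request, or expire the unwanted cookie in place:

        // Option 1: give the second request its own, empty cookie jar.
        var second = (HttpWebRequest)WebRequest.Create(otherPageUrl); // hypothetical URL
        second.CookieContainer = new CookieContainer();

        // Option 2: keep the shared jar but expire the login cookie by name.
        foreach (Cookie cookie in sharedJar.GetCookies(new Uri(loginUrl)))
            if (cookie.Name == "SessionCookie") // hypothetical cookie name
                cookie.Expired = true;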

  • WebClient Lost my Session

    - by kamiar3001
    Hi folks, I have a problem. First of all, look at my web service method:

        [WebMethod(), ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public string GetPageContent(string VirtualPath)
        {
            WebClient client = new WebClient();
            string content = string.Empty;
            client.Encoding = System.Text.Encoding.UTF8;
            try
            {
                if (VirtualPath.IndexOf("______") > 0)
                    content = client.DownloadString(HttpContext.Current.Request.UrlReferrer
                        .AbsoluteUri.Replace("Main.aspx", VirtualPath.Replace("__", ".")));
                else
                    content = client.DownloadString(HttpContext.Current.Request.UrlReferrer
                        .AbsoluteUri.Replace("Main.aspx", VirtualPath));
            }
            catch
            {
                content = "Not Found";
            }
            return content;
        }

    As you can see, the method reads and buffers a page from localhost. It works, and I use it to add some Ajax functionality to my web site. My problem is that client.DownloadString(...) loses all sessions: the session values related to the requested page are all null. To describe it further: in the Page_Load of the page I load through the web service, I set a session value:

        HttpContext.Current.Session[E_ShopData.Constants.SessionKey_ItemList] = result;

    But when I click a button on that page, this session value is null; the session is not carried over. How can I solve this problem? The web service is called by some jQuery code like the following:

        $.ajax({
            type: "Post",
            url: "Services/NewE_ShopServices.asmx" + "/" + "GetPageContent",
            data: "{" + "VirtualPath" + ":" + mp + "}",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            complete: hideBlocker,
            success: LoadAjaxSucceeded,
            async: true,
            cache: false,
            error: AjaxFailed
        });
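
    The root cause is that WebClient starts a brand-new HTTP conversation: it sends no cookies, so the child request never carries the caller's ASP.NET_SessionId and gets a fresh, empty session. A hedged sketch of forwarding the cookie; note that ASP.NET serializes access to a session, so a child request joining the same session can block while the outer request holds the session lock:

        WebClient client = new WebClient();
        HttpCookie sessionCookie = HttpContext.Current.Request.Cookies["ASP.NET_SessionId"];
        if (sessionCookie != null)
        {
            // Forward the session cookie so the child request joins the caller's session.
            client.Headers.Add(HttpRequestHeader.Cookie,
                               "ASP.NET_SessionId=" + sessionCookie.Value);
        }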

  • Last node of the TreeView: SelectedNodeChanged not working

    - by Domnic
    I'm using a TreeView in my master page. Every node except the last one gets selected when I click on it and redirects to the corresponding page. But when I click the last node of the TreeView, the SelectedNodeChanged event doesn't fire; it just stops at the Page_Load event itself (breakpoint). How can I solve this problem?
