Search Results

Search found 50150 results on 2006 pages for 'page search'.


  • Problem with CruiseControl.net configuration

    - by Pawel
    Hi, I started using CCNet to build my project. This is quite a new issue for me, so I have some problems.

    First: why does CCNet copy the directory with my project to another directory? (CCNet creates a new folder, named after the project in ccnet.config, and copies my project directory into it.)

    Second: the dashboard page cannot show reports for recent builds. When I click on any item under recent builds I get the page "The page cannot be found". I suppose the dashboard cannot locate the log files, but I don't know how to link them. I created one publisher:

        <publishers>
            <xmllogger logDir="c:\Branches" />
        </publishers>

    Can anyone help me?
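
    For reference, a minimal ccnet.config sketch of the two pieces in play: the workingDirectory, which is where CCNet checks out and builds sources (by default it creates a folder named after the project), and the logging setup the dashboard reads. All names and paths below are placeholders, not taken from the post:

        <cruisecontrol>
          <project name="MyProject">
            <!-- assumption: pointing this at the existing project folder stops
                 CCNet from copying sources into its own per-project folder -->
            <workingDirectory>c:\Branches\MyProject</workingDirectory>
            <artifactDirectory>c:\Branches\MyProject\Artifacts</artifactDirectory>
            <publishers>
              <!-- with no logDir, build logs land under the artifact directory
                   (in "buildlogs"), which is where the dashboard looks for them -->
              <xmllogger />
            </publishers>
          </project>
        </cruisecontrol>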

    Read the article

  • What happens to a control, iframe, or div which is hidden or disabled?

    - by user287745
    Hidden: does it still get transferred to the client side? Disabled: does it still get transferred to the client side?

    What I want: an .aspx page will have many iframes to display different pages, and many div tags to display CSS-formatted information. To explain what I mean by "many": I have to merge a complete website of 30 .aspx pages into one single page! I have simply combined everything, resulting in one extremely huge page. My concern is that on localhost it loads fast, but on the online server, accessed by numerous people for educational purposes, the site (ONE PAGE) will slow down terribly.

    To overcome this I thought of using the hidden and disabled options. Can anyone help, and possibly suggest an improved way of achieving the above? Yes, it sounds silly, but this is the requirement. Thank you.
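
    For the server-control case, the key distinction is how the element is hidden. A minimal code-behind sketch (the control names are hypothetical) of the difference between Visible = false, which keeps markup out of the response entirely, and CSS hiding or disabling, which still sends everything to the client:

        using System;
        using System.Web.UI;

        public partial class SectionsPage : Page
        {
            // panelSection1, panelSection2, buttonSubmit are hypothetical
            // controls declared in the .aspx markup.
            protected void Page_Load(object sender, EventArgs e)
            {
                // Visible = false: the control is not rendered at all,
                // so nothing for it is transferred to the client.
                panelSection1.Visible = false;

                // CSS hiding: the markup IS sent to the client; the
                // browser just doesn't display it.
                panelSection2.Style["display"] = "none";

                // Enabled = false: the markup is still sent, rendered with
                // a disabled attribute, so it doesn't reduce page size either.
                buttonSubmit.Enabled = false;
            }
        }

    So for reducing what is actually transferred for a merged page, Visible = false is the option that keeps content off the wire.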

    Read the article

  • How can I put JavaScript at the bottom?

    - by Hemant Kothiyal
    Hi, in my ASP.NET page I am referencing an external JavaScript file. From what I have read, it is always recommended to put inline JavaScript at the bottom of the page, but I found no information about external JavaScript references. If I am referencing an external JavaScript file, where should I write the reference: inside the top of the page, or at the bottom, after the closing tag?
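
    One Web Forms way to get a script reference emitted near the bottom of the page is to register it as a startup script, since startup scripts render at the end of the form rather than at the top. A sketch, with a made-up file path and key name:

        using System;
        using System.Web.UI;

        public partial class MyPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Startup scripts are rendered just before the closing </form>
                // tag, i.e. effectively at the bottom of the page.
                string include = string.Format(
                    "<script type=\"text/javascript\" src=\"{0}\"></script>",
                    ResolveUrl("~/scripts/site.js")); // hypothetical file

                ClientScript.RegisterStartupScript(
                    typeof(MyPage), "siteScript", include, false /* addScriptTags */);
            }
        }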

    Read the article

  • How should the View pull on the Presenter in the MVP pattern

    - by John Leidegren
    I have an ASP.NET Web Forms application, and I'm using some dynamic controls in the view which depend on data that the presenter exposes. Is it okay for the view in this case to pull that data from the presenter? Is there anything I should be extra careful about when considering testability and a loosely coupled design?

    The page in this case has its own page life cycle, and the presenter doesn't know about this. However, the page life cycle dictates that some things must occur at specific moments. This smells like trouble... Any known pitfalls?
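
    One shape this can take, offered as a sketch with illustrative names only: the presenter stays behind an interface for testability, and the page pulls prepared data at the life-cycle moment that dynamic controls require (before view state loads) rather than waiting to be pushed to:

        using System;
        using System.Collections.Generic;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        // The contract the presenter works against; tests can use a fake view.
        public interface IProductView
        {
            void ShowProducts(IList<string> productNames);
        }

        public class ProductPresenter
        {
            // Exposes prepared data so the view can pull it when it is ready.
            public IList<string> GetProductNames()
            {
                return new List<string> { "Widget", "Gadget" }; // stand-in data
            }
        }

        public partial class ProductPage : Page, IProductView
        {
            private readonly ProductPresenter presenter = new ProductPresenter();

            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);
                // Dynamic controls must exist before view state loads, so the
                // view pulls here; the presenter stays ignorant of the life cycle.
                foreach (string name in presenter.GetProductNames())
                {
                    form1.Controls.Add(new Label { Text = name }); // form1: the runat="server" form
                }
            }

            public void ShowProducts(IList<string> productNames)
            {
                // push-style entry point, unused in this pull-based sketch
            }
        }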

    Read the article

  • WebClient Lost my Session

    - by kamiar3001
    Hi folks, I have a problem. First of all, look at my web service method:

        [WebMethod(), ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public string GetPageContent(string VirtualPath)
        {
            WebClient client = new WebClient();
            string content = string.Empty;
            client.Encoding = System.Text.Encoding.UTF8;
            try
            {
                if (VirtualPath.IndexOf("______") > 0)
                    content = client.DownloadString(HttpContext.Current.Request.UrlReferrer
                        .AbsoluteUri.Replace("Main.aspx", VirtualPath.Replace("__", ".")));
                else
                    content = client.DownloadString(HttpContext.Current.Request.UrlReferrer
                        .AbsoluteUri.Replace("Main.aspx", VirtualPath));
            }
            catch
            {
                content = "Not Found";
            }
            return content;
        }

    As you can see, the web service method reads and buffers a page from its own host. I use it to add some Ajax functionality to my web site, and it works fine. My problem is that client.DownloadString(...) loses the session: all the session values related to the requested page are null. To describe it further: at page load, in the page which I load from my web service, I set a session value:

        HttpContext.Current.Session[E_ShopData.Constants.SessionKey_ItemList] = result;

    but when I click a button on that page, this session value is null; the session doesn't carry over. How can I solve this problem? The web service is called by some jQuery code like the following:

        $.ajax({
            type: "Post",
            url: "Services/NewE_ShopServices.asmx" + "/" + "GetPageContent",
            data: "{" + "VirtualPath" + ":" + mp + "}",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            complete: hideBlocker,
            success: LoadAjaxSucceeded,
            async: true,
            cache: false,
            error: AjaxFailed
        });
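
    The underlying issue here is that WebClient starts a brand-new HTTP conversation: it sends none of the browser's cookies, so the inner request carries no ASP.NET session cookie and gets a fresh, empty session. A hedged sketch of forwarding the caller's session cookie to the inner request (the cookie name assumes default ASP.NET settings):

        using System.IO;
        using System.Net;
        using System.Web;

        public static class SessionAwareDownload
        {
            // Sketch: copy the caller's session cookie onto the outgoing
            // request so the inner page sees the same session.
            public static string DownloadWithCallerSession(string url)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                request.CookieContainer = new CookieContainer();

                // "ASP.NET_SessionId" is the default session cookie name.
                HttpCookie sessionCookie =
                    HttpContext.Current.Request.Cookies["ASP.NET_SessionId"];
                if (sessionCookie != null)
                {
                    request.CookieContainer.Add(new Cookie(
                        sessionCookie.Name, sessionCookie.Value,
                        "/", request.RequestUri.Host));
                }

                using (WebResponse response = request.GetResponse())
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
        }

    One caveat: ASP.NET serializes access to a read-write session, so requesting one of your own pages with the same session id from inside a request can block until the outer request releases the session.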

    Read the article

  • Calling Web Services Asynchronously in Page_Load Event

    - by Umar Siddique
    I'm working on a web application using VB.NET. In the Page_Load event I call a remote web service, which takes time to return the data. During this call none of the other content on the page is rendered. I want to call this remote web service asynchronously, so that the rest of the page is displayed and the web service data is displayed when it becomes available.
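
    ASP.NET asynchronous pages are one option: the worker thread is freed while the call runs, though the page is still rendered only once the task completes. (To show the page immediately and fill the data in afterwards, the usual pattern is to render without the data and fetch it with an Ajax call.) A C# sketch using PageAsyncTask, with a plain WebRequest standing in for the remote service (the question is VB.NET, but the pattern translates directly); it assumes Async="true" in the @ Page directive and a hypothetical Label named labelResult:

        using System;
        using System.IO;
        using System.Net;
        using System.Web.UI;

        public partial class ReportPage : Page
        {
            private WebRequest request;

            protected void Page_Load(object sender, EventArgs e)
            {
                // Placeholder URL for the slow remote service.
                request = WebRequest.Create("http://example.com/service");
                RegisterAsyncTask(new PageAsyncTask(BeginCall, EndCall, TimeoutCall, null));
            }

            private IAsyncResult BeginCall(object sender, EventArgs e,
                                           AsyncCallback cb, object state)
            {
                // The request thread goes back to the pool while this runs.
                return request.BeginGetResponse(cb, state);
            }

            private void EndCall(IAsyncResult ar)
            {
                using (WebResponse response = request.EndGetResponse(ar))
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    labelResult.Text = reader.ReadToEnd();
                }
            }

            private void TimeoutCall(IAsyncResult ar)
            {
                labelResult.Text = "The remote service timed out.";
            }
        }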

    Read the article

  • Getting a GridView and DropDownList to remember their states after navigating away and using the back button

    - by nat
    Hi, I have a page filling a GridView, which is all working fine. The grid is basically the result of a search; the filters are a number of dropdowns and a couple of textboxes. The data for the grid and dropdowns is saved in the session, and the whole page lives inside an UpdatePanel.

    When I navigate away from the page (as it happens, by clicking a link inside the grid) and then use the back button to return, all the dropdowns are back to their unselected values and the grid is nowhere to be seen. I understand this is because the ScriptManager doesn't do 'standard' postbacks, so the browser doesn't realise what has happened. However, I have set EnableHistory to true on the ScriptManager.

    Is there an easy way to get this to remember its state without dropping the UpdatePanel/ScriptManager? To complicate things further, the ScriptManager/UpdatePanel is actually in a master page, so I'm not really sure how I can get the navigation bits to work in the ScriptManager. Clearly I am a bit confused, and any help that someone could provide would be happily received. Thanks, nat
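
    EnableHistory only helps once the page actually creates history points and restores state from them when the user navigates back. A sketch of that pattern (control and method names invented); ScriptManager.GetCurrent retrieves the ScriptManager even when it lives in the master page:

        using System;
        using System.Web.UI;

        public partial class SearchPage : Page
        {
            protected void SearchButton_Click(object sender, EventArgs e)
            {
                ScriptManager sm = ScriptManager.GetCurrent(Page); // finds the master page's ScriptManager
                if (sm != null && sm.IsInAsyncPostBack && !sm.IsNavigating)
                {
                    // Give the browser a history entry carrying the current filter.
                    sm.AddHistoryPoint("filter", dropDownCategory.SelectedValue);
                }
                BindGrid();
            }

            // Hooked to the ScriptManager's Navigate event (fires on back/forward).
            protected void ScriptManager_Navigate(object sender, HistoryEventArgs e)
            {
                string filter = e.State["filter"];
                if (!string.IsNullOrEmpty(filter))
                {
                    dropDownCategory.SelectedValue = filter;
                    BindGrid(); // rebuild the grid for this history point
                }
            }

            private void BindGrid()
            {
                // run the search and bind the GridView (omitted)
            }
        }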

    Read the article

  • How to query AD to get name and email from a LAN id

    - by Kumar
    I have some code in ASP.NET (kindly given by someone else) to query AD to get the user's name, email, etc.:

        using System.DirectoryServices;
        using System.DirectoryServices.ActiveDirectory;
        using ActiveDs;

        DirectorySearcher search = new DirectorySearcher(
            new DirectoryEntry(), string.Format("(samaccountname={0})", id));
        if (search == null) return id;
        if (search.FindOne() == null) return id;
        DirectoryEntry usr = search.FindOne().GetDirectoryEntry();
        IADsUser oUsr = (IADsUser)usr.NativeObject;
        return string.Format("{0} {1}",
            usr.Properties["givenname"].Value, usr.Properties["sn"].Value);

    However, this requires impersonation with an id whose password has to be changed every two weeks and then updated in web.config, which is often forgotten. Is there any non-impersonation code to achieve the same result?

    UPDATE: it's a config tool, and it looks up name, email id, etc. I like the service account idea. Question: how is it possible to run (impersonate) just the AD code with a "service" account? Any samples/code? How do you impersonate?
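
    One way to scope the credentials to just the AD lookup, rather than impersonating the whole request, is to hand a service account directly to the DirectoryEntry. A sketch (the LDAP path, account, and password are placeholders; in practice they would come from configuration, ideally encrypted):

        using System.DirectoryServices;

        public static class AdLookup
        {
            // Only this query runs as the service account; the rest of the
            // request keeps the application pool identity.
            public static string GetDisplayName(string id)
            {
                using (DirectoryEntry root = new DirectoryEntry(
                    "LDAP://DC=example,DC=com", @"EXAMPLE\svc-adreader", "secret"))
                using (DirectorySearcher search = new DirectorySearcher(
                    root, string.Format("(samaccountname={0})", id)))
                {
                    SearchResult result = search.FindOne();
                    if (result == null) return id;

                    DirectoryEntry usr = result.GetDirectoryEntry();
                    return string.Format("{0} {1}",
                        usr.Properties["givenname"].Value,
                        usr.Properties["sn"].Value);
                }
            }
        }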

    Read the article

  • What techniques can be used to detect so called "black holes" (a spider trap) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different, but do not add any value, as they are created specifically to fool crawlers.

    An example: we tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page first, evil.com/index. The returned HTML will contain several "unique" links:

        evil.com/somePageOne
        evil.com/somePageTwo
        evil.com/somePageThree

    The crawler will add these to its buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs:

        evil.com/someSubPageOne
        evil.com/someSubPageTwo

    These appear to be unique, and so they are. They are unique in the sense that the returned content differs from previous pages and the URLs are new to the crawler; however, this is only because the developer has made a "loop trap" or "black hole". The crawler will add the new sub page, and that sub page will link to another sub page, which will also be added. This process can go on infinitely. The content of each page is unique but totally useless (randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages which we are actually not interested in.

    These loop traps are very difficult to find, and if your crawler has nothing in place to prevent them, it will get stuck on a certain domain forever. My question is: what techniques can be used to detect so-called black holes?

    One of the most common answers I have heard is to introduce a limit on the number of pages to be crawled per domain. However, I cannot see how this can be a reliable technique when you do not know what kind of site is to be crawled. A legitimate site like Wikipedia can have hundreds of thousands of pages, so such a limit could give a false positive for sites of that kind. Any feedback is appreciated. Thanks.
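
    Beyond a flat per-domain budget, two cheap structural heuristics catch many generated link mazes: excessive URL depth and repeated path segments. A sketch in C# (the thresholds are arbitrary and would need tuning):

        using System;
        using System.Linq;

        public static class TrapHeuristics
        {
            // Flags URLs whose path is suspiciously deep or repetitive,
            // both common signatures of generated "black hole" links.
            public static bool LooksLikeTrap(Uri url, int maxDepth, int maxRepeats)
            {
                string[] segments = url.AbsolutePath.Split(
                    new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);

                if (segments.Length > maxDepth)
                    return true;

                // e.g. /a/b/a/b/a/b: the same segment recurring many times.
                int worstRepeat = segments
                    .GroupBy(s => s, StringComparer.OrdinalIgnoreCase)
                    .Select(g => g.Count())
                    .DefaultIfEmpty(0)
                    .Max();

                return worstRepeat > maxRepeats;
            }
        }

    In practice crawlers also fingerprint page content (shingling or simhash, for example) so that "new" URLs serving near-duplicate text stop being scheduled, which handles traps these URL checks miss.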

    Read the article

  • Trouble defining a variable in PHP?

    - by Jake
    Alright, so a content page uses this:

        $tab = "Friends";
        $title = "User Profile";
        include '(the header file, with nav)';

    And the header page has the code:

        if ($tab == "Friends") {
            echo '<li id="current">';
        } else {
            echo '<li>';
        }

    The problem is that the if ($tab == "Friends") condition is never true, and no other variables are carried from the content page to the header page. Does anyone know what I'm doing wrong?

    Update: the problem seemed to disappear when I used ../scripts/filename.php, and only occurred when I used a full URL. Any ideas why?

    Read the article

  • Watir not working in Windows 7

    - by Ben Mills
    I recently did a fresh install of Windows 7. I installed Ruby 1.8.6 and Watir via RubyGems. When I try to run a Watir script, IE opens and the first page is loaded, but the script doesn't seem to wait for the page to finish loading (which it has always done in the past). Subsequent lines in the script try to access page elements that haven't loaded yet. Is anyone else having this problem?

    Read the article

  • jQuery attach function to 'load' event of an element

    - by Miguel Ping
    Hi, I want to attach a function to a jQuery element that fires whenever the element is added to the page. I've tried the following, but it didn't work:

        var el = jQuery('<h1>HI HI HI</h1>');
        el.one('load', function(e) {
            window.alert('loaded');
        });
        jQuery('body').append(el);

    What I really want is to guarantee that another jQuery function, which expects some #id to be on the page, doesn't fail; so I want to call that function whenever my element has been added to the page.

    To clarify: I am passing the el element to another library (in this case it's a movie player, but it could be anything else), and I want to know when the el element is added to the page, whether it's my movie player code that adds the element or anything else.

    Read the article

  • Pageview implementation

    - by The Elite Gentleman
    Hi everyone, I want to add a pageview feature to my current web application. The pageview is based on the count of users viewing the page, and it must be unique: I must not be able to view a person's page 10,000 times and have it recorded as 10,000 views; it should record just 1 view instead.

    My question is: should I base my pageview count on the IP address? If not, what is the best approach? I know that if the person has logged in to my system, I can simply use the user id stored in the session, check whether that user has viewed the page, and update accordingly. But for "anonymous" viewers, what is the best approach? Thanks.

    PS: How does YouTube do it?
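
    For anonymous visitors, one common compromise is to key views on a hash of IP address plus user agent, deduplicated within a time window; it is imperfect (NAT and proxies share IPs), but it avoids relying on cookies alone. A sketch with the storage left abstract:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public interface IViewStore
        {
            // Returns true only the first time this visitor key is seen
            // for this page within the window (a hypothetical contract).
            bool TryMarkViewed(string pageId, string visitorKey, TimeSpan window);
            void IncrementViewCount(string pageId);
        }

        public class ViewCounter
        {
            private readonly IViewStore store;
            public ViewCounter(IViewStore store) { this.store = store; }

            public void CountView(string pageId, string ip, string userAgent)
            {
                string visitorKey = Hash(ip + "|" + userAgent);
                if (store.TryMarkViewed(pageId, visitorKey, TimeSpan.FromHours(24)))
                {
                    store.IncrementViewCount(pageId);
                }
            }

            private static string Hash(string input)
            {
                using (SHA1 sha = SHA1.Create())
                    return Convert.ToBase64String(
                        sha.ComputeHash(Encoding.UTF8.GetBytes(input)));
            }
        }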

    Read the article

  • Android WebView seems to ignore "viewport" information on web pages

    - by Evan
    I have a website that uses the viewport meta tag to tell mobile browsers how to display content. Viewing the page in the Android browser looks correct (as it does on the iPhone, etc.). But when I load the page into a WebView component in an Android application, the WebView ignores the viewport tag and renders the page at "full" resolution, which in this case means zoomed in.

    Read the article

  • Passing a value in Silverlight

    - by Dilse Naaz
    How can I pass a value from one page to another in Silverlight? I have a Silverlight application which contains two pages, one .xaml.cs file and one .asmx.cs file. I have a textbox named Text1 on the XAML page. My requirement is that, at run time, I can pass the textbox value to the .asmx.cs file. How can this be done? My code in the .asmx.cs file is:

        public string DataInsert(string emp)
        {
            SqlConnection conn = new SqlConnection(
                "Data Source=Nisam\\OFFICESERVERS;Initial Catalog=Employee;Integrated Security=SSPI");
            SqlCommand cmd = new SqlCommand();
            conn.Open();
            cmd.Connection = conn;
            cmd.CommandText = "Insert into demo Values (@Name)";
            cmd.Parameters.AddWithValue("@Name", xxx);
            cmd.ExecuteNonQuery();
            return "Saved";
        }

    The value xxx in the code should be replaced by the value passed from the .xaml.cs page. Please help me.
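
    Silverlight reaches a web service through a generated async proxy, so one hedged sketch of the calling side looks like this (DataServiceSoapClient stands for whatever name "Add Service Reference" generated; treat it and the event/method names as placeholders following the usual pattern):

        using System.Windows;
        using System.Windows.Controls;

        public partial class MainPage : UserControl
        {
            private void SaveButton_Click(object sender, RoutedEventArgs e)
            {
                var client = new DataServiceSoapClient(); // hypothetical generated proxy
                client.DataInsertCompleted += (s, args) =>
                {
                    // args.Result holds the service's return value ("Saved").
                };
                // The textbox value becomes the 'emp' parameter, i.e. it is
                // what replaces the 'xxx' placeholder in the service code.
                client.DataInsertAsync(Text1.Text);
            }
        }

    On the service side, using the parameter means writing cmd.Parameters.AddWithValue("@Name", emp) instead of xxx.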

    Read the article

  • ASP.NET OutPutCache VaryByParam and VaryByHeader with AJAX

    - by DennyDotNet
    I'm trying to do some caching using VaryByParam AND VaryByHeader. When an Ajax request comes in, I return partial XHTML. When a regular request comes in, I send the partial XHTML page with header/footer. I tried to cache the page by doing:

        [OutputCache( Duration = 5, VaryByParam = "nickname,page", VaryByHeader = "X-Requested-With" )]

    However, this doesn't work: if I make a regular request first and then run the Ajax call, I get the full cached page instead of the partial, and vice versa. It seems VaryByHeader is being ignored. Is it because X-Requested-With is omitted on normal requests? Or perhaps it's doing VaryByParam OR VaryByHeader?

    The obvious way around this is for Ajax requests to call a different method which only returns partial pages, but I'd like to avoid that if possible. I'm using ASP.NET MVC 1.0 with the OutputCacheAttribute.
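
    A workaround that sidesteps the header question entirely is VaryByCustom: the cache key is computed by an override in Global.asax, which can inspect X-Requested-With itself (jQuery sends it on Ajax requests; normal navigation omits it). A sketch:

        using System.Web;

        public class MvcApplication : HttpApplication
        {
            public override string GetVaryByCustomString(HttpContext context, string custom)
            {
                if (custom == "ajax")
                {
                    bool isAjax =
                        context.Request.Headers["X-Requested-With"] == "XMLHttpRequest";
                    // Two cache entries: one for Ajax partials, one for full pages.
                    return isAjax ? "ajax" : "html";
                }
                return base.GetVaryByCustomString(context, custom);
            }
        }

        // On the action, replacing VaryByHeader:
        // [OutputCache(Duration = 5, VaryByParam = "nickname,page", VaryByCustom = "ajax")]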

    Read the article

  • Rails: Extracting the raw url from the request

    - by pankajbhageria
    I am working with Rails 2.2. The required behaviour is as follows: I have a link (with Ajax embedded) xyz.com/admin#page1. When I go to the above page, I should be redirected to the login page if I am not logged in. After I log in, I should be taken back to xyz.com/admin#page1. For this I need to store the URL in the session when I visit any page. The problem is that when I do request.uri, I get xyz.com/admin, but I want to store xyz.com/admin#page1. Regards, Pankaj

    Read the article

  • How to make .NET WebForm Routing work with Authorization

    - by jakmas
    I have routes that are being registered from the database into an ASP.NET website (non-MVC). The routes register fine, and they all work when I am logged in. What I am trying to do is create a landing page based on some route data. The page is [site]/landing/dell. The route looks like "landing/{client}" and routes to my page Login.aspx, where I get the client out of the route data and display some custom brand data based on the value.

    In my web.config, I have my authentication mode set to Forms, with loginUrl="Login.aspx". When the user does not have the authorization cookie, they are redirected to [site]/Login.aspx?ReturnUrl=%2flanding%2fdell instead of keeping the route URL and displaying the correct data. The IIS server does not even process the route at all; it just sends the user to the Login.aspx page.

    I have tried several additions to my web.config, and many variations, but nothing seems to work. Ideas, anyone? I assume this is a common issue that is just not well documented.
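
    Forms authentication runs before the route delivers the page, so any URL that is not explicitly opened up bounces to loginUrl with a ReturnUrl. A hedged sketch of opening the route's path prefix to anonymous users via web.config (the path is taken from the example URL; adjust it to the real route prefix):

        <!-- allow anonymous access to /landing/... so forms authentication
             does not redirect before the route is processed -->
        <location path="landing">
          <system.web>
            <authorization>
              <allow users="?" />
            </authorization>
          </system.web>
        </location>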

    Read the article

  • How do I link a Navigation Menu to an already existing Sitemap

    - by JMK
    I am just beginning my journey into web development and I have a very basic question, but I am nonetheless stumped. I have set up a new ASP.NET Empty Web Application. In this application I have created a few *.aspx pages and a sitemap called Web.sitemap. I have placed a SiteMapPath control on my master page and, with no further configuration, it detected my Web.sitemap and displays the location of the page on any *.aspx page which derives from the master page.

    However, whenever I add a navigation Menu, this doesn't happen. When I bring up the Menu Tasks dialogue box, I can't select the sitemap from the Choose Data Source dropdown; my only option is to choose <New data source...>, which brings up the Data Source Configuration Wizard, and from there I can create a new site map. But I want to use the already existing one. How do I go about this? Thanks
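
    Unlike SiteMapPath, the Menu control does not bind implicitly to the default provider; the usual wiring is an explicit SiteMapDataSource on the master page, which the Menu then references by ID. A minimal markup sketch (the IDs are arbitrary):

        <asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" ShowStartingNode="false" />
        <asp:Menu ID="NavMenu" runat="server" DataSourceID="SiteMapDataSource1" Orientation="Horizontal" />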

    Read the article

  • Pylons and Facebook

    - by Nayan Jain
    The following is my OAuth template:

        top.location.href='https://graph.facebook.com/oauth/authorize?client_id=${config['facebook.appid']}&redirect_uri=${config['facebook.callbackurl']}&display=page&scope=publish_stream';

        Click here to authorize this application

    When I hit the page I am prompted to log in (desired); upon login I am redirected in a loop between a permissions page and an app page. My controller looks like:

        class RootController(BaseController):
            def __before__(self):
                tmpl_context.user = None
                if request.params.has_key('session'):
                    access_token = simplejson.loads(request.params['session'])['access_token']
                    graph = facebook.GraphAPI(access_token)
                    tmpl_context.user = graph.get_object("me")

            def index(self):
                if not tmpl_context.user:
                    return render('/oauth_redirect.mako')
                return render('/index.mako')

    I'm guessing my settings are off somewhere, probably with the callback. Not too sure if it is an issue with my code or with the Python SDK for Facebook.

    Read the article

  • Can't get message body of certain emails from inbox using the Zend framework?

    - by Ali
    Hi guys, I'm trying to read through an email inbox for my application; I'm using the Zend Framework. The problem is that I'm unable to retrieve the message body for certain emails. The following is how I'm doing it:

        $mail = new Zend_Mail_Storage_Imap($mail_options);
        $all_messages = array();
        $page = isset($_GET['page']) ? $_GET['page'] : 1;
        $limit = isset($_GET['limit']) ? $_GET['limit'] : 20;
        $offset = (($page - 1) * $limit) + 1;
        $end = ($page * $limit) > $c ? $c : ($page * $limit);
        for ($i = $offset; $i <= $end; $i++) {
            $h2t = new html2text();
            $h2t->set_allowed_tags('<a>');
            if (!$mail[$i]) break;
            else {
                $one_message = $mail->getMessage($i);
                $one_message->id = $i;
                $one_message->UID = $mail->getUniqueId($i);
                $one_message->parts = array();
                $one_message->body = '';
                $count = 1;
                foreach (new RecursiveIteratorIterator($mail->getMessage($i)) as $ii => $part) {
                    try {
                        $tpart = $part;
                        //$tpart->_content = '';
                        $one_message->parts[$count] = $tpart;
                        $count++;
                        // check for html body
                        if (strtok($part->contentType, ';') == 'text/html') {
                            $b = $part->getContent();
                            if ($part->contentTransferEncoding == 'quoted-printable')
                                $b = quoted_printable_decode($b);
                            $one_message->html_body = $b;
                            $h2t->set_html($b);
                            $one_message->body = $h2t->get_text();
                        }
                        // check for text body
                        if (strtok($part->contentType, ';') == 'text/plain') {
                            $b = $part->getContent();
                            if ($part->contentTransferEncoding == 'quoted-printable')
                                $b = quoted_printable_decode($b);
                            $one_message->text_body = $b;
                            $one_message->body = $b; //$part->getContent();
                        }
                    } catch (Zend_Mail_Exception $e) {
                        // ignore
                    }
                }
                $all_messages[] = $one_message;
            }
        }

    The problem is that some messages randomly don't return even a text body or an HTML body, even though when I check those emails in a webmail client they do have a message body. What am I missing here?

    Read the article

  • Is this valid CSS?

    - by Pandiya Chendur
    I have a pager on my page with anchors in it. I use the following CSS:

        .page-numbers a { color:#808185; cursor:pointer; text-decoration:none; outline:none; }
        .page-numbers a:hover { text-decoration:underline; }
        .page-numbers a:visited { color:#808185; outline:none; }

    But my anchor tags don't seem to take the CSS above; instead they use the CSS below, which I have given at the top of my stylesheet:

        a { color:#0077CC; cursor:pointer; text-decoration:none; outline:none; }
        a:hover { text-decoration:underline; }
        a:visited { color:#4A6B82; outline:none; }

    Any suggestions?

    Read the article

  • PDF Report generation

    - by IniTech
    EDIT: I completed this project using ABCpdf. For anyone interested, I love this product and their support is A+. Everything I listed as a 'con' for the HTML-to-PDF solution was easily doable in ABCpdf.

    I've been charged with creating a data-driven PDF report. After reviewing the plethora of options, I have narrowed it down to 2. I need you all to help me decide, or offer alternatives I haven't considered. Here are the requirements:

    • 100% data driven
    • Eventually PDF (a stop in HTML is fine, so long as it is converted)
    • Can be run with multiple sets of data (the layout is always the same, the data is variable)
    • Contains normal analysis-style copy (saved in DB with HTML markup)
    • Contains tables (data for tables is generated at run time)
    • Header/page # on each page
    • Table of contents
    • .NET (VB or C#)
    • Done quickly

    Now, because the report is going to be generated with multiple sets of data, I don't think a stamped PDF template will work, since I won't know how long or how many pages a certain piece of the report could require. So I think my best options are:

    1. Programmatic creation using an iText-like solution.
    2. Generate in HTML and convert to PDF using a third-party application (ABCpdf is the tool I have played with so far).

    Both solutions have their pros and cons.

    Programmatic solution:

    Pros:
    • Flexible
    • Easy page numbering/page headers/table of contents
    • Free

    Cons:
    • Time consuming (to write a layer on top of iText to do what I need and keep it maintainable)
    • Since the copy is already stored in the DB with HTML markup, I would have to parse through the data before placing it into the PDF, ensuring I don't have to break paragraphs into chunks so I can apply bold, italic, underline, etc. to specific phrases. This seems like a huge PITA, and I hope I am wrong about that assumption.

    HTML to PDF:

    Pros:
    • Easy to generate from the DB (no parsing necessary)
    • Many tools for conversion
    • Uses technology I am already familiar with
    • Built-in "print preview" (not a requirement, but nice)

    Cons (edited after project completion; all of my assumptions were incorrect and ABCpdf is awesome):
    1. Almost impossible to generate page headers - Not true
    2. Very difficult to generate page numbers - Not true
    3. Nearly impossible to generate a table of contents - Not true
    4. (Cross-browser support isn't a con; since it's internal, I can dictate what browser to use)
    5. Conversion tool quirks; may not convert exactly as rendered in the browser - Not true
    6. Overall, I think it would be very hard to format the HTML exactly as I would want it to appear/convert to PDF - Not true

    That's it. I need the community's help in deciding which way I should go. I might be wrong about some of my pro/con assumptions; if I am, please tell me. All thoughts and suggestions are welcome and appreciated. Thanks

    Read the article
