Search Results

Search found 2201 results on 89 pages for 'webpage'.


  • Translate a webpage in PHP

    - by Rob
    I'm looking to translate a webpage in PHP 5 so I can save the translation and make it easily accessible via mydomain.com/lang/fr/category/article.html, rather than users having to go through Google Translate. I've found various easy ways to translate text via cURL; however, what I'd really like to be able to do is translate an entire webpage while obviously ignoring the tags. The problem is that Google Translate mangles all the HTML tags, class names, etc. Does anyone know of a PHP class that can translate an entire webpage whilst ignoring the tags? I'm guessing it may be possible via advanced regular expressions or something like that, but I'm not sure. I can't just cURL Google's response, as it will contain all the extra JS that they put in. Any ideas?

    Read the article
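
    A sketch of one tag-safe approach, in case it helps: rather than regular expressions, load the page into PHP's DOMDocument, walk only the text nodes, and translate each one individually, so the markup is never touched. The translate_text() helper below is hypothetical; it stands in for whatever cURL-based translation call is already working.

      <?php
      // Hypothetical stand-in for an existing cURL-based translation call.
      function translate_text($text, $lang)
      {
          return $text; // replace with the real per-string translation request
      }

      function translate_html($html, $lang)
      {
          $doc = new DOMDocument();
          @$doc->loadHTML($html); // @ suppresses warnings on real-world tag soup

          $xpath = new DOMXPath($doc);
          // Text nodes only, skipping the contents of script and style blocks.
          $nodes = $xpath->query('//text()[not(ancestor::script) and not(ancestor::style)]');
          foreach ($nodes as $node) {
              if (trim($node->nodeValue) !== '') {
                  $node->nodeValue = translate_text($node->nodeValue, $lang);
              }
          }
          return $doc->saveHTML();
      }

    The translated markup could then be written to disk once and served from the /lang/fr/ URLs.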

  • Selecting a specific div from an external webpage using cURL

    - by Paulo
    Hi, can anyone help me select a specific div from the content of a webpage? Say I want to get the div with id="body" from the webpage http://www.test.com/page3.php. My current code looks something like this (not working):

      // REG EXP.
      $s_searchFor = '@^/.dont know what to put here..@ui';

      // CURL
      $ch = curl_init();
      $timeout = 5; // set to zero for no timeout
      curl_setopt($ch, CURLOPT_URL, 'http://www.test.com/page3.php');
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
      curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
      if (!preg_match($s_searchFor, $ch)) {
          $file_contents = curl_exec($ch);
      }
      curl_close($ch);

      // display file
      echo $file_contents;

    So I'd like to know how I can use regular expressions to find a specific div, and how to discard the rest of the webpage so that $file_contents only contains the div.

    Read the article
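
    For what it's worth, a regular expression is the fragile way to do this; a DOM parser is simpler. A minimal sketch: fetch the page with cURL exactly as above, then hand the string to DOMDocument and pull out the div by its id with an XPath query:

      <?php
      $ch = curl_init();
      curl_setopt($ch, CURLOPT_URL, 'http://www.test.com/page3.php');
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
      curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
      $html = curl_exec($ch);
      curl_close($ch);

      $doc = new DOMDocument();
      @$doc->loadHTML($html); // @ suppresses warnings on malformed markup

      $xpath = new DOMXPath($doc);
      $div = $xpath->query('//div[@id="body"]')->item(0);
      if ($div !== null) {
          // saveXML($node) serializes just that node and its children,
          // so $file_contents holds only the div.
          $file_contents = $doc->saveXML($div);
          echo $file_contents;
      }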

  • How to diagnose "Internet Explorer cannot display the webpage"

    - by Colen
    Our web site is working great for 99.99% of our users, but a few people (all of whom use Internet Explorer) are running into an error. Most pages on the site load fine, but for one specific page (the same page for all affected users), all they get is: "Internet Explorer cannot display the webpage". It doesn't matter whether the page is accessed over http or https; it fails to load either way. Every other page on the site, as far as I can tell, works fine for them. Not only that, the same users can load that specific page fine in Firefox. I've checked the web server logs and I can't find any smoking guns there. The site is running IIS on Windows Server 2003. Is there any way to get IE to give the user more information than just "cannot display the webpage"? There's a "More information" button, but all it tells you is to make sure your DNS servers are working, make sure you're not working offline, etc. :(

    Read the article

  • Windows 8 - IE 10 Metro - How to print or save a webpage

    - by AbhishekGirish
    I'm running Windows 8 Pro. My default browser is Internet Explorer 10 (which therefore opens as a Modern Windows 8 app). I want to know how to print or save a webpage. There are no related settings available in the browser. The only option is to select "View on Desktop", whereupon the browser interface familiar from IE 9 opens on the desktop, through which I can access the aforementioned options and additional settings. I know the desktop is not going away anytime soon and that it's still an important part of Windows. But if Microsoft is pitching the Modern Windows UI, why would they leave important options such as print and save out of it and force a user into the "old" desktop mode for them? Even Windows 8 RT supports plug-and-play access to printers and peripherals, so not being able to print from a tablet, or access the file system, is definitely not an answer.

    Read the article

  • Is there a browser extension which can copy a webpage snippet/clip/snapshot to the clipboard

    - by Yuriy Kulikov
    I have to save links to web pages into a Google Drive document. What I want is to copy a simple webpage snippet, similar to the one the G+ extension button produces, to the clipboard: one picture, and the page title as a link. I have been looking for many days now and I still couldn't find anything that does this. In the end, the best I came up with is to hit the G+ button and copy the contents of the popup. I am wondering if anybody knows how this can be done right? Thanks in advance, BR, Yuriy

    Read the article

  • Print/save full webpage as PDF

    - by Oliver
    I need a method to print/save the current full webpage as a PDF. I know it can be done if I download a PDF printer and print to that, but I need it to happen without the user doing anything other than clicking a button on the webpage. I can't do it via PHP, as the page is all client-side content, so I'm guessing an ActiveX component? Any ideas would be greatly appreciated! Many thanks

    Read the article
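
    A sketch of one ActiveX-free route, assuming the server can run the wkhtmltopdf command-line tool: the button submits the page's rendered markup (e.g. document.documentElement.outerHTML) in a POST field named html, which sidesteps the client-side-content problem, and a PHP script converts it and streams the PDF back. File names here are illustrative, not production-ready:

      <?php
      // pdf.php: expects the page's rendered markup in $_POST['html'].
      $html = isset($_POST['html']) ? $_POST['html'] : '';
      if ($html === '') {
          header('HTTP/1.1 400 Bad Request');
          exit('No HTML received');
      }

      $base = sys_get_temp_dir() . '/page_' . uniqid();
      $in   = $base . '.html';
      $out  = $base . '.pdf';
      file_put_contents($in, $html);

      // -q keeps wkhtmltopdf quiet; escapeshellarg() guards the paths.
      exec('wkhtmltopdf -q ' . escapeshellarg($in) . ' ' . escapeshellarg($out), $lines, $status);

      if ($status === 0) {
          header('Content-Type: application/pdf');
          header('Content-Disposition: attachment; filename="page.pdf"');
          readfile($out);
      } else {
          header('HTTP/1.1 500 Internal Server Error');
          echo 'Conversion failed';
      }
      @unlink($in);
      @unlink($out);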

  • Creating a webpage on form submit?

    - by Joachim Mcdonald
    How is it possible to let a user create a webpage containing some HTML, based on their entries in a form? I.e., I want them to be able to input a name, and when the button is clicked, a webpage with that name would be created. I imagine this must be possible in PHP, but what functions/code would I use? Thank you!

    Read the article
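
    A minimal sketch of the PHP side, assuming a pages/ directory that the web server can write to; the names here (pages/, the name field) are only illustrative. The one thing that genuinely matters is sanitising the entry, since it becomes a filename:

      <?php
      // create_page.php: shows the form, and on submit writes pages/<name>.html.
      if (isset($_POST['name']) && $_POST['name'] !== '') {
          // Keep only letters, digits and dashes: the name becomes a filename.
          $name = preg_replace('/[^A-Za-z0-9-]/', '', $_POST['name']);
          $file = 'pages/' . $name . '.html';
          $html = '<html><head><title>' . htmlspecialchars($name) . '</title></head>'
                . '<body><h1>' . htmlspecialchars($name) . '</h1></body></html>';
          file_put_contents($file, $html);
          echo 'Created: <a href="' . $file . '">' . htmlspecialchars($name) . '</a>';
      }
      ?>
      <form method="post">
          <input type="text" name="name">
          <input type="submit" value="Create page">
      </form>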

  • How to parse one specific table from a webpage containing many tables

    - by Harikrishna
    There are many tables in the one webpage, and I want to extract the data from only one of them. I am using the Html Agility Pack to parse the HTML. I can already find the table I want; the problem is, once I have found it, what should I do to extract the data from only that table?

    Read the article
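
    With the Html Agility Pack, the usual trick is to run the row query against the table node already found, using an XPath that starts with a dot (for example table.SelectNodes(".//tr")), so the search is scoped to that table rather than the whole document. The same select-then-iterate pattern, sketched in PHP with DOMXPath purely for illustration (the URL and table id are made up):

      <?php
      $doc = new DOMDocument();
      @$doc->loadHTMLFile('http://www.example.com/page.html'); // @ hides tag-soup warnings

      $xpath = new DOMXPath($doc);
      $table = $xpath->query('//table[@id="prices"]')->item(0); // the table you found
      if ($table !== null) {
          // The leading "." scopes the query to this table only.
          foreach ($xpath->query('.//tr', $table) as $row) {
              $cells = array();
              foreach ($xpath->query('.//th|.//td', $row) as $cell) {
                  $cells[] = trim($cell->textContent);
              }
              echo implode(' | ', $cells), "\n";
          }
      }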

  • OwnCloud: RSA certificate "does not match the server name" issue, webpage has a redirect loop

    - by jmituzas
    I had OwnCloud running on a server that has since died; I remember the install being easy. I have migrated servers, and OwnCloud is one of the last apps to reinstall. I just downloaded and installed the newest version of OwnCloud on an Ubuntu 14.04 server with PHP 5.5.9-1, trying the manual install. I first tried adding the repo and installing with apt-get install owncloud, but that did not work for me (whereis owncloud reported nothing; it was installed, but I was never able to bring up the site). Now for my issue: having finished the manual install from the .tar.bz2, when it came time to log in I received "This webpage has a redirect loop", from both the Chrome and Safari web browsers. I can't log in at all; even with no user, I get the error page. I don't know if it is related or not, but here's a look at owncloud-error.log: 'RSA certificate configured for "mysite.com" does NOT include an ID which matches the server name'. I installed a new SSL certificate with the CN matching the ServerName directive in the vhost config file: same error :/ Re-installed OwnCloud: same issue... Out of ideas. Thanks in advance, jmituzas

    Read the article

  • How to open a help webpage after installation is complete

    - by Kapil
    I have built an installer for my Windows app using the Visual Studio 2008 IDE. I also use some custom actions to do a few extra things at installation/uninstallation time. What I also want is that, when the installation completes, the installer launches a help webpage or a getting-started page showing users how to go ahead with the app. I am doing the following:

      private void CustomInstaller_Committed(object sender, InstallEventArgs e)
      {
          System.Windows.Forms.LinkLabel lbl = new System.Windows.Forms.LinkLabel();
          lbl.Links.Remove(lbl.Links[0]);
          lbl.Links.Add(0, lbl.Text.Length, "http://www.mywebsite.com/help.aspx");
          ProcessStartInfo pInfo = new ProcessStartInfo(lbl.Links[0].LinkData.ToString());
          Process.Start(pInfo);
      }

    and I set the event handler like this:

      this.Committed += new InstallEventHandler(CustomInstaller_Committed);

    This launches the webpage, but not at the right point in the flow. It launches the IE browser even before the user dismisses the installation window (i.e. clicks 'Close' in the 'Installation Completed' dialog box). I want the webpage to open only when the user finally dismisses the installation. Any ideas how I can achieve this? Thanks, Kapil

    Read the article

  • Zend/PHP: How to disable the webpage behind a popup?

    - by NAVEED
    I am working with the Zend Framework, PHP, and jQuery, and I sometimes work with popups. When a popup is open on the screen, the user can still click links on the webpage behind the popup, which causes some unexpected behaviour. How can I disable the webpage behind the popup? I have seen web applications in which the webpage behind the popup becomes shaded when the popup appears. I have read some tutorials about this: in each, a link is used to open the dialog, and a special attribute is added to make it modal. But I have a different case: I have to open the dialog based on a condition in the action. I check the condition in the action after the post, like this:

      $form = new Edit_Form();
      $this->view->form = $form;
      $this->view->form->setAction($this->view->url());
      $request = $this->getRequest();
      if ($request->isPost()) {
          $values = $request->getParams();
          if ($values['edit']) {
              $this->view->openEditBox();
          }
      }

    Then in the view I check whether the edit popup should open:

      if ($this->openEditBox) {
          $jsonOutput['content'] = '<div class="DialogBox" title="Edit">'
              . $this->form->render() . '</div>';
          echo Zend_Json::encode($jsonOutput);
      }

    Any ideas? Thanks

    Read the article

  • Use a proxy in Python to fetch a webpage

    - by carmao
    I am trying to write a function in Python to fetch a webpage through a public anonymous proxy, but I get a rather strange error. The code (I have Python 2.4):

      import urllib2

      def get_source_html_proxy(url, pip, timeout):
          # timeout in seconds (maximum number of seconds the code is willing to
          # wait in case there is a proxy that is not working; then it gives up)
          proxy_handler = urllib2.ProxyHandler({'http': pip})
          opener = urllib2.build_opener(proxy_handler)
          opener.addheaders = [('User-agent', 'Mozilla/5.0')]
          urllib2.install_opener(opener)
          req = urllib2.Request(url)
          sock = urllib2.urlopen(req)
          # a counter that is going to measure the time until the result (webpage)
          # is returned
          timp = 0
          while 1:
              data = sock.read(1024)
              timp = timp + 1
              if len(data) < 1024:
                  break
              timpLimita = 50000000 * timeout
              if timp == timpLimita:  # 5 million is about 1 second
                  break
          if timp == timpLimita:
              print IPul + ": Connection is working, but the webpage is fetched in more than 50 seconds. This proxy returns the following IP: " + str(data)
              return str(data)
          else:
              print "This proxy " + IPul + "= good proxy. " + "It returns the following IP: " + str(data)
              return str(data)

      # Now, I call the function to test it for one single proxy (IP:port) that
      # does not support user and password (a public high-anonymity proxy).
      # (I put a proxy that I know is working - slow, but working.)
      rez = get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
      print rez

    The error:

      Traceback (most recent call last):
        File "./public_html/cgi-bin/teste5.py", line 43, in ?
          rez = get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
        File "./public_html/cgi-bin/teste5.py", line 18, in get_source_html_proxy
          sock = urllib2.urlopen(req)
        File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen
          return _opener.open(url, data)
        File "/usr/lib64/python2.4/urllib2.py", line 358, in open
          response = self._open(req, data)
        File "/usr/lib64/python2.4/urllib2.py", line 376, in _open
          '_open', req)
        File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain
          result = func(*args)
        File "/usr/lib64/python2.4/urllib2.py", line 573, in <lambda>
          lambda r, proxy=url, type=type, meth=self.proxy_open: \
        File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open
          if '@' in host:
      TypeError: iterable argument required

    I do not know why the '@' character is an issue (I have no such character in my code. Should I have?). Thanks in advance for your valuable help.

    Read the article
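
    As an aside, in the listing's running language: the same fetch-through-a-proxy task in PHP is a couple of cURL options, with CURLOPT_PROXY doing the work. This is only an illustration of the mechanics (the proxy address is the one from the question), not a fix for the Python 2.4 traceback above:

      <?php
      // Fetch a page through an HTTP proxy with cURL.
      $ch = curl_init('http://www.whatismyip.com/automation/n09230945.asp');
      curl_setopt($ch, CURLOPT_PROXY, '93.84.221.248:3128');
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
      curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 50);
      curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0');
      $data = curl_exec($ch);
      if ($data === false) {
          echo 'Proxy failed: ' . curl_error($ch) . "\n";
      } else {
          echo 'This proxy returns the following IP: ' . $data . "\n";
      }
      curl_close($ch);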

  • A question on webpage representation in Java

    - by Gemma
    Hello there. I followed a tutorial and came up with the following method to read webpage content into a CharSequence:

      public static CharSequence getURLContent(URL url) throws IOException {
          URLConnection conn = url.openConnection();
          String encoding = conn.getContentEncoding();
          if (encoding == null) {
              encoding = "ISO-8859-1";
          }
          BufferedReader br = new BufferedReader(
                  new InputStreamReader(conn.getInputStream(), encoding));
          StringBuilder sb = new StringBuilder(16384);
          try {
              String line;
              while ((line = br.readLine()) != null) {
                  sb.append(line);
                  sb.append('\n');
              }
          } finally {
              br.close();
          }
          return sb;
      }

    It returns a representation of the webpage specified by the URL. However, this representation is hugely different from what I see with "View Page Source" in Firefox, and since I need to scrape data from the original webpage (some data segment in the original "view page source" file), the search for the required text always fails on this Java representation. Did I go wrong somewhere? I need your advice, guys. Thanks a lot for helping!

    Read the article

  • Real-time data on webpage with jQuery

    - by Steven Hepting
    I would like a webpage that constantly updates a graph with new data as it arrives. Ordinarily, all the data you have is passed to the page at the beginning of the request; however, I need the page to be able to update itself with fresh information every few seconds to redraw the graph. Background: the webpage will be similar to the Panic status board (http://www.panic.com/blog/2010/03/the-panic-status-board/). The data coming in will be temperature values to be graphed, measured by an Arduino and saved to the Django database (this part is already complete). Update: it sounds as though the solution is to use the jQuery.ajax() function (http://api.jquery.com/jQuery.ajax/) with a function as the complete callback that schedules another request several seconds later to a URL returning the data in JSON format. How can that follow-up request be scheduled? With the .delay() function?

    Read the article

  • Real-time data on webpage with Django and jQuery

    - by Steven Hepting
    I would like a webpage that constantly updates a graph with new data as it arrives. Ordinarily, all the data you have is passed to a Django view at the beginning of the request; however, I need the page to be able to update itself with fresh information every few seconds to redraw the graph. Background: the webpage will be similar to the Panic status board (http://www.panic.com/blog/2010/03/the-panic-status-board/). The data coming in will be temperature values to be graphed, measured by an Arduino and saved to the Django database (I've already done this part).

    Read the article

  • .NET AxWebBrowser can't open page "Navigation to the webpage was canceled" error

    - by Erdnod
    Oh, I am stuck again on this quite annoying problem. I am trying to do a little automated browsing using the axWebBrowser component, and it worked before. Now, all of a sudden, I get the following message when I try to browse to a page: "Navigation to the webpage was canceled. What you can try: Refresh the page." I totally disabled my firewall, ran VS2008 in Administrator mode, and even installed IE8, all to no avail. By the way, IE8 doesn't work either: all I get when I put a URL into IE8 is the message "Internet Explorer cannot display the webpage". I don't know if that problem is related, but I don't mind it so much; I just really need to get my Visual Studio axWebBrowser working.

    Read the article

  • How to extract a specific input field value from an external webpage using JavaScript

    - by Tom
    Hi, I get the webpage content from an external website using Ajax, but now I want a function that extracts a specific input field value that is already autofilled. The webpage content is like this:

      .........
      <input name="fullname" id="fullname" size="20" value="David Smith" type="text">
      <input name="age" id="age" size="2" value="32" type="text">
      <input name="income" id="income" size="20" value="22000$" type="text">
      .........

    I only want to get the value of fullname, maybe with some JavaScript regex or some jQuery DOM parser. I know I can do it easily with PHP, but I want to do it using JavaScript or jQuery. Note: these inputs are not hosted on my page; they are just pulled from another website through Ajax. Thanks

    Read the article

  • How to save a complete webpage using the built-in WebBrowser in C#

    - by Mike
    Overall, I am trying to write out a webpage as a PDF. There is a web service I can use to convert a file to PDF, so what I am trying to do is save out a webpage from the WebBrowser WinForms control. I have already tried writing out the document stream, but that just gives me the HTML of the page and not the images used with it. Another approach I looked into, without success, is creating an image of the WebBrowser document: I found some examples on the web that use the DrawToBitmap function, but none of them worked for me. Any assistance would be appreciated.

    Read the article

  • "Send it to a friend" button on a webpage

    - by kender
    Hey, how often do we see stuff like "Send this page to a friend" on webpages? Well, I see it quite often. My question is: how effective do you find it? If I hit a webpage that's interesting and I think a friend would enjoy it, I can just copy the URL from my browser bar, paste it into an email, and press "Send". In my opinion, that's usually faster and less error-prone than a button/link like this on the webpage. In addition, I'm never really sure what the website does with the email addresses I enter there; doesn't it store them and then sell them, at $1 per 100 addresses, to spammers? My question is: when you design a website, do you put such links on the pages (they're often seen on sites with news/articles)? Does it even make sense?

    Read the article
