Search Results

Search found 21298 results on 852 pages for 'www mechanize'.

Page 2/852 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Django sitemap intermittent www

    - by Jen Z
    The automatic sitemap for my Django site fluctuates between including the www on URLs and leaving it out (I'm aiming to have it in all the time). This has ramifications in Google not indexing my pages properly, so I'm trying to narrow down what is causing the issue. I have set PREPEND_WWW = True and my site record in the sites framework is set to include the www, i.e. it's set to www.example.com as opposed to example.com. I'm using memcached, but pages should expire from the cache after 48 hours, so I wouldn't have thought that would be causing the issue. You can see the problem in effect at http://www.livingspaceltd.co.uk/sitemap.xml (refresh the page a few times). My sitemaps setup is fairly prosaic so I'm doubtful that that is the issue, but in case it's something obvious I'm missing, here's the code:

    urls.py

        sitemaps = {
            'subpages': Subpages_Sitemap,
            'standalone_pages': Standalone_Sitemap,
            'categories': Categories_Sitemap,
        }

        urlpatterns = patterns('',
            (r'^sitemap\.xml$', 'django.contrib.sitemaps.views.sitemap', {'sitemaps': sitemaps}),
            ...

    sitemaps.py

        # -*- coding: utf-8 -*-
        from django_ls.livingspace.models import Page, Category, Standalone_Page, Subpage
        from django.contrib.sitemaps import Sitemap

        class Subpages_Sitemap(Sitemap):
            changefreq = "monthly"
            priority = 0.4
            def items(self):
                return Subpage.objects.filter(restricted_to__isnull=True)

        class Standalone_Sitemap(Sitemap):
            changefreq = "weekly"
            priority = 1
            def items(self):
                return Standalone_Page.objects.all()

        class Categories_Sitemap(Sitemap):
            changefreq = "weekly"
            priority = 0.7
            def items(self):
                return Category.objects.all()
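    Django's sitemap view builds absolute URLs from the current Site record (or from the request host when the sites framework is unavailable), so one quick way to rule the Sites framework in or out is to check which record Django actually resolves. A minimal check from a Django shell, assuming the django.contrib.sites app is installed and SITE_ID points at the record described above:

        # minimal sanity check, run inside: python manage.py shell
        from django.contrib.sites.models import Site

        site = Site.objects.get_current()
        print site.id, site.domain   # expect the www.example.com-style domain here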

    Read the article

  • how to fix: www.domain.com redirected to domain.com

    - by cohen
    Hi, this website, livingalignment.com, is very slow to load. The domain and hosting are all with GoDaddy. In Pingdom I found that it is redirecting from www.livingalignment.com to livingalignment.com, and it takes about 2 seconds to do so. You can see it here, taking about 10 seconds when I entered www.livingalignment.com: http://tools.pingdom.com/fpt/#!/kNZeCxO8r/www.livingalignment.com If I test it and put in livingalignment.com, then it takes about 4 seconds: http://tools.pingdom.com/fpt/#!/csgePmsNx/livingalignment.com What should I do to fix this? Thanks.

    Read the article

  • python mechanize.browser submit() related problem

    - by paul
    Hello all, I'm writing a script with the mechanize.browser module. Everything else works, but when I submit() the form it doesn't work, and I found a suspicious part of the page source. In the HTML I found the following:

        <form method="post" onsubmit="return loginCheck(this)" name="FRMLOGIN"/>

    I think loginCheck(this) is causing the problem when the form is submitted. How can I handle this kind of JavaScript function with the mechanize module, so that I can successfully submit the form and receive the result? The following is my current script. If anyone can help me, much appreciated!

        # -*- coding: cp949 -*-
        import sys, os
        import mechanize, urllib
        import cookielib
        from BeautifulSoup import BeautifulSoup, BeautifulStoneSoup, Tag
        import datetime, time, socket
        import re

        br = mechanize.Browser()
        cj = cookielib.LWPCookieJar()
        br.set_cookiejar(cj)

        # Browser options
        br.set_handle_equiv(True)
        br.set_handle_gzip(True)
        br.set_handle_redirect(True)
        br.set_handle_referer(True)
        br.set_handle_robots(False)

        # Follows refresh 0 but does not hang on refresh > 0
        br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)

        # Want debugging messages?
        br.set_debug_http(True)
        br.set_debug_redirects(True)
        br.set_debug_responses(True)

        # User-Agent (this is cheating, ok?)
        br.addheaders = [('User-agent', 'Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.6')]

        br.open('http://user.buddybuddy.co.kr/Login/LoginForm.asp?URL=')
        html = br.response().read()
        print html

        br.select_form(name='FRMLOGIN')
        print br.viewing_html()
        br.form['ID'] = 'zero1zero2'
        br.form['PWD'] = '012045'
        br.submit()
        print br.response().read()
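    For what it's worth, mechanize never executes JavaScript, so whatever loginCheck() does in a real browser has to be reproduced by hand. A minimal sketch of the usual workaround, assuming loginCheck() only validates the fields; if it also fills in hidden controls, those values have to be set manually (the control name 'hiddenField' below is hypothetical):

        br.select_form(name='FRMLOGIN')
        br.form.set_all_readonly(False)   # allow writing to hidden/readonly controls
        br.form['ID'] = 'zero1zero2'
        br.form['PWD'] = '012045'
        # br.form['hiddenField'] = 'whatever loginCheck() would have set'  # hypothetical
        response = br.submit()
        print response.read()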

    Read the article

  • Python and mechanize login script

    - by Perun
    Hi fellow programmers! I am trying to write a script to log into my university's "food balance" page using Python and the mechanize module. This is the page I am trying to log into: http://www.wcu.edu/11407.asp The website has the following form for the login:

        <FORM method=post action=https://itapp.wcu.edu/BanAuthRedirector/Default.aspx>
        <INPUT value=https://cf.wcu.edu/busafrs/catcard/idsearch.cfm type=hidden name=wcuirs_uri>
        <P><B>WCU ID Number<BR></B><INPUT maxLength=12 size=12 type=password name=id></P>
        <P><B>PIN<BR></B><INPUT maxLength=20 type=password name=PIN></P>
        <P><INPUT value="Request Access" type=submit name=submit></P>
        </FORM>

    From this we know that I need to fill in the fields name=id and name=PIN, with the action action=https://itapp.wcu.edu/BanAuthRedirector/Default.aspx. This is the script I have written thus far:

        #!/usr/bin/python2 -W ignore
        import mechanize, cookielib
        from time import sleep

        url = 'http://www.wcu.edu/11407.asp'
        myId = '11111111111'
        myPin = '22222222222'

        # Browser
        #br = mechanize.Browser()
        #br = mechanize.Browser(factory=mechanize.DefaultFactory(i_want_broken_xhtml_support=True))
        br = mechanize.Browser(factory=mechanize.RobustFactory())  # use this because of bad HTML tags in the page

        # Cookie jar
        cj = cookielib.LWPCookieJar()
        br.set_cookiejar(cj)

        # Browser options
        br.set_handle_equiv(True)
        br.set_handle_gzip(True)
        br.set_handle_redirect(True)
        br.set_handle_referer(True)
        br.set_handle_robots(False)

        # Follows refresh 0 but does not hang on refresh > 0
        br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)

        # User-Agent (fake agent: google-chrome on linux x86_64)
        br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11'),
                         ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
                         ('Accept-Encoding', 'gzip,deflate,sdch'),
                         ('Accept-Language', 'en-US,en;q=0.8'),
                         ('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.3')]

        # The site we will navigate into
        br.open(url)

        # Go through all the forms (for debugging only)
        for f in br.forms():
            print f

        # Select the third form (index two)
        br.select_form(nr=2)

        # User credentials
        br.form['id'] = myId
        br.form['PIN'] = myPin
        br.form.action = 'https://itapp.wcu.edu/BanAuthRedirector/Default.aspx'

        # Login
        br.submit()

        # Wait 10 seconds
        sleep(10)

        # Save to a file
        f = file('mycatpage.html', 'w')
        f.write(br.response().read())
        f.close()

    Now the problem... For some odd reason the page I get back (in mycatpage.html) is the login page and not the expected page that displays my "cat cash balance" and "number of block meals" left. Does anyone have any idea why? Keep in mind that the headers are correct, and while the id and PIN are not really 111111111 and 222222222, the correct values do work on the website (using a browser). Thanks in advance.

    EDIT: Another script I tried:

        from urllib import urlopen, urlencode
        import urllib2
        import httplib

        url = 'https://itapp.wcu.edu/BanAuthRedirector/Default.aspx'
        myId = 'xxxxxxxx'
        myPin = 'xxxxxxxx'

        data = {
            'id': myId,
            'PIN': myPin,
            'submit': 'Request Access',
            'wcuirs_uri': 'https://cf.wcu.edu/busafrs/catcard/idsearch.cfm'
        }

        opener = urllib2.build_opener()
        opener.addheaders = [('User-agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11'),
                             ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
                             ('Accept-Encoding', 'gzip,deflate,sdch'),
                             ('Accept-Language', 'en-US,en;q=0.8'),
                             ('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.3')]

        request = urllib2.Request(url, urlencode(data))
        # .read() added here; writing the response object itself would raise a TypeError
        open("mycatpage.html", 'w').write(opener.open(request).read())

    This has the same behavior...
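    One thing sometimes worth trying in this situation (offered only as a hedged sketch, not a confirmed fix): instead of saving the response to the submit directly, let the same Browser instance, which already holds the cookie jar, request the protected page afterwards. The target URL below is taken from the form's hidden wcuirs_uri field; whether that is the right page to fetch is an assumption.

        br.submit()   # log in as before
        # now ask the same browser (same cookie jar) for the protected page
        protected = br.open('https://cf.wcu.edu/busafrs/catcard/idsearch.cfm')  # URL from the hidden wcuirs_uri field
        with open('mycatpage.html', 'w') as out:
            out.write(protected.read())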

    Read the article

  • Removing "www." from domain name and SEO

    - by TecMan
    We are doing a big redesign of our website, and at least 50% of the website folders will be moved to new places with different names (i.e. many URLs will change). Sure, Google will need some time to index the new pages, and we expect our SERP positions to be worse than they are now for a while. We also have an old idea to remove www from our domain name. It seems it's the right time to do these two pieces of work together, publishing the website with the updated contents. Or is it better from an SEO perspective to first publish the new contents, and only after some time, when our SERP positions have returned to their prior results, tell Google that the domain name without www is our preferred domain name?

    Read the article

  • Can python mechanize handle HTTP auth?

    - by Shekhar
    Mechanize (Python) is failing with a 401 when I open HTTP digest auth URLs. I googled and tried debugging, but no success. My code looks like this:

        import mechanize

        project = "test"
        baseurl = "http://trac.somewhere.net"
        loginurl = "%s/%s/login" % (baseurl, project)

        b = mechanize.Browser()
        b.add_password(baseurl, "user", "secret", "some Realm")
        b.open(loginurl)
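    A hedged variation that is sometimes worth trying when add_password appears to be ignored: register the credentials against the exact URL being opened and leave the realm out, so they apply to whatever realm the server actually advertises (if a realm is given, it has to match the server's string exactly). This is a sketch, not a confirmed fix:

        import mechanize

        b = mechanize.Browser()
        # no realm argument: the credentials are offered for any realm at this URI
        b.add_password("http://trac.somewhere.net/test/login", "user", "secret")
        b.open("http://trac.somewhere.net/test/login")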

    Read the article

  • force all urls to www and force domain to non-www

    - by Digital site
    I was trying to force my domain to redirect without www and succeeded with this code in .htaccess:

        RewriteCond %{HTTP_HOST} ^www\.domain\.com [NC]
        RewriteRule ^(.*) http://domain.com/$1 [R=301,L]

    However, this code redirects all www URLs to non-www, which is not what I want. I just want the main domain to go from www.mydomain.com to mydomain.com, and the rest of the URLs should be forced to www. Any idea how to add to or modify the code so I can achieve that through .htaccess? Update: Thanks to all. I found out that the swf file from piecemaker was corrupted and replaced it with a new one, so now everything works fine on both www and non-www. I'm still curious how to solve this issue with .htaccess anyway. Thanks again.
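    As a hedged sketch (reading "the main domain" as requests for the bare front page only, which is an assumption), the two behaviours can be split into two rule pairs, with the empty-path pattern catching only the root URL:

        RewriteEngine On

        # front page only: force non-www
        RewriteCond %{HTTP_HOST} ^www\.mydomain\.com$ [NC]
        RewriteRule ^$ http://mydomain.com/ [R=301,L]

        # every other URL: force www
        RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
        RewriteRule ^(.+)$ http://www.mydomain.com/$1 [R=301,L]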

    Read the article

  • Getting really weird entries in Apache log from Baidu.com?

    - by Undo
    I was looking through my server logs today, and I noticed this (it's all one row; the middle of the request line is an enormous run of the same pattern, condensed here):

        118.244.179.250 - - [16/Oct/2013:20:59:25 +0000] "GET http://www.baidu.comhttphttphttphttphttphttphttphttphttp[...the tokens "http" and "/www.baidu.com" repeat in this pattern for several thousand characters...]/www.baidu.comhttphttphttp/www.baidu.comhttphttp/www.baidu.comhttp/www.baidu.com/ HTTP/1.1" 301 4605 "http://www.baidu.com/" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)"

    "GET http://www.baidu.comhttphttphttphttphttp...? Am I doing something wrong? Am I hosting someone else's website without knowing it? Is a guy named baidu trying to drive me crazy?

    Read the article

  • How to give my user permission to add/edit files on local apache server? [duplicate]

    - by Logan
    Possible duplicate: How to make Apache run as current user. I'm setting up my local test server again, and I seem to have forgotten how to successfully set up the LAMP server. I have installed the LAMP server via the tasksel command and I have configured the /var/www directory according to a guide I found: after the LAMP server installation you will need write permissions to the /var/www directory. Follow these steps to configure the permissions. Add your user to the www-data group:

        sudo usermod -a -G www-data <your user name>

    Now add the /var/www folder to the www-data group:

        sudo chgrp -R www-data /var/www

    Now give write permissions to the www-data group:

        sudo chmod -R g+w /var/www

    So the logan user is now part of the www-data group, and the file/folder permissions look like the output below:

        logan@computer:/var/www$ ls -lart
        total 172
        -rw-r--r--  1 www-data www-data   1997 Oct 23  2010 wp-links-opml.php
        -rw-r--r--  1 www-data www-data   3177 Nov  1  2010 wp-config-sample.php
        -rw-r--r--  1 www-data www-data   3700 Jan  8  2012 wp-trackback.php
        -rw-r--r--  1 www-data www-data    271 Jan  8  2012 wp-blog-header.php
        -rw-r--r--  1 www-data www-data    395 Jan  8  2012 index.php
        -rw-r--r--  1 www-data www-data   3522 Apr 10  2012 wp-comments-post.php
        -rw-r--r--  1 www-data www-data  19929 May  6  2012 license.txt
        -rw-r--r--  1 www-data www-data  18219 Sep 11 08:27 wp-signup.php
        -rw-r--r--  1 www-data www-data   2719 Sep 11 16:11 xmlrpc.php
        -rw-r--r--  1 www-data www-data   2718 Sep 23 12:57 wp-cron.php
        -rw-r--r--  1 www-data www-data   7723 Sep 25 01:26 wp-mail.php
        -rw-r--r--  1 www-data www-data   2408 Oct 26 15:40 wp-load.php
        -rw-r--r--  1 www-data www-data   4663 Nov 17 10:11 wp-activate.php
        -rw-r--r--  1 www-data www-data   9899 Nov 22 04:52 wp-settings.php
        -rw-r--r--  1 www-data www-data   9175 Nov 29 19:57 readme.html
        -rw-r--r--  1 www-data www-data  29310 Nov 30 08:40 wp-login.php
        drwxr-xr-x 14 root     root       4096 Dec 24 17:41 ..
        drwx------  9 www-data www-data   4096 Dec 26 16:11 wp-admin
        drwx------  9 www-data www-data   4096 Dec 26 16:11 wp-includes
        -rw-rw-rw-  1 www-data www-data   3448 Dec 26 16:14 wp-config.php
        drwxrwxr-x  5 www-data www-data   4096 Dec 26 16:14 .
        drwx------  6 www-data www-data   4096 Dec 26 16:19 wp-content

    Things work perfectly at http://localhost; I can view the website fine. The thing is that I will be working on a plugin for WordPress, and I don't want to deal with separate owners under the www directory when creating or modifying files/folders. When I give my user ownership of /var/www recursively as logan:www-data, I can create/modify files but cannot view http://localhost; I get a Forbidden error. I'm assuming this is because of Apache's configuration? Which one is healthier or easier, considering this is just a local test website: configuring Apache so that user logan can view the website, and chowning /var/www to logan:logan so I can create files etc. without any sudo commands; or configuring user groups so the www-data user acts like my logan user? (I don't know how that's possible, maybe putting the www-data user under the logan group?) Please shed some light on this subject. All I want is to be able to create/modify files under my user and yet be able to successfully view http://localhost. I appreciate the help!
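    For what it's worth, a commonly used middle ground (offered as a hedged sketch, assuming Apache keeps running as www-data, the Ubuntu default, and that the WordPress files can be re-owned) is to make your own user the owner, keep www-data as the group, and set the setgid bit on directories so new files inherit that group:

        sudo usermod -a -G www-data logan                     # logan joins the web server's group
        sudo chown -R logan:www-data /var/www                 # logan owns the files, www-data stays as the group
        sudo find /var/www -type d -exec chmod 2775 {} \;     # directories: rwxrwxr-x plus setgid
        sudo find /var/www -type f -exec chmod 664 {} \;      # files: rw-rw-r--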

    Read the article

  • '/var/www/' vs '/home/$USER/public_html/'

    - by OrganizedFellow
    I recently started using Ubuntu as a LAMP server. I've come across plenty of tutorials that say to place the files at '/var/www/' and I've also seen others that put them in '/home/$USER/public_html/'. During my testing and figuring stuff out, I was successfully able to view a test site URL from each location. Is one better than the other? I thought that maybe it was just preference. But the more I think about it, the more I want to keep all my work in my Home folder.

    Read the article

  • Created .htaccess file in /var/www to redirect to folder /var/www/foo

    - by Serg
    Context: How can I configure a NameCheap domain to point to an Apache subfolder? Following Devin's answer there, I've created a .htaccess file in /var/www and wrote in the following:

        RewriteEngine On
        RewriteCond !sergiotapia.me
        RewriteRule (.*) sergiotapia.me/$1 [QSA]

    My folder structure is:

        /var/www/
        /var/www/sergiotapia.me

    When visiting the URL sergiotapia.me I see the contents of /var/www, when I would like to be directed straight to /var/www/sergiotapia.me. Any ideas?
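    As a hedged sketch (assuming the goal is for Apache to serve /var/www/sergiotapia.me transparently, without changing the URL in the browser): a RewriteCond needs both a test string and a pattern, so the rules might look more like this:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?sergiotapia\.me$ [NC]
        # avoid rewriting requests that are already inside the subfolder
        RewriteCond %{REQUEST_URI} !^/sergiotapia\.me/
        RewriteRule ^(.*)$ /sergiotapia.me/$1 [L]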

    Read the article

  • WWW::Mechanize trouble with meta refresh from bank login

    - by J Miller
    I am trying to use Perl's WWW::Mechanize to log into my bank and pull transaction information. After logging into my bank (Wells Fargo) in a browser, it briefly displays a temporary web page saying something along the lines of "please wait while we verify your identity". After a few seconds it proceeds to the bank's webpage where I can get my bank data. The only difference is that the URL contains several more GET parameters appended to the URL of the temporary page, which only had a sessionID parameter. I was able to successfully get WWW::Mechanize to log in from the login page, but it gets stuck on the temporary page. There is a <meta http-equiv="Refresh"... tag in the header, so I tried $mech->follow_meta_redirect, but it didn't get me past that temporary page either. Any help to get past this would be appreciated. Thanks in advance. Here is the barebones code that gets me stuck at the temporary page:

        #!/usr/bin/perl -w
        use strict;
        use WWW::Mechanize;

        my $mech = WWW::Mechanize->new();
        $mech->agent_alias( 'Linux Mozilla' );

        $mech->get( "https://www.wellsfargo.com" );

        $mech->submit_form(
            form_number => 2,
            fields      => {
                userid   => "$userid",
                password => "$password"
            },
            button => "btnSignon"
        );

    Read the article

  • .htaccess - redirect non www to www and keep subdomains from redirecting

    - by RhymeGuy
    So, on my main domain 'domain.com' I created several subdomains from cPanel, like 'sub1.domain.com' and 'sub2.domain.com'. Their real location on the server is 'domain.com/sub1' and 'domain.com/sub2'. Now, I want to redirect non-www to www with .htaccess, and this is what I currently have:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_HOST} !^www\.domain\.com [NC]
        RewriteRule ^(.*) http://www.domain.com/$1 [L,R=301]
        </IfModule>

    This works. When somebody enters domain.com they will be redirected to www.domain.com. However, when somebody enters sub1.domain.com, they will be redirected to www.domain.com/sub1 - which I don't want; it needs to stay at sub1.domain.com. What shall I add to the .htaccess file to accomplish this?
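    One hedged way to do that (a sketch, assuming the only host that should ever be rewritten is the bare domain itself) is to match the bare host exactly instead of matching "anything that isn't www", so the subdomains never satisfy the condition:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        # only the bare domain is redirected; sub1.domain.com, sub2.domain.com etc. fall through untouched
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*) http://www.domain.com/$1 [L,R=301]
        </IfModule>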

    Read the article

  • How can I handle UTF-8 while posting to a vBulletin board with WWW::Mechanize?

    - by MrMirror
    I have a problem with some automated posting to a bulletin board... If I send the posting form to the vBulletin board, I get corrupted entities. Feel free to copy-paste the script and try it... It looks like the board expects some decoded UTF-8, but if I send the message decoded, the entities are still wrong.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use WWW::Mechanize;
        use Digest::MD5 qw(md5_hex);

        my $mech = WWW::Mechanize->new();
        my $base_url = 'http://www.boerse.bz/';
        my $username = 'MrMirror';
        my $password = 'test';

        $mech->get($base_url);

        print "Login\n";
        $mech->form_number(1);
        $mech->field('vb_login_username' => $username);
        $mech->field('vb_login_password' => $password);
        $mech->field('vb_login_md5password' => md5_hex($password));
        $mech->field('vb_login_md5password_utf' => md5_hex($password));
        $mech->submit();

        unless ($mech->content() =~ m!Weiterleitung!gi) {
            print "No Rediction!\n";
            exit;
        }

        print "Redict\n";
        $mech->get($base_url);

        unless ($mech->content() =~ m!Logout!gi) {
            print "Login Failed!\n";
            exit;
        }

        $mech->get($base_url . '/newthread.php?do=newthread&f=173');
        $mech->form_number(3);
        $mech->field('subject' => 'MrMirror makes some testing ä ö ü ß');
        $mech->field('message' => "ä ö ü ß");

        ### everything allright here
        $mech->dump_forms();

        ### preview submit, don't wanna spam around ;)
        $mech->click('preview');

        print "\n\n\n---------------------------------------------------------------------\n\n\n";

        ### same form, wrong entities :(
        $mech->dump_forms();

    Read the article

  • How to set the mechanize page encoding?

    - by Juan Medín
    Hi, I'm trying to get a page with ISO-8859-1 encoding by clicking on a link, so the code is similar to this:

        page_result = page.link_with( :text => 'link_text' ).click

    So far I get the result with the wrong encoding, so I see characters like 'T?tulo:' instead of 'Título:'. I've tried several approaches, including stating the encoding in the first request using the agent:

        @page_search = @agent.get(
          :url => 'http://www.server.com',
          :headers => { 'Accept-Charset' => 'ISO-8859-1' }
        )

    and stating the encoding for the page itself:

        page_result.encoding = 'ISO-8859-1'

    But I must be doing something wrong: a simple puts always shows the wrong characters. Do you know how to set the encoding? Thanks in advance. Added: executable example:

        require 'rubygems'
        require 'mechanize'

        WWW::Mechanize::Util::CODE_DIC[:SJIS] = "ISO-8859-1"

        @agent = WWW::Mechanize.new
        @page = @agent.get(
          :url => 'http://www.mcu.es/webISBN/tituloSimpleFilter.do?cache=init&layout=busquedaisbn&language=es',
          :headers => { 'Accept-Charset' => 'utf-8' }
        )
        puts @page.body

    Read the article

  • Mechanize Submit Form Error: Insufficient items with name '10427'

    - by maneh
    I'm trying to submit a form with Mechanize. I have tried different ways, but the problem persists. Can anyone help me with this? Thank you in advance! This is the form I want to submit: http://www.stpairways.st/ This is the code that I'm using:

        def stp_airways(url):
            import re
            import mechanize
            br = mechanize.Browser()
            br.open(url)
            print br.title()
            br.select_form(name = "frmbook")
            br.form['TypeTrajet'] = ["1"]
            br.form['id_depart'] = ["11967"]
            br.form['id_arrivee'] = ["10427"]
            br.form['txtDateAller'] = "5/7/2014"
            br.form['txtDateRetour'] = "12/7/2014"
            br.form['TypePassager1u1000r0b1'] = ["1"]
            br.form['TypePassager2u1000r0b1'] = ["0"]
            br.form['TypePassager3u1000r0b1'] = ["0"]
            br.form['CodeIsoDeviseClient'] = ["17,20,23,24,25,26,27,28,29,30,31,33,34,36,37,64,65,67,68,70,73,80,81,95,96,103,147,151,152,159,160,162,169,170TP1TPF"]
            br.form['CodeIsoDeviseClient'] = ["EUR"]
            # submit
            response1 = br.submit()
            print response1.read()
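    For what it's worth, that particular Mechanize error usually means the select/radio control has no item whose value matches the string being assigned. A hedged debugging sketch, assuming the form name stays "frmbook", that prints the values each list control actually accepts:

        br.select_form(name="frmbook")
        for control in br.form.controls:
            if control.type in ("select", "radio", "checkbox"):
                # item.name is the submittable value of each option
                print control.name, [item.name for item in control.items]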

    Read the article

  • Server unreachable without www

    - by deamon
    My server is unreachable without the "www." prefix, even when trying it with ping. The DNS zone looks like this:

        $TTL 86400
        @         IN SOA    ns1.first-ns.de. postmaster.robot.first-ns.de. (
                            2011010600   ; serial
                            14400        ; refresh
                            1800         ; retry
                            604800       ; expire
                            86400 )      ; minimum
        @         IN NS     robotns3.second-ns.com.
        @         IN NS     robotns2.second-ns.de.
        @         IN NS     ns1.first-ns.de.
        @         IN A      1.2.3.4
        localhost IN A      127.0.0.1
        mail      IN A      1.2.3.4
        www       IN A      1.2.3.4
        ftp       IN CNAME  www
        imap      IN CNAME  www
        loopback  IN CNAME  localhost
        pop       IN CNAME  www
        relay     IN CNAME  www
        smtp      IN CNAME  www

    A DNS record of the same type for another domain on the same server is working with and without "www". And the VirtualHost config looks like this:

        <VirtualHost *:80>
            ServerName somewhere.com
            ServerAlias www.somewhere.com
            ServerSignature Off
            ...
        </VirtualHost>

    Any idea what could be wrong?
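    One hedged first step (a sketch, assuming ns1.first-ns.de really is authoritative for this zone) is to query the authoritative name server directly for both names and compare the answers, which separates a DNS problem from a web server problem:

        dig +short somewhere.com @ns1.first-ns.de
        dig +short www.somewhere.com @ns1.first-ns.de
        # if both return 1.2.3.4, DNS is fine and the problem lies elsewhere;
        # if the first returns nothing, the served zone differs from the file shown above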

    Read the article

  • WWW::Mechanize for Objective C / iPhone?

    - by dan
    Hi, I want to port a Python app that uses mechanize to the iPhone. This app needs to log into a webpage and then use the site cookie to go to other pages on that site to get some data. With my Python app I was using mechanize for automatic cookie management. Is there something similar for Objective-C that is portable to the iPhone? Thanks for any help.

    Read the article

  • Mechanize on HTTPS site.

    - by Grzegorz Kazulak
    Has anyone here used Ruby's Mechanize library on a site that requires SSL? The problem I'm experiencing at the minute is that when I try to access such a website, Mechanize tries to use the standard http protocol, which results in endless redirections between http:// and https://.

    Read the article

  • Mechanize complex form input name

    - by ADAM
    Hi there, I am trying to access a form in Mechanize with ugly characters in the object name, similar to this:

        agent = Mechanize.new
        page = agent.get('http://domain.com')
        form = page.forms[0]
        form.ct600$Main$LastNameTextBox = "whatever"
        page = agent.submit(form)

    The problem is that the $ in the HTML name is messing with Ruby's method syntax. Is there another method I could use, i.e. something like:

        form.element_by_name("ct600$Main$LastNameTextBox") = "whatever"

    Unfortunately I can't change the HTML.

    Read the article

  • Ruby Mechanize - Basic Get Failing

    - by hutch
        a = WWW::Mechanize.new { |agent|
          agent.user_agent_alias = 'Mac Safari'
          agent.history.max_size = 0
        }
        page = a.get('http://livingsocial.com/deals?preferred_city=18')

    Trying a very basic GET request using Mechanize, but I get a 500, yet when I curl the same URL I have no problems. Is there a problem with including parameters in a get() call? I know I am missing something simple.

    Read the article

  • Issue with www to non www redirect

    - by bob
    Hello, I am on Slicehost and I followed the articles they provide for DNS redirection, and the www to non-www URL redirection does work. However, what if I want www.domain.com to be the default domain? Would I put www.domain.com. as my DNS record name, or would I keep domain.com. as my DNS record and then do something else? Basically, what happens now is that if someone goes to the URL www.domain.com/directory/something.html they are redirected to domain.com and not to domain.com/directory/something.html. I would like the second thing to happen, not just go to domain.com and call it a day. I am running nginx and am confounded about how to solve this issue. I'm not sure whether it's an nginx issue or a DNS issue. Any help would be greatly appreciated!
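    This part is usually an nginx matter rather than a DNS one: DNS only maps both names to the server, while nginx decides where a request gets redirected. A hedged sketch of a redirect that keeps the path and query string, choosing www as the canonical form as described above (server names are placeholders for the real domain):

        server {
            listen 80;
            server_name domain.com;
            # $request_uri carries the full path and query string through the redirect
            return 301 http://www.domain.com$request_uri;
        }

        server {
            listen 80;
            server_name www.domain.com;
            # ... normal site configuration ...
        }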

    Read the article

  • Dynamic Virtual Hosts In Apache with www and non-www subdomains

    - by haukish
    I don't know Apache very well and I've got a problem configuring mod_vhost_alias. This is my httpd.conf file:

        UseCanonicalName Off
        LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon

        <Directory /var/www/sites/>
            Options FollowSymLinks
            AllowOverride All
        </Directory>

        <VirtualHost *:80>
            CustomLog logs/access_log.sites vcommon
            ServerAlias *.domain.com
            UseCanonicalName Off
            VirtualDocumentRoot /var/www/sites/%1/
        </VirtualHost>

    Subdomains work fine without www. but I need to make them work with www too. Here's an example: something.domain.com - site is loading; www.something.domain.com - Not Found. What should I do?
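    The Not Found happens because %1 is the first dotted part of the host name, which for www.something.domain.com is "www" rather than "something". One hedged way around it (a sketch, assuming mod_rewrite is enabled and a redirect to the non-www form is acceptable) is to strip the leading www. with a redirect before mod_vhost_alias resolves the document root:

        <VirtualHost *:80>
            CustomLog logs/access_log.sites vcommon
            ServerAlias *.domain.com
            UseCanonicalName Off

            # send www.something.domain.com to something.domain.com first
            RewriteEngine On
            RewriteCond %{HTTP_HOST} ^www\.([^.]+)\.domain\.com$ [NC]
            RewriteRule ^(.*)$ http://%1.domain.com$1 [R=301,L]

            VirtualDocumentRoot /var/www/sites/%1/
        </VirtualHost>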

    Read the article

  • chrooted sftp user with write permissions to /var/www

    - by matthew
    I am getting confused about this setup that I am trying to deploy. I hope someone can lend me a hand; much appreciated. Background info: the server is Debian 6.0, ext3, with Apache2/SSL and Nginx at the front as a reverse proxy. I need to provide sftp access to the Apache root directory (/var/www), making sure that the sftp user is chrooted to that path with RWX permissions. All this without modifying any default permissions in /var/www.

        drwxr-xr-x 9 root root 4096 Nov 4 22:46 www

    Inside /var/www:

        -rw-r----- 1 www-data www-data     177 Mar 11  2012 file1
        drwxr-x--- 6 www-data www-data    4096 Sep 10  2012 dir1
        drwxr-xr-x 7 www-data www-data    4096 Sep 28  2012 dir2
        -rw------- 1 root     root          19 Apr  6  2012 file2
        -rw------- 1 root     root     3548528 Sep 28  2012 file3
        drwxr-x--- 6 www-data www-data    4096 Aug 22 00:11 dir3
        drwxr-x--- 5 www-data www-data    4096 Jul 15  2012 dir4
        drwxr-x--- 2 www-data www-data  536576 Nov 24  2012 dir5
        drwxr-x--- 2 www-data www-data    4096 Nov  5 00:00 dir6
        drwxr-x--- 2 www-data www-data    4096 Nov  4 13:24 dir7

    What I have tried: created a new group secureftp; created a new sftp user joined to the secureftp and www-data groups, with a nologin shell and homedir /; edited sshd_config with:

        Subsystem sftp internal-sftp
        AllowTcpForwarding no
        Match Group <secureftp>
            ChrootDirectory /var/www
            ForceCommand internal-sftp

    I can log in with the sftp user and list files, but no write action is allowed. The sftp user is in the www-data group, but permissions in /var/www are read / read+x for the group bit, so... it doesn't work. I've also tried ACLs, but as I apply ACL RWX permissions for the sftp user to /var/www (dirs and files recursively), it changes the unix permissions as well, which is what I don't want. What can I do here? I was thinking I could enable the user www-data to log in via sftp, so that it would be able to modify the files/dirs that www-data owns in /var/www. But for some reason I think this would be a stupid move security-wise.

    Read the article

  • www-data can upload a file but can't move it after the upload action

    - by user70058
    I am currently running Apache and PHP on Ubuntu. I have a page where a user is supposed to upload a profile image. The action on the backend is supposed to work like this: upload the file to the user directory -- WORKS! -- then refer to the uploaded file and create a thumbnail in the directory thumbs -- DOES NOT WORK. www-data has write access to the directory thumbs. My guess is that www-data for some reason does not have proper access to the file that was uploaded.

    Uploaded file permissions:

        -rw-r--r-- 1 www-data www-data 47057 Feb 8 23:24 0181c6e0973eb19cb0d98521a6fe1d9e71cd6daa.jpg

    Thumbs directory permissions:

        drwxr-sr-x 2 www-data www-data 4096 Feb 8 23:23 thumbs

    I'm at a loss here. I'm new to Ubuntu as well. Any help would be greatly appreciated!

    Read the article
