Search Results

Search found 382 results on 16 pages for 'leachianus gecko'.

Page 14/16 | < Previous Page | 10 11 12 13 14 15 16  | Next Page >

  • PHP- Curl Download Bandwidth Limiting

    - by snikolov
    Hey guys, I have been trying to limit the bandwidth with PHP. Can you please help here? I can't get the download rate to be limited with PHP. Thanks a million! function total_filesize($url) { $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "$url"); curl_setopt($ch, CURLINFO_SPEED_DOWNLOAD,12); //ITS NOT WORKING! curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.11) ". "Gecko/20071127 Firefox/2.0.0.11"); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE); curl_setopt($ch, CURLOPT_HEADER, false); curl_setopt($ch, CURLOPT_NOBODY, true); $chStore = curl_exec($ch); $chError = curl_error($ch); $chInfo = curl_getinfo($ch); curl_close($ch); return $size = $chInfo['download_content_length']; } function __define_url($url) { $basename = basename($url); Define('filename',$basename); $define_file_size = total_filesize($url); Define('filesizes',$define_file_size); } function _download_file($url_file) { __define_url($url_file); // $range = "50000-60000"; $filesize = filesizes; $file = filename; header('Content-Type: application/octet-stream'); header('Content-Disposition: attachment; filename="'.$file.'"'); header('Content-Transfer-Encoding: binary'); header("Content-Length: $filesize"); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL,"$url_file"); curl_setopt($ch, CURLOPT_BINARYTRANSFER,1); // curl_setopt($ch, CURLOPT_RANGE,$range); curl_exec($ch); curl_close($ch); } _download_file('http://rarlabs.com/rar/wrar393.exe');
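
    A side note on the snippet above: CURLINFO_SPEED_DOWNLOAD is a read-only statistic meant for curl_getinfo(), so passing it to curl_setopt() has no effect. The libcurl option for throttling a transfer is CURLOPT_MAX_RECV_SPEED_LARGE (available since libcurl 7.15.5; newer PHP versions expose the same constant to curl_setopt()). A minimal C++ sketch of the underlying libcurl call, using a placeholder URL and an assumed cap of 12 KB/s:

        #include <curl/curl.h>

        int main() {
            curl_global_init(CURL_GLOBAL_ALL);
            CURL *curl = curl_easy_init();
            if (curl) {
                curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/file.bin");
                // Throttle the download; the value is in bytes per second.
                curl_off_t max_speed = 12 * 1024;
                curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, max_speed);
                curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
                curl_easy_perform(curl);
                curl_easy_cleanup(curl);
            }
            curl_global_cleanup();
            return 0;
        }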

    Read the article

  • How to download file into string with progress callback?

    - by Kaminari
    I would like to use WebClient (or is there another, better option?) but there is a problem. I understand that opening the stream takes some time and this cannot be avoided. However, reading it takes strangely much longer than reading it all in one go. Is there a best way to do this? I mean both targets: to a string and to a file. Progress is my own delegate and it's working well. FIFTH UPDATE: Finally, I managed to do it. In the meantime I checked out some solutions, which made me realize that the problem lay elsewhere. I've tested custom WebResponse and WebRequest objects, the libCURL.NET library and even Sockets. The difference in time was gzip compression. The compressed stream length was simply half the normal stream length, and thus the download time was less than 3 seconds with the browser. I'm posting some code in case someone wants to know how I solved this (some headers are not needed): public static string DownloadString(string URL) { WebClient client = new WebClient(); client.Headers["User-Agent"] = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5"; client.Headers["Accept"] = "application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"; client.Headers["Accept-Encoding"] = "gzip,deflate,sdch"; client.Headers["Accept-Charset"] = "ISO-8859-2,utf-8;q=0.7,*;q=0.3"; Stream inputStream = client.OpenRead(new Uri(URL)); MemoryStream memoryStream = new MemoryStream(); const int size = 32 * 4096; byte[] buffer = new byte[size]; if (client.ResponseHeaders["Content-Encoding"] == "gzip") { inputStream = new GZipStream(inputStream, CompressionMode.Decompress); } int count = 0; do { count = inputStream.Read(buffer, 0, size); if (count > 0) { memoryStream.Write(buffer, 0, count); } } while (count > 0); string result = Encoding.Default.GetString(memoryStream.ToArray()); memoryStream.Close(); inputStream.Close(); return result; } I think the async functions will be almost the same, but I will simply use another thread to fire this function. I don't need precise progress indication.

    Read the article

  • [c++] upload image to imageshack

    - by cinek1lol
    Hi. I would like to send pictures via a program written in C++. I wrote such a thing using curl.exe: WinExec("C:\\curl\\curl.exe -H Expect: -F \"fileupload=@C:\\curl\\ok.jpg\" -F \"xml=yes\" -# \"http://www.imageshack.us/index.php\" -o data.txt -A \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1\" -e \"http://www.imageshack.us\"", NULL); This works, but I would like to send the picture from a pre-loaded char variable (you know what I mean? first read the picture into a variable and then send that variable), because right now I have to specify the path to the image on disk. I wanted this program to be written in C++ using the curl library, and not the exe. I found such a program (which I have modified somewhat): #include <stdio.h> #include <string.h> #include <iostream> #include <curl/curl.h> #include <curl/types.h> #include <curl/easy.h> int main(int argc, char *argv[]) { CURL *curl; CURLcode res; struct curl_httppost *formpost=NULL; struct curl_httppost *lastptr=NULL; struct curl_slist *headerlist=NULL; static const char buf[] = "Expect:"; curl_global_init(CURL_GLOBAL_ALL); /* Fill in the file upload field */ curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "send", CURLFORM_FILE, "nowy.jpg", CURLFORM_END); curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "nowy.jpg", CURLFORM_COPYCONTENTS, "nowy.jpg", CURLFORM_END); curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "submit", CURLFORM_COPYCONTENTS, "send", CURLFORM_END); curl = curl_easy_init(); headerlist = curl_slist_append(headerlist, buf); if(curl) { curl_easy_setopt(curl, CURLOPT_URL, "http://www.imageshack.us/index.php"); if ( (argc == 2) && (!strcmp(argv[1], "xml=yes")) ) curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist); curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost); res = curl_easy_perform(curl); curl_easy_cleanup(curl); curl_formfree(formpost); curl_slist_free_all (headerlist); } system("pause"); return 0; } I will be grateful for any help.

    Read the article

  • Ajax gets nothing back from the php.

    - by ShaMun
    In jQuery I don't get the alert, and in Firefox I don't get anything back in the response. The code was working before, and the database query successfully returns records as well. What am I missing? The jQuery AJAX call: $.ajax({ type : "POST", url : "include/add_edit_del.php?model=teksten_display", data : "oper=search&ids=" + _id , dataType: "json", success : function(msg){ alert(msg); } }); The PHP: case 'teksten_display': $id = $_REQUEST['ids']; $res = $_dclass->_query_sql( "select a,b,id,wat,c,d from tb1 where id='" . $id . "'" ); $_rows = array(); while ( $rows = mysql_fetch_array ($res) ) { $_rows = $rows; } //header('Cache-Control: no-cache, must-revalidate'); //header('Expires: Mon, 26 Jul 1997 05:00:00 GMT'); header('Content-type: application/json'); echo utf8_encode( json_encode($_rows) ) ; //echo json_encode($_rows); //var_dump($_rows); //print_r ($res); break; The Firefox response/request headers: Date Sat, 24 Apr 2010 22:34:55 GMT Server Apache/2.2.3 (CentOS) X-Powered-By PHP/5.1.6 Expires Thu, 19 Nov 1981 08:52:00 GMT Cache-Control no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma no-cache Content-Length 0 Connection close Content-Type application/json Host www.xxxx.be User-Agent Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-2.fc12 Firefox/3.5.9 Accept application/json, text/javascript, */* Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 300 Connection keep-alive Content-Type application/x-www-form-urlencoded; charset=UTF-8 X-Requested-With XMLHttpRequest Referer http://www.xxxx.be/xxxxx Content-Length 17 Cookie csdb=2; codb=5; csdbb=1; codca=1.4; csdca=3; PHPSESSID=benunvkpecqh3pmd8oep5b55t7; CAKEPHP=3t7hrlc89emvg1hfsc45gs2bl2

    Read the article

  • NSUrlconnection problem receiving data from some filehosts

    - by Tammo
    Hello again. I am trying to develop a download manager. I can now download files from almost anywhere when a link is clicked. In - (BOOL)webView:(UIWebView*)webView shouldStartLoadWithRequest:(NSURLRequest*)request navigationType:(UIWebViewNavigationType)navigationType I check whether the URL points to a binary file such as a zip file, and then I set up an NSURLConnection: NSMutableURLRequest *urlRequest = [NSMutableURLRequest requestWithURL:url cachePolicy:NSURLRequestReloadIgnoringLocalCacheData timeoutInterval:20.0]; [urlRequest setValue:@"User-Agent" forHTTPHeaderField:@"Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en) AppleWebKit/418.9 (KHTML, like Gecko) Safari/419.3"]; NSURLConnection *mainConnection = [NSURLConnection connectionWithRequest:urlRequest delegate:self]; if (nil == mainConnection) { NSLog(@"Could not create the NSURLConnection object"); } - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response { self.tabBarController.selectedIndex=1; [receivedData setLength:0]; percent = 0; localFilename = [[[url2 absoluteString] lastPathComponent] copy]; NSLog(localFilename); NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0] ; NSString *appFile = [documentsDirectory stringByAppendingPathComponent:localFilename]; [[NSFileManager defaultManager] createFileAtPath:appFile contents:nil attributes:nil]; [downloadname setHidden:NO]; [downloadname setText:localFilename]; expectedBytes = [response expectedContentLength]; exp = [response expectedContentLength]; NSLog(@"content-length: %lli Bytes", expectedBytes); file = [[NSFileHandle fileHandleForUpdatingAtPath:appFile] retain]; if (file) { [file seekToEndOfFile]; } } - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { if (file) { [file seekToEndOfFile]; } [file writeData:data]; [receivedData appendData:data]; long long resourceLength = [receivedData length]; float res = [receivedData length]; percent = res/exp; [progress setHidden:NO]; [progress setProgress:percent]; NSLog(@"Remaining: %lli KB", (expectedBytes-resourceLength)/1024); [kbleft setHidden:NO]; [kbleft setText:[NSString stringWithFormat:@"%lli / %lli KB", expectedBytes/1024 ,(resourceLength)/1024]]; } In connectionDidFinishLoading: I close the file. Everything works fine for nearly every hoster, except hosters which have a captcha step first, such as filedude.com. In the UIWebView I can browse to the download page, enter the captcha and get the download link. When I click on it, the file is created in the Documents directory with the right filename and the download starts, but it never receives any data. Every file ends up 0 KB, and the NSLog(@"content-length: %lli Bytes", expectedBytes); prints something like 100-400 bytes. Can somebody help me solve this problem? Kind regards

    Read the article

  • Sending a file from memory (rather than disk) over HTTP using libcurl

    - by cinek1lol
    Hi! I would like to send pictures via a program written in C++. This works: WinExec("C:\\curl\\curl.exe -H Expect: -F \"fileupload=@C:\\curl\\ok.jpg\" -F \"xml=yes\" -# \"http://www.imageshack.us/index.php\" -o data.txt -A \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1\" -e \"http://www.imageshack.us\"", NULL); However, I would like to send the picture from a pre-loaded char buffer (you know what I mean? First I load the picture into a variable, and then I send the variable), because right now I have to specify the path of the picture on disk. I wanted to write this program in C++ using the curl library, not by calling the .exe. I have also found such a program (which I have modified a bit): #include <stdio.h> #include <string.h> #include <iostream> #include <curl/curl.h> #include <curl/types.h> #include <curl/easy.h> int main(int argc, char *argv[]) { CURL *curl; CURLcode res; struct curl_httppost *formpost=NULL; struct curl_httppost *lastptr=NULL; struct curl_slist *headerlist=NULL; static const char buf[] = "Expect:"; curl_global_init(CURL_GLOBAL_ALL); /* Fill in the file upload field */ curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "send", CURLFORM_FILE, "nowy.jpg", CURLFORM_END); curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "nowy.jpg", CURLFORM_COPYCONTENTS, "nowy.jpg", CURLFORM_END); curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "submit", CURLFORM_COPYCONTENTS, "send", CURLFORM_END); curl = curl_easy_init(); headerlist = curl_slist_append(headerlist, buf); if(curl) { curl_easy_setopt(curl, CURLOPT_URL, "http://www.imageshack.us/index.php"); if ( (argc == 2) && (!strcmp(argv[1], "xml=yes")) ) curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist); curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost); res = curl_easy_perform(curl); curl_easy_cleanup(curl); curl_formfree(formpost); curl_slist_free_all (headerlist); } system("pause"); return 0; }
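
    One documented way to post data that is already in memory (rather than a file on disk) is libcurl's CURLFORM_BUFFER / CURLFORM_BUFFERPTR options to curl_formadd(). Below is a rough C++ sketch under the assumption that the JPEG bytes have already been loaded; load_image_bytes() is a hypothetical helper, and the field name "fileupload" simply mirrors the curl.exe command quoted above:

        #include <string>
        #include <curl/curl.h>

        // Hypothetical helper: however the image ends up in memory (read from
        // disk, generated, downloaded), its raw bytes go into this string.
        std::string load_image_bytes();

        int main() {
            curl_global_init(CURL_GLOBAL_ALL);
            std::string image = load_image_bytes();

            struct curl_httppost *formpost = NULL;
            struct curl_httppost *lastptr = NULL;
            struct curl_slist *headerlist = curl_slist_append(NULL, "Expect:");

            // CURLFORM_BUFFER sets the file name reported to the server;
            // CURLFORM_BUFFERPTR/BUFFERLENGTH point at the in-memory data,
            // which must stay valid until curl_easy_perform() returns.
            curl_formadd(&formpost, &lastptr,
                         CURLFORM_COPYNAME, "fileupload",
                         CURLFORM_BUFFER, "ok.jpg",
                         CURLFORM_BUFFERPTR, image.data(),
                         CURLFORM_BUFFERLENGTH, (long)image.size(),
                         CURLFORM_END);
            curl_formadd(&formpost, &lastptr,
                         CURLFORM_COPYNAME, "xml",
                         CURLFORM_COPYCONTENTS, "yes",
                         CURLFORM_END);

            CURL *curl = curl_easy_init();
            if (curl) {
                curl_easy_setopt(curl, CURLOPT_URL, "http://www.imageshack.us/index.php");
                curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist);
                curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost);
                curl_easy_perform(curl);
                curl_easy_cleanup(curl);
            }
            curl_formfree(formpost);
            curl_slist_free_all(headerlist);
            curl_global_cleanup();
            return 0;
        }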

    Read the article

  • DNS lookup failures while accessing my website some proxy error

    - by Bond
    Here is a situation until today morning,every thing has been working perfectly fine with me. From past 6 months many of my domains wer accessible as http://site1.myserver.com http://site2.myserver.com http://site3.myserver.com http://site4.myserver.com All these were Reverse Proxy configurations. I have some applications on each of them. until today morning some people reported me that http://site1.myserver.com/app1 is not working but http://site1.myserver.com is accessible but http://site2.myserver.com is accessible but http://site3.myserver.com is accessible but http://site4.myserver.com not accessible In past 6 months I have not changed any of these Apache configurations (things were working perfectly so) The error which can be seen in browser are while accessing http://site1.myserver.com/app1 Proxy Error The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET /app1. Reason: DNS lookup failure for: myserver.com and same is the error for http://site4.myserver.com So what should I check in I have checked all the apache logs to an extent which I could see and 192.168.1.25 - - [10/Jan/2011:14:50:48 +0530] "GET /app1 HTTP/1.1" 502 531 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3" Mon Jan 10 14:27:42 2011] [error] (113)No route to host: proxy: HTTP: attempt to connect to 192.168.1.3:80 (192.168.1.3) failed [Mon Jan 10 14:27:42 2011] [error] ap_proxy_connect_backend disabling worker for (192.168.1.3) [Mon Jan 10 14:27:44 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:44 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:44 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:45 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:45 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:45 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:45 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:46 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:47 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:48 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:48 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:48 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:35:29 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1 [Mon Jan 10 14:35:30 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1 [Mon Jan 10 14:35:30 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1 [Mon Jan 10 14:50:30 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1 [Mon Jan 10 14:50:48 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1 and for site4.myserver.com I get [Mon Jan 10 14:57:40 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico [Mon Jan 10 14:57:40 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico [Mon Jan 10 14:57:43 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: 
site4.myserver.com returned by /favicon.ico [Mon Jan 10 15:02:38 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by / [Mon Jan 10 15:03:04 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/ [Mon Jan 10 15:03:04 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico [Mon Jan 10 15:03:08 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/ [Mon Jan 10 15:03:08 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico [Mon Jan 10 15:03:10 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/ [Mon Jan 10 15:06:21 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by / [Mon Jan 10 15:06:31 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/ [Mon Jan 10 15:26:03 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /
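
    Given the "(113)No route to host ... 192.168.1.3:80" and "DNS lookup failure for: myserver.com" entries, two things worth checking are whether the ProxyPass targets still resolve from the proxy box (for example with "host myserver.com" run on the server) and whether the resolver listed in /etc/resolv.conf is reachable. As a hedged workaround sketch only (the backend address 192.168.1.3 is taken from the log above and may not be the right host for every site), pointing the reverse proxy at the backend IP, or adding the name to /etc/hosts, takes the failing DNS lookup out of the request path:

        # Sketch only: substitute the real path and backend for each site.
        ProxyPass        /app1 http://192.168.1.3/app1
        ProxyPassReverse /app1 http://192.168.1.3/app1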

    Read the article

  • mono 3.0.2 + xsp + lighttpd delivers empty page

    - by Nefal Warnets
    I needed MVC 4 (and basic .NET 4.5) support so I downloaded mono 3.0.2 and deployed it on an lighttpd 1.4.28 installation, together with xsp-2.10.2 (was the latest I could find). After going through the config tutorials I managed to get the fastcgi server to spawn, but all pages are served empty. even if I go to nonexistant urls or direct .aspx files I get an empty HTTP 200 response. The log file on Debug shows nothing suspicious. Here is the log: [2012-12-12 15:15:38Z] Debug Accepting an incoming connection. [2012-12-12 15:15:38Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8) [2012-12-12 15:15:38Z] Debug Record received. (Type: Params, ID: 1, Length: 801) [2012-12-12 15:15:38Z] Debug Record received. (Type: Params, ID: 1, Length: 0) [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_SOFTWARE = lighttpd/1.4.28) [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_NAME = xxxx) [2012-12-12 15:15:38Z] Debug Read parameter. (GATEWAY_INTERFACE = CGI/1.1) [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_PORT = 80) [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_ADDR = xxxx) [2012-12-12 15:15:38Z] Debug Read parameter. (REMOTE_PORT = xxx) [2012-12-12 15:15:38Z] Debug Read parameter. (REMOTE_ADDR = xxxx) [2012-12-12 15:15:38Z] Debug Read parameter. (SCRIPT_NAME = /ViewPage1.aspx) [2012-12-12 15:15:38Z] Debug Read parameter. (PATH_INFO = ) [2012-12-12 15:15:38Z] Debug Read parameter. (SCRIPT_FILENAME = /data/htdocs/ViewPage1.aspx) [2012-12-12 15:15:38Z] Debug Read parameter. (DOCUMENT_ROOT = /data/htdocs) [2012-12-12 15:15:38Z] Debug Read parameter. (REQUEST_URI = /ViewPage1.aspx) [2012-12-12 15:15:38Z] Debug Read parameter. (QUERY_STRING = ) [2012-12-12 15:15:38Z] Debug Read parameter. (REQUEST_METHOD = GET) [2012-12-12 15:15:38Z] Debug Read parameter. (REDIRECT_STATUS = 200) [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_PROTOCOL = HTTP/1.1) [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_HOST = xxxxx) [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive) [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_CACHE_CONTROL = max-age=0) [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11) [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8) [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip,deflate,sdch) [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-US,en;q=0.8) [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.3) [2012-12-12 15:15:38Z] Debug Record received. (Type: StandardInput, ID: 1, Length: 0) [2012-12-12 15:15:38Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8) lighttpd config: server.modules += ( "mod_fastcgi" ) include "conf.d/mono.conf" $HTTP["host"] !~ "^vdn\." 
{ $HTTP["url"] !~ "\.(jpg|gif|png|js|css|swf|ico|jpeg|mp4|flv|zip|7z|rar|psd|pdf|html|htm)$" { fastcgi.server += ( "" => (( "socket" => mono_shared_dir + "fastcgi-mono-server", "bin-path" => mono_fastcgi_server, "bin-environment" => ( "PATH" => mono_dir + "bin:/bin:/usr/bin:", "LD_LIBRARY_PATH" => mono_dir + "lib:", "MONO_SHARED_DIR" => mono_shared_dir, "MONO_FCGI_LOGLEVELS" => "Debug", "MONO_FCGI_LOGFILE" => mono_shared_dir + "fastcgi.log", "MONO_FCGI_ROOT" => mono_fcgi_root, "MONO_FCGI_APPLICATIONS" => mono_fcgi_applications ), "max-procs" => 1, "check-local" => "disable" )) ) } } the referenced mono.conf index-file.names += ( "index.aspx", "default.aspx" ) var.mono_dir = "/usr/" var.mono_shared_dir = "/tmp/" var.mono_fastcgi_server = mono_dir + "bin/" + "fastcgi-mono-server4" var.mono_fcgi_root = server.document-root var.mono_fcgi_applications = "/:." The document root for this server is /data/htdocs. The asp.net files reside there. lighttpd error logs show nothing. Every help is greatly appreciated!

    Read the article

  • Ubuntu 9.10 and Squid 2.7 Transparent Proxy TCP_DENIED

    - by user38400
    Hi, We've spent the last two days trying to get squid 2.7 to work with ubuntu 9.10. The computer running ubuntu has two network interfaces: eth0 and eth1 with dhcp running on eth1. Both interfaces have static ip's, eth0 is connected to the Internet and eth1 is connected to our LAN. We have followed literally dozens of different tutorials with no success. The tutorial here was the last one we did that actually got us some sort of results: http://www.basicconfig.com/linuxnetwork/setup_ubuntu_squid_proxy_server_beginner_guide. When we try to access a site like seriouswheels.com from the LAN we get the following message on the client machine: ERROR The requested URL could not be retrieved Invalid Request error was encountered while trying to process the request: GET / HTTP/1.1 Host: www.seriouswheels.com Connection: keep-alive User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.11 Safari/532.9 Cache-Control: max-age=0 Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5 Accept-Encoding: gzip,deflate,sdch Cookie: __utmz=88947353.1269218405.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); __qca=P0-1052556952-1269218405250; __utma=88947353.1027590811.1269218405.1269218405.1269218405.1; __qseg=Q_D Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Some possible problems are: Missing or unknown request method. Missing URL. Missing HTTP Identifier (HTTP/1.0). Request is too large. Content-Length missing for POST or PUT requests. Illegal character in hostname; underscores are not allowed. Your cache administrator is webmaster. Below are all the configuration files: /etc/squid/squid.conf, /etc/network/if-up.d/00-firewall, /etc/network/interfaces, /var/log/squid/access.log. Something somewhere is wrong but we cannot figure out where. Our end goal for all of this is the superimpose content onto every page that a client requests on the LAN. We've been told that squid is the way to do this but at this point in the game we are just trying to get squid setup correctly as our proxy. Thanks in advance. squid.conf acl all src all acl manager proto cache_object acl localhost src 127.0.0.1/32 acl to_localhost dst 127.0.0.0/8 acl localnet src 192.168.0.0/24 acl SSL_ports port 443 # https acl SSL_ports port 563 # snews acl SSL_ports port 873 # rsync acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl Safe_ports port 631 # cups acl Safe_ports port 873 # rsync acl Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow localhost http_access allow localnet http_access deny all icp_access allow localnet icp_access deny all http_port 3128 hierarchy_stoplist cgi-bin ? cache_dir ufs /var/spool/squid/cache1 1000 16 256 access_log /var/log/squid/access.log squid refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880 refresh_pattern . 
0 20% 4320 acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9] upgrade_http0.9 deny shoutcast acl apache rep_header Server ^Apache broken_vary_encoding allow apache extension_methods REPORT MERGE MKACTIVITY CHECKOUT cache_mgr webmaster cache_effective_user proxy cache_effective_group proxy hosts_file /etc/hosts coredump_dir /var/spool/squid access.log 1269243042.740 0 192.168.1.11 TCP_DENIED/400 2576 GET NONE:// - NONE/- text/html 00-firewall iptables -F iptables -t nat -F iptables -t mangle -F iptables -X echo 1 | tee /proc/sys/net/ipv4/ip_forward iptables -t nat -A POSTROUTING -j MASQUERADE iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128 networking auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 142.104.109.179 netmask 255.255.224.0 gateway 142.104.127.254 auto eth1 iface eth1 inet static address 192.168.1.100 netmask 255.255.255.0
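
    One detail that stands out in squid.conf, offered as a likely cause rather than a confirmed fix: the iptables rule redirects port 80 traffic to port 3128, but the port is declared as a plain "http_port 3128". On Squid 2.6/2.7, intercepted requests (which arrive as "GET / HTTP/1.1" with only a Host header instead of a full URL) are rejected as invalid unless the port is marked transparent:

        # Tell Squid 2.7 that connections arriving on this port were intercepted,
        # so it rebuilds the full URL from the Host header instead of rejecting
        # the request as malformed.
        http_port 3128 transparent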

    Read the article

  • How do I troubleshoot a "Bad Request" in Apache2?

    - by Nick
    I have a PHP application that loads for all URLs except the home page. Visiting "https://my.site.com/" produces a "Bad Request" error message. Any other URL, for example, "https://my.site.com/SomePage/" works just fine. It's only the home page that does not work. All pages use mod_rewrite and get routed through a single dispatch script, Director.php. Accessing Director.php directly also produces the "Bad Request" error. BUT- ALL of the other requests go through Director, and they all work just fine, (excluding the home page), so it can't be an issue with the Director.php script? OR can it? I'm not seeing anything in the Apache2 error log, and I'm not seeing any PHP errors in the PHP Error log. I've tried changing the first line of Director.php to read: echo 'test'; exit(); But I still get a "Bad Request". This is the rewrite log for a request to the home page: 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (2) init rewrite engine with requested uri / 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/$' to uri '/' 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$' to uri '/' 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (1) pass through / 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) init rewrite engine with requested uri /Director.php 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) rewrite '/Director.php' - '-[L,NC]' 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/$' to uri '-[L,NC]' 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$' to uri '-[L,NC]' 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) local path result: -[L,NC] Apache2 Access Log my.site.com:443 123.123.123.123 - - [18/Feb/2011:05:44:19 +0000] "GET / HTTP/1.1" 400 3223 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100723 Ubuntu/10.04 (lucid) Firefox/3.6.8" Any ideas? I don't know what else to try? UPDATE: Here's my vhost conf: RewriteEngine On RewriteLog "/LiveWebs/mysite.com/rewrite.log" RewriteLogLevel 5 # Dont rewite Crons folder ReWriteRule ^/Crons/ - [L,NC] ReWriteRule ^/phpmyadmin - [L,NC] ReWriteRule .php$ -[L,NC] # this is the problem!! RewriteCond %{REQUEST_URI} !^/images/ [NC] RewriteRule ^/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1 [L,QSA] RewriteCond %{REQUEST_URI} !^/images/ [NC] RewriteRule ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1&action=$2 [L,QSA] The problem is the line "ReWriteRule .php$ -[L,NC]". When I comment it out, the home page loads. The question is, how do I make URLS that actually end in .php go straight through (without breaking the home page)?
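
    For what it's worth, the rewrite log quoted above points at the answer to that last question: in "ReWriteRule .php$ -[L,NC]" there is no space between the "-" substitution and the flags, so mod_rewrite treats the literal string "-[L,NC]" as the replacement path (hence the logged "rewrite '/Director.php' -> '-[L,NC]'"), which is what breaks the subrequest for the home page. A hedged one-line correction:

        # A space is needed between "-" (meaning "no substitution") and the flags;
        # without it the rule rewrites matching requests to the literal path "-[L,NC]".
        RewriteRule \.php$ - [L,NC]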

    Read the article

  • Trying to use Digest Authentication for Folder Protection

    - by Jon Hazlett
    StackOverflow users suggested I try my question here. I'm using Server 2008 EE and IIS 7. I've got a site that I've migrated over from XP Pro using IIS 5. On the old system, I was using IIS Password to use simple .htaccess files to control a couple of folders that I didn't want to be publicly viewable. Now that I'm running a full-blown DC with a more powerful version of IIS, I decided it'd be a good idea to start using something slightly more sophisticated. After doing my research and trying to keep things as cheap as possible with a touch of extra security, I decided that Digest Authentication would be the best way to go. My issue is this: With Anon access disabled and Digest enabled, I am never prompted for credentials. when on the server, viewing domain[dot]com/example will simply show my 401.htm page without prompting me for credentials. when on a different network/computer, viewing domain[dot]com/example again shows my 401.htm without prompting for credentials. At the site level I only have Anon enabled. Every subfolder, unless I want it protected, has just Anon enabled. Only the folders I want protected have Anon disabled and Digest enabled. I have tried editing the bindings to see if that would spark any kind of change... www.domain.com, domain.com, and localhost have all been tried. There was never a change in behavior at any permutation (aside from the page not being found when I un-bound localhost to the site). I might have screwed up when I deleted the default site from IIS. I didn't think I'd actually need it for anything, but some of what I have read online is telling me otherwise now. As for Digest settings, I have it pointed to local.domain.com, which is the name assigned to my AD Domain. I'm guessing that's right, but honestly have no clue about what a realm actually is. Would it matter that I have an A record for local.domain.com pointing to my IP address? I had problems initially with an absolute link for 401.htm pages, but have since resolved that. Instead of D:\HTTP\401.htm I've used /401.htm and all is well. I used to get error 500's because it couldn't find the custom 401.htm file, but now it loads just fine. As for some data, I was getting entries like this from access logs: 2009-07-10 17:34:12 10.0.0.10 GET /example/ - 80 - [workip] Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+1.1.4322;+.NET+CLR+2.0.50727;+InfoPath.2) 401 2 5 132 But after correcting my 401.htm links now get logs like this: 2009-07-10 18:56:25 10.0.0.10 GET /example - 80 - [workip] Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US;+rv:1.9.0.11)+Gecko/2009060215+Firefox/3.0.11 200 0 0 146 I don't know if that means anything or not. I still don't get any credential challenges, regardless of where I try to sign in from ( my workstation, my server, my cellphone even ). The only thing that's seemed to work is viewing localhost and I donno what could be preventing authentication from finding it's way out of the server. Thanks for any help! Jon

    Read the article

  • Blocking 'good' bots in nginx with multiple conditions for certain off-limits URL's where humans can go

    - by Glenn Plas
    After 2 days of searching/trying/failing I decided to post this here, I haven't found any example of someone doing the same nor what I tried seems to be working OK. I'm trying to send a 403 to bots not respecting the robots.txt file (even after downloading it several times). Specifically Googlebot. It will support the following robots.txt definition. User-agent: * Disallow: /*/*/page/ The intent is to allow Google to browse whatever they can find on the site but return a 403 for the following type of request. Googlebot seems to keep on nesting these links eternally adding paging block after block: my_domain.com:80 - 66.x.67.x - - [25/Apr/2012:11:13:54 +0200] "GET /2011/06/ page/3/?/page/2//page/3//page/2//page/3//page/2//page/2//page/4//page/4//pag e/1/&wpmp_switcher=desktop HTTP/1.1" 403 135 "-" "Mozilla/5.0 (compatible; G ooglebot/2.1; +http://www.google.com/bot.html)" It's a wordpress site btw. I don't want those pages to show up, even though after the robots.txt info got through, they stopped for a while only to begin crawling again later. It just never stops .... I do want real people to see this. As you can see, google get a 403 but when I try this myself in a browser I get a 404 back. I want browsers to pass. root@my_domain:# nginx -V nginx version: nginx/1.2.0 I tried different approaches, using a map and plain old nono if's and they both act the same: (under http section) map $http_user_agent $is_bot { default 0; ~crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider 1; } (under the server section) location ~ /(\d+)/(\d+)/page/ { if ($is_bot) { return 403; # Please respect the robots.txt file ! } } I recently had to polish up my Apache skills for a client where I did about the same thing like this : # Block real Engines , not respecting robots.txt but allowing correct calls to pass # Google RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Googlebot/2\.[01];\ \+http://www\.google\.com/bot\.html\)$ [NC,OR] # Bing RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ bingbot/2\.[01];\ \+http://www\.bing\.com/bingbot\.htm\)$ [NC,OR] # msnbot RewriteCond %{HTTP_USER_AGENT} ^msnbot-media/1\.[01]\ \(\+http://search\.msn\.com/msnbot\.htm\)$ [NC,OR] # Slurp RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Yahoo!\ Slurp;\ http://help\.yahoo\.com/help/us/ysearch/slurp\)$ [NC] # block all page searches, the rest may pass RewriteCond %{REQUEST_URI} ^(/[0-9]{4}/[0-9]{2}/page/) [OR] # or with the wpmp_switcher=mobile parameter set RewriteCond %{QUERY_STRING} wpmp_switcher=mobile # ISSUE 403 / SERVE ERRORDOCUMENT RewriteRule .* - [F,L] # End if match This does a bit more than I asked nginx to do but it's about the same principle, I'm having a hard time figuring this out for nginx. So my question would be, why would nginx serve my browser a 404 ? Why isn't it passing, The regex isn't matching for my UA: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5" There are tons of example to block based on UA alone, and that's easy. It also looks like the matchin location is final, e.g. it's not 'falling' through for regular user, I'm pretty certain that this has some correlation with the 404 I get in the browser. As a cherry on top of things, I also want google to disregard the parameter wpmp_switcher=mobile , wpmp_switcher=desktop is fine but I just don't want the same content being crawled multiple times. 
Even though I ended up adding wpmp_switcher=mobile via the google webmaster tools pages (requiring me to sign up ....). that also stopped for a while but today they are back spidering the mobile sections. So in short, I need to find a way for nginx to enforce the robots.txt definitions. Can someone shell out a few minutes of their lives and push me in the right direction please ? I really appreciate ANY response that makes me think harder ;-)
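
    On the 404 seen in a normal browser: the location block matches for every visitor, but for non-bots it neither returns anything nor hands the request on to WordPress, so nginx ends up looking for a literal file under the document root and fails. A hedged sketch of one way to keep the 403 for bots while letting real visitors fall through to the WordPress front controller (assuming a standard index.php setup):

        location ~ /(\d+)/(\d+)/page/ {
            if ($is_bot) {
                return 403;   # Please respect the robots.txt file !
            }
            # Everyone else still has to reach WordPress; without this the
            # block matches but serves nothing useful.
            try_files $uri $uri/ /index.php?$args;
        }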

    Read the article

  • How to understand these lines in apache.log

    - by chefnelone
    I just get 19000 lines like these in the apache.log file for my site example.com. My hosting provider shut down the hosting and notified me that I need to avoid to activate my hosting again. I understand that I got a big amount of visits but I don't know how to avoid this. 88.190.47.233 - - [27/Jun/2013:09:51:34 +0200] "GET / HTTP/1.0" 403 389 "http://example.com/" "Opera/9.80 (Windows NT 6.1; U; ru) Presto/2.10.289 Version/12.02" 417 88.190.47.233 - - [27/Jun/2013:09:51:34 +0200] "GET / HTTP/1.0" 403 389 "http://example.com/" "Opera/9.80 (Windows NT 6.1; U; ru) Presto/2.10.289 Version/12.02" 417 175.44.28.155 - - [27/Jun/2013:09:51:44 +0200] "GET /en/user/register HTTP/1.1" 403 503 "http://example.com/en/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1;)" 248 175.44.29.140 - - [27/Jun/2013:09:53:19 +0200] "GET /en/node/1557?page=2 HTTP/1.0" 403 517 "http://example.com/en/node/1557?page=2" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11" 491 These are the lines from apache-error.log. There are more than 35000 lines like this. [Thu Jun 27 09:50:58 2013] [error] [client 5.39.19.183] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:51:03 2013] [error] [client 125.112.29.105] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/node/1557?page=1#comment-701 [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/node/1557?page=1#comment-701 [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.html denied, referer: http://example.com/en/node/1557?page=1#comment-701 [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.htm denied, referer: http://example.com/en/node/1557?page=1#comment-701 [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.html denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.htm denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.html denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.htm denied, referer: http://example.com/ [Thu Jun 27 09:51:44 2013] [error] [client 175.44.28.155] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/ [Thu Jun 27 09:53:19 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/node/1557?page=2 [Thu Jun 27 09:53:20 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/node/1557?page=2 [Thu Jun 27 09:53:20 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.html denied, referer: http://example.com/en/node/1557?page=2 [Thu Jun 27 09:53:20 2013] 
[error] [client 175.44.29.140] (13)Permission denied: access to /index.htm denied, referer: http://example.com/en/node/1557?page=2 [Thu Jun 27 09:53:21 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:53:21 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.html denied, referer: http://example.com/ [Thu Jun 27 09:53:21 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.htm denied, referer: http://example.com/ [Thu Jun 27 09:53:22 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:53:22 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.html denied, referer: http://example.com/ [Thu Jun 27 09:53:22 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.htm denied, referer: http://example.com/ [Thu Jun 27 09:56:53 2013] [error] [client 113.246.6.147] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/ [Thu Jun 27 09:58:58 2013] [error] [client 108.62.71.180] (13)Permission denied: access to /index.php denied, referer: http://example.com/

    Read the article

  • gallery2 and nginx with rewrite return file not found for file name with space (or + sign in url)

    - by Vangel
    I have setup nginx with gallery2 on an internal server. Everything works fine under apache2 which I checked first, it used to be on apache2 Problem is: gallery2 seems to generate url with + sign in it for file names/ images which had spaces in it so a file like "may report.jpg" becomes "may+report.jpg" The URL rewrite works but gallery2 throws an error for file not found. THis does not happen under apache2. Here is my nginx rewrite rule: location / { index main.php index.html; default_type text/html; # If the file exists as a static file serve it # directly without running all # the other rewite tests on it if (-f $request_filename) { break; } } location /v/ { # if ($request_uri !~ /main.php) # { rewrite ^/v/(.*)$ /main.php?g2_view=core.ShowItem&g2_path=$1 last; # } } location /d/ { if ($request_uri !~ /main.php) { rewrite ^/d/([0-9]+)-([0-9]+)/(.*)$ /main.php?g2_view=core.DownloadItem&g2_itemId=$1&g2_serialNumber=$2&g2_fileName=$3 last; } } location ~ \.php$ { fastcgi_pass 127.0.0.1:8889; fastcgi_index main.php; fastcgi_intercept_errors on; # to support 404s for PHP files not found fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_param SERVER_NAME $host; fastcgi_read_timeout 300; } the sit on its own works fine. only the images with spaces in file name do not display in album view and also when clicking the the image for full page view will throw this error Error (ERROR_MISSING_OBJECT) : Parent 103759 path report+april+456.flv in modules/core/classes/helpers/GalleryFileSystemEntityHelper_simple.class at line 98 (GalleryCoreApi::error) in modules/core/classes/GalleryCoreApi.class at line 1853 (GalleryFileSystemEntityHelper_simple::fetchChildIdByPathComponent) in modules/core/classes/helpers/GalleryFileSystemEntityHelper_simple.class at line 53 (GalleryCoreApi::fetchChildIdByPathComponent) in modules/core/classes/GalleryCoreApi.class at line 1804 (GalleryFileSystemEntityHelper_simple::fetchItemIdByPath) in modules/rewrite/classes/RewriteSimpleHelper.class at line 45 (GalleryCoreApi::fetchItemIdByPath) in ??? at line 0 (RewriteSimpleHelper::loadItemIdFromPath) in modules/rewrite/classes/RewriteUrlGenerator.class at line 103 in modules/rewrite/classes/parsers/modrewrite/ModRewriteUrlGenerator.class at line 37 (RewriteUrlGenerator::_onLoad) in init.inc at line 147 (ModRewriteUrlGenerator::initNavigation) in main.php at line 180 in main.php at line 94 in main.php at line 83 System Information Gallery version 2.2.4 PHP version 5.3.6 fpm-fcgi Webserver nginx/0.8.55 Database mysqli 5.0.95 Toolkits ImageMagick, Thumbnail, Gd Operating system Linux CentOS-55-64-minimal 2.6.18-274.18.1.el5 #1 SMP Thu Feb 9 12:45:44 EST 2012 x86_64 Browser Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5 In the report above there is usable system information if that helps. I know the nginx is old but it comes as default in centos repo and I am not sure if upgrading will fix the problem or break something else it seems gallery2 must map the + to space internally but why it's not doing so with nginx I can't tell. EDIT: I just verified that if I change the '+' sign to %20 then gallery2 works. but gallery2 is generating URL as +. I found a (maybe) related problem here for IIS7 and Gallery2 http://forums.asp.net/t/1431951.aspx EDIT2: Accessing the URL without rewrite and having the + sign works. Must be something to do with rewrite. 
Here is the relevant apache2 rule that might be of help RewriteCond %{THE_REQUEST} /d/([0-9]+)-([0-9]+)/([^/?]+)(\?.|\ .) RewriteCond %{REQUEST_URI} !/main\.php$ RewriteRule . /main.php?g2_view=core.DownloadItem&g2_itemId=%1&g2_serialNumber=%2&g2_fileName=%3 [QSA,L] RewriteCond %{THE_REQUEST} /v/([^?]+)(\?.|\ .) RewriteCond %{REQUEST_URI} !/main\.php$ RewriteRule . /main.php?g2_path=%1 [QSA,L]

    Read the article

  • Nginx and multiple wordpress instances with fastcgi under same domain

    - by damnsweet
    My site is running on apache. two instances of wordpress exist under paths /tr/ and /eng/. I want to move the setup to nginx but could not manage to get it working. My setup consists of nging 0.7.66, php 5.3.2, and php-fpm. /tr/ and /eng/ are two separate wordpress instances located under /home/istci/webapps/wordpress_tr and /home/istci/webapps/wordpress respectively. Below is the server section from nginx.conf containing only configuration for tr, yet could not get it working either. server { listen 80; server_name www.example.com; charset utf-8; location ~ ^/$ { rewrite ^(.+)$ http://www.example.com/tr/ permanent; } location ~ /tr/.*php$ { fastcgi_pass unix:/home/istci/var/run/wptr.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/istci/webapps/wordpress_tr$fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; } location /tr/ { root /home/istci/webapps/wordpress_tr/; index index.php index.html index.htm; if (!-e $request_filename) { rewrite ^(.+)$ /tr/index.php?q=$1 last; break; } if (-f $request_filename) { expires 30d; break; } } } php-fpm listens on unix:/home/istci/var/run/wptr.sock. running it in debug-mode shows no active handlers, which means no connection is made to unix socket from nginx. nginx access logs: 127.0.0.1 - - [09/Jun/2010:03:45:11 -0500] "GET /tr/ HTTP/1.0" 404 20 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.4) Gecko/20100527 Firefox/3.6.4" nginx debug logs : 2010/06/09 03:38:53 [notice] 6922#0: built by gcc 4.1.2 20080704 (Red Hat 4.1.2-48) 2010/06/09 03:38:53 [notice] 6922#0: OS: Linux 2.6.18-164.9.1.el5PAE 2010/06/09 03:38:53 [notice] 6922#0: getrlimit(RLIMIT_NOFILE): 4096:4096 2010/06/09 03:38:53 [notice] 6923#0: start worker processes 2010/06/09 03:38:53 [notice] 6923#0: start worker process 6924 2010/06/09 03:38:53 [notice] 6923#0: start worker process 6925 2010/06/09 03:39:01 [notice] 6925#0: *1 "^(.+)$" matches "/tr/", client: 127.0.0.1, server: www.example.com, request: "GET /tr/ HTTP/1.0", host: "www.example.com" 2010/06/09 03:39:01 [notice] 6925#0: *1 rewritten data: "/tr/index.php", args: "q=/tr/", client: 127.0.0.1, server: www.example.com, request: "GET /tr/ HTTP/1.0", host: "www.example.com" Any clues about what is wrong with my configuration? Thanks.
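
    One likely culprit, offered as a guess rather than a confirmed diagnosis: SCRIPT_FILENAME is built as /home/istci/webapps/wordpress_tr$fastcgi_script_name, and since $fastcgi_script_name still carries the /tr/ prefix, php-fpm is asked for /home/istci/webapps/wordpress_tr/tr/index.php, which does not exist. A minimal sketch of one way to strip the prefix before handing the script to the backend (the remaining fastcgi_param lines from the original block stay as they are):

        location ~ /tr/.*php$ {
            fastcgi_pass unix:/home/istci/var/run/wptr.sock;
            fastcgi_index index.php;
            # Drop the /tr prefix so the script maps onto the wordpress_tr
            # directory rather than the nonexistent .../wordpress_tr/tr/... path.
            if ($uri ~ ^/tr(/.*php)$) {
                set $wp_script $1;
            }
            fastcgi_param SCRIPT_FILENAME /home/istci/webapps/wordpress_tr$wp_script;
            # ... keep the remaining fastcgi_param lines from the original here ...
        }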

    Read the article

  • Troubleshooting Website problems within the local network

    - by HaydnWVN
    Have an external website which opens fine on some PC's, yet seems to time out (or symptoms of timing out, but never actually does) on others. Seems to only affect (some) of our newer HP Pro 3305 MT Workstations. All of which are running Win7 32bit SP1 with all updates. Older PC's (Win7 32bit SP1 & WinXP) are unaffected. Using Google Chrome & Firefox makes no difference. Opening the website in IE9 Compatibility Mode has exactly the same symptoms. All PC's are on the same local network (Workgroup) using the same DNS server & gateway (inhouse) on the same internet connection, on the same subnet. There is no proxy server, no content filtering, no load balancing etc etc. Only group policy in effect (locally) is for Update scheduling. Local firewalls are all the same (Kaspersky WP4) and our external facing firewall has no IP specific settings. I have no control over the external website, traceroute shows the same destination on all PC's. It is a fairly popular website in our industry (Horticulture) and i'm not aware of any other people (even other sites within our sister companies) with the same problem. Update: Used Fiddler2 to monitor the HTTP request, seems its not getting fulfilled for some reason?! Request sent: GET http://www.rhs.org.uk/ HTTP/1.1 Host: www.rhs.org.uk Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-GB,en-US;q=0.8,en;q=0.6 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Log from Fiddler 2 of the request: This session is not yet complete. Press F5 to refresh when session is complete for updated statistics. Request Count: 1 Bytes Sent: 567 (headers:567; body:0) Bytes Received: 0 (headers:0; body:0) ACTUAL PERFORMANCE -------------- ClientConnected: 17:02:33.720 ClientBeginRequest: 17:02:39.118 GotRequestHeaders: 17:02:39.118 ClientDoneRequest: 17:02:39.118 Determine Gateway: 0ms DNS Lookup: 0ms TCP/IP Connect: 46ms HTTPS Handshake: 0ms ServerConnected: 17:02:39.165 FiddlerBeginRequest: 17:02:39.165 ServerGotRequest: 17:02:39.165 ServerBeginResponse: 00:00:00.000 GotResponseHeaders: 00:00:00.000 ServerDoneResponse: 00:00:00.000 ClientBeginResponse: 00:00:00.000 ClientDoneResponse: 00:00:00.000 RESPONSE BYTES (by Content-Type) -------------- ~headers~: 0 Log of a successful request from a working PC (done this morning, excuse the timestamps being different from above): Request Count: 1 Bytes Sent: 493 (headers:493; body:0) Bytes Received: 20,413 (headers:525; body:19,888) ACTUAL PERFORMANCE -------------- ClientConnected: 08:22:47.766 ClientBeginRequest: 08:22:47.766 GotRequestHeaders: 08:22:47.766 ClientDoneRequest: 08:22:47.766 Determine Gateway: 0ms DNS Lookup: 26ms TCP/IP Connect: 30ms HTTPS Handshake: 0ms ServerConnected: 08:22:47.828 FiddlerBeginRequest: 08:22:47.828 ServerGotRequest: 08:22:47.828 ServerBeginResponse: 08:22:48.905 GotResponseHeaders: 08:22:48.905 ServerDoneResponse: 08:22:48.905 ClientBeginResponse: 08:22:48.905 ClientDoneResponse: 08:22:48.905 Overall Elapsed: 00:00:01.1388020 RESPONSE BYTES (by Content-Type) -------------- text/html: 19,888 ~headers~: 525 So my question has evolved into: What is the difference between the 2 requests and how do I determine why 1 PC is not getting a reply to it's GET request?

    Read the article

  • Installing .NET application on IIS 7.5 issues

    - by Juw
    Really need some help here. I am at a loss. I am trying to install a webservice that some other guy wrote in .NET. I have some basic IIS understanding. The webservice works just fine on my dev computer. But now i try to move the webservice to a production server and bad things happens. The webservice has been located in C:\inetpub\wwwroot\ dir on the dev server. But on this production server it is to be located in D:\services\ I have managed to install an application on the production server and everything seems fine and dandy. But when i "Test Settings" in the initial setup i get "Invalid application path" error. But i can just close it down and still install it. But when i try to access the webservice with: http://myserver.com/webservice/GetData nothing happens. Just a blank page and when i check the response headers...500 error. I don´t know what is going on here or where the problem is. I post the config file here so someone hopefully might notice something odd. Thanx in advance! EDIT: The config file is from my dev server. I just copied it to my production server...but that obviously didn´t work :-) UPDATE: I noticed that my dev server run in an Application pool with Net 4 and in "classic" "mode". On the production server it was in NET 4 but in "integrated" mode. So i changed it to "classic". I still get a blank page. But checking the log will output this: 2012-10-03 14:57:00 ip removed GET /boo/GetData - 80 - ip removed Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:15.0)+Gecko/20100101+Firefox/15.0.1 404 2 1260 203 <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <identity impersonate="true" /> <!-- Impersonate NT AUTHORITY/IUSR --> <compilation targetFramework="4.0"> <assemblies> <add assembly="System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b7735c561131e089" /> </assemblies> </compilation> <pages controlRenderingCompatibilityVersion="3.5" clientIDMode="AutoID" /> </system.web> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <httpErrors existingResponse="PassThrough" /> <httpProtocol> <customHeaders> <add name="Access-Control-Allow-Origin" value="*" /> </customHeaders> </httpProtocol> <directoryBrowse enabled="false" /> </system.webServer> <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> <standardEndpoints> <webHttpEndpoint> <!-- Configure the WCF REST service base address via the global.asax.cs file and the default endpoint via the attributes on the <standardEndpoint> element below --> <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" /> </webHttpEndpoint> </standardEndpoints> </system.serviceModel> <connectionStrings> <add name="Entities" connectionString="metadata=res://*/DataModel.csdl|res://*/DataModel.ssdl|res://*/DataModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=someip;initial catalog=db_90;User ID=user1;Password=access2;multipleactiveresultsets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" /> </connectionStrings> </configuration>

    Read the article

  • I have a NGINX server configured to work with node.js, but many times a file of 1.03MB of js is not loaded by various browser and various pc

    - by Totty
    I'm using this in a local LAN so it should be quite fast. The nginx server use the node.js server to serve static files, so it must pass throught node.js to download the files, but that is not a problem when I'm not using the nginx. In chrome with debugger on I can see that the status is: 206 - partial content and it only has downloaded 31KB of 1.03MB. After 1.1 min it turns red and the status failed. Waiting time: 6ms Receiving: 1.1 min The headers in google chrom: Request URL:http://192.168.1.16/production/assembly/script/production.js Request Method:GET Status Code:206 Partial Content Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:pt-PT,pt;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:connect.sid=s%3Abls2qobcCaJ%2FyBNZwedtDR9N.0vD4Fi03H1bEdCszGsxIjjK0lZIjJhLnToWKFVxZOiE Host:192.168.1.16 If-Range:"1081715-1350053827000" Range:bytes=16090-16090 Referer:http://192.168.1.16/production/assembly/ User-Agent:Mozilla/5.0 (Windows NT 6.0) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4 Response Headersview source Accept-Ranges:bytes Cache-Control:public, max-age=0 Connection:keep-alive Content-Length:1 Content-Range:bytes 16090-16090/1081715 Content-Type:application/javascript Date:Mon, 15 Oct 2012 09:18:50 GMT ETag:"1081715-1350053827000" Last-Modified:Fri, 12 Oct 2012 14:57:07 GMT Server:nginx/1.1.19 X-Powered-By:Express My nginx configurations: File 1: user totty; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_access.txt; error_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_error.txt; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## autoindex on; include /home/totty/web/production01_server/_deployment/nginxConfigs/server/*; } File that is included by the previous file: server { # custom location for entry # using only "/" instead of "/production/assembly" it # would allow you to go to "thatip/". In this way # we are limiting to "thatip/production/assembly/" location /production/assembly/ { # ip and port used in node.js proxy_pass http://127.0.0.1:3000/; } location /production/assembly.mongo/ { proxy_pass http://127.0.0.1:9000/; proxy_redirect off; } location /production/assembly.logs/ { autoindex on; alias /home/totty/web/production01_server/node_modules/production/_logs/; } }

    Read the article

  • The Windows Browser Ballot Screen Offers Web Browser Choice to European Users

    - by Matthew Guay
    Since March, our friends across the pond in Europe get to decide which browser they want to install with their Windows OS. Today we thought we would take a look at the ballot choices, some are well known, and others you may not have heard of. Windows users in European countries should start seeing the so called “Browser Ballot Screen” after installing the Windows Update KB976002 (link below). The browser ballot offers a dozen different browsers, including some you’ve likely never heard of.  They each have some unique features, and are all free, and here we take a quick look at each of them. Internet Explorer 8 Internet Explorer is the world’s most used web browser, as it’s bundled with Windows. It also includes several unique features, including Accelerators that make it easy to search or find a map of a location, and InPrivate filtering to directly control what sites can get personal information.  Additionally, it offers great integration with Windows Touch and the new taskbar in Windows 7. IE 8 runs on Windows XP and newer, and is bundled with Windows 7. Mozilla Firefox 3.6 Firefox is the most popular browser other than Internet Explorer.  It is the modern descendant of Netscape, and is loved by web developers for its adherence to web standards, openness, and expandability.  It offers thousands of Add-ons and themes to let you customize it to fit your preferences. The most recent version has added Personas, which are quick, lightweight themes to let you personalize the look your browser. It’s open source, and runs on all modern versions of Windows, Mac OS X, and Linux. Of course thanks to Asian Angel, our resident browser expert, you can check out several articles regarding this popular IE alternative. Google Chrome 4 Google Chrome has gained an impressive amount of market share during its short time in the market. It offers a minimalistic interface and fast speeds with intensive web applications. The address bar is also a search bar, so you can enter a search query or web address and quickly get the information you need. With version 4 you can add a growing number of extensions, personalize it with a variety of stylish themes, and automatically translate foreign websites into your own language. Opera 10.50 Although Opera has been around for over a decade, relatively few users have used it. With the new 10.50 release, Opera has many unique features packed in a sleek UI. It integrates great with Aero and the Windows 7 taskbar, and lets you preview the contents of your websites in the tab bar. It also includes Opera Unite, a small personal web server to make file sharing easy, Opera Turbo to speed up your internet when the connection is slow, and Opera Link to keep all your copies of Opera in sync. It’s a popular browser on many mobile devices, and version 10.50 has a lot of enhancements. Apple Safari 4 Safari is the default browser in Mac OS X, and starting with version 3 it has been available for Windows as well. It’s based on Webkit, the popular new rendering engine that provides great speed and standards compatibility.  Safari 4 lets you browse your browsing history in a unique Coverflow interface, and shows your Top Sites in a fancy, 3D interface.  It’s also great for viewing mobile websites for the iPhone and other mobile devices through Developer Tools. Flock 2.5 Based on the popular Firefox core, Flock brings a multitude of social features to your browsing experience. 
You can view the latest YouTube videos and Flickr pictures, update your favorite social network, and keep up with your webmail thanks to its integration with a wide variety of services. You can even post to your blog through the integrated blog editor. If your time online is mostly spent in social services, this may be a browser you want to check out. Maxthon 2.5 Maxthon is a unique browser that builds on Internet Explorer to bring more features with IE's rendering. Formerly known as MyIE2, Maxthon was popular for bringing tabbed browsing with IE rendering during the days of IE 6. Today Maxthon supports a wide range of plugins and skins, so you can customize it however you want. It includes mouse gestures, a web accelerator to speed up pokey internet connections, a content blocker to remove unwanted content from sites, an online account to back up your favorites, and a nice download manager. Avant Browser Another nice browser based on Internet Explorer, Avant brings a wide variety of features in a nice brushed-metal interface. It includes integrated AutoFill for forms, mouse gestures, customizable skins, and privacy protection features. It also includes a Flash blocker that will only load Flash in webpages when you select it. You can also integrate Avant with an online account to store your bookmarks, feeds, settings, and passwords online. Sleipnir Sleipnir is a customizable browser meant for advanced users that is quite popular in Japan. It's built on the Trident engine, and virtually every aspect of it is customizable, unlike Internet Explorer. FlashPeak SlimBrowser SlimBrowser from FlashPeak incorporates a lot of features like Popup Killer, Auto Login, site filtering, and more. It's based on Internet Explorer but offers a lot more customizable options out of the box. K-meleon This basic browser is light on system resources and based on the Gecko engine. It's been in development for years on SourceForge, and if you like to tweak virtually any aspect of your browser, this might be a good choice for you. GreenBrowser GreenBrowser is based on Internet Explorer and is available in several languages. It has a large number of features out of the box and is light on system resources. Conclusion The European Union asked that Windows users be given more choice of web browser, and with the Browser Ballot Screen, they certainly get a variety to choose from. If you've tried out some of the lesser-known browsers, or think some important ones have been left out, leave a comment and tell us about it.
Learn More About the Browser Ballot Screen and Download Alternatives to IE: Windows Update KB976002

    Read the article

  • Why does Google mark one e-mail as spam but not the other?

    - by nKn
    I've a Postfix installation which works fine, I don't get any trouble with mails sent through a mail client (in my case, Thunderbird or RoundCube) when the To: address is a GMail account. However, I recently needed to use the PHPMailer tool to send some e-mails to some GMail accounts, so I configured an account to be used via SASL authentication + TLS. I don't mean mass mailing, just 2-3 mails. If I send the e-mail from the Thunderbird or RoundCube clients, the mail is not marked as spam. However, if I use PHPMailer, it always gets catalogued as spam. So I compared both headers and I just can't find the reason why the second is marked as spam while the first one is just ok. The first header sent from a mail client which is not marked as spam: Delivered-To: [email protected] Received: by 10.76.153.102 with SMTP id vf6csp230573oab; Tue, 19 Aug 2014 11:08:19 -0700 (PDT) X-Received: by 10.60.23.39 with SMTP id j7mr45544050oef.20.1408471699715; Tue, 19 Aug 2014 11:08:19 -0700 (PDT) Return-Path: <[email protected]> Received: from mail.mydomain.com (X.ip-92-222-X.eu. [92.222.X.X]) by mx.google.com with ESMTPS id t5si27115082oej.10.2014.08.19.11.08.18 for <[email protected]> (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 19 Aug 2014 11:08:19 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) client-ip=92.222.X.X; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) [email protected]; dkim=pass (test mode) [email protected] Received: by mail.mydomain.com (Postfix, from userid 111) id D8F69120293D; Tue, 19 Aug 2014 19:08:17 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471697; bh=wKMX9gkQ7tCLv8ezrG5t4bICm/SSLQsNfTdZMToksWw=; h=Date:From:To:Subject:From; b=qRNcYVdmk+n3D1uuv0FInTx7/LzH2ojck9DgCmabFPvfke233lkojUOjezCUGx7iV DL8EayZ28mzzzHpB7ETeMzop/5OS3BmvFtGKVD9gzc78cDIFXTDoRFAnkRWDR2IOxI SOn5tiyODTFpkbDgJOndzQ6qL5K0S9ASNGCZrNL4= X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on vpsX.ovh.net X-Spam-Level: X-Spam-Status: No, score=-1.0 required=3.0 tests=ALL_TRUSTED,T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0 Received: from [192.168.1.111] (unknown [77.231.X.X]) (using TLSv1 with cipher ECDHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: [email protected]) by mail.mydomain.com (Postfix) with ESMTPSA id 910341202624 for <[email protected]>; Tue, 19 Aug 2014 19:08:17 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471697; bh=wKMX9gkQ7tCLv8ezrG5t4bICm/SSLQsNfTdZMToksWw=; h=Date:From:To:Subject:From; b=qRNcYVdmk+n3D1uuv0FInTx7/LzH2ojck9DgCmabFPvfke233lkojUOjezCUGx7iV DL8EayZ28mzzzHpB7ETeMzop/5OS3BmvFtGKVD9gzc78cDIFXTDoRFAnkRWDR2IOxI SOn5tiyODTFpkbDgJOndzQ6qL5K0S9ASNGCZrNL4= Message-ID: <[email protected]> Date: Tue, 19 Aug 2014 19:08:24 +0100 From: My Name <[email protected]> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: My other account <[email protected]> Subject: . Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit . 
The second header sent from PHPMailer which is always marked as spam: Delivered-To: [email protected] Received: by 10.76.153.102 with SMTP id vf6csp230832oab; Tue, 19 Aug 2014 11:12:10 -0700 (PDT) X-Received: by 10.60.121.67 with SMTP id li3mr44086252oeb.17.1408471930520; Tue, 19 Aug 2014 11:12:10 -0700 (PDT) Return-Path: <[email protected]> Received: from mail.mydomain.com (X.ip-92-222-X.eu. [92.222.X.X]) by mx.google.com with ESMTPS id w8si27103806obn.30.2014.08.19.11.12.10 for <[email protected]> (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 19 Aug 2014 11:12:10 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) client-ip=92.222.X.X; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) [email protected]; dkim=pass (test mode) [email protected] Received: by mail.mydomain.com (Postfix, from userid 111) id 1999D120293D; Tue, 19 Aug 2014 19:12:09 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471929; bh=N1JuHq1S+8GrjHcEK3xn8P1JS+ygEBv5LKe0BiXuVJo=; h=Date:To:From:Reply-to:Subject:From; b=K7tcPyArzSTY91VEw6mAAFtDurSGwgTLGkfUZdC5mqsg0g/1LzmZkgwdjj4NdJa6M E2kDz3dwYN8FcZmbampJYFXxj4NQVtSnzjiWV40rpfOFqD2rXDGNIyB2QOjBZZ4WK3 7s4lyoJ/BrdQH4en8ctLVsDHed/KpHD4iGFEl67E= X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on vpsX.ovh.net X-Spam-Level: X-Spam-Status: No, score=-1.0 required=3.0 tests=ALL_TRUSTED,T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0 Received: from rpi.mydomain.com (unknown [77.231.X.X]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: [email protected]) by mail.mydomain.com (Postfix) with ESMTPSA id B42AF1202624 for <[email protected]>; Tue, 19 Aug 2014 19:12:08 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471928; bh=N1JuHq1S+8GrjHcEK3xn8P1JS+ygEBv5LKe0BiXuVJo=; h=Date:To:From:Reply-to:Subject:From; b=iXPM0tS36swudPTT4FOHHtPi5Ll6LbR60kNqCinZ8utcWoFE31SFTpoMEq5aCM5ux wQMdFiN8c6vkjRGabmvqFTTIbwJsrToHo/4+Lt5HEBoQQE2Y3T+xGmnmGAHCS6stKB yb7SVmtrIAsVtSMKA8VYIbmu2oYqV3afYt7g0OMQ= Date: Tue, 19 Aug 2014 20:12:07 +0200 To: [email protected] From: Trying another account <[email protected]> Reply-to: Trying another account <[email protected]> Subject: . Message-ID: <[email protected]> X-Priority: 3 X-Mailer: PHPMailer 5.1 (phpmailer.sourceforge.net) MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset="UTF-8" . I also tried: Adding a User-Agent header to match the first one. Removing the X-Mailer header. No one of them made a difference. Is there some significant difference which is making the second e-mail to be marked as spam by Google?

    Read the article

  • Mozilla Weave can't sync Firefox. What's wrong?

    - by Mehper C. Palavuzlar
    For the last few days, Mozilla Weave can't sync. Below is the activity log. Any ideas? 2010-05-02 20:47:15 Service.Main WARN Unknown error while downloading metadata record. Aborting sync. 2010-05-02 20:47:15 Service.Main CONFIG Starting backoff, next sync at:Sun May 02 2010 21:16:09 GMT+0300 (GTB Yaz Saati) 2010-05-02 20:47:15 Service.Main DEBUG Exception: aborting sync, remote setup failed No traceback available 2010-05-02 21:16:09 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity. 2010-05-02 21:16:30 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global 2010-05-02 21:16:30 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2 2010-05-02 21:26:50 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/info/collections 2010-05-02 21:26:50 Engine.Clients INFO 0 outgoing items pre-reconciliation 2010-05-02 21:26:50 Engine.Clients INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:26:50 Engine.Clients DEBUG Total (ms): sync 6, processIncoming 3, uploadOutgoing 0, syncStartup 3, syncFinish 0 2010-05-02 21:26:50 Engine.Bookmarks INFO 0 outgoing items pre-reconciliation 2010-05-02 21:26:50 Engine.Bookmarks INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:26:50 Engine.Bookmarks DEBUG Total (ms): sync 13, processIncoming 5, uploadOutgoing 0, syncStartup 3, syncFinish 3 2010-05-02 21:26:50 Engine.Forms INFO 1 outgoing items pre-reconciliation 2010-05-02 21:26:50 Engine.Forms INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:26:50 Engine.Forms INFO Uploading all of 1 records 2010-05-02 21:26:50 Collection DEBUG POST Length: 388 2010-05-02 21:27:06 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/forms 2010-05-02 21:27:06 Engine.Forms DEBUG Total (ms): sync 15924, processIncoming 3, uploadOutgoing 15918, syncStartup 3, syncFinish 0, createRecord 1 2010-05-02 21:27:06 Engine.History INFO 55 outgoing items pre-reconciliation 2010-05-02 21:27:06 Engine.History INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:09 Engine.History INFO Uploading all of 55 records 2010-05-02 21:27:09 Collection DEBUG POST Length: 35337 2010-05-02 21:27:32 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/history 2010-05-02 21:27:32 Engine.History DEBUG Total (ms): sync 25588, processIncoming 4, uploadOutgoing 25580, syncStartup 3, syncFinish 0, createRecord 2540 2010-05-02 21:27:32 Engine.Passwords INFO 0 outgoing items pre-reconciliation 2010-05-02 21:27:32 Engine.Passwords INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:32 Engine.Passwords DEBUG Total (ms): sync 8, processIncoming 4, uploadOutgoing 0, syncStartup 4, syncFinish 0 2010-05-02 21:27:32 Engine.Prefs INFO 0 outgoing items pre-reconciliation 2010-05-02 21:27:32 Engine.Prefs INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:32 Engine.Prefs DEBUG Total (ms): sync 8, processIncoming 3, uploadOutgoing 0, syncStartup 4, syncFinish 0 2010-05-02 21:27:32 Engine.Tabs INFO 1 outgoing items pre-reconciliation 2010-05-02 21:27:32 Engine.Tabs INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:32 Engine.Tabs INFO Uploading all of 1 records 2010-05-02 21:27:32 Collection DEBUG POST Length: 393 2010-05-02 21:27:54 Collection DEBUG POST success 200 
https://sj-weave03.services.mozilla.com/1.0/mehper/storage/tabs 2010-05-02 21:27:54 Engine.Tabs DEBUG Total (ms): sync 21943, processIncoming 3, uploadOutgoing 21936, syncStartup 3, syncFinish 0, createRecord 8 2010-05-02 21:27:54 Service.Main INFO Sync completed successfully 2010-05-02 22:27:53 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity. 2010-05-02 22:28:14 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global 2010-05-02 22:28:14 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2 2010-05-02 22:28:16 Net.Resource DEBUG GET fail 503 https://sj-weave03.services.mozilla.com/1.0/mehper/info/collections 2010-05-02 22:28:16 Service.Main DEBUG Exception: aborting sync, failed to get collections No traceback available 2010-05-02 23:28:15 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity. 2010-05-03 00:26:42 Service.Main DEBUG Exception: Could not acquire lock No traceback available 2010-05-03 00:31:03 RecordMgr DEBUG Failed to import record: App. Quitting JS Stack trace: Res__request(...)@resource.js:208 < Res_get()@resource.js:271 < RecordMgr_import("https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global")@wbo.js:119 < WeaveSvc__remoteSetup()@service.js:824 < ()@service.js:1187 < WrappedNotify()@util.js:114 < WrappedLock()@util.js:86 < WrappedCatch()@util.js:65 < sync(false)@service.js:1146 < ([object Object])@service.js:414 < notify([object XPCWrappedNative_NoHelper])@util.js:629 2010-05-03 00:31:03 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2010-05-03 00:31:03 Service.Main WARN Unknown error while downloading metadata record. Aborting sync. 2010-05-03 00:31:03 Service.Main DEBUG Exception: aborting sync, remote setup failed No traceback available 2010-05-03 17:26:25 Service.Main INFO Loading Weave 1.2.3 2010-05-03 17:26:25 Engine.Bookmarks DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Forms DEBUG Engine initialized 2010-05-03 17:26:25 Engine.History DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Passwords DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Prefs DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Tabs DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Tabs DEBUG Resetting tabs last sync time 2010-05-03 17:26:25 Service.Main INFO Mozilla/5.0 (Windows; U; Windows NT 6.1; tr; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) 2010-05-03 17:26:26 Service.Main DEBUG Caching URLs under storage user base: https://sj-weave03.services.mozilla.com/1.0/mehper/ 2010-05-03 17:26:30 Service.Main DEBUG Autoconnecting in 3 seconds 2010-05-03 17:26:36 Service.Main INFO Logging in user mehper 2010-05-03 17:45:46 Service.Main DEBUG Exception: Could not acquire lock No traceback available 2010-05-03 17:53:18 Service.Main DEBUG Exception: Could not acquire lock No traceback available

    Read the article

  • Permission forbidden on localhost with apache2

    - by N Alex
    Here is what I am trying to do. I tried to add another folder to Apache, and I get the following error when trying to access testing/index.html. The idea is that I would like to have, for every customer, a folder like /home/neagoe/Work/InterWebs/Projects/[PROJECT NAME]/CustomerProjects/website/dist.

    Forbidden
    You don't have permission to access /index.html on this server.
    Apache/2.2.22 (Ubuntu) Server at testing Port 80

    Here are the steps that I followed:

    Step 1:
    sudo chmod a+x /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

    Step 2:
    sudo chown -R www-data:www-data /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
    sudo chmod -R 775 /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

    Step 3:
    sudo adduser $USER www-data

    Step 4:
    sudo a2enmod userdir

    Step 5:
    sudo cp /etc/apache/sites-available/default /etc/apache/sites-available/testing

    I edited the file /etc/apache/sites-available/testing so it looks like this:

    <VirtualHost *:80>
        ServerAdmin webmaster@localhost
        ServerName testing
        DocumentRoot /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist/ >
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>

    Step 6: I edited hosts ("/etc/hosts") so it looks like this:

    127.0.0.1 localhost
    127.0.0.1 testing
    # The following lines are desirable for IPv6 capable hosts
    ::1 ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

    Step 7:
    sudo a2ensite testing
    sudo service apache2 restart

    I searched for about 2 hours on the internet but I can't figure out what went wrong. All the pages I found describe the same steps as above. I know there are similar questions on the internet, but the answer is to change the permissions on the directory, which I did in Step 2. I am sorry if this is really a duplicate, but I couldn't find the right answer. Thank you!

    PS. I asked this also on AskUbuntu but didn't get any answers, so I'm trying my luck here.

    Edit: There isn't much in the error log or the access log.
On the access.log: ::1 - - [10/Aug/2013:11:23:28 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:29 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:31 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:32 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:33 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:34 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:35 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" 127.0.0.1 - - [10/Aug/2013:11:23:23 +0300] "POST /wordpress-testing/wp-cron.php?doing_wp_cron=1376123003.7026669979095458984375 HTTP/1.0" 200 705 "-" "WordPress/3.6; http://localhost/wordpress-testing" ::1 - - [10/Aug/2013:11:23:36 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:37 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:38 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" 127.0.0.1 - - [10/Aug/2013:11:31:32 +0300] "GET /index.html HTTP/1.1" 200 485 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0" And the last line repeats for about 200 rows. On the error.log: 1. This lines repeat from time to time. PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525 /msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0 [Sat Aug 10 13:06:42 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations [Sat Aug 10 13:07:36 2013] [notice] caught SIGTERM, shutting down PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0 [Sat Aug 10 13:07:37 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations 2. And this is the predominant error. (hundreds of lines) [Sat Aug 10 13:07:40 2013] [error] [client 127.0.0.1] (13)Permission denied: access to /index.html denied
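    For what it's worth, the "(13)Permission denied" entries point at filesystem permissions rather than the vhost config, and Step 2 only touched the final dist directory: the Apache user (www-data) also needs execute (traverse) permission on every parent directory of the DocumentRoot. A small diagnostic sketch, assuming the default home-directory permissions are what blocks the traversal:

    # show the mode of every component of the path; look for a directory
    # without "x" for group/other (often /home/neagoe itself)
    namei -m /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

    # give each parent directory traverse permission (or use groups/ACLs
    # if the home directory should stay private)
    d=/home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website
    while [ "$d" != "/" ]; do sudo chmod o+x "$d"; d=$(dirname "$d"); done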

    Read the article

  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background about the setup: initially got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say: bigjaws.com. Plesk automatically creates a DNS zone for it with some records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant of these records (where XXX.XXX.XXX.158 is our dedicated IP): bigjaws.com. A XXX.XXX.XXX.158 mail.bigjaws.com. A XXX.XXX.XXX.158 bigjaws.com MX (10) mail.bigjaws.com. bigjaws.com. TXT v=spf1 +a +mx -all The above records are not(?) valid anymore though, because after using this dedicated server for a while, our site got bigger and bigger so we decided to move our operations over to AWS (EC2, RDS, ELB, etc), but we retained the mail functionality as is, i.e. emails from [email protected] are sent by connecting to our dedicated server where Plesk takes care of things. This was decided in order not to setup anything from scratch. Of course for all DNS-related things we now use Route53. In Route53 I have the following records: mail.schoox.com. A XXX.XXX.XXX.158 bigjaws.com. MX (10) mail.bigjaws.com bigjaws.com. SPF "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all" From my understanding of SPF, the SPF status should have been passed: I designate that all email being sent by bigjaws.com from XXX.XXX.XXX.158 are valid/not spam (I added +mx there but I'm not sure if needed). When a mail server receives an email, doesn't it lookup the SPF record of the domain and checks against the IP it got the email from? Checking with spfquery: root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected] StartError Context: Failed to query MAIL-FROM ErrorCode: (2) Could not find a valid SPF record Error: No DNS data for 'bigjaws.com'. EndError noneneutral Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected]; If I go to the address listed above (openspf.org) it tells me that the message should have been accepted(!): spfquery rejected a message that claimed an envelope sender address of [email protected]. spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed an envelope sender address of [email protected]. The domain bigjaws.com has authorized static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the message should have been accepted. It is impossible for us to say why it was rejected. What should I do? If the problem persists, contact the bigjaws.com postmaster. Also, here are some headers from an email sent by one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de listed in the "Received: from" field was the initial domain hosted on the dedicated server before adding the .com one -- both are still listed as separate subscriptions under Plesk). 
Delivered-To: [email protected] Received: by 10.14.177.70 with SMTP id c46csp289656eem; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) Return-Path: <[email protected]> Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158]) by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59 for <[email protected]> (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 23 Oct 2013 01:10:59 -0700 (PDT) Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158; Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected] DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com; b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu; h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding; Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200 Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP) by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated); 23 Oct 2013 10:10:59 +0200 Message-ID: <[email protected]> Date: Wed, 23 Oct 2013 11:11:00 +0300 From: BigJaws Employee <[email protected]> User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1 MIME-Version: 1.0 To: [email protected] Subject: test SPF Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit test SPF Any ideas why SPF is not working correctly? Also, are there any DNS settings that are not needed anymore and create a problem?

    Read the article

  • Delphi Indy IDHTTP question

    - by user327175
    I am trying to get the captcha image from a AOL, and i keep getting an error 418. unit imageunit; /// /// h t t p s://new.aol.com/productsweb/ /// interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls, IdIOHandler, IdIOHandlerSocket, IdIOHandlerStack, IdSSL, IdSSLOpenSSL, IdIntercept, IdZLibCompressorBase, IdCompressorZLib, IdCookieManager, IdBaseComponent, IdComponent, IdTCPConnection, IdTCPClient, IdHTTP,jpeg,GIFImg, ExtCtrls; type TForm2 = class(TForm) IdHTTP1: TIdHTTP; IdCookieManager1: TIdCookieManager; IdCompressorZLib1: TIdCompressorZLib; IdConnectionIntercept1: TIdConnectionIntercept; IdSSLIOHandlerSocketOpenSSL1: TIdSSLIOHandlerSocketOpenSSL; Panel1: TPanel; Image1: TImage; Panel2: TPanel; Button1: TButton; Edit1: TEdit; procedure Button1Click(Sender: TObject); private { Private declarations } public { Public declarations } end; var Form2: TForm2; implementation {$R *.dfm} procedure TForm2.Button1Click(Sender: TObject); var JPI : TJPEGImage; streamdata:TMemoryStream; begin streamdata := TMemoryStream.Create; try idhttp1.Get(edit1.Text, Streamdata); Except { Handle exceptions } On E : Exception Do Begin MessageDlg('Exception: '+E.Message,mtError, [mbOK], 0); End; End; //h t t p s://new.aol.com/productsweb/WordVerImage?20890843 //h t t p s://new.aol.com/productsweb/WordVerImage?91868359 /// /// gives error 418 unused /// streamdata.Position := 0; JPI := TJPEGImage.Create; Try JPI.LoadFromStream ( streamdata ); Finally Image1.Picture.Assign ( JPI ); JPI.Free; streamdata.Free; End; end; end. Form: object Form2: TForm2 Left = 0 Top = 0 Caption = 'Form2' ClientHeight = 247 ClientWidth = 480 Color = clBtnFace Font.Charset = DEFAULT_CHARSET Font.Color = clWindowText Font.Height = -11 Font.Name = 'Tahoma' Font.Style = [] OldCreateOrder = False PixelsPerInch = 96 TextHeight = 13 object Panel1: TPanel Left = 0 Top = 41 Width = 480 Height = 206 Align = alClient TabOrder = 0 ExplicitLeft = -8 ExplicitTop = 206 ExplicitHeight = 41 object Image1: TImage Left = 1 Top = 1 Width = 478 Height = 204 Align = alClient ExplicitLeft = 5 ExplicitTop = 17 ExplicitWidth = 200 ExplicitHeight = 70 end end object Panel2: TPanel Left = 0 Top = 0 Width = 480 Height = 41 Align = alTop TabOrder = 1 ExplicitLeft = 40 ExplicitTop = 32 ExplicitWidth = 185 object Button1: TButton Left = 239 Top = 6 Width = 75 Height = 25 Caption = 'Button1' TabOrder = 0 OnClick = Button1Click end object Edit1: TEdit Left = 16 Top = 8 Width = 217 Height = 21 TabOrder = 1 end end object IdHTTP1: TIdHTTP Intercept = IdConnectionIntercept1 IOHandler = IdSSLIOHandlerSocketOpenSSL1 MaxAuthRetries = 100 AllowCookies = True HandleRedirects = True RedirectMaximum = 100 ProxyParams.BasicAuthentication = False ProxyParams.ProxyPort = 0 Request.ContentLength = -1 Request.Accept = 'image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-s' + 'hockwave-flash, application/cade, application/xaml+xml, applicat' + 'ion/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-' + 'application, /' Request.BasicAuthentication = False Request.Referer = 'http://www.yahoo.com' Request.UserAgent = 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.1) Gecko/201001' + '22 firefox/3.6.1' HTTPOptions = [hoForceEncodeParams] CookieManager = IdCookieManager1 Compressor = IdCompressorZLib1 Left = 240 Top = 80 end object IdCookieManager1: TIdCookieManager Left = 360 Top = 136 end object IdCompressorZLib1: TIdCompressorZLib Left = 368 Top = 16 end object IdConnectionIntercept1: TIdConnectionIntercept 
Left = 304 Top = 72 end object IdSSLIOHandlerSocketOpenSSL1: TIdSSLIOHandlerSocketOpenSSL Intercept = IdConnectionIntercept1 MaxLineAction = maException Port = 0 DefaultPort = 0 SSLOptions.Mode = sslmUnassigned SSLOptions.VerifyMode = [] SSLOptions.VerifyDepth = 0 Left = 192 Top = 136 end end If you go to: h t t p s://new.aol.com/productsweb/ you will notice the captcha image has a url like: h t t p s://new.aol.com/productsweb/WordVerImage?91868359 I put that url in the edit box and get an error. What is wrong with this code.
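    A guess rather than a confirmed diagnosis: the WordVerImage URL carries a per-session number, so requesting it cold, without the cookies the signup page sets and without a matching Referer, may be what earns the 418 response. A sketch of the button handler that establishes the session first, using only the components already on the form:

    // Sketch under the assumption that the captcha is only served inside a signup session:
    // fetch the signup page first so IdCookieManager1 picks up the session cookies,
    // then request the image with a matching Referer.
    procedure TForm2.Button1Click(Sender: TObject);
    var
      JPI: TJPEGImage;
      StreamData: TMemoryStream;
    begin
      StreamData := TMemoryStream.Create;
      JPI := TJPEGImage.Create;
      try
        IdHTTP1.Get('https://new.aol.com/productsweb/');       // establish the session cookies
        IdHTTP1.Request.Referer := 'https://new.aol.com/productsweb/';
        IdHTTP1.Get(Edit1.Text, StreamData);                   // e.g. .../WordVerImage?91868359
        StreamData.Position := 0;
        JPI.LoadFromStream(StreamData);                        // assumes the image is JPEG
        Image1.Picture.Assign(JPI);
      finally
        JPI.Free;
        StreamData.Free;
      end;
    end;

    The image served there may also be a GIF rather than a JPEG, so if decoding starts failing once the 418 goes away, checking IdHTTP1.Response.ContentType before choosing TJPEGImage or TGIFImage is worth doing (the unit already uses both jpeg and GIFImg).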

    Read the article

  • How does this decorator make a call to the 'register' method?

    - by BryanWheelock
    I'm trying to understand what is going on in the decorator @not_authenticated. The next step in the TraceRoute is to the method 'register' which is also located in django_authopenid/views.py which I just don't understand because I don't see anywhere that register is even mentioned in signin() How is the method 'register' called? def not_authenticated(func): """ decorator that redirect user to next page if he is already logged.""" def decorated(request, *args, **kwargs): if request.user.is_authenticated(): next = request.GET.get("next", "/") return HttpResponseRedirect(next) return func(request, *args, **kwargs) return decorated @not_authenticated def signin(request,newquestion=False,newanswer=False): """ signin page. It manage the legacy authentification (user/password) and authentification with openid. url: /signin/ template : authopenid/signin.htm """ request.encoding = 'UTF-8' on_failure = signin_failure next = clean_next(request.GET.get('next')) form_signin = OpenidSigninForm(initial={'next':next}) form_auth = OpenidAuthForm(initial={'next':next}) if request.POST: if 'bsignin' in request.POST.keys() or 'openid_username' in request.POST.keys(): form_signin = OpenidSigninForm(request.POST) if form_signin.is_valid(): next = clean_next(form_signin.cleaned_data.get('next')) sreg_req = sreg.SRegRequest(optional=['nickname', 'email']) redirect_to = "%s%s?%s" % ( get_url_host(request), reverse('user_complete_signin'), urllib.urlencode({'next':next}) ) return ask_openid(request, form_signin.cleaned_data['openid_url'], redirect_to, on_failure=signin_failure, sreg_request=sreg_req) elif 'blogin' in request.POST.keys(): # perform normal django authentification form_auth = OpenidAuthForm(request.POST) if form_auth.is_valid(): user_ = form_auth.get_user() login(request, user_) next = clean_next(form_auth.cleaned_data.get('next')) return HttpResponseRedirect(next) question = None if newquestion == True: from forum.models import AnonymousQuestion as AQ session_key = request.session.session_key qlist = AQ.objects.filter(session_key=session_key).order_by('-added_at') if len(qlist) > 0: question = qlist[0] answer = None if newanswer == True: from forum.models import AnonymousAnswer as AA session_key = request.session.session_key alist = AA.objects.filter(session_key=session_key).order_by('-added_at') if len(alist) > 0: answer = alist[0] return render('authopenid/signin.html', { 'question':question, 'answer':answer, 'form1': form_auth, 'form2': form_signin, 'msg': request.GET.get('msg',''), 'sendpw_url': reverse('user_sendpw'), }, context_instance=RequestContext(request)) Looking at the request, it seems that account/register/ does reference the register method with 'PATH_INFO': u'/account/register/' Here is the request: <WSGIRequest GET:<QueryDict: {}>, POST:<QueryDict: {u'username': [u'BryanWheelock'], u'email': [u'[email protected]'], u'bnewaccount': [u'Signup']}>, COOKIES:{'__utma': '127460431.1218630960.1266769637.1266769637.1266864494.2', '__utmb': '127460431.3.10.1266864494', '__utmc': '127460431', '__utmz': '127460431.1266769637.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)', 'sessionid': 'fb15ee538320170a22d3a3a324aad968'}, META:{'CONTENT_LENGTH': '74', 'CONTENT_TYPE': 'application/x-www-form-urlencoded', 'DOCUMENT_ROOT': '/usr/local/apache2/htdocs', 'GATEWAY_INTERFACE': 'CGI/1.1', 'HTTP_ACCEPT': 'application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5', 'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3', 'HTTP_ACCEPT_ENCODING': 'gzip,deflate,sdch', 
'HTTP_ACCEPT_LANGUAGE': 'en-US,en;q=0.8', 'HTTP_CACHE_CONTROL': 'max-age=0', 'HTTP_CONNECTION': 'close', 'HTTP_COOKIE': '__utmz=127460431.1266769637.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utma=127460431.1218630960.1266769637.1266769637.1266864494.2; __utmc=127460431; __utmb=127460431.3.10.1266864494; sessionid=fb15ee538320170a22d3a3a324aad968', 'HTTP_HOST': 'workproject.com', 'HTTP_ORIGIN': 'http://workproject.com', 'HTTP_REFERER': 'http://workproject.com/account/signin/complete/?next=%2F&janrain_nonce=2010-02-22T18%3A49%3A53ZG2KXci&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fud&openid.response_nonce=2010-02-22T18%3A49%3A53Znxxxxxxxxxw&openid.return_to=http%3A%2F%2Fworkproject.com%2Faccount%2Fsignin%2Fcomplete%2F%3Fnext%3D%252F%26janrain_nonce%3D2010-02-22T18%253A49%253A53ZG2KXci&openid.assoc_handle=AOQobUepU4xs-kGg5LiyLzfN3RYv0I0Jocgjf_1odT4RR9zfMFpQVpMg&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle&openid.sig=Jf76i2RNhqpLTJMjeQ0nnQz6fgA%3D&openid.identity=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItxxxxxxxxxs9CxHQ3PrHw_N5_3j1HM&openid.claimed_id=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItOaxxxxxxxxxxx4s9CxHQ3PrHw_N5_3j1HM', 'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_8; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.7 Safari/532.9', 'HTTP_X_FORWARDED_FOR': '96.8.31.235', 'PATH': '/usr/bin:/bin', 'PATH_INFO': u'/account/register/', 'PATH_TRANSLATED': '/home/spirituality/webapps/work/spirit_app.wsgi/account/register/', 'QUERY_STRING': '', 'REMOTE_ADDR': '127.0.0.1', 'REMOTE_PORT': '59956', 'REQUEST_METHOD': 'POST', 'REQUEST_URI': '/account/register/', 'SCRIPT_FILENAME': '/home/spirituality/webapps/spirituality/spirit_app.wsgi', 'SCRIPT_NAME': u'', 'SERVER_ADDR': '127.0.0.1', 'SERVER_ADMIN': '[no address given]', 'SERVER_NAME': 'workproject.com', 'SERVER_PORT': '80', 'SERVER_PROTOCOL': 'HTTP/1.0', 'SERVER_SIGNATURE': '', 'SERVER_SOFTWARE': 'Apache/2.2.12 (Unix) mod_wsgi/2.5 Python/2.5.4', 'mod_wsgi.application_group': 'www.workProject.com|', 'mod_wsgi.callable_object': 'application', 'mod_wsgi.listener_host': '', 'mod_wsgi.listener_port': '25931', 'mod_wsgi.process_group': '', 'mod_wsgi.reload_mechanism': '0', 'mod_wsgi.script_reloading': '1', 'mod_wsgi.version': (2, 5), 'wsgi.errors': <mod_wsgi.Log object at 0xb7ce0038>, 'wsgi.file_wrapper': <built-in method file_wrapper of mod_wsgi.Adapter object at 0xb7e94b18>, 'wsgi.input': <mod_wsgi.Input object at 0x999cc78>, 'wsgi.multiprocess': True, 'wsgi.multithread': False, 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.version': (1, 0)}>
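    The short answer to how register gets invoked (an inference from the request dump, not from code shown here): the decorator never calls it. decorated() only redirects logged-in users and otherwise calls the wrapped signin view; register runs as a completely separate request because the signup form rendered by signin.html posts to /account/register/ (visible as PATH_INFO and REQUEST_URI above), and Django's URL dispatcher maps that path to the view. A sketch of what the relevant URLconf typically looks like in django_authopenid (old Django 1.x style; the names are illustrative, the real patterns live in django_authopenid/urls.py of the installed copy):

    # Sketch of the URL wiring that actually invokes `register`.
    from django.conf.urls.defaults import patterns, url

    urlpatterns = patterns('django_authopenid.views',
        url(r'^signin/$', 'signin', name='user_signin'),
        # The signup form rendered by signin.html posts here, so a request with
        # PATH_INFO == '/account/register/' is dispatched straight to `register`;
        # `signin` never references it directly.
        url(r'^register/$', 'register', name='user_register'),
    )

    So the trace goes signin, then the rendered signin.html, then the browser's POST to /account/register/, then the URL resolver, then register; nothing inside signin() ever calls register.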

    Read the article

< Previous Page | 10 11 12 13 14 15 16  | Next Page >