Search Results

Search found 6517 results on 261 pages for 'localhost'.

Page 138 of 261

  • XP IIS no longer listens on port 80 or 443 after installing Oracle 9i HTTP Server

    - by Nassign
    I installed the Oracle 9i HTTP Server together with the database. After restarting the PC, even though I restarted IIS and stopped the Oracle HTTP Server, the page served at http://localhost/ is still the Oracle HTTP Server index page. Also, inetinfo.exe no longer listens on port 80 or the SSL port 443, even after I restart IIS and the World Wide Web Publishing service. Any idea what setting Oracle changed when I installed Oracle 9i? The executable associated with the OracleOraHome90HTTPServer service is C:\oracle\ora90\Apache\Apache\Apache.exe. I already checked the task list and Apache is really not running, yet still no process listens on port 80 even though IIS restarts successfully. Any ideas how to fix this?
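
    A quick way to see what, if anything, is bound to port 80 on Windows XP is netstat plus tasklist. This is only a diagnostic sketch, not a fix, and the PID is a placeholder:

        rem list TCP listeners, find the one on port 80, then name its owning process
        netstat -ano | findstr /C:":80 "
        tasklist /FI "PID eq 1234"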

    Read the article

  • Postfix configuration problem

    - by dhanya
    Can anyone help me by giving your Postfix configuration file as a reference so that I can find my mistakes? I'm working on SUSE Linux Enterprise Server. My goal is to set up a mail server on a campus network. Postfix shows it is running, but no mail is delivered to /var/spool/mail when I send mail using the mail command at the terminal. Here is my main.cf file; please help me find a solution:
        readme_directory = /usr/share/doc/packages/postfix-doc/README_FILES
        inet_protocols = all
        biff = no
        mail_spool_directory = /var/mail
        canonical_maps = hash:/etc/postfix/canonical
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_alias_domains = hash:/etc/postfix/virtual
        relocated_maps = hash:/etc/postfix/relocated
        transport_maps = hash:/etc/postfix/transport
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        masquerade_exceptions = root
        masquerade_classes = envelope_sender, header_sender, header_recipient
        myhostname = cmail.cetmail
        delay_warning_time = 1h
        message_strip_characters = \0
        program_directory = /usr/lib/postfix
        inet_interfaces = all
        #inet_interfaces = 127.0.0.1
        masquerade_domains = cetmail
        mydestination = cmail.cetmail, localhost.cetmail, cetmail
        defer_transports =
        mynetworks_style = subnet
        disable_dns_lookups = no
        relayhost = postfix
        mailbox_command = cyrus
        mailbox_transport =
        strict_8bitmime = no
        disable_mime_output_conversion = no
        smtpd_sender_restrictions = hash:/etc/postfix/access
        smtpd_client_restrictions =
        smtpd_helo_required = no
        smtpd_helo_restrictions =
        strict_rfc821_envelopes = no
        smtpd_recipient_restrictions = permit_mynetworks,reject_unauth_destination
        smtp_sasl_auth_enable = no
        smtpd_sasl_auth_enable = no
        smtpd_use_tls = no
        smtp_use_tls = no
        alias_maps = hash:/etc/aliases
        mailbox_size_limit = 0
        message_size_limit = 10240000
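
    As a first diagnostic step (a sketch, not part of the poster's setup), checking the effective configuration, the queue and the mail log usually shows where delivery stops; on SUSE the Postfix log is typically /var/log/mail:

        postconf -n              # settings that differ from the built-in defaults
        mailq                    # is anything stuck in the queue?
        tail -n 50 /var/log/mail # recent delivery attempts and errors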

    Read the article

  • Remote servlet via mod_jk?

    - by marioosh.net
    I have a remote servlet, for example https://[ip_address]/servlet (https://[ip_address]/ is the Tomcat main page), that I need to expose through a local Apache HTTPd server. My mod_jk configuration looks like the one below, but it doesn't work. Something works, because when I type https://localhost/console in a browser I get the Tomcat error page "HTTP Status 404 - /console/".
        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile /var/log/apache2/mod_jk.log
        JkLogLevel info
        JkMount /console/* ajp13
    workers.properties:
        worker.ajp13.type=ajp13
        worker.ajp13.host=[ip_address]
        worker.ajp13.port=8009
    The remote Tomcat is configured correctly, I think: it listens on port 8009, and the servlet at https://[ip_address]/servlet works too.
        <Connector port="8009" protocol="AJP/1.3" redirectPort="443" />
    Can anybody help?

    Read the article

  • How to connect to an FTP server from outside the LAN?

    - by srisar
    Hi all, I'm setting up a home FTP server so I can share some files with my friends outside my LAN. I am using FileZilla Server and everything is configured. http://www.canyouseeme.org/ even sees my port 21 as open, but when I connect through an FTP client or through a web browser, it says "530 User saravana access denied." How can I solve this problem? I checked the user name and password and everything is good, but I didn't set up passive mode (I don't know how to set it); if that is causing the trouble, can anyone help me? By the way, I can connect locally through localhost.

    Read the article

  • What ssh command would I use to set up "backwards listening"

    - by Nathan
    Machine A is behind a firewall. I have physical access to it, but I want to log into it remotely, and I do not have access to the firewall settings. Machine B is remote and not behind any firewall (it's my Linode). Machine C is the mobile device I'm going to attempt to ssh into A from. Is there an ssh command that I can run from machine A that connects to machine B and stays open, and that will allow me to log into A from C, via B? From the manual I'd guess it would be to run the following on A:
        ssh -R *:9999:localhost:22 me@B
    and then run this on C:
        ssh me@B -p 9999
    but the previous command reports "Connection refused."
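
    One common variant, shown here only as a sketch with the same placeholder hostnames, keeps the forwarded port bound to localhost on B (so no GatewayPorts change is needed) and then hops through B:

        # on A: forward B's local port 9999 back to A's sshd
        ssh -N -R 9999:localhost:22 me@B
        # on C: log into B first, then into A through the tunnel
        ssh me@B
        ssh -p 9999 me@localhost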

    Read the article

  • Terminal issues on OS X with XAMPP and Yii framework

    - by Jake
    I'm trying to configure the Yii framework but am having problems with the terminal commands, and am also having difficulty setting up the webapp demo. I am using Mac OS X Snow Leopard, have installed XAMPP, and placed the 'yii' folder in the xamppfiles/htdocs folder. I have verified http://localhost/yii/requirements/index.php, which is working fine. I have tried the following, but nothing seems to work, so if anyone can point out what I'm doing wrong it would be very much appreciated. In fact, any directory I search for is not recognized (see below), so I'm thinking I need to do something else for this to work, but I have searched and searched and found no answer.
        ~ Jake$ /applications/xampp/xamppfiles/yii
        -bash: /applications/xampp/xamppfiles/yii: No such file or directory
        ~ Jake$ /documents
        -bash: /documents: No such file or directory
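
    For reference, a minimal sketch of creating the demo application with Yii 1.x under a default XAMPP install on OS X; the paths and the "testdrive" name are assumptions, not taken from the question:

        cd /Applications/XAMPP/xamppfiles/htdocs/yii
        # yiic lives in the framework/ directory of the Yii 1.x release
        /Applications/XAMPP/xamppfiles/bin/php framework/yiic.php webapp ../testdrive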

    Read the article

  • Receiving POST data in ASP.NET

    - by grast
    Hi, I want to use ASP.NET for code generation in a C# desktop application. To achieve this, I set up a simple host (derived from System.MarshalByRefObject) that processes a System.Web.Hosting.SimpleWorkerRequest via HttpRuntime.ProcessRequest. This processes the ASPX script specified by the incoming request (using System.Net.HttpListener to wait for requests). The client part is represented by a System.ComponentModel.BackgroundWorker that builds the System.Net.HttpWebRequest and receives the response from the server. A simplified version of my client-part code looks like this:
        private void SendRequest(object sender, DoWorkEventArgs e)
        {
            // create request with GET parameter
            var uri = "http://localhost:9876/test.aspx?getTest=321";
            var request = (HttpWebRequest)WebRequest.Create(uri);

            // append POST parameter
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            var postData = Encoding.Default.GetBytes("postTest=654");
            var postDataStream = request.GetRequestStream();
            postDataStream.Write(postData, 0, postData.Length);

            // send request, wait for response and store/print content
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
                {
                    _processsedContent = reader.ReadToEnd();
                    Debug.Print(_processsedContent);
                }
            }
        }
    My server-part code looks like this (without exception handling etc.):
        public void ProcessRequests()
        {
            // HttpListener at http://localhost:9876/
            var listener = SetupListener();

            // SimpleHost created by ApplicationHost.CreateApplicationHost
            var host = SetupHost();

            while (_running)
            {
                var context = listener.GetContext();
                using (var writer = new StreamWriter(context.Response.OutputStream))
                {
                    // process ASP script and send response back to client
                    host.ProcessRequest(GetPage(context), GetQuery(context), writer);
                }
                context.Response.Close();
            }
        }
    So far all of this works fine as long as I just use GET parameters. But when it comes to receiving POST data in my ASPX script, I run into trouble. For testing I use the following script:
        // GET parameters are working:
        var getTest = Request.QueryString["getTest"];
        Response.Write("getTest: " + getTest);      // prints "getTest: 321"

        // don't know how to access POST parameters:
        var postTest1 = Request.Form["postTest"];   // Request.Form is empty?!
        Response.Write("postTest1: " + postTest1);  // so this prints "postTest1: "
        var postTest2 = Request.Params["postTest"]; // Request.Params is empty?!
        Response.Write("postTest2: " + postTest2);  // so this prints "postTest2: "
    It seems that the System.Web.HttpRequest object I'm dealing with in the ASPX script does not contain any information about my POST parameter "postTest". I inspected it in debug mode and none of the members contained either the parameter name "postTest" or the parameter value "654". I also tried the BinaryRead method of Request, but unfortunately it is empty. This corresponds to Request.InputStream==null and Request.ContentLength==0. And to make things really confusing, the Request.HttpMethod member is set to "GET"?! To isolate the problem I tested the code using a PHP script instead of the ASPX script. It is very simple:
        print_r($_GET);  // prints all GET variables
        print_r($_POST); // prints all POST variables
    And the result is:
        Array ( [getTest] => 321 )
        Array ( [postTest] => 654 )
    So with the PHP script it works; I can access the POST data. Why doesn't the ASPX script? What am I doing wrong? Is there a special accessor or method in the Response object? Can anyone give a hint, or does anyone know how to solve this? Thanks in advance.
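
    As a quick way to exercise the listener outside the BackgroundWorker client (just a testing sketch; the URL and parameters are the ones from the question), curl can send the same GET and POST data:

        curl -v "http://localhost:9876/test.aspx?getTest=321" \
             -H "Content-Type: application/x-www-form-urlencoded" \
             --data "postTest=654"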

    Read the article

  • How to enable remote access to a MySQL server on an Azure virtual machine

    - by Rees
    I have an Azure virtual machine running Ubuntu 13.04 with a MySQL server installed on it. I am trying to connect to the MySQL server remotely, but I get the error "Can't connect to MySQL server on {IP}". I have already done the following:
    * commented out the bind-address in /etc/mysql/my.cnf
    * commented out skip-external-locking in the same my.cnf
    * ufw allow mysql
    * iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT
    * set up an Azure endpoint for MySQL
    * sudo netstat -lpn | grep 3306 does indeed show mysql LISTENING
    * GRANT ALL ON *.* TO remote@'%' IDENTIFIED BY 'password';
    * GRANT ALL ON *.* TO remote@'localhost' IDENTIFIED BY 'password';
    * /etc/init.d/mysql restart
    * I can connect via SSH tunneling, but not without it
    * I have spun up an identical Ubuntu 13.04 server on Rackspace and SUCCESSFULLY connected using the same procedures outlined here.
    None of the above works on my Azure server, however. I thought the creation of an endpoint would work, but no luck. Any help please? Is there something I'm missing entirely?
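
    Two quick client-side checks (a sketch; the hostname and credentials are placeholders) help separate a network problem from a MySQL grant problem:

        # does anything answer on 3306 at all? (a MySQL banner vs. a timeout)
        nc -vz your-vm.cloudapp.net 3306
        # if the port answers, try the actual login
        mysql -h your-vm.cloudapp.net -u remote -p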

    Read the article

  • How to set up server/domain name correctly in hosts file with HTTPS

    - by Byakugan
    I am setting up a local network with these kinds of machines:
    1) A main server that connects to the internet with a static IP.
    2) A second computer connected to the first one locally, with an address like 192.168.0.2 - when I enter this address in the address bar it is as if I had typed localhost on the main server, so it should show up in my local web browser, etc.
    The IP has a domain name and a router connected for it, for example www.domain.com, so I added lines like these to my main server's hosts file (Linux powered):
        192.168.0.2 domain.com www.domain.com
    It was working OK: when I entered my domain name on the local computer it showed my site. But after some time I added an HTTPS certificate and added this line to my Apache server:
        Redirect permanent / https://www.domain.com/
    And now it does not work, even when I add something like this to my hosts file:
        192.168.0.2 https://www.domain.com
    So, any idea how to make this work? Thank you.

    Read the article

  • Can't access a local site on the LAN

    - by Dilawar
    I have lighttpd set up on a machine (say IP 10.107.105.13) with the following details:
        inet addr: 10.107.105.13  Bcast: 10.107.111.255  Mask: 255.255.240.0
    I can access my site on this computer by browsing to http://localhost/index.html in Firefox. Now I am trying to access this site from another computer with the following details:
        inet addr: 10.14.42.7  Bcast: 10.14.42.255  Mask: 255.255.255.0
    But it says 'access denied'. nmap 10.107.105.13 gives the following output:
        PORT     STATE SERVICE
        22/tcp   open  ssh
        80/tcp   open  http
        1234/tcp open  hotline
        3306/tcp open  mysql
        9418/tcp open  git
    The following is the output of iptables -L -n -v on 10.107.105.13 (the FORWARD and OUTPUT sections are empty):
        141 11207 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
          0     0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
    What is wrong with all this?
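
    Since port 80 is open and iptables accepts the traffic, the denial likely comes from lighttpd itself; a hedged check (paths assume a default /etc/lighttpd layout) is to look for access restrictions or a bind address in the configuration:

        # directives that could reject outside clients or bind lighttpd to one address
        grep -rn -e 'url.access-deny' -e 'server.bind' -e 'remoteip' /etc/lighttpd/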

    Read the article

  • Creating a cookieless application on a development machine with ASP.NET

    - by zaladane
    I am thinking about setting up a new domain to host static content for my website and have it cookieless, just like Stack Overflow does with their static domain. Before going ahead and buying the domain and setting it up, I wanted to test it on my development machine first under localhost (I have to mention that I am planning on having IIS serve the static files on the new domain). I therefore created a new application under IIS and disabled session state and forms authentication. When my main application needs resources like CSS, images and JS, I use the path to the "static" application where they are hosted. The problem is that when I look at the request and the response for the requested files, they still have the session_id cookie defined, as well as the ASP.NET authentication cookie. Is it at all possible to accomplish what I am trying to do on a development machine, or do I have to just go ahead and purchase the new domain, which hopefully will make things right? I tried to read about cookieless domains but can't figure out what I might be missing.

    Read the article

  • Pure-FTPd: Which IP Address To Use For Login?

    - by mschooler93
    I am trying to connect to my VirtualBox LAMP server with FileZilla, and it's not working. I have a static IP set and an FTP user set up (with Pure-FTPd), and I can connect successfully with ftp localhost, but not with FileZilla using the same credentials. I suspect this is because I am using the wrong IP address in the Host box, so is there a command I can use to display the IP address I should use to connect to my FTP server? By the way, the error given in FileZilla is:
        Status: Connection established, waiting for welcome message...
        Response: 425 Sorry, invalid address given
        Error: Could not connect to server
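
    To list the addresses the VM actually has (a sketch; which one is reachable from the host depends on whether the VirtualBox adapter is NAT or bridged):

        # show every address assigned to the guest's interfaces
        ip addr show
        # or, on older systems
        ifconfig -a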

    Read the article

  • Is there a remote file transfer command that preserves nanosecond timestamps?

    - by Denver Gingerich
    I've tried transferring files using scp and rsync on Ubuntu 10.04, but neither of them preserves more than second precision. Here's an example:
        $ touch test1
        $ scp -p test1 localhost:test2
        $ ls -l --full-time test*
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.579717282 -0500 test1
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.000000000 -0500 test2
        $ cp -p test1 test2
        $ ls -l --full-time test*
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.579717282 -0500 test1
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.579717282 -0500 test2
        $
    A straight copy works fine, but scp truncates the timestamp. Are there any tools (preferably similar to scp or rsync in their usage) that do remote file transfers while preserving nanosecond timestamps? I could write a hacky script to do it, but I'd rather not.
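
    One possible workaround to experiment with, offered only as an untested sketch rather than a confirmed solution: GNU tar's pax format records sub-second mtimes, so piping a pax archive over ssh may keep them, provided both ends run a GNU tar and a filesystem with nanosecond support. The host and destination path are placeholders:

        # pack with the POSIX pax format (stores high-resolution mtime), unpack remotely
        tar -cf - --format=posix test1 | ssh user@remotehost 'tar -xpf - -C /destination/dir'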

    Read the article

  • archiva/jetty with nginx ssl proxy: getting http responses

    - by numb3rs1x
    I've been banging my head against this for a while now. I have an Archiva repository server that I'm trying to proxy through nginx with SSL offloading. Archiva has a Jetty server built in that is listening on port 8008 on localhost. I'm able to get to the Archiva server through the proxy, but it wants to return http responses and not https responses. I thought that setting the following headers was supposed to tell the server to respond with https:
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
    I also tried "proxy_redirect default;". It seems that the Jetty/Archiva server is not recognizing these, or something more is needed. I've been scouring forums, and as far as I can tell everything is set as it should be. I'm not sure where else to check at this point. Has anyone had any success with this?
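
    A quick way to see exactly where the http URLs come back (a diagnostic sketch; the hostname is a placeholder) is to inspect the redirect headers through the proxy:

        # -k skips certificate checks, -I fetches headers only
        curl -skI https://archiva.example.com/ | grep -i '^location'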

    Read the article

  • CNTLM issue with intranet (maybe DNS)

    - by htorque
    On my Linux box I need to use an ISA proxy that requires authentication to reach the internet. I therefore installed CNTLM and configured it to point to the proxy address and listen on port 4321. I then configured my GNOME desktop to use localhost:4321 as the global proxy for HTTP and HTTPS. The result: I can connect to the internet, I can ping intranet IPs, and I do get name resolution for intranet sites, yet I cannot ping intranet hosts by name or open any intranet site in a browser (configured to use the distribution's proxy) unless I use the site's IP address. I tried blocking the intranet IP range in the CNTLM config file, without luck.

    Read the article

  • How to rotate Tomcat 6 logs on Windows every night

    - by Danilo Brambilla
    Hi all, our Tomcat 6 is running on a Windows Server 2003 server, producing logs in the Program Files\Apache Software Foundation\Tomcat 6.0\logs folder. Only catalina.YYYY-MM-DD.log rotates every night. The admin, host-manager, jakarta, localhost, manager, stderr and stdout logs do not rotate and are dated at the last server restart. These files are mostly empty and always locked. How can I set Tomcat to rotate all these logs every night (if possible without a server/service restart)? Thank you in advance for your help.

    Read the article

  • I can't externally access my home server's wordpress website

    - by piratepartypumpkin
    Basically, I can access everything just fine using 127.0.0.1, but if I use my external IP (123.123.123.123), I get "page not found". My router is forwarding HTTP port 80 to port 8080 on my server's internal IP address. In other words:
        Application: HTTP | Start: 80 | End: 8080 | Protocol: Both | IP Address 192.168.0.101 | Enable [YES]
    I know it's forwarding properly, because when I stop port forwarding, I can access my router page by using my external IP. My virtual hosts file is:
        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot /opt/lampstack-5.3.16-0/apps/wordpress
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>
    and my httpd.conf file is:
        Listen 80
        ServerName localhost:80
        DocumentRoot "/opt/lampstack-5.3.16-0/apache2/htdocs"
        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            deny from all
        </Directory>
        <Directory "/opt/lampstack-5.3.16-0/apache2/htdocs">
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
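
    To see which port Apache actually answers on from inside the LAN (a diagnostic sketch; run it from another machine on the network), compare the forwarded target port with port 80 directly:

        # the router forwards external port 80 to internal port 8080
        curl -I http://192.168.0.101:8080/
        # versus the port Apache is configured to Listen on
        curl -I http://192.168.0.101:80/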

    Read the article

  • Can I set up a 'Deny from x' that overrides other confs for debugging?

    - by Nick T
    I'm currently working on developing/deploying a Django application on Apache and am often fiddling with the debug settings, which alter how Django accepts connections by ignoring or using ALLOWED_HOSTS. If DEBUG is False it uses them, which is handy to keep up some walls around my construction site; however, the useful info it spits out when True is quite nice. I'm currently just using an SSH tunnel and allowing only localhost when DEBUG is False, but how can I keep everyone out without relying on the aforementioned ALLOWED_HOSTS? Editing the httpd.conf file, which is in source control, is a bit irritating; I've accidentally committed a few botched configs.

    Read the article

  • Deploying web services on a RHEL 5 box using Apache/Tomcat/Axis/Java

    - by Deepak Konidena
    Hi, I am new to the web services scene. I currently have access to a RHEL 5 box and I need to deploy a Java web service on it. It runs Apache; I know this because I have a website hosted on this machine. Now I want to deploy a web service onto this website so I can just pass a link to someone when they need to access it. Could someone point out a resource or explain what I need to get the web service deployed using Tomcat/Apache Axis and Java? I have done this on Windows (hosted on localhost) but couldn't quite figure things out on Linux. Any help is greatly appreciated. Thanks, Deepak.

    Read the article

  • mysql remote connection [closed]

    - by Fel
    I spoke with my host's support to find out what my SQL hostname is. He said that the only way is to add permissions for my current IP under Remote MySQL (cPanel). So the config in HeidiSQL, for example, would be localhost / userxxx / passxxx. But I have a dynamic IP, so I need to change the permissions every time, correct? Adding the % wildcard is probably not a good idea, so how can I solve this problem? Why don't I have something like mysql.something.com? Sorry if this question is too basic.

    Read the article

  • Perl not working with Nginx via fastcgi, cannot decipher error logs

    - by ProfessionalAmateur
    I'm running CentOS 6.2 and Nginx 1.2.3, following these Linode instructions to get Perl to work with Nginx. I've done everything up to the point of testing an actual Perl file. When I do this, the browser says:
        The page you are looking for is temporarily unavailable. Please try again later.
    And my Nginx error log shows the following:
        2012/09/02 22:09:58 [error] 20772#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.102, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:8999", host: "192.168.1.10:81"
    I'm stuck at this point. I'm not sure if it matters, but I also have spawn-fcgi and php-fpm serving PHP files on this site; that should be 100% separate from the Perl FastCGI setup (different port, etc.). How can I troubleshoot this?
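
    Since the error is a refused connection to 127.0.0.1:8999, the first thing to check (a diagnostic sketch) is whether the Perl FastCGI wrapper is actually listening there:

        # anything bound to the upstream port nginx is trying to reach?
        netstat -tlnp | grep 8999
        # is the fastcgi wrapper process running at all?
        ps aux | grep -i fcgi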

    Read the article

  • UFW blocks SSL connections Varnish/Apache2 on Ubuntu 12.04

    - by user1383815
    I have installed Virtualmin on an Ubuntu 12.04 server and I'm using a LAMP stack with Varnish (:80) in front of Apache (:8000). However, I cannot access https when UFW is enabled; when I disable UFW, everything works fine. Here is what UFW logging shows when I attempt to access a website via https:
        Dec 14 05:42:29 localhost kernel: [64491.327263] [UFW BLOCK] IN=eth0 OUT= MAC=e4:11:5b:e5:ef:8c:00:d0:02:8f:f0:00:08:00 SRC=MY_IP_ADDRESS DST=SERVER_IP_ADDRESS LEN=52 TOS=0x00 PREC=0x00 TTL=115 ID=2524 DF PROTO=TCP SPT=56430 DPT=20000 WINDOW=8192 RES=0x00 SYN URGP=0
    Here is my UFW ruleset:
        $ ufw status
        Status: active
        To              Action  From
        --              ------  ----
        2221            ALLOW   Anywhere
        10000           ALLOW   Anywhere
        80              ALLOW   Anywhere
        21              ALLOW   Anywhere
        8000            ALLOW   Anywhere
        Apache Secure   ALLOW   Anywhere
        2221            ALLOW   Anywhere (v6)
        10000           ALLOW   Anywhere (v6)
        80              ALLOW   Anywhere (v6)
        21              ALLOW   Anywhere (v6)
        8000            ALLOW   Anywhere (v6)
        Apache Secure   ALLOW   Anywhere (v6)
    Does anyone have any pointers on how to fix this problem? Thank you for your time.
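
    For reference, the DPT field in the [UFW BLOCK] line names the blocked destination port; a hedged sketch of opening a port with UFW (443 is shown only as an example - substitute whichever port the blocked HTTPS traffic actually targets):

        # allow the port the [UFW BLOCK] entry reports in DPT=
        sudo ufw allow 443/tcp
        sudo ufw reload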

    Read the article

  • sqlcmd backup script failing

    - by Bryan
    I'm trying to use a simple batch script to back up a local instance of SQL Express 2012, as follows:
        @echo off
        SET BACKUP_DIR=E:\BackupData
        SET SERVER=.\\sqlexpress
        set dom=%date:~0,2%
        set month=%date:~3,2%
        set year=%date:~6,4%
        set file=%year%-%month%-%dom%
        sqlcmd -S %SERVER% -d master -Q "exec sp_msforeachdb 'BACKUP DATABASE [?] TO DISK=''%BACKUP_DIR%\?.Full.%file%.bak'''
    The script fails to run with the following error:
        Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : Client unable to establish connection due to prelogin failure.
    This is on Server 2008 R2; my SQL database (on localhost) instance is named SQLEXPRESS. There is an instance of SQL Express 2008 on the system (hence client 10.0). The database is configured to use a trusted connection, and the .NET desktop software deployed on our network PCs is able to access the database without any problem. Am I missing something obvious here? I've done a fair amount of searching for this error message and haven't found anything particularly useful so far.
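
    A hedged first check is whether sqlcmd can reach the instance at all with the same connection settings, outside the script (the instance name is the one mentioned in the question):

        rem trusted connection (-E) to the named SQLEXPRESS instance on localhost
        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT @@SERVERNAME, @@VERSION"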

    Read the article

  • Not receiving bounced emails from PHPList

    - by user1780242
    I am sending a test campaign to my own address and to a fictitious nonsense address; I am receiving the email at my own account, but I am not receiving any bounces. My settings in the config.php file are:
        $message_envelope = '[email protected]';
        $bounce_protocol = 'pop';
        $bounce_mailbox_host = 'localhost';
        $bounce_mailbox_user = '[email protected]';
        $bounce_mailbox_password = 'XXXXXX';
    What's the next step in figuring out the problem? I also tried both variations of the following:
        $bounce_mailbox_port = "110/pop3/notls";
        #$bounce_mailbox_port = "110/pop3";
    I am running a GoDaddy CentOS 6 VPS with Plesk 11.
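
    Two hedged checks that usually narrow this down: confirm the local POP3 service accepts a login for the bounce account, and remember that phpList only pulls bounces in when its bounce-processing job runs. The session below is only a sketch; the credentials are the ones from config.php:

        # does the local POP3 service answer and accept the bounce account?
        telnet localhost 110
        # then type: USER <bounce address>, PASS <password>, LIST, QUIT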

    Read the article

  • How do I set up an IP address on a Linux VM running in VM Player so I can access it from my Windows 7 host?

    - by BradyKelly
    I have just installed an Openbravo appliance in VMware Player on my Windows 7 host. I am now staring at a command prompt that tells me to go to http://localhost to access the ERP system, but I cannot find any browser on the appliance. I am guessing I should rather follow their advice to configure an IP address for the Linux VM and just access that from a Windows browser on my host. How do I go about this? More specifically:
    1) How do I choose a local IP address to assign?
    2) How do I set things up so that this IP address is visible to my Windows host?
    3) Their help says to assign a DNS, to make the server visible to the internet, but internet visibility per se is not needed. How should I interpret or adapt this help for that? The part in question reads: "Finally, to make the IP address available to the Internet, assign some DNS servers to it:"
        $ echo "nameserver IP_DNS1" >> /etc/resolv.conf
        $ echo "nameserver IP_DNS2" >> /etc/resolv.conf
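
    A minimal sketch of the usual route, assuming the appliance's interface is eth0 and the VMware Player adapter is left in NAT or bridged mode: let the VM pick up an address via DHCP, read it off, and browse to it from Windows:

        # inside the VM: request a DHCP lease and show the resulting address
        dhclient eth0
        ip addr show eth0    # or: ifconfig eth0
        # then, on the Windows host, open http://<that address>/ in a browser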

    Read the article
