Search Results

Search found 7776 results on 312 pages for 'configure in'.


  • both ssl and non-ssl on single port

    - by Zulakis
    I would like to make my Apache 2 web server serve both HTTP and HTTPS on the same port. With the different methods I tried, it was either not working over HTTP or over HTTPS. How can I do this?

    Update: If I enable SSL and then visit the site over plain HTTP, I get a page like this:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://server/"><b>https://server/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny16 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g Server at server Port 443</address>
        </body></html>

    Because of this, it seems very much possible to have both HTTP and HTTPS on the same port. A first step would be to change this default page so it presents a 301 Moved header instead.

    Update 2: According to this, it is possible. Now the question is just how to configure Apache to do it.
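    One hedged starting point (a sketch, not verified on this setup): mod_ssl already detects plain HTTP arriving on the SSL port, as the 400 page above shows, so replacing that error page with a redirect gets close to the desired behaviour. The hostname and certificate paths are placeholders:

        # Sketch for Apache 2.2 with mod_ssl: when a plain-HTTP request hits
        # the SSL port, answer the resulting 400 with a redirect to HTTPS.
        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile    /etc/apache2/ssl/server.crt
            SSLCertificateKeyFile /etc/apache2/ssl/server.key
            # ErrorDocument with an absolute URL makes Apache send a
            # redirect instead of the default "Bad Request" page.
            ErrorDocument 400 https://server/
        </VirtualHost>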


  • How to Install OS without DVD and USB boot

    - by Timothy James Reed
    I just purchased a used Dell F1D 1U rack-mount server and would like to install Ubuntu or ESXi with virtual disks, or anything for that matter. I've read that Dells have a built-in DRAC so you can access them remotely. There are 3 Ethernet plugs in the back, but I don't know which one to use. In the BIOS it says I can configure remote access on [com1] or [com2]; I'm not sure if that corresponds to Ethernet 1 & 2. I also set it up to use a static IP address. That's as far as I have gone, and I'm not sure what to do next. I've tried to set up a PXE server with TFTP but get stuck at an error like "can't locate file". I'm not even sure I want to go that route anymore because of all the hassle of editing files. All my computers run OS X or Linux, and the only Windows I have is via VMware. What steps do I take now?
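    A hedged sketch of the PXE route, in case the "can't locate file" error is just a TFTP path problem. This assumes dnsmasq on another Linux box on the same LAN, with illustrative addresses and paths:

        # /etc/dnsmasq.conf: minimal PXE boot server for the Ubuntu installer.
        # Watch out for an existing DHCP server on the LAN (dnsmasq also has
        # a proxy-DHCP mode for that case).
        interface=eth0
        dhcp-range=192.168.1.100,192.168.1.150,12h
        # File name is relative to tftp-root; pxelinux.0 comes from the
        # Ubuntu netboot tarball unpacked into /srv/tftp.
        dhcp-boot=pxelinux.0
        enable-tftp
        tftp-root=/srv/tftp

    "Can't locate file" style errors usually mean dhcp-boot points at a file that does not exist under tftp-root.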


  • CloudFront for dynamic content CDN

    - by Elad Lachmi
    I would like to use CF as a CDN for my entire site, including static and dynamic content. I have been using CF for static content for a while and I am very happy with the results. I am now doing a POC of putting the web server completely behind CF. For the dynamic content I created a new distribution and set the origin to be my web server. Right now I'm looking to test the solution, so I have the web server on the original domain and the CF distribution on the Amazon domain. This works with the exception of HTTPS URLs and POST requests. For HTTPS requests, I see the requests are forwarded to the original site domain for now, but how will CF handle them when I move the distribution to the www CNAME? What configuration changes should I make so that CF forwards HTTPS requests to the origin? For POST requests, I want the post to be made to the origin server. Can I set this up in CF? Finally, the site has membership. Can I configure CF to pull all content from the origin if the user is logged in? Sorry for the long question. I'm a little lost, and documentation for dynamic CF is still kind of scarce. Thank you!


  • Weird mouse behaviour. Debian wheezy

    - by DevNoob
    When I move my mouse slowly over the desktop, the pointer often jumps a few pixels (one or two) in the opposite direction of the one I'm moving in. This is horrible when trying to place the cursor around some semicolons in Eclipse. I guess this is the result of a wrongly set resolution. I suppose the mouse was set really fast initially, and even if I do xset 1/2 3, the mouse is just too fast and imprecise for me. I already tried to configure xorg.conf like this:

        Section "InputDevice"
            Identifier "Configured Mouse"
            Driver     "mouse"
            Option     "Device"     "/dev/mouse"
            Option     "Protocol"   "Auto"
            Option     "Name"       "Logitech G3"
            Option     "Resolution" "2000"
        EndSection

    But with no effect, maybe because there is no /dev/mouse. This is the content of /dev; maybe you can tell me which one is the mouse:

        autofs block bsg btrfs-control bus cdrom cdrw char console core cpu
        cpu_dma_latency disk dvd dvdrw fd fd0 full fuse fw0 hidraw0 hidraw1
        hpet input kmsg log loop0 loop1 loop2 loop3 loop4 loop5 loop6 loop7
        loop-control MAKEDEV mapper mcelog mem net network_latency
        network_throughput null nvidia0 nvidiactl oldmem port ppp printer
        psaux ptmx pts random rfkill root rtc rtc0 sda sda1 sda2 sda3 sda5
        sda6 sda7 sda8 sdb sdb1 sg0 sg1 sg2 shm snapshot snd sndstat sr0
        stderr stdin stdout tty tty0 tty1 tty10 tty11 tty12 tty13 tty14
        tty15 tty16 tty17 tty18 tty19 tty2 tty20 tty21 tty22 tty23 tty24
        tty25 tty26 tty27 tty28 tty29 tty3 tty30 tty31 tty32 tty33 tty34
        tty35 tty36 tty37 tty38 tty39 tty4 tty40 tty41 tty42 tty43 tty44
        tty45 tty46 tty47 tty48 tty49 tty5 tty50 tty51 tty52 tty53 tty54
        tty55 tty56 tty57 tty58 tty59 tty6 tty60 tty61 tty62 tty63 tty7
        tty8 tty9 ttyS0 ttyS1 ttyS2 ttyS3 uinput urandom usb vcs vcs1 vcs2
        vcs3 vcs4 vcs5 vcs6 vcs7 vcsa vcsa1 vcsa2 vcsa3 vcsa4 vcsa5 vcsa6
        vcsa7 vga_arbiter vmci vmmon vmnet0 vmnet1 vmnet8 vsock watchdog
        xconsole zero

    So my question is: how do I set up my mouse correctly in Debian wheezy?
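    A hedged aside (assumes an evdev-based X server, as shipped with wheezy; the device name is from the question, the id is illustrative): the legacy "mouse" driver section may simply be ignored there, so inspecting and tuning the device with xinput can be a quicker test:

        # List input devices and find the id of the Logitech G3
        xinput list
        # Halve the pointer speed for device id 10 (replace with the real id)
        xinput set-prop 10 "Device Accel Constant Deceleration" 2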


  • VirtualBox VM running web server not accessible via external IP

    - by mwigdahl
    I have a Windows 7 machine running VirtualBox with an Ubuntu guest. The guest has a Bitnami LAMP stack installed. I have the guest configured for bridged networking, and I can access the guest web server just fine from other machines on my LAN using the guest's IP. I'm trying to configure port forwarding so that I can access the web server from outside my LAN. (The router is a 2Wire model, as I'm on AT&T's U-verse.) I've set up port forwarding for ports 80 and 443 to the guest's IP in a similar manner to how I had them set up for my previous, physical web server, which worked just fine. However, I cannot seem to access the new, virtual web server using my external IP on the forwarded port. I suspected Windows Firewall issues on the host, but disabling it didn't solve the issue. Does anyone have advice on what I should try next?

    EDIT: I've now attempted disabling the firewall on the guest with sudo ufw disable; that doesn't seem to help either. However, after checking the router's port forwarding in more detail, I may see the problem. My VM is named "linux", and in the router's configuration pages it shows up inconsistently. Sometimes it reports a valid LAN IP and other times it doesn't show up with any IP at all. Even when it shows the correct IP, the router indicates that it is disconnected. Could this be an indication that the 2Wire router doesn't play well with VirtualBox's bridged networking mode?


  • Enabling JMX for proxool with tomcat

    - by dialt0ne
    I am trying to get proxool's MBeans available so that I can see/manipulate them with jconsole. I have jconsole working, but I don't see anything related to proxool. The system is using Sun Java 1.5.0_17 (I know, I know... I'm working with the developers to upgrade). JMX is enabled by modifying $JAVA_OPTS in my Tomcat 5.5 startup script:

        SJO="$SJO -Dcom.sun.management.jmxremote"
        SJO="$SJO -Dcom.sun.management.jmxremote.port=4998"
        SJO="$SJO -Dcom.sun.management.jmxremote.authenticate=false"
        SJO="$SJO -Dcom.sun.management.jmxremote.ssl=false"
        JAVA_OPTS="$JAVA_OPTS $SJO"

    I have proxool configured with JNDI in server.xml:

        <GlobalNamingResources>
          <Resource name="jdbc/database"
                    auth="Container"
                    type="javax.sql.DataSource"
                    factory="org.logicalcobwebs.proxool.ProxoolDataSource"
                    user="username"
                    password="password"
                    proxool.driver-url="jdbc:oracle:thin:@fqdn.example.com:1521:MYSID"
                    proxool.driver-class="oracle.jdbc.driver.OracleDriver"
                    proxool.alias="mysid"
                    proxool.maximum-connection-count="20"
                    proxool.statistics="20s,5m,15m"
                    proxool.statistics-log-level="INFO"
                    proxool.jmx="true"
                    proxool.verbose="true" />
        </GlobalNamingResources>

    My test .jsp can run queries, and I can see it using the connections with the proxool admin servlet, but I'm unsure if there's more I need to configure in Tomcat or proxool to get JMX functioning. Advice?

    jmxproxy info edit: The jmxproxy servlet is working: when I go to the URL http://tomcatserver.example.com:4999/manager/jmxproxy/?qry=*:type%3DRequestProcessor,* the results are:

        OK - Number of results: 2

        Name: Catalina:type=RequestProcessor,worker=http-8080,name=HttpRequest0
        modelerType: org.apache.coyote.RequestInfo
        bytesSent: 0
        requestBytesSent: 0
        contentLength: -1
        bytesReceived: 0
        requestProcessingTime: 1297983483666
        globalProcessor: org.apache.coyote.RequestGroupInfo@32dc51c8
        requestBytesReceived: 0
        serverPort: -1
        stage: 0
        requestCount: 0
        maxTime: 0
        processingTime: 0
        errorCount: 0

        Name: Catalina:type=RequestProcessor,worker=jk-127.0.0.1-8009,name=JkRequest794
        modelerType: org.apache.coyote.RequestInfo
        virtualHost: tomcatserver.example.com
        bytesSent: 0
        method: GET
        remoteAddr: 172.30.3.51
        requestBytesSent: 0
        contentLength: -1
        workerThreadName: TP-Processor15
        bytesReceived: 0
        requestProcessingTime: 9
        globalProcessor: org.apache.coyote.RequestGroupInfo@1e7d3b8e
        protocol: HTTP/1.1
        currentQueryString: qry=*%3Atype%3DRequestProcessor%2C*
        requestBytesReceived: 0
        serverPort: 4999
        stage: 3
        requestCount: 0
        maxTime: 0
        processingTime: 0
        currentUri: /manager/jmxproxy/
        errorCount: 0

    And more to the point, http://tomcatserver.example.com:4999/manager/jmxproxy/?qry=Catalina:type%3DEnvironment,resourcetype%3DGlobal,name%3DProxool yields:

        OK - Number of results: 0


  • File uploads and client_max_body_size in nginx + gunicorn + django

    - by carlosescri
    I need to configure nginx + gunicorn to be able to upload files greater than the default max size in both servers. My nginx .conf file looks like this:

        server {
            # ...
            location / {
                proxy_pass_header Server;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_connect_timeout 60;
                proxy_pass http://localhost:8000/;
            }
        }

    The idea is to allow requests of 20M for two locations:

        /admin/path/to/upload?param=value
        /installer/other/path/to/upload?param=value

    I've tried adding location directives at the same level as the one I've pasted here (getting 404 errors) and also tried adding them inside the location / directive (getting 413 Request Entity Too Large errors). My location directives look like this in their simplest form:

        location /admin/path/to/upload/ {
            client_max_body_size 20M;
        }
        location /installer/other/path/to/upload/ {
            client_max_body_size 20M;
        }

    But they don't work (actually I tested lots of combinations, and I'm getting desperate thinking about it). Please help if you can: what settings do I need to make this work? Thank you so much!
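    A hedged sketch of one likely fix (paths from the question; not tested against this setup): a location block that matches but has no proxy_pass serves files from the filesystem, which would explain the 404s, so the upload locations probably need to proxy as well:

        location /admin/path/to/upload {
            # Raise the body limit only for this path, and keep proxying
            # to gunicorn as the / location does.
            client_max_body_size 20M;
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://localhost:8000;
        }

    Any size limit enforced on the application side would still apply separately.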


  • Problems configuring logstash for email output

    - by user2099762
    I'm trying to configure logstash to send email alerts and log output to elasticsearch/kibana. I have the logs successfully syncing via rsyslog, but I get the following error when I run /opt/logstash-1.4.1/bin/logstash agent -f /opt/logstash-1.4.1/logstash.conf --configtest

        Error: Expected one of #, {, ,, ] at line 23, column 12 (byte 387) after filter {
          if [program] == "nginx-access" {
            grok {
              match = [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} [%{HTTPDATE:time_local}] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ]
            }
          }
        }
        output {
          stdout { }
          elasticsearch {
            embedded = false
            host = "

    Here is my logstash config file:

        input {
          syslog {
            type => syslog
            port => 5544
          }
        }

        filter {
          if [program] == "nginx-access" {
            grok {
              match => [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ]
            }
          }
        }

        output {
          stdout { }
          elasticsearch {
            embedded => false
            host => "localhost"
            cluster => "cluster01"
          }
          email {
            from => "[email protected]"
            match => [ "Error 504 Gateway Timeout", "status,504",
                       "Error 404 Not Found", "status,404" ]
            subject => "%{matchName}"
            to => "[email protected]"
            via => "smtp"
            body => "Here is the event line that occured: %{@message}"
            htmlbody => "<h2>%{matchName}</h2><br/><br/><h3>Full Event</h3><br/><br/><div align='center'>%{@message}</div>"
          }
        }

    I've checked line 23, which is referenced in the error, and it looks fine. I've tried taking out the filter and everything works, without changing that line. Please help.


  • DNS failover in a two datacenter scenario

    - by wanson
    I'm trying to implement a low-cost solution for website high availability, and I'm looking for the downsides of the following scenario. I have two servers with the same configuration, content, and MySQL replication (dual-master). They are in different datacenters; let's call them serverA and serverB. Users use serverA, and serverB is more like a backup. Now, I want to use DNS failover to switch users from serverA to serverB when serverA goes down. My idea is that I set up DNS servers (BIND/PowerDNS) on serverA and serverB, called ns1.website.com and ns2.website.com (assuming I own website.com). Then I configure my domain to use them as its nameservers. Both DNS servers will return serverA's IP as my website's IP. If serverA goes down, I can (either manually or automatically from serverB) change the configuration of serverB's DNS to return serverB's IP as the website's IP. Of course the TTL will be low, as it's supposed to be in DNS failovers. I know that it may take some time to switch to serverB (DNS TTL, time to detect serverA's failure, serverB DNS reconfiguration, etc.), and that some small part of users won't use serverB anyway. I'm OK with that. But what are the other downsides of such an approach?

    An alternative scenario is that ns1.website.com returns serverA's IP as the website's IP, and ns2.website.com returns serverB's IP. But AFAIK clients do not always use the primary nameserver and sometimes use the secondary one, so some small part of users would use serverB instead of serverA, which is not quite what I'd like. Can you confirm that DNS clients behave like that, and can you tell what percentage of clients would statistically use serverB instead of serverA? This one also has the downside that when serverA comes back up, it will automatically be used as the website's primary server again, which is also a bad situation (cold cache, MySQL replication could fail in the meantime, etc.). So I'm adding it only as a theoretical alternative. I was thinking about using some professional DNS failover companies, but they charge by the number of DNS requests and the fees are very high (why?).
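    For reference, a minimal sketch of what the failover zone on ns1/ns2 might look like (names from the question, IPs illustrative); the low TTL is what keeps the switch-over window short:

        $TTL 60                    ; low TTL: resolvers re-query within a minute
        @    IN  SOA ns1.website.com. hostmaster.website.com. (
                     2024010101    ; serial, bumped on every change
                     3600 600 86400 60 )
             IN  NS  ns1.website.com.
             IN  NS  ns2.website.com.
        @    IN  A   203.0.113.10  ; serverA's IP; rewritten to serverB's on failover
        www  IN  A   203.0.113.10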


  • Designing a software based load balancer

    - by Kishore pandey
    Hello to all Server Fault users. I am new to this website but have constantly been using its mother site, Stack Overflow. To begin with, I would like to design a load balancer for the organization I am working for. As I am very new to the whole idea of load balancing and networks, I am finding it very difficult to start my project. I did a lot of research on existing load balancers and found some (HAProxy, NGINX) that could solve my problems, but the point is, I am still in a dilemma as to whether they could answer the following requirements of mine:

    - The client and server in my architecture are distributed.
    - The load balancer should take care of the firewall.
    - The LB server should balance the load among all servers present in the WWW cloud.
    - The LB server should have some sort of configuration file, with the help of which it is possible to configure the servers.
    - Heartbeat: with the help of which it would be possible to check if any server is down; if so, the request should be passed to some other server.
    - Various load-balancing algorithms for the incoming requests.
    - Easy error handling.
    - It should be fairly possible to prioritize the incoming requests.

    Is there any load-balancer solution already available on the market that could satisfy these requirements? If not, is there any base code available with the help of which I could develop my own load balancer? If not, where should I start from scratch? I am practically new to everything. Any help from a load-balancer expert is very much appreciated. Thanks a ton in advance. Cheers and regards, Kishore
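    As a point of comparison, a hedged HAProxy sketch (illustrative names and addresses) showing three of the requirements above: a configuration file, health checks, and a selectable balancing algorithm:

        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        frontend www
            bind *:80
            default_backend webservers

        backend webservers
            balance roundrobin          # other algorithms: leastconn, source, uri, ...
            option httpchk GET /health  # "heartbeat": a server failing the check is taken out
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check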


  • Exim: send every emails with a predefined sender

    - by Gregory MOUSSAT
    We use Exim on our servers to send emails generated only by local automated users, such as root, cron, etc. We have to specify every possible user in /etc/email-addresses. For example:

        root: [email protected]
        cron: [email protected]
        backup: [email protected]

    This allows us to receive every email generated. The problem is that when we add a user for whatever reason (for example, some packages add a user on installation), we can forget to add this user to /etc/email-addresses. Most of the time it's not a problem, but this is not clean, and the overall method is not clean. We'd like to configure Exim to send every email with the same source address, i.e. every sent email comes from [email protected]. One way could be to use a wildcard or a regular expression in /etc/email-addresses, but this is not supported. I don't currently understand Exim well enough to figure out how to modify this one way or another. Ideally, Exim should look into /etc/email-addresses first, and if there is no match, use the predefined address, but this is very secondary. There are two places where this address is used:

    1. in the MAIL FROM: command Exim sends to the SMTP server
    2. inside the headers

    edit: The rewrite section is the original one from Debian (comments removed):

        begin rewrite

        .ifndef NO_EAA_REWRITE_REWRITE
        *@+local_domains "${lookup{${local_part}}lsearch{/etc/email-addresses} \
                           {$value}fail}" Ffrs
        *@ETC_MAILNAME "${lookup{${local_part}}lsearch{/etc/email-addresses} \
                           {$value}fail}" Ffrs
        .endif
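    A hedged sketch of one possible approach (untested; the address is the one from the question): rewrite rules are tried in order, and a rule whose expansion ends in {fail} leaves the address alone, so catch-all rules appended after the lookup rules would stamp everything that /etc/email-addresses did not match:

        # After the two lookup rules above, still inside "begin rewrite":
        *@+local_domains  [email protected]  Ffrs
        *@ETC_MAILNAME    [email protected]  Ffrs

    That should give the "look up first, fall back to a fixed sender" behaviour, rewriting the envelope sender (F) and the From:, Reply-To: and Sender: headers (f, r, s), matching the flags already used above.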


  • Need to have access to my office PC from my laptop hopping through two VPN servers

    - by Andriy Yurchuk
    Here's the illustration of what I have (http://clip2net.com/s/2fvar):

    1. My office PC, with its IP of 123.45.e.f.
    2. The office VPN, which I will connect to from my VPS to get to my office PC.
    3. My own VPS, which I use as:
       - a client to connect to the office VPN (through vpnc, which creates a tun0 with the IP address 123.45.c.d);
       - a VPN server my laptop can connect to (OpenVPN, tun1, 10.8.0.1).
    4. My own laptop, used as a VPN client to connect to the VPS OpenVPN server (this creates a tun0 with the IP address 10.8.0.2).

    Now what I have to do is allow my laptop to connect to at least my office PC, but preferably to the whole 123.45.x.x subnet. Please advise on how to best configure OpenVPN, routing, iptables or whatever else is needed on my VPS so that my laptop can gain access to my office PC.

    P.S. The reason I'm hopping through my VPS is that when connected to the office WiFi I cannot access my office PC, and I cannot connect to the office VPN (which is another way to access my office PC). The only way I have to access my PC from the office WiFi is hopping through an outside network.
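    A hedged sketch of the VPS side (interface names and subnets from the question, office netmask assumed; untested): forward between the two tunnels, NAT the OpenVPN clients out through the vpnc tunnel, and push the office route to the laptop:

        # On the VPS: enable forwarding and NAT 10.8.0.0/24 out of tun0 (vpnc)
        sysctl -w net.ipv4.ip_forward=1
        iptables -A FORWARD -i tun1 -o tun0 -j ACCEPT
        iptables -A FORWARD -i tun0 -o tun1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o tun0 -j MASQUERADE

        # In the OpenVPN server config: route the office subnet via the VPN
        push "route 123.45.0.0 255.255.0.0"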


  • CSS and JS files not being updated, supposedly because of Nginx Caching

    - by Alberto Elias
    I have my web app working with AppCache, and I would like that when I modify my HTML/CSS/JS files and then update my cache manifest, users accessing the app get an updated version of those files. If I change an HTML file, it works perfectly, but when I change CSS and JS files, the old version is still being used. I've been checking everything and I think it's related to my nginx configuration. I have a cache.conf file that contains the following:

        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;

        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }

        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
        }

    And in default.conf I have my locations. I would like to have this caching working on all locations except one. How could I configure this? I've tried the following and it isn't working:

        location /dir1/dir2/ {
            root /var/www/dir1;
            add_header Pragma "no-cache";
            add_header Cache-Control "private";
            expires off;
        }

    Thanks
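    A hedged guess at why the exclusion fails (untested): for a request like /dir1/dir2/app.js, the regex location ~ \.(css|js|htc)$ wins over the prefix location /dir1/dir2/, since in nginx a matching regex location normally takes priority over ordinary prefix matches. One sketch of a workaround is a nested regex location, which is checked before the outer regexes once the prefix matches:

        location /dir1/dir2/ {
            root /var/www/dir1;
            # Nested regex: for files under /dir1/dir2/ this matches before
            # the outer \.(css|js|htc)$ block in cache.conf.
            location ~ \.(css|js|htc)$ {
                add_header Cache-Control "private, no-cache";
                expires off;
            }
        }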


  • All virtualhosts serving Apache default files

    - by tj111
    I'm trying to configure Apache as an in-network web server, and am using the sites-available/sites-enabled feature as opposed to just static vhost files. I set up a couple of VirtualHosts, all with a unique DocumentRoot, however requests for all the VirtualHosts just serve up the "It's Working!" default file. I can't for the life of me figure out why it won't serve the content out of the correct directory. Here are the contents of the virtualhost directive files; let me know if I need to post more.

    default (note that apache renames this to 000-default in sites-enabled, so it's not an ordering issue):

        NameVirtualHost *:80
        ServerName emp

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName emp

            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    billmed:

        <VirtualHost *:80>
            ServerName billmed.emp
            ServerRoot /home/empression/Projects/billmed/web/httpdocs

            <Directory "/home/empression/Projects/billmed/web/httpdocs">
                Order Allow,Deny
                Allow from All
            </Directory>
        </VirtualHost>

    Note that I have DNS zones for both emp and billmed.emp, as well as entries in /etc/hosts. My ultimate goal is to set up this machine as an in-house web server with a custom TLD (emp), but progress has been pretty slow.


  • Postfix "mail-to-script" pipe only delivers empty messages

    - by user68202
    I have a problem here. I want incoming email to be piped to a PHP script in the system through Postfix. My system is running ISPConfig 3, Postfix and Dovecot (virtual mailbox users are saved in MySQL). I already looked into this one: "How to configure postfix to pipe all incoming email to a script?" ... the script is executed, but no message is delivered to the script. My setup so far:

    In ISPConfig 3 I have set up the following email route:

        Active  Server       Domain            Transport  Sort by
        Yes     example.com  pipe.example.com  piper:     5

    An excerpt from my postfix master.cf:

        piper unix - n n - - pipe
          user=piper:piper directory=/home/piper argv=php -q /home/piper/mail.php

    So far it is working (mail sent to [email protected]), per mail.log:

        Jun 21 16:07:11 example postfix/pipe[10948]: 235CF7613E2: to=<[email protected]>, relay=piper, delay=0.04, delays=0.01/0.01/0/0.02, dsn=2.0.0, status=sent (delivered via piper service)

    ... and there are no errors in mail.err. The mail.php script is successfully executed (it's chmod 777 and chown'ed to piper), but it creates an empty .txt file (normally it should contain the email message):

        -rw------- 1 piper piper 0 Jun 21 16:07 mailtext_1340287631.txt

    The mail.php script I've used is the one from http://www.email2php.com/HowItWorks. If I use their (commercial) service to pipe an email to the mail.php (in an Apache 2 environment) through a provided "pipe-email", the message is saved successfully and completely:

        -rw-r--r-- 1 web2 client0 1959 Jun 21 16:19 mailtext_1340288377.txt

    But as you can see, I don't want to use external services. So, what's wrong here? I think it has something to do with the "delivering configuration" in my system...
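    For reference, a hedged sketch of the stdin-reading part such a mail.php usually has (following the email2php pattern the question references; the file path is illustrative): the pipe transport hands the raw message to the script on standard input:

        <?php
        // Read the complete raw message from stdin, as delivered by
        // postfix's pipe(8) transport, and dump it to a file.
        $fd = fopen('php://stdin', 'r');
        $email = '';
        while (!feof($fd)) {
            $email .= fread($fd, 1024);
        }
        fclose($fd);
        file_put_contents('/home/piper/mailtext_' . time() . '.txt', $email);

    If a script like this produces an empty file under Postfix but a full one elsewhere, checking which PHP binary runs as the piper user and whether anything consumes stdin before the read would be a first step.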


  • Squid: The request or reply is too large

    - by Ueli
    I have set up a reverse proxy with Apache in the background (on the same server). Everything works great, but I can't open one page; I get the error "The request or reply is too large." My cache.log contains:

        2010/12/09 15:28:29| WARNING: http.c:971: HTTP header too large
        2010/12/09 15:29:03| ctx: enter level 0: 'http://server/admin/cms/nav'
        2010/12/09 15:29:03| httpProcessReplyHeader: Too large reply header
        2010/12/09 15:29:03| ctx: exit level 0

    In my squid.conf I disabled the request and reply size limits, without success:

        reply_body_max_size 0 allow all
        request_body_max_size 0

    Does someone know why that doesn't work? Thank you very much.

    Squid version:

        Squid Cache: Version 2.7.STABLE3
        configure options: '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads' '--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' '--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' '--enable-snmp' '--enable-delay-pools' '--enable-htcp' '--enable-cache-digests' '--enable-underscores' '--enable-referer-log' '--enable-useragent-log' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-carp' '--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 'amd64-debian-linux' 'build_alias=amd64-debian-linux' 'host_alias=amd64-debian-linux' 'target_alias=amd64-debian-linux' 'CFLAGS=-Wall -g -O2' 'LDFLAGS=' 'CPPFLAGS='
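    A hedged note (directive names from the squid 2.x documentation, values illustrative): the log warning concerns header size, not body size, and squid 2.7 has separate directives for that which default to a fairly small limit, so raising them may be what this one page needs:

        # In squid.conf: limits on HTTP header size, distinct from the
        # *_body_max_size directives above.
        request_header_max_size 64 KB
        reply_header_max_size 64 KB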


  • Setting up port forwarding for web server

    - by reyjavikvi
    This could belong on Super User, but I thought this place was more appropriate. I want to run Apache on my computer and make it available to the outside world to test a couple of things. Apparently, I have to go into my router's (a TP-LINK TD 8910G) settings and forward port 80 to my PC's IP. So far so good. The thing is, since the router uses a web-based interface and it's kind of stupid, it told me that since I was using port 80 for this, I should access its settings through port 8080. Maybe it can't detect requests coming from the LAN, I don't know. Point is, now neither port can access the configuration, and I can't access the Internet. Specifically, trying to access anything (including 192.168.1.1, the router's settings) through port 80 turns up a blank page (maybe if I had the server running on my computer I'd get something, but I don't want to risk trying; I had to reset the router and restore the settings), and port 8080 gives a "Can't establish connection" error in Firefox (and similar ones in other browsers). Is there a way to configure the router not to redirect requests coming from inside the network? I'm a beginner with this stuff, so please try to explain in a simple way. If this is more appropriate on Super User, I'm sorry.


  • How to create a static IP on Windows Server 2008 R2 so I can access the server remotely

    - by Aesir
    I have just purchased an HP ProLiant N40L which I intend to use as a NAS, a learning tool, and just in general something to mess around with. As a student, I can get a free copy of Windows Server 2008 R2 via the Microsoft DreamSpark program, which I am using as the OS. So that I can remote to the box from outside of my local network, and so that I can stream media from it to my PS3, I have read that I need to create a static IP for the server and use port forwarding to forward to this IP so I can remote in. Is this correct? I am not really sure how to do this, and whether I need to make these changes in my router configuration, in the OS, or both. I am a novice when it comes to networking; however, most resources for Windows Server 2008 R2 seem to assume a fair amount of experience already. I realise that using this particular OS may seem like overkill for what I currently wish to do with it (stream content to other devices and back up), but as I can get a copy for free, it seems sensible.

    Edit: From reading the answers posted, I feel I should give more information. I have now tried to add a static IP address using my router's configuration settings. I used the getmac command to get the MAC address of the server. My ISP is Virgin Media, and I have gone to the LAN IP section and added an IP address to the DHCP Reservation Lease Info. I can now use Remote Desktop Connection internally to remote to the server (so I am assuming assigning this IP has worked). How do I configure this on the OS as well? I am also unsure how I would remote to this machine from outside my local network.


  • Change the order of IP addresses returned by ifconfig?

    - by erikcw
    I have an Ubuntu server with several IP addresses attached to it. 127.0.0.1 is listed as venet0 by ifconfig. I'm using Chef to configure the server. The problem is that Chef is listing 127.0.0.1 as the IP address for the server instead of one of the server's "real" IPs (apparently "ohai ipaddress" uses the first IP listed by ifconfig to determine the server's IP). How can I change the order so the server's main IP is listed first instead of 127.0.0.1? Can venet0 be deleted and venet0:0 be "promoted" to take its place, since 127.0.0.1 is already listed on the "lo" interface?

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1 Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING MTU:16436 Metric:1
                  RX packets:334 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:334 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:16700 (16.7 KB) TX bytes:16700 (16.7 KB)

        venet0    Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:127.0.0.1 P-t-P:127.0.0.1 Bcast:0.0.0.0 Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
                  RX packets:7622207 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8183436 errors:0 dropped:1 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2102750761 (2.1 GB) TX bytes:2795213667 (2.7 GB)

        venet0:0  Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:XXX.XXX.XXX.XX1 P-t-P:XXX.XXX.XXX.XX1 Bcast:0.0.0.0 Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1

        venet0:1  Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:XXX.XXX.XXX.XX2 P-t-P:XXX.XXX.XXX.XX2 Bcast:0.0.0.0 Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1

    route -n:

        Kernel IP routing table
        Destination     Gateway         Genmask          Flags Metric Ref Use Iface
        192.0.2.1       0.0.0.0         255.255.255.255  UH    0      0   0   venet0
        0.0.0.0         192.0.2.1       0.0.0.0          UG    0      0   0   venet0


  • Windows 7, Printer is unavailable until PC reboot

    - by Cjs
    We are having a problem that I haven't seen before and can find no answers to online. A desktop running Windows 7 is unable to print using network printers. When the user tries, no matter what printer it is, he gets the following message when using Microsoft Office applications: "Current printer is unavailable. Select another printer." When the end user uses Outlook, we get the message: "Printing is not available. There are no printers installed. You can select and configure a printer in Windows Control Panel." Now here is the confusing part: if we reboot the PC, it works fine for a little while and then goes back to the same old problem. The printers are working fine for every other user, so I believe it is the user's machine. If anyone has any ideas, please let me know. Edit: I'd like to add that some people have had luck disabling SNMP for the printers. Restarting the print spooler doesn't seem to do anything.


  • How do I deliver mail for wildcard addresses to a particular user/alias/program?

    - by David M
    I need to configure sendmail so that mail delivered to wildcard addresses is accepted for delivery and then delivered to a user, an alias, or directly to a script. I can rewrite the envelope/headers any number of ways, but I don't know how to accept the wildcard address when it's provided in RCPT TO:. Everything I've tried so far winds up with a 550 user unknown error. So here's a specific example: I want to be able to handle any address that consists of a series of digits, followed by a dot, followed by a word, then pipe that to a script. If the headers get rewritten, that's OK, but I need the envelope to contain the actual Delivered-To address. Here's the sort of SMTP session I need:

        220 blah.foo.com ESMTP server ready; Thu, 22 Apr 2010 20:41:08 -0700 (PDT)
        HELO blort.foo.com
        250 blah.foo.com Hello blort.foo.com [10.1.2.3], pleased to meet you
        MAIL FROM: <[email protected]>
        250 2.1.0 <[email protected]>... Sender ok
        RCPT TO: <[email protected]>
        250 2.1.5 <[email protected]>... Recipient ok

    I tried some stuff with regex maps, but I never got past 550 user unknown.
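    One hedged sketch (untested; script path illustrative): a catch-all virtusertable entry routes every local part for the domain to a single alias, and the alias pipes to the script. This is coarser than a digits-dot-word regex, but it gets RCPT TO accepted:

        # /etc/mail/virtusertable: catch-all for the domain
        @foo.com    wildcard-handler

        # /etc/aliases: pipe to the script (smrsh must allow it)
        wildcard-handler: "|/etc/smrsh/handle-wildcard.sh"

    The script would then have to re-check the digits.word pattern itself; rebuild the maps with makemap hash and newaliases after editing.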


  • How can I password protect an IIS directory with only FTP access?

    - by Tony Adams
    How can I password protect an IIS directory when I only have FTP access to the server? I can't adjust any IIS settings or add users or anything like that. The answer to "IIS Basic Authorization ala .htaccess/.htpasswd in apache" does not help, as I only have access to the server via FTP. I just need to password protect a directory. I've tried several variations of a web.config file. I can get a basic HTTP auth form to pop up when a user attempts to load a page from my test directory, but I can't configure the authentication part. The server complains:

        Parser Error Message: It is an error to use a section registered as
        allowDefinition='MachineToApplication' beyond application level. This
        error can be caused by a virtual directory not being configured as an
        application in IIS.

    ... whenever I add an <authentication> section to my web.config. I'm grateful for any help anyone can offer.

    Edit: I don't know what version of IIS is running on this server, but here is the server tag from error messages:

        Version Information: Microsoft .NET Framework Version:1.1.4322.2490; ASP.NET Version:1.1.4322.2494


  • Issues with VSFTPD / FTP on Linux Ubuntu server - Steps for Troubleshooting?

    - by jnolte
    I am dealing with an issue I am unclear on how to resolve and have been pulling my hair out for some time. I have been trying to configure an FTP user using the following steps (we use this same documentation on all servers):

    Install the FTP server:

    1. apt-get install vsftpd
    2. Set local_enable and write_enable to YES, and the anonymous user to NO, in /etc/vsftpd.conf
    3. Restart with "service vsftpd restart" to allow the changes to take place

    Add a WordPress user for FTP access in WP Admin:

    1. Create a fake shell for the user: add "/usr/sbin/nologin" to the bottom of the /etc/shells file
    2. Add an FTP user account:
       - useradd username -d /var/www/ -s /usr/sbin/nologin
       - passwd username
    3. Add these lines to the bottom of /etc/vsftpd.conf:
       - userlist_file=/etc/vsftpd.userlist
       - userlist_enable=YES
       - userlist_deny=NO
    4. Add username to the list at the top of /etc/vsftpd.userlist
    5. Restart vsftpd: "service vsftpd restart"
    6. Make sure the firewall is open for FTP: "ufw allow ftp"
    7. Allow username to modify the /var/www directory: "chown -R username /var/www"

    I have also gone through everything listed in this post and had no luck. I am getting "connection refused". Sorry for the poor text formatting above; I think you get the idea. This is something we do over and over, and for some reason it is not cooperating here. The setup is Ubuntu 12.04 LTS and VSFTPD v2.3.5. Thank you in advance.
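    A hedged first step for "connection refused" (standard Ubuntu 12.04 commands; nothing vsftpd-specific assumed): confirm the daemon is actually up and listening before touching the config again:

        service vsftpd status
        # Is anything listening on the FTP control port?
        netstat -tlnp | grep ':21'
        # Watch for vsftpd start-up errors while restarting
        tail -f /var/log/syslog &
        service vsftpd restart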


  • master-slave datastore replication, automatic failover, and wackamole

    - by z8000
    I have 2 dedicated servers provisioned for my next project's datastores. The datastores are configured for master-slave replication. There's no inherent automatic failover, but I of course want this. That is, I'd love for access to the master datastore to always just work, without having to configure a client library to detect when a master is down and fail over to the slave. I've seen Wackamole, which is based on the Spread Toolkit. You provide Wackamole with a set of IPs and a bunch of nodes, and regardless of the up/down state of any of the nodes, those IPs will stay available/up. Wackamole detects when a node goes down and ARPs the IP(s) that were up on the now-down node. It's pretty neat, actually. So my thought was to use Wackamole to keep the 2 virtual private IPs available/up. Clients would then just always use the same private IP to access the master datastore, and the same but distinct IP for the slave datastore, even if those IPs were hosted on the same node. My datastore servers are accessed over a private network. I am unsure if this messes with Wackamole, though. Is this lunacy? How do you generally handle automatic failover of private services like a datastore? FWIW, it shouldn't matter, but the datastore is Redis. I don't want to hear "use MySQL" please :) Thanks.


  • Can't find gnutls library when executing rpmbuild as non-root

    - by Rilindo
    I am trying to build ntfsprogs from the latest source, using the .spec from RPMforge, as non-root via rpmbuild. The compile fails at this step:

        checking for GNUTLS... no
        configure: error: ntfsprogs crypto code requires the gnutls library.
        error: Bad exit status from /var/tmp/rpm-tmp.78913 (%build)

    However, I can compile it successfully outside of rpmbuild, so it seems like the library is just not being seen during the build. I can confirm that rpmbuild can see the directory where gnutls resides:

        [foo@bar ~]$ rpmbuild -E '%{_libdir}' rpmbuild/SPECS/ntfsprogs.spec
        /usr/lib

    Library location:

        [foo@bar ntfs-3g_ntfsprogs-2012.1.15]$ /sbin/ldconfig -p | grep -i gnutls
        libgnutls.so.13 (libc6) => /usr/lib/libgnutls.so.13
        libgnutls.so (libc6) => /usr/lib/libgnutls.so
        libgnutls-openssl.so.13 (libc6) => /usr/lib/libgnutls-openssl.so.13
        libgnutls-openssl.so (libc6) => /usr/lib/libgnutls-openssl.so
        libgnutls-extra.so.13 (libc6) => /usr/lib/libgnutls-extra.so.13
        libgnutls-extra.so (libc6) => /usr/lib/libgnutls-extra.so

    What would cause the library not to be seen when you build an RPM?

    EDIT: Oh yeah, I am running CentOS 5.5.
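    A hedged line of investigation (package and command names standard on CentOS 5; not verified against this spec): a "checking for GNUTLS... no" from a pkg-config-style configure test usually means gnutls.pc can't be found, and that file ships in the devel package, which may be resolvable in your interactive shell but not in rpmbuild's environment:

        # Is the development package (headers + gnutls.pc) installed?
        rpm -q gnutls-devel || sudo yum install gnutls-devel
        # Does pkg-config resolve it with a clean environment, the way
        # rpmbuild's %build shell would see it?
        env -i pkg-config --modversion gnutls
        # If that fails while a plain "pkg-config --modversion gnutls" works,
        # a PKG_CONFIG_PATH set only in your shell profile is a likely culprit.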

