Search Results

Search found 15731 results on 630 pages for 'browser tabs'.


  • httpd 2.2.15 + suPHP + suExec + php5 = permission and information ?

    - by Prix
    Hi, I am currently playing around with suExec, suPHP, and PHP 5 on Apache on Slackware 13.1. Everything is installed and working properly, but now I would like to dig further into the directory permissions and the suPHP settings and options available. Initially I was planning to leave suPHP disabled unless a virtual host explicitly enables it, but that does not seem to work. See this sample of mod_php.conf, which is included from my httpd.conf:

      #
      # mod_php & mod_suPHP - PHP Hypertext Preprocessor module
      #
      # Load the PHP module:
      LoadModule php5_module lib/httpd/modules/libphp5.so
      # Load the suPHP module:
      LoadModule suphp_module lib/httpd/modules/mod_suphp.so

      <IfModule mod_php5.c>
          # Tell Apache to feed all *.php files through PHP. If you'd like to
          # parse PHP embedded in files with different extensions, comment out
          # these lines and see the example below.
          <FilesMatch \.php$>
              SetHandler application/x-httpd-php
          </FilesMatch>
      </IfModule>

      <IfModule mod_suphp.c>
          # This option tells mod_suphp if a PHP-script requested on this server (or
          # VirtualHost) should be run with the PHP-interpreter or returned to the
          # browser "as it is".
          suPHP_Engine off
      </IfModule>

    With the sample above, suPHP and PHP do not work together; if I comment out the mod_php5 block, the suPHP module runs just fine. So my first question is: how can I make this setup work? I want suPHP disabled and plain PHP 5 used by default, and if a virtual host has suPHP enabled, it should disable mod_php5 and use suPHP instead. If any information is missing here, please let me know and I will update with whatever you need. Thanks in advance.
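
    A minimal per-vhost sketch of the "mod_php by default, suPHP only where asked for" idea (the vhost name, paths, and user/group here are hypothetical, and whether the AddHandler/suPHP_AddHandler lines are needed depends on how mod_suphp was built, so treat this as a starting point rather than a known-good config):

      <VirtualHost *:80>
          ServerName suphp-site.example
          DocumentRoot /srv/www/suphp-site

          # switch mod_php off for this vhost only
          php_admin_flag engine off

          # and switch suPHP on in its place
          suPHP_Engine on
          suPHP_UserGroup someuser somegroup
          AddHandler application/x-httpd-suphp .php
          suPHP_AddHandler application/x-httpd-suphp
      </VirtualHost>

    All other vhosts keep the global defaults (suPHP_Engine off plus the mod_php5 SetHandler), which gives the "PHP 5 by default" half of the behaviour.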

    Read the article

  • Cannot access firewalled jboss server from Internet Explorer

    - by Simon Gibbs
    I've produced a website for a client, One Single Menu, using JBoss and hosted it on Rackspace Cloud Servers running Ubuntu's Maverick Meerkat. Following advice, I established some iptables rules to protect JBoss:

      iptables -I INPUT 1 -i lo -j ACCEPT
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A INPUT -p tcp --dport 22 -j ACCEPT
      iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
      iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
      iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080
      iptables -A INPUT -j DROP

    Now, several versions of IE on several computers on at least two different ISPs cannot access onesinglemenu.com. Curl from within the datacenter, Firefox, and Safari on the same ISPs can all access the server fine. I even tried IE and Firefox on the same computer: IE failed but Firefox worked. The error behaviour is that IE hangs on connecting without reporting an error, even after a minute or so. No page is displayed at all. I find it quite odd to be having a browser-specific connection issue, but that appears to be the case. Help!
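
    One thing worth checking before blaming the browsers: the final iptables -A INPUT -j DROP also drops all inbound ICMP, including the "fragmentation needed" messages that path MTU discovery relies on, and a PMTU blackhole tends to show up as exactly this kind of hang-on-connect for some clients but not others. A small, hedged sketch (it only adds a rule, nothing is removed):

      # allow inbound ICMP so path-MTU discovery (and ping) work again
      iptables -I INPUT -p icmp -j ACCEPT

    If that alone doesn't change IE's behaviour, temporarily inserting a LOG rule just above the final DROP (iptables -I INPUT <n> -j LOG, where <n> is the DROP rule's position from iptables -L INPUT --line-numbers) will show in syslog exactly which of IE's packets are being discarded while a client retries.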

    Read the article

  • Google Chrome doesn't delete my browsing history correctly

    - by Derfder
    I have deleted everything that I could from my browser history via chrome://settings/clearBrowserData. I checked everything and selected "the beginning of time". Then when I open my browsing history at chrome://history/ there is nothing (as I expected), or to be precise, "No history entries found." The problem is that I still see a specific search URL, with a very specific query I made a month ago, when I start typing the URL of the website into Chrome's address bar. How is that possible? Where is Google storing this data, and how do I get rid of it completely? So, what else should I delete to remove everything from the autosuggestions? Right now they include some very specific URLs (subpages, and pages with a very specific search query I made a month or so ago). I have tried restarting Chrome and restarting my computer, but the URLs are still in the autosuggestions. I am also unable to turn off autosuggestion, even though I have unchecked that option in the settings. My Google Chrome version is 27.0.1453.116 m (probably the latest). By the way, in Firefox deleting the history works as expected, so I guess this has nothing to do with the operating system I am using (Windows 7); it is an issue with Chrome itself.

    Read the article

  • Can't connect to research.microsoft.com on home Qwest DSL connection

    - by rakingleaves
    I have a puzzling issue regarding accessing research.microsoft.com from my home Qwest DSL connection. By default, I frequently get timeouts when accessing research.microsoft.com from Firefox, Safari, or Chrome on my Mac. I also cannot access the site from Internet Explorer in a Windows VM. However, I am able to access the site through proxify.com, so I know the site is not down. Furthermore, I haven't noticed problems accessing other sites (in particular, www.microsoft.com works fine). Also, I can access research.microsoft.com when I'm connected to networks other than my home Qwest DSL connection. Together, the above make me suspect a problem with either my router (Airport Express) or, more likely, my ISP. Anyone have any thoughts on how I can narrow down the problem further? I could call my ISP and tell them the above, but my feeling is that probably won't get me very far. I can get by browsing research.microsoft.com through a proxy, but it would be nice to figure out what's going on here and fix the problem. Oh, the only relevant discussion I found via Google was here: http://forums.whirlpool.net.au/forum-replies-archive.cfm/1311734.html Update: Thanks to those who have tried to help! I found one other thing while Googling that may be vaguely relevant: http://thedaneshproject.com/posts/supportmicrosoftcom-not-working-behind-squid/ Disabling the Accept-Encoding headers in Firefox actually didn't make a difference for me. I just thought the above might spark some other ideas about how mishandling of HTTP headers somewhere might be causing this problem. Thanks again! Another update: In case anyone is still thinking about this; I've found that I can't surf research.microsoft.com using the links text-based browser, but I can reliably download individual files with wget. Maybe that helps?
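
    One quick experiment worth running from the Mac (a diagnostic sketch, not a fix): a path MTU problem between the DSL line and that one site would explain timeouts that vanish behind a proxy and don't affect other sites. On OS X, ping can send non-fragmentable packets of a chosen size, assuming the target answers ICMP at all (if research.microsoft.com doesn't, the ISP gateway is a reasonable substitute target):

      # 1472 data bytes + 28 bytes of headers = a full 1500-byte frame
      ping -D -s 1472 research.microsoft.com

      # if that blackholes, step the size down until replies come back
      ping -D -s 1400 research.microsoft.com

    If large don't-fragment packets die while small ones get through, lowering the MTU on the Mac or the Airport Express (e.g. to 1492, typical for PPPoE DSL) would be the next experiment, and it would also fit the observation that wget (small requests, no pipelining) works while interactive browsing stalls.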

    Read the article

  • Download management

    - by Jonathan
    I download many files, usually 2 or 3 a day, often 10 or so. Some of them are duplicates because I just can't be bothered to find the original in my downloads folder. I have previously tried DAP and used it to create a new subfolder for each day's downloads, yet I have found this insufficient, as sometimes I want to find files by name or file type, or I have multiple parts of a download spread over more than one day. Another problem is zips/rars/etc.: after downloading and extracting them I am left with both the archive and the extracted folder. I like how on a Mac the zip is automatically extracted after it has been downloaded and the archive removed. What I'd like is to sort my downloads by date, but dynamically, so they all just sit in the big downloads folder and I can press a button to show all the files from a particular site, from a particular day, or of a certain file type. Is there any software that will do this? I use Chrome as my browser but also have Firefox and like that too. Jonathan

    Read the article

  • How do I hook into Tar with BASH?

    - by orb
    Long story short: I am working with tar archives that contain PNG images in base64 encoding. I would like to use Bash (or whatever else works) to hook into the extraction step of tar, so that the PNG images are decoded from base64 back to standard PNG encoding after the files are unpacked. A simple

      cat "$input_file" | base64 -d > "$output_file"

    successfully decodes the images. Is there a way I can hook into tar -xf so that users do not have to do any (or only minimal) extra work to decode the images? In the GNU tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are variables reserved to hold the names of functions to be hooked into various moments of tar's execution. However, the documentation explains that these variables, along with other configuration variables, live in a file named backup-specs, and the path to this file is not given. Running sudo find / -name backup-specs tells me the file is not present on my Ubuntu 13.04 system.

    Background (not part of the short version): I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited ability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving the effect data as a tar archive (I haven't pushed that to GitHub yet), but the images in that archive cannot be used until they are decoded from base64.
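
    For what it's worth, the backup-specs file belongs to GNU tar's backup/restore scripts, not to tar itself, so those hook variables won't fire on an ordinary tar -xf. Two hedged alternatives (file names are placeholders; both assume only regular files need decoding):

      # Option 1: extract normally, then decode the PNGs in place
      tar -xf effect.tar
      find . -name '*.png' -print0 |
        while IFS= read -r -d '' f; do
          base64 -d "$f" > "$f.tmp" && mv "$f.tmp" "$f"
        done

      # Option 2: GNU tar's --to-command, which pipes each extracted member to a
      # command on stdin instead of writing it to disk; tar exports TAR_FILENAME.
      # Put this in a small helper script, e.g. decode-member.sh:
      #   #!/bin/sh
      #   mkdir -p "$(dirname "$TAR_FILENAME")"
      #   base64 -d > "$TAR_FILENAME"
      chmod +x decode-member.sh
      tar -xf effect.tar --to-command=./decode-member.sh

    Option 2 runs the decode for every regular file in the archive, so it only makes sense if everything in there is base64-encoded; if only the PNGs are, the find-based variant is the simpler route, and either one can be wrapped in a one-line script so users still type a single command.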

    Read the article

  • nginx: 2 Symfony2 web applications, one IP, no domain

    - by Krzysztof Koch
    I am having an irritating problem with nginx. I have set up one application in /usr/share/nginx/www/firstapp and a second one in /usr/share/nginx/www/secondapp. In my default conf I set things up so that the root location / serves the first app: when I enter 9.9.9.9 in the browser it shows the first app, but when I go to 9.9.9.9/makeup it does not show the second app. Why does the first app display fine while the second one does not? Please help me. Here is the config:

      server {
          listen 80;
          server_name localhost;

          root /usr/share/nginx/www/firstapp/web;
          access_log /var/log/nginx/$host.access.log;
          error_log /var/log/nginx/error.log error;

          # strip app.php/ prefix if it is present
          rewrite ^/app\.php/?(.*)$ /$1 permanent;

          location / {
              root /usr/share/nginx/www/firstapp/web/;
              index app.php;
              try_files $uri @rewriteapp;
          }

          location /makeup/ {
              alias /usr/share/nginx/www/seccondapp/web/;
              index app.php;
              try_files $uri @rewriteapp;
          }

          location @rewriteapp {
              rewrite ^(.*)$ /app.php/$1 last;
          }

          # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
          location ~ ^/(app|app_dev)\.php(/|$) {
              #fastcgi_pass 127.0.0.1:9000;
              fastcgi_pass unix:/var/lib/php5-fpm/www.sock;
              fastcgi_split_path_info ^(.+\.php)(/.*)$;
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param HTTPS off;
              #fastcgi_param SERVER_PORT 80;
          }
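
    A likely reason /makeup/ never shows the second app: both locations fall back to the same @rewriteapp, which rewrites every miss to /app.php/..., and /app.php is then resolved against the first app's root, so the second app's front controller is never executed. The least fragile workaround is usually to give the second Symfony app its own server block instead of an alias under the first one; a hedged sketch (the port is an assumption, the paths are taken from the question):

      server {
          listen 8081;
          root /usr/share/nginx/www/seccondapp/web;
          index app.php;

          location / {
              try_files $uri @rewriteapp;
          }

          location @rewriteapp {
              rewrite ^(.*)$ /app.php/$1 last;
          }

          location ~ ^/(app|app_dev)\.php(/|$) {
              fastcgi_pass unix:/var/lib/php5-fpm/www.sock;
              fastcgi_split_path_info ^(.+\.php)(/.*)$;
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param HTTPS off;
          }
      }

    The second app is then reached at 9.9.9.9:8081 instead of 9.9.9.9/makeup. Keeping it under /makeup/ is possible, but it needs its own named fallback location and its own PHP block mapping /makeup/app.php onto the second app's directory, which gets awkward with alias.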

    Read the article

  • Cannot Resolve Host Or Access Website Through Router

    - by Boris_yo
    This is weird. I am on Windows XP with an Edimax BR-6204Wg router. I have three devices: two laptops and one smartphone. The first laptop and the smartphone connect to the router over WiFi, and the second laptop over LAN. Before the firmware upgrade I had not tried to access the website, but after upgrading to the latest version (http://www.edimax.eu/en/support_detail.php?pd_id=11&pl1_id=3#02) I had problems resolving the host, pinging, tracerouting, and accessing the website. Sometimes ping and tracert work but I cannot access the website, and sometimes I can access the website but ping and tracert do not work. Weird? I downgraded to the previous firmware and nothing changed. When I can no longer access the website in Internet Explorer, I can still access it in Firefox. I tried deleting cookies and clearing the cache, and that seems to make no difference. Switching LAN ports did not make a difference. When I disconnect the router and connect the laptop directly to the internet modem over LAN, everything is normal. I tried rebooting the router and resetting it to factory defaults, and none of that helped. At the moment I can access the website on the laptop connected over LAN from both Firefox and Internet Explorer, but on my smartphone I can access it only with Opera, not with the built-in browser or Skyfire. UPDATE: Just now I could only access it with Internet Explorer and not with other browsers on my PC; minutes later I could access it with all browsers. But on the smartphone I could only access it with Opera. I am confused. I have also determined that sometimes I can access it and sometimes I can't. What is also weird is that when ping and tracert cannot resolve the host, I am still able to access the website.

    Read the article

  • Creating an Apache Virtual Directory, but updating Active Directory DNS

    - by SnoConeGod
    Hello all, I'm just getting started with the Zend Framework and am following a recommended procedure where I am supposed to create an Apache virtual host for the public-facing portion of a new Zend project. I don't THINK I had any issues creating the virtual host, but my knowledge of the required DNS changes is rather lacking. The dev server I'm using is on a Microsoft Windows Active Directory domain, so I've added A records for both the server name and the subdomain. Still, trying to browse to the site from a Windows 7 PC isn't working properly. What am I missing? What's the proper set of steps for getting an Apache-served subdomain to appear properly in a peer computer's web browser? Details below:

      server: Debian, command-line only, freshly installed today with the Zend Server CE LAMP stack
      server name: ZENDEV
      subdomain: SQUARE.ZENDEV
      AD domain functional level: 2008 mixed (run by a mishmash of 03 and 08 servers)
      URLs attempted: http://square.zendev and http://square.zendev.domain.local
        (name of domain redacted, but it uses the .local suffix, not .com)

    Apache virtual host added to httpd.conf:

      NameVirtualHost *:80
      <VirtualHost *:80>
          DocumentRoot "/var/www/square/public"
          ServerName square.localhost
      </VirtualHost>

    Is this only a problem with DNS? Or with DNS and my virtual host? Thanks! John
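
    It is very likely the vhost matching rather than (only) DNS: ServerName square.localhost will never match a request whose Host header is square.zendev or square.zendev.domain.local, so Apache falls back to its default vhost. A hedged sketch using the names from the question (the redacted AD domain shown here as domain.local):

      NameVirtualHost *:80
      <VirtualHost *:80>
          DocumentRoot "/var/www/square/public"
          ServerName square.zendev
          ServerAlias square.zendev.domain.local square.localhost
      </VirtualHost>

    DNS can then be checked independently from the Windows 7 PC with nslookup square.zendev.domain.local: if that fails, the A record (or the client's handling of the single-label square.zendev name) is the problem; if it returns ZENDEV's address, the remaining problem was the vhost.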

    Read the article

  • Can Safari 5.1 for Mac OS display favicons for bookmarks in the Bookmarks Bar?

    - by Greg R.
    When bookmarking a web site, most contemporary browsers will display the site's favicon next to the bookmark, both in the bookmark view and in the bookmarks toolbar. This is a useful feature: in the toolbar you can edit the name of the bookmark to be blank, effectively leaving the favicon there as an easily identifiable "button" from which to launch the bookmark, which lets you make more effective use of the space in the toolbar. I use this approach in Firefox, Chrome, and IE, where a portion of my Bookmarks Toolbar is just a row of favicons. However, in Safari no favicon is ever displayed for bookmarks. In the full bookmark view only a generic globe icon is shown, and in the Bookmarks Bar no icon at all is displayed, which makes the habit of removing the bookmark name and leaving the favicon useless. With the same configuration (synced between browsers via Xmarks), Safari shows blank space where the favicons should be. The bookmarks are there: if you hover over one, the blank space changes colour to indicate the presence of a bookmark, and a tooltip with the URL pops up after about two seconds. However, it's really quite unusable. So, the question: is there an extension, plug-in, or modification of some sort that will enable the display of favicons for bookmarks in Safari (OS X Lion 10.7.3, Safari 5.1.3)?

    Read the article

  • wget hangs in http request sent awaiting response in some sites

    - by gkr
    Using Ubuntu 12.04. wget hangs at "HTTP request sent, awaiting response..." for some sites, and browsers also fail to open the sites that fail in wget. On Windows XP everything works. This works:

      gkr@gkr-desktop:~/Documents/curl$ wget google.com
      --2012-06-12 21:29:37--  http://google.com/
      Resolving google.com (google.com)... 74.125.236.174, 74.125.236.160, 74.125.236.161, ...
      Connecting to google.com (google.com)|74.125.236.174|:80... connected.
      HTTP request sent, awaiting response... 301 Moved Permanently
      Location: http://www.google.com/ [following]
      --2012-06-12 21:29:38--  http://www.google.com/
      Resolving www.google.com (www.google.com)... 74.125.236.179, 74.125.236.180, 74.125.236.176, ...
      Connecting to www.google.com (www.google.com)|74.125.236.179|:80... connected.
      HTTP request sent, awaiting response... 302 Found
      Location: http://www.google.co.in/ [following]
      --2012-06-12 21:29:38--  http://www.google.co.in/
      Resolving www.google.co.in (www.google.co.in)... 74.125.236.184, 74.125.236.191, 74.125.236.183, ...
      Connecting to www.google.co.in (www.google.co.in)|74.125.236.184|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: unspecified [text/html]
      Saving to: `index.html.3'
      [              ] 13,383      --.-K/s   in 0.04s
      2012-06-12 21:29:39 (308 KB/s) - `index.html.3' saved [13383]

    This site just stops/hangs awaiting the response:

      gkr@gkr-desktop:~/Documents/curl$ wget grooveshark.com
      --2012-06-12 21:27:29--  http://grooveshark.com/
      Resolving grooveshark.com (grooveshark.com)... 8.20.213.76
      Connecting to grooveshark.com (grooveshark.com)|8.20.213.76|:80... connected.
      HTTP request sent, awaiting response... ^C

    Thanks

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

      1) The user's browser requests http://www.example.com/files/file1.zip
      2) The request goes to server A, based on the DNS A record for example.com.
      3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
      4) Server A forwards the request to server B.
      5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return"; it is a common setup for load balancing and is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network, and for servers outside server A's network, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load balancing setup: server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything. Thanks.
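
    For what it's worth, on Linux the usual building block for step 4 is LVS/IPVS in direct-routing mode rather than hand-rolled MAC rewriting; a rough sketch of both ends (the addresses are placeholders, and IPVS balances at layer 4, so the "which file lives on which server" decision still needs your own logic choosing a virtual service in front of it):

      # on server A (the director): forward the shared IP's port 80 to server B
      ipvsadm -A -t 203.0.113.10:80 -s rr
      ipvsadm -a -t 203.0.113.10:80 -r 203.0.113.20 -g     # -g = direct return

      # on server B (the real server): own the shared IP silently on loopback
      ip addr add 203.0.113.10/32 dev lo
      sysctl -w net.ipv4.conf.all.arp_ignore=1
      sysctl -w net.ipv4.conf.all.arp_announce=2

    For real servers in other data centres, where layer-2 direct routing is impossible, IPVS's tunnelling mode (-i instead of -g, with the real server terminating the IPIP tunnel) is the analogue of the tunneling mentioned in the question.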

    Read the article

  • Forcing a particular SSL protocol for an nginx proxying server

    - by vitch
    I am developing an application against a remote HTTPS web service. While developing, I need to proxy requests from my local development server (running nginx on Ubuntu) to the remote HTTPS web server. Here is the relevant nginx config:

      server {
          server_name project.dev;
          listen 443;
          ssl on;
          ssl_certificate /etc/nginx/ssl/server.crt;
          ssl_certificate_key /etc/nginx/ssl/server.key;

          location / {
              proxy_pass https://remote.server.com;
              proxy_set_header Host remote.server.com;
              proxy_redirect off;
          }
      }

    The problem is that the remote HTTPS server can only accept connections over SSLv3, as can be seen from the following openssl calls. Not working:

      $ openssl s_client -connect remote.server.com:443
      CONNECTED(00000003)
      139849073899168:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
      ---
      no peer certificate available
      ---
      No client certificate CA names sent
      ---
      SSL handshake has read 0 bytes and written 226 bytes
      ---
      New, (NONE), Cipher is (NONE)
      Secure Renegotiation IS NOT supported
      Compression: NONE
      Expansion: NONE
      ---

    Working:

      $ openssl s_client -connect remote.server.com:443 -ssl3
      CONNECTED(00000003)
      <snip>
      ---
      SSL handshake has read 1562 bytes and written 359 bytes
      ---
      New, TLSv1/SSLv3, Cipher is RC4-SHA
      Server public key is 1024 bit
      Secure Renegotiation IS NOT supported
      Compression: NONE
      Expansion: NONE
      SSL-Session:
          Protocol  : SSLv3
          Cipher    : RC4-SHA
      <snip>

    With the current setup my nginx proxy gives a 502 Bad Gateway when I connect to it in a browser. Enabling debug in the error log I can see the message: [info] 1451#0: *16 peer closed connection in SSL handshake while SSL handshaking to upstream. I tried adding ssl_protocols SSLv3; to the nginx configuration but that didn't help. Does anyone know how I can set this up to work correctly?
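
    A hedged note on why ssl_protocols didn't help: it only governs the protocols nginx offers to its own clients on the listening socket, not the handshake nginx makes as a client to the upstream. The upstream side has its own directive, proxy_ssl_protocols, which first appeared in nginx 1.5.6; on a version that has it, something like this should force the SSLv3 handshake the remote service expects:

      location / {
          proxy_pass https://remote.server.com;
          proxy_set_header Host remote.server.com;
          proxy_redirect off;

          # applies to the proxied connection only; requires nginx >= 1.5.6
          proxy_ssl_protocols SSLv3;
          proxy_ssl_ciphers RC4-SHA;
      }

    On an older nginx, one workaround is to put an stunnel client (sslVersion = SSLv3) between nginx and the remote host and proxy_pass to it over plain HTTP on localhost.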

    Read the article

  • Download JDK onto a remote server

    - by itsadok
    I want to get the latest JDK onto a server in a remote location. Downloading the JDK from Sun's website requires jumping through all kinds of hoops before you actually get the file. I'm not sure whether they use cookies or my IP address, but simply copying the file URL and trying wget on the server doesn't work. Googling for mirrors of the JDK, I could only find old versions. Right now I'm left with the option of downloading it to my computer and then uploading it to the server, which feels slow and stupid. Anyone got a better idea? EDIT: Thanks for all the replies. Just to clarify: as I write this I'm rsyncing the 78 MB file to my server. It should be done in about an hour, so it's not such a big deal. However, since this is not the first time I'm doing this, I was hoping for a better solution for next time. Solution: what I ended up doing was

      sudo aptitude install lynx-cur
      www-browser http://java.sun.com/javase/downloads/

    From there it's mostly using the arrow and enter keys, and answering "Yes" to a lot of lynx security questions (about cookies and certificates). Thanks to resonator.

    Read the article

  • Cannot browse remote networks even with WINS configured

    - by paradroid
    As the NetBIOS protocol is not routable, WINS has been installed and configured on two domain controllers, on different networks, in order to enable network browsing of remote networks. The WINS servers seem to be replicating with each other, and each has 127.0.0.1 set as the primary WINS server in its LAN interface properties, with nothing entered for the secondary WINS server. The DC that holds the PDC Emulator FSMO role has the Computer Browser service running and set to start automatically, and it has the WINS/NBT node type set to 0x8 (H-node, hybrid node). Remote network browsing does not work. Is the WINS/NBT node type correct for this scenario? The reason I think it may not be is that I also set the DHCP server's 046 WINS/NBT node type option to 0x8, after which the DHCP clients started to disappear from the Network folders. When that option is not set, does it default to B-node (broadcast node)? Or could it be a problem with the WINS server setup?

    Read the article

  • Domain names timing out after VPS IP change

    - by Fourjays
    I rent a CentOS 5 VPS from a UK-based provider, with DirectAdmin also installed. On Thursday night they carried out planned maintenance in which the two IPs I had been assigned were changed to two new ones. On Friday, after the change had taken place, I updated my domain name records to reflect the new IPs. Since then, all of the domains pointing to the VPS have been timing out. DirectAdmin was also not responding, but that was resolved by running the ipswap scripts as described in the DirectAdmin knowledgebase; it did not fix my domains, though. I have contacted the VPS provider but have been waiting for a response for some time now. I have checked again and again, and all the IPs referenced in DirectAdmin are correct. If I go to the server IP in my browser it responds with "Apache is functioning normally." Email accounts on the server are also functioning correctly, but if I access a domain itself, it times out. Running a ping and a DNS lookup, I can confirm the nameserver IPs are correct. If I run a traceroute it reaches an IP that is similar to my VPS IPs (the last two blocks are different) before timing out; it never shows my server's IP. I am relatively new to VPS management, so I don't have a vast wealth of experience troubleshooting this. I have checked all of the httpd configuration files, which don't seem to have any IP references in them at all. Looking in the Apache error logs, what errors there are do not coincide with times I have tried to access the site. Is this issue at my provider's end? Is there anything else I can check or test to rule out post-IP-change problems with my server configuration? It was all running fine prior to the IP change.
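
    One hedged line of investigation: if the domains' nameservers are hosted on this same VPS, the glue records registered at the registrar may still point at the old IPs even though the zone data inside DirectAdmin looks correct, and stale glue produces exactly this "the records look right but the world times out" pattern. dig from an outside machine can separate the cases (the domain and nameserver names below are placeholders):

      # follow the delegation from the root and see which IP the world is told
      dig +trace example.com A

      # ask your own nameserver directly
      dig @ns1.example.com example.com A +short

      # check the address the parent zone hands out for the nameserver itself
      dig ns1.example.com A +short

    If any of these still return the old addresses, the fix is at the registrar (update the nameserver/glue IPs) rather than on the VPS; a traceroute that dies one hop short of the server is also consistent with traffic still being sent towards the old, now-unrouted IP.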

    Read the article

  • Small Business HP Virtualisation and iSCSI SAN Options

    - by Robin Day
    We are a small business that hosts our core product on a number of HP servers. Our core production setup is:

      1x HP DL380, high powered, for a SQL Server database
      1x HP DL360, mid powered, for our core application server
      6x HP DL320, low powered, for our front ends

    We run our training / testing / support systems on a similar setup; the servers are just older and less powerful. Unfortunately this is now causing us issues, as the system has grown beyond the capabilities of these older servers. Upgrading them would be expensive, and we believe that virtualisation is probably the way to go for the future. Locally we run a number of test / dev environments on ESXi using direct storage on a couple of high-powered DL360s, and these are performing fairly well. We're thinking that instead of replacing all of our test servers we can implement an iSCSI SAN and one or two high-powered hosts, and hopefully, when it comes time to replace our live servers as well, we can just expand the virtual environment to cope. So my question is: can anyone offer any advice on some suitable options? We have generally been extremely happy with HP servers, and all of our kit is currently HP, so our preference would be to stick with HP; however, I'm always happy to hear about other options. I'm hoping that initially a budget of around 15-25k (GBP) would be suitable; this could potentially be increased if I had confidence that the system would pave the way for a cost-effective upgrade of our live systems in the future as well. I am new to SANs and my only real experience is playing with OpenFiler on some old desktops. I think iSCSI should be suitable, but I've not done any research into how SQL Server may perform. I've had a browse through HP's sites and see plenty of information about EVA, MSA, LeftHand, etc. However, from looking at all that, I don't see which options would be best and, more importantly, I don't know exactly what I would need to buy. Any help, links, or opinions would be much appreciated. Thanks

    Read the article

  • Bypass proxy authentication [closed]

    - by Diego Queiroz
    My scenario: my network has a proxy that requires interactive authentication. When I access any URL, a username and password are requested to enable navigation. I do have a valid username/password (meaning I have permission to access external content), but I do not have access to the proxy server itself (any change to the proxy server is not an option). What I need: I need to bypass the interactive authentication step and make the authentication automatic. What I do NOT need/want: I do not want to hack the network or access unauthorized content. In other words, I just need a way to "save" my password on the computer (security is not a concern) so that applications that do not support this kind of interactive authentication can access the internet (such as non-browser software that also uses the HTTP port). My guess: my guess is to develop a new proxy server that will run on the local machine (e.g. a proxy for the network proxy). This local proxy would connect to the network proxy, authenticate, and forward the content. Of course this is a last resort; I would prefer not to have to develop a proxy server. Does someone know another solution? (Any operating system.)
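
    If the upstream proxy uses Basic or NTLM authentication, a ready-made local forwarding proxy already does what is described here: cntlm listens on localhost, adds your credentials to each request, and forwards everything to the corporate proxy, so other software never sees the authentication step. A sketch of a cntlm.conf (all values are placeholders):

      Username    myuser
      Domain      MYDOMAIN
      Password    mypassword           # or store hashes generated with: cntlm -H
      Proxy       proxy.internal.example:8080
      Listen      3128
      NoProxy     localhost, 127.0.0.*

    Non-browser applications are then pointed at http://127.0.0.1:3128 (for example via the http_proxy environment variable or their own proxy settings) and authenticate implicitly through cntlm.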

    Read the article

  • Apache2 shared server: default webpage

    - by Eamorr
    Greetings, I have an apache2 server with four domain names pointing to the server's single IP address. When I type www.site1.com it serves pages from /home/eamorr/site1/index.php, and the same goes for www.site2.com, www.site3.com, and www.site4.com. However, when I type a domain into the browser's address bar without the www, it always redirects to site1.com:

      site1.com -> site1.com
      site2.com -> site1.com
      site3.com -> site1.com
      site4.com -> site1.com

    How do I configure Apache to do the following instead?

      site1.com -> site1.com
      site2.com -> site2.com
      site3.com -> site3.com
      site4.com -> site4.com

    Here is my default config:

      ServerAdmin [email protected]
      ServerName www.site1.com
      DocumentRoot /home/eamorr/sites/site1.com/www
      DirectoryIndex index.php index.html

      <Directory /home/eamorr/sites/site1.com/www>
          Options Indexes FollowSymLinks MultiViews
          Options -Indexes
          AllowOverride all
          Order allow,deny
          allow from all
          php_value session.cookie_domain ".site1.com"

          # Added by EOH for redirection
          RewriteEngine on
          RewriteRule ^([^/.]+)/?$ driver.php?uname=$1 [L]
      </Directory>

      ErrorLog /var/log/apache2/error.log
      # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
      LogLevel warn
      CustomLog /var/log/apache2/access.log combined

    I'd like to look at the domain name and then redirect to www.sitex.com. Is there an Apache rule to do this? I hope someone can help; my sysadmin/apache2 config skills aren't the best. Many thanks in advance,
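
    This is the classic fall-through behaviour of name-based virtual hosting: a request whose Host header (e.g. site2.com) matches no ServerName or ServerAlias is handed to the first vhost defined, which here is site1. A hedged per-site sketch, using the directory layout from the question, that both claims the bare domain and redirects it to the www form:

      <VirtualHost *:80>
          ServerName www.site2.com
          ServerAlias site2.com
          DocumentRoot /home/eamorr/sites/site2.com/www

          # send bare-domain requests to the www form
          RewriteEngine on
          RewriteCond %{HTTP_HOST} ^site2\.com$ [NC]
          RewriteRule ^(.*)$ http://www.site2.com$1 [R=301,L]
      </VirtualHost>

    Repeating the same pattern for site3 and site4, and adding ServerAlias site1.com to the existing default vhost, gives each domain its own destination instead of everything collapsing onto site1.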

    Read the article

  • Recommendations for hosting large videos

    - by Clinton Blackmore
    I recently created a 45-minute, 300 MB video file, put it on my website, and told a mailing list about it. Checking my site stats, I see that I've already used 20% of my "unlimited" bandwidth for the month. As I want to be able to host several videos like this, I clearly need to consider other options. The appeal of hosting the files on my own site (aside from the supposedly unlimited disk space and bandwidth) is having control over the format, resolution, and quality of the videos, as well as making it clear that I'm the copyright holder (although the videos will be under a Creative Commons license). For the screencasts I'm making, a high resolution (say 3/4 of 1024 x 768) really makes it easier to see what is going on on the screen, and it is always a plus not to have the experience marred by advertisements. One more wrench to throw in: while the videos are non-commercial, they do promote a club, and that seems to fall afoul of some terms of service (especially for free services; while free is very nice, I will certainly consider putting up some money). What recommendations do you have for (fairly) long, high-resolution videos? Should I look in depth at sites like YouTube and Vimeo, should I consider a filesharing site (I have no qualms with someone downloading the entire video first; I wouldn't want to watch 45 minutes in my browser!), hosting the files with BitTorrent (ugh, I think that would reduce my audience), or should I be looking into other web hosts (and if so, who)?

    Read the article

  • MAMP Pro .xip.io, fixing urls with htaccess

    - by user3540018
    I've got all my websites set up with MAMP Pro. For instance, when I go to example.com, the browser displays the website that's set up on my iMac. Now I want MAMP Pro to let me view the site on my other computers/devices, which are all on the same network. So far all I had to do was check the "via Xip.io (LAN only)" checkbox, and now I can view the website on my other computers/devices within the LAN by going to example.com.10.0.1.13.xip.io. The problem is that on these other computers/devices, when I click on links I get a 404 error: when I go to example.com/news I get the 404, but when I go to example.com.10.0.1.13.xip.io/news I get the right page. So in order to solve my problem I need to rewrite the URLs: whenever someone follows a link to example.com/news, they should end up at example.com.10.0.1.13.xip.io/news. I don't want to change all the links in my MySQL database, and I believe I can do this with the .htaccess file alone. I've opened the .htaccess file and added the last two lines below, but it just doesn't work:

      <IfModule mod_rewrite.c>
          RewriteEngine On

          # Send would-be 404 requests to Craft
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteCond %{REQUEST_URI} !^/(favicon\.ico|apple-touch-icon.*\.png)$ [NC]
          RewriteRule (.+) index.php?p=$1 [QSA,L]

          RewriteCond %{HTTP_HOST} ^example\.com
          RewriteRule ^(.*)$ http://www.example.com.10.0.1.13.xip.io/$1 [R=permanent,L]
      </IfModule>

    Or perhaps I don't need to change the .htaccess file at all; is there something I could be missing in the MAMP Pro settings, or perhaps a MAMP extension that I need?

    Read the article

  • Lucid Lynx login issue

    - by Bart Silverstrim
    I recently upgraded from Karmic to Lucid. The upgrade seemed to go well, with no noticeable issues. I logged in and my window manager wasn't starting: applications would appear, but without control buttons and borders, so I figured the window manager needed a swift kick. I opened a web browser, and a quick Google search had me run "metacity --replace &", and everything popped up. I re-ran the Compiz configuration tool to enable my rotating desktop cube the way I like it, and had to reconfigure my desktop switcher to the right number of desktops (although the first time I ran it, it crashed on the panel and reloaded... odd, but once it relaunched it seemed fine). Today I installed updates, rebooted, and logged in for the second time since my upgrade. Again the window manager was dead, my Compiz settings were gone, and the workspaces were set back to four (and when I opened the preferences to change them, it crashed on the panel and reloaded again). Resetting everything made things look somewhat normal again; I'm guessing it'll work until I reboot again. Googling around isn't turning up similar complaints about Lucid Lynx and the window manager. Before I go deleting preference files, does anyone else know of this kind of issue and what can be done about it? Or should I start taking the stab-in-the-dark approach of deleting preference files, hoping one of them is corrupt or has something unsupported in it that's throwing Lucid for a loop?

    Read the article

  • How do I correctly SSH port forward using LiveReload on Redhat?

    - by program247365
    Referencing this page: http://feedback.livereload.com/knowledgebase/articles/86280-if-you-edit-files-directly-on-your-server

    It says you can forward the LiveReload-specific port 35729 over SSH using this command:

      ssh -L 35729:127.0.0.1:35729 mylogin@myremoteserverIP

    When I run it with the -v option, I get:

      debug1: Local connections to LOCALHOST:35729 forwarded to remote address 127.0.0.1:35729
      debug1: Local forwarding listening on ::1 port 35729.
      debug1: channel 0: new [port listener]
      debug1: Local forwarding listening on 127.0.0.1 port 35729.
      debug1: channel 1: new [port listener]
      debug1: channel 2: new [client-session]
      debug1: Entering interactive session.
      debug1: Sending environment.
      debug1: client_input_channel_req: channel 2 rtype [email protected] reply 1
      debug1: Connection to port 35729 forwarding to 127.0.0.1 port 35729 requested.
      debug1: channel 3: new [direct-tcpip]
      channel 3: open failed: connect failed: Connection refused
      debug1: channel 3: free: direct-tcpip: listening port 35729 for 127.0.0.1 port 35729, connect from 127.0.0.1 port 63673, nchannels 4

    I thought adding this line to my /etc/services would help, but it doesn't:

      livereload      35729/tcp       # livereload usage with guard-livereload

    Every time I attempt to connect with the browser extension, I believe it's getting blocked by my server. What am I missing here? Do I need to edit /etc/services for this to work?
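
    The "connect failed: Connection refused" on channel 3 suggests the tunnel itself is fine; it is the far end of the forward (127.0.0.1:35729 on the server) that has nothing listening, and /etc/services entries have no effect on that. A hedged check-and-fix sketch (assuming guard-livereload is what should be listening on the Red Hat box):

      # on the remote server: is anything listening on 35729?
      ss -tlnp | grep 35729          # or: netstat -tlnp | grep 35729

      # if not, start the LiveReload server there first, e.g. from the project dir
      bundle exec guard

      # then, from the local machine, the forward from the article
      ssh -L 35729:127.0.0.1:35729 mylogin@myremoteserverIP

    If instead the LiveReload app runs on the local machine and only the files live on the server, the forward would need to go the other way (ssh -R), so it is worth confirming which side is actually supposed to own port 35729.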

    Read the article

  • PHP-FPM issue on LEMP Stack and WordPress

    - by jw60660
    I'm very much an NGINX and server admin beginner. I used this tutorial to install NGINX / PHP / MySQL / WordPress: C3M Digital Tutorial. In that tutorial the backend php-cgi setup is configured using fastcgi, and php5-fpm was installed along the way:

      apt-get install nginx-full php5-fpm php5 php5-mysql php5-apc php5-mysql php5-xsl php5-xmlrpc php5-sqlite php5-snmp php5-curl

    After reading that the NGINX configuration in the WordPress Codex was more secure than most tutorials, I decided to use the Codex configuration (WordPress NGINX configuration in Codex), which uses php-fpm for the backend php-cgi. When opening the browser I got a 502 Bad Gateway error. The error log shows:

      2012/06/10 21:18:27 [crit] 14009#0: *4 connect() to unix:/tmp/php-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 12.3.456.789, server: mywebsite.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/tmp/php-fpm.sock:", host: "mywebsite.com"

    In the main NGINX configuration file supplied by the Codex I noticed the "server unix:" line in the upstream php block, which points to a socket that does not exist:

      # Upstream to abstract backend connection(s) for PHP.
      upstream php {
          server unix:/tmp/php-fpm.sock;
          # server 127.0.0.1:9000;
      }

    I checked /tmp and it was empty. It seems I missed configuring php-fpm to play nicely with NGINX. Can someone point me in the right direction? Much appreciated!
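
    The socket path in the nginx upstream has to match the listen directive of the PHP-FPM pool; the Codex config simply assumes /tmp/php-fpm.sock, while Debian/Ubuntu packages normally listen on 127.0.0.1:9000 or /var/run/php5-fpm.sock. A hedged sketch of sorting out the mismatch (the pool file path is the usual Debian/Ubuntu location):

      # see where php-fpm is actually listening
      grep -R "^listen" /etc/php5/fpm/pool.d/

      # option A: point nginx at that address, e.g. in the upstream block:
      #     upstream php { server unix:/var/run/php5-fpm.sock; }
      # option B: make php-fpm create the socket nginx expects, by setting
      #     listen = /tmp/php-fpm.sock
      # in /etc/php5/fpm/pool.d/www.conf

      # then restart/reload both
      service php5-fpm restart
      service nginx reload

    Either direction works; the only requirement is that both sides name the same socket (and that nginx's worker user can read and write it).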

    Read the article

  • Access keystore on Sun ONE Webserver 6.1 for 2048 bit key length SSL

    - by George Bailey
    We want to generate 2048-bit CSRs. The browser-based GUI only gives us a 1024-bit CSR and I don't know how to change that. It seems 1024-bit key lengths are no longer supported by SSL companies (the lower-cost options only support 2048-bit; Thawte, which is much more expensive, says it accepts 1024-bit only for one or two year certificates, not three). The legacy systems in question are running Sun ONE Webserver 6.1. Upgrading would be time consuming and we would rather not do that right now; we will be phasing these out, but it will take a while, so... Got it!! This guide is for the same version of the webserver I am using: http://middlewarekb.wordpress.com/2010/06/30/how-to-generate-2048-bit-keypair-using-sun-one-or-iplanet-6-1-servers/

      /opt/SUNWwbsvr/bin/https/admin/bin/certutil -R -s "CN=sub.domain.ext,OU=org unit,O=company name,L=city,ST=spelled state,C=US,E=email" -a -k rsa -g 2048 -v 12 -d /opt/SUNWwbsvr/alias -P https-sub.domain.ext-hostname- -Z SHA1

    Previous efforts edited out.

    Read the article
