Search Results

Search found 23346 results on 934 pages for 'clean url'.


  • Reverse proxy with SSL and IP passthrough?

    - by Paul
    Turns out that the IP of a much-needed new website is blocked from inside our organization's network, for reasons that will take weeks to fix. In the meantime, could we set up a reverse proxy on an Internet-based server which will forward SSL traffic, and perhaps client IPs, to the external site? Load will be light. No need to terminate SSL on the proxy. We may be able to poison DNS so the original URL can work. How do I work out whether I need URL rewriting? Squid/Apache/nginx/something else? Setup would be fastest on Win 2000, but other OSes are OK if that would help. Simple and quick are good since it's a temporary solution. Thanks for your thoughts!
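
    A TCP-level pass-through needs no SSL termination, so the certificates stay untouched. A minimal sketch with socat on a Linux box (the destination IP below is a placeholder for the blocked site's address):

        # forward every incoming connection on 443 to the external site
        socat TCP-LISTEN:443,fork,reuseaddr TCP:203.0.113.10:443

    Note that at this layer the external site sees the proxy's IP, not the client's; true client-IP passthrough would need cooperation from the destination.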

    Read the article

  • How to trigger a check for updates in Firefox programmatically or from a command line?

    - by Triynko
    Is there a command line switch for firefox.exe, or an "about:" URL, that will either force an update check or at least display the Help > About dialog, which checks for updates and tells you whether you're running the latest version? One site claimed that the "about:" URL was the same as the Help > About menu, but it's not. I built a program to automate the updating of various programs on my machine, and most programs have command line tools for checking for updates: Windows Update has wuauclt.exe, Java has jucheck.exe. For some applications I can even automate the interface, but it's difficult in Firefox, because the main window title is unpredictable (it depends on which web page is active), and all Firefox windows seem to use the exact same window class name.
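
    There is no documented update-check switch that I know of, but the check can be approximated from a script. A sketch that compares the installed version against Mozilla's product-details feed (a real endpoint, though its long-term stability is an assumption; on Windows, firefox.exe -v | more prints the version string):

        # latest release according to Mozilla's product-details feed
        latest=$(curl -s https://product-details.mozilla.org/1.0/firefox_versions.json \
                 | grep -o '"LATEST_FIREFOX_VERSION": *"[^"]*"' | cut -d'"' -f4)
        # locally installed version, e.g. "Mozilla Firefox 17.0.1"
        installed=$(firefox --version | awk '{print $NF}')
        [ "$latest" = "$installed" ] && echo "up to date" || echo "update available: $latest"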

    Read the article

  • Lighttpd domain redirection

    - by HTF
    I would like to redirect domains on HTTP/HTTPS:

        http://old.com  -> https://new.com
        https://old.com -> https://new.com

    I have to specify the SSL key/certificate for the old domain, but I'm not sure where I have to place these directives:

        $SERVER["socket"] == ":443" {
            ssl.engine = "enable"
            ssl.pemfile = "/etc/pki/tls/private/new.com.pem"
            ssl.ca-file = "/etc/pki/tls/certs/new.com.crt"
        }

        $SERVER["socket"] == ":80" {
            $HTTP["host"] =~ "old.com|new.com" {
                url.redirect = ( "^/(.*)" => "https://new.com:443/$1" )
            }
        }

    I was trying to add the code below, but Lighttpd reports configuration errors:

        $SERVER["socket"] == ":443" {
            $HTTP["host"] =~ "old.com" {
                url.redirect = ( "^/(.*)" => "https://new.com:443/$1" )
            }
            ssl.engine = "enable"
            ssl.pemfile = "/etc/pki/tls/private/old.com.pem"
            ssl.ca-file = "/etc/pki/tls/certs/old.com.crt"
        }
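
    For reference, a sketch of a layout lighttpd 1.4.24+ (with SNI) is generally said to accept: keep ssl.engine and a default pemfile at socket level, and override only the pemfile inside the host conditional:

        $SERVER["socket"] == ":443" {
            ssl.engine  = "enable"
            ssl.pemfile = "/etc/pki/tls/private/new.com.pem"       # default certificate

            $HTTP["host"] =~ "^(www\.)?old\.com$" {
                ssl.pemfile  = "/etc/pki/tls/private/old.com.pem"  # served via SNI
                url.redirect = ( "^/(.*)" => "https://new.com/$1" )
            }
        }

    A second $SERVER["socket"] == ":443" block for the same port would itself be a configuration error, which may be what lighttpd is complaining about.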

    Read the article

  • Is it okay to use random URLs instead of passwords?

    - by stew
    Is it considered "safe" to use a URL constructed from random characters, like this? http://example.com/EU3uc654/Photos I'd like to put some files/picture galleries on a webserver that are only to be accessed by a small group of users. My main concern is that the files should not get picked up by search engines or curious power-users who poke around my site. I've set up an .htaccess file, only to notice that clicking on http://user:pass@url/ links doesn't work well with some browsers/email clients, prompting dialogs and warning messages that confuse my not-too-computer-savvy users.
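
    These "capability URLs" are only as strong as their entropy and secrecy; they leak through Referer headers, server logs, and forwarded links. A sketch of the two pieces, assuming Apache with mod_headers available:

        # ~128 bits of randomness for the secret path segment
        openssl rand -hex 16

        # .htaccess in the gallery directory: no listings, and ask crawlers to stay out
        Options -Indexes
        Header set X-Robots-Tag "noindex, nofollow"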

    Read the article

  • Ubuntu 12.04 issue installing Webmin modules (file does not exist)

    - by Arthur
    I'm running Ubuntu 12.04. On this machine I have Webmin 1.580 installed. When trying to add a new module to Webmin I receive an error: "Failed to install module from : File does not exist". This happens when I go to Webmin Configuration > Webmin Modules, select "Third party module from", select "OpenVPN admin tool" and click Install Module. After I selected "OpenVPN admin tool", this URL was filled in: http://www.openit.it/downloads/OpenVPNadmin/openvpn-2.5.wbm.gz The URL resolves to a correct file. When uploading the file directly, the same error occurs. In the log file /var/webmin/webmin.log nothing special is reported. How can I fix this issue? P.S. I run this server in CLI mode.
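
    As a cross-check, Webmin ships a command-line installer script, so the module can be installed without the web UI. A sketch, assuming the usual Debian/Ubuntu package layout under /usr/share/webmin:

        wget http://www.openit.it/downloads/OpenVPNadmin/openvpn-2.5.wbm.gz
        sudo /usr/share/webmin/install-module.pl openvpn-2.5.wbm.gz

    If that also fails, the error message from the script is usually more specific than the web UI's.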

    Read the article

  • Can too many 301 redirects cause a DNS error?

    - by Graham
    For a site http://imageocd.com that I just set up, I initially spelled the category "automobiles" as "autimobiles"... I know, it's ridiculous. I then set up over 10,000 pages behind that category, e.g. http://imageocd.com/automobiles/hillman-minx-cabrio-pictures-and-wallpapers. So I set up over 10,000 301 URL redirects to fix the spelling of "automobiles". I just checked my Google Webmasters report and got an error saying:

        http://www.imageocd.com/: Googlebot can't access your site (Sep 7, 2012)
        Over the last 24 hours, Googlebot encountered 2 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 66.7%.

    Could the overabundance of 301 redirects be causing this? I host 13 sites on this dedicated server and all sites are running fine. I also contacted GoDaddy and they said the server is running fine. Any ideas on what might be going on? Also, I have "canonical" set up for every URL. Could this be part of the error? Thanks.
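
    Redirects happen at the HTTP layer, after DNS has already resolved, so the two are very unlikely to be related. DNS health can be checked on its own; a sketch with dig (the nameserver name is a placeholder for whatever whois lists for the domain):

        # ask the authoritative server directly: it should answer instantly, with no SERVFAIL
        dig @ns1.example-registrar.com imageocd.com A +norecurse

        # repeat through a public resolver to catch intermittent failures
        for i in 1 2 3 4 5; do dig @8.8.8.8 imageocd.com A +short; sleep 2; done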

    Read the article

  • How to update a Debian DNS server? New VM with same hostname as old VM

    - by opensourcechris
    We run several Linux VMs on our Hyper-V cluster. Our old IT manager configured the DNS server to resolve the URL 'devlabs.ourdomain.com' to a Debian Squeeze Apache webserver hosted on the Hyper-V cluster with the hostname 'devlabs'. We recently created a new Ubuntu VM to replace the original Squeeze VM. When we created the new Ubuntu VM, we used the same hostname, 'devlabs', to name the new VM. My problem is that now I am only able to access the new Ubuntu VM by using the IP address. How can I update our DNS server to point the URL 'devlabs.ourdomain.com' to the new VM?
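
    If the DNS server is BIND9 (an assumption; paths and addresses below are placeholders), this is a one-record change: point the A record at the new VM, bump the zone serial, and reload the zone:

        ; /etc/bind/db.ourdomain.com
        devlabs    IN  A    10.0.0.42    ; the new Ubuntu VM's address

        # after raising the serial number in the zone's SOA record:
        sudo rndc reload ourdomain.com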

    Read the article

  • reverse_proxy (mod_rewrite) and Rails

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of backend Rails apps depending on the URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com, such that a URL like http://www.my_host.com/app_one/users will display the contents of http://www.remote_host_running_app_one.com/users. I have a large and ever-expanding number of backends, so they cannot be explicitly listed anywhere other than a database. This is no problem for mod_rewrite using a prg:/ rewrite map reverse proxy. The question is: the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served by mod_rewrite, because links on the proxied page appear as absolute to the domain. I.e.: http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action/ when they need to look like /app_one/controller/action. Is there a way to fix this server-side, so that the links will be routed correctly?
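
    On the Rails side, the helpers can be told about the prefix so they emit /app_one/... themselves. A sketch for a Rails 2.x-era app (the exact setting has moved around between Rails versions); since the backends are dynamic, each one would have to learn its own prefix, e.g. from an environment variable:

        # config/environment.rb of the backend app
        ActionController::Base.relative_url_root = "/app_one"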

    Read the article

  • Fully secured gateway web sites

    - by SeaShore
    Hello, Are there any web sites that serve as gateways for fully encrypted communication? I mean sites with which I can open a secured session and then exchange both URLs and content with other sites in a secure way. Thanks in advance. UPDATE: Sorry for not being clear. I was wondering if there is a way to access any site on the Internet (HTTP or HTTPS) without letting any intranet proxy read the requested URL or the received content. My question is whether such a site exists, e.g.: I connect to that site via HTTPS, I send it a URL in a secured way, the site gets the content from the target site (possibly in a non-secured way) and returns the requested content to me in a secured way.

    Read the article

  • Apache configuration file visualization/testing

    - by Matt Holgate
    Is there a tool available (or a debug mode built into Apache) that will allow me to interactively test and explain an Apache configuration for a given request? In particular, I'd like to be able to see which directives will apply when requesting a specific URL. For example, the output for the URL http://myserver.com/foo/bar/bar.html might look something like:

        Allow from 192.168.0.3  <-- From <Location /foo/bar> in myserver.com vhost
        Require valid user      <-- From <Directory /var/www/foo> in global configuration
        Satisfy any             <-- From <File bar.html> in global configuration

    [Background: why do I want this? The Apache merging rules for configuration directives are quite complex to get right. It would be great to have a tool which allows you to check that your rules are doing exactly what you want, and it would be a good learning tool.] If there isn't such a tool, is there a debug option in Apache that will log such information for each incoming request?
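
    Nothing gives the full per-request "explain" described here, as far as I know, but pieces of it exist. A sketch of the built-in options:

        # syntax check, then a dump of how the vhosts and their directives were parsed
        apachectl -t
        apachectl -S

        # mod_info renders the merged configuration per module at /server-info
        <Location /server-info>
            SetHandler server-info
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    For per-request behaviour, raising LogLevel (e.g. to debug, or rewrite:trace3 on Apache 2.4) comes closest.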

    Read the article

  • Strange 404 errors

    - by user1400532
    I have this website, thinkmovie.in. Recently I enabled CloudFlare along with MaxCDN. When I look at my server logs, I see strange 404 errors for many of the files. For example:

        http://thinkmovie.in/img/content/15062012faith/thumbs/model_fai12e8th_latest_photoshoot_10.jpg

    but the actual URL is

        http://thinkmovie.in/img/content/15062012faith/thumbs/model_faith_latest_photoshoot_10.jpg
        referring URL: http://www.thinkmovie.in/gallery/

    That means the term "model_faith" is replaced by "model_fai12e8th". And one more:

        http://thinkmovie.in/image.php/?offset=1&height=120&width=144&cropratio=1.2:1%E2%84%91=/img/content/07052012pranitha/pranitha_hot_in_saguni_movie_press_meet_0.jpg?offset=1&height=120&width=144&cropratio=1.2:1%E2%84%91=/img/content/07052012pranitha/pranitha_hot_in_saguni_movie_press_meet_0.jpg

    while the actual URL is

        http://thinkmovie.finalytics.in/image.php/?offset=1&height=120&width=144&cropratio=1.2:1%E2%84%91=/img/content/07052012pranitha/pranitha_hot_in_saguni_movie_press_meet_0.jpg?offset=1&height=120&width=144&cropratio=1.2:1&image=/img/content/07052012pranitha/pranitha_hot_in_saguni_movie_press_meet_0.jpg
        referring URL: http://www.thinkmovie.in/gallery/hotactress/album/pranitha_hot_stills_19012012pranitha/

    (here "&image" is replaced by "%E2%84%91"). I'm not able to understand how this is happening. I checked my code several times, and I am not able to replicate this problem from my browser. Please help me.
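
    One way to tell whether the rewriting happens at the CDN edge or at the origin is to fetch the same page straight from the origin, bypassing CloudFlare (the origin IP below is a placeholder; --resolve needs curl 7.21.3+):

        # serve www.thinkmovie.in from the origin server instead of the CDN edge
        curl -s --resolve www.thinkmovie.in:80:203.0.113.25 \
             http://www.thinkmovie.in/gallery/ | grep -o 'model_[^"]*'

    If the origin's HTML carries the correct links, the mangling is being introduced by a CDN feature (CloudFlare's e-mail obfuscation, which rewrites strings that look like addresses, would be my first suspect, though that is a guess).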

    Read the article

  • How to enable the user to add background images to anchor links through the WordPress admin panel? [closed]

    - by janoChen
    I have CSS selectors like this in my style.css:

        .jimgMenu ul li.landscapes a {
            background: url(../images/landscapes.jpg) repeat scroll 0%;
        }

    What's the easiest way to enable the user to add background images to anchor links like the ones below?

    front-page.php:

        <div class="jimgMenu">
            <ul>
                <li class="landscapes"><a href="#nogo">Landscapes</a></li>
                <li class="people"><a href="#nogo">People</a></li>
                <li class="nature"><a href="#nogo">Nature</a></li>
                <li class="abstract"><a href="#nogo">Abstract</a></li>
                <li class="urban"><a href="#nogo">Urban</a></li>
                <li class="people2"><a href="#nogo">People</a></li>
            </ul>
        </div>

    To illustrate:

        .jimgMenu ul li.landscapes a {
            background: url(<add background image>) repeat scroll 0%;
        }

    What would that code look like?
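
    One way that keeps the user out of the CSS entirely: a sketch using the Theme Customizer (available since WordPress 3.4; all setting names here are invented for illustration):

        // functions.php: one image picker per menu item
        add_action('customize_register', 'jimg_customize');
        function jimg_customize($wp_customize) {
            foreach (array('landscapes', 'people', 'nature', 'abstract', 'urban', 'people2') as $item) {
                $wp_customize->add_setting("jimg_bg_$item");
                $wp_customize->add_control(new WP_Customize_Image_Control(
                    $wp_customize, "jimg_bg_$item",
                    array('label' => ucfirst($item) . ' background', 'section' => 'background_image')
                ));
            }
        }

        // print the CSS overrides in the page head
        add_action('wp_head', 'jimg_css');
        function jimg_css() {
            foreach (array('landscapes', 'people', 'nature', 'abstract', 'urban', 'people2') as $item) {
                $url = get_theme_mod("jimg_bg_$item");
                if ($url) {
                    echo '<style>.jimgMenu ul li.' . $item . ' a { background: url(' . esc_url($url) . ') repeat scroll 0%; }</style>' . "\n";
                }
            }
        }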

    Read the article

  • How to emulate a domain name - Webmin Setup

    - by theonlylos
    I am currently working on a client project where they are using a custom CMS which relies on having specific domains configured for it to work properly. In English, that means that when I try running the site in my test environment, the entire website fails because it isn't located on the primary domain (and I'm pretty sure the domain is hard-coded, since there's no control panel to adjust the file locations). What I wanted to ask is whether it is possible to keep my test environment URL but have Apache and DNS emulate my client's website URL locally, rather than calling the actual nameservers. Right now I have a virtual host set up in Apache, but I am not sure where to go from there. Any assistance is greatly appreciated.
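
    The standard trick is to override DNS on the test machine itself with a hosts entry and give Apache a matching vhost, so it answers for the production name without touching the real nameservers. A sketch (domain and paths are placeholders):

        # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts) on the test box
        127.0.0.1    clientdomain.com www.clientdomain.com

        # Apache vhost on the same machine
        <VirtualHost *:80>
            ServerName   clientdomain.com
            ServerAlias  www.clientdomain.com
            DocumentRoot /var/www/client-cms
        </VirtualHost>

    The one catch: every machine that needs to see the test site needs the hosts entry, since nothing else on the network resolves the name that way.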

    Read the article

  • Google Indexing Issue after htaccess changes

    - by Klement
    I have a site called www.FuneralCoverFinder.co.za. I have about 30 pages on the site and usually have 29 indexed (excluding 15 blog posts, which are new). I recently upgraded my entire site and made some redirection changes in my .htaccess file. I have made my URLs more SEO-friendly (removing index.php/) and redirected dead pages to working pages. I have tons of unique content, all checked by Grammarly and Plagium to ensure I have no duplicate content. I have since resubmitted my sitemap to Google and now have only one page indexed. It happened within a couple of minutes; I usually see results almost immediately after submitting, but now it's stuck on one page indexed. I assume I might have made errors in the .htaccess file, as this was my first attempt. The site runs perfectly and all the URLs redirect the way they should. I'm scared I have some or other loop, although the website runs fine. I still see many of my old indexed pages in the SERPs; I'm just worried that the issue with the new sitemap can cause my rankings some harm. My website is pretty SEO-optimized on-site. I have about 1,500 indexed backlinks and have been building them steadily over about half a year. I would really appreciate some clarity on this matter.
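
    For comparison, a minimal sketch of an index.php/-stripping rule set (one of several ways to write it). A rule that matches its own substitution is the classic source of the loops worried about here; checking THE_REQUEST instead of the rewritten URL avoids that:

        RewriteEngine On
        # 301 only URLs that were literally requested as /index.php/...
        RewriteCond %{THE_REQUEST} \s/index\.php/(\S*)
        RewriteRule ^ /%1 [R=301,L]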

    Read the article

  • DNS problems: correct nameserver, nameserver working, but not resolving

    - by user1719624
    My problem is as follows; any suggestions are welcome. [domain].org is not resolving. Whois and the registry information show that the correct nameserver is set. The primary nameserver is also the server on which [domain].org is hosted. The primary nameserver is also used for a number of other domains, and it works fine for those. Logged into the server, I can ping [domain].org and it resolves correctly. Setting the nameserver as my own DNS server on my laptop, the URL resolves correctly. If the domain has the correct nameserver set, and the nameserver can resolve the URL to the correct IP address, and if I use the nameserver as my DNS then it resolves correctly, AND the nameserver is used for other domains which resolve correctly, then why isn't it working? NB: this is a new domain registration and has been set up for around 10 days now, so it's not simply slow propagation. Any ideas? Thanks.
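
    A sketch of how to see where resolution actually dies: walk the delegation from the root, then ask the registry's own servers what they publish (a0.org.afilias-nst.info is one of the .org registry servers):

        # follow the delegation chain from the root down; the last hop shows who fails to answer
        dig +trace domain.org A

        # what delegation do the .org registry servers hand out?
        dig @a0.org.afilias-nst.info domain.org NS +norecurse

    Common culprits matching these symptoms are a nameserver that answers local queries but is firewalled against outside resolvers, or missing glue at the registry; the trace output distinguishes the two.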

    Read the article

  • Reverse Proxy (mod_rewrite) and Rails (absolute paths)

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of backend Rails apps depending on the URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com, such that a URL like http://www.my_host.com/app_one/users will display the contents of http://www.remote_host_running_app_one.com/users. I have a large and ever-expanding number of backends, so they cannot be explicitly listed anywhere other than a database. This is no problem for mod_rewrite using a prg:/ rewrite map reverse proxy. The question is: the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served by mod_rewrite, because links on the proxied page appear as absolute to the domain. I.e.: http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action/ when they need to look like /app_one/controller/action. mod_proxy_html seems like the right idea, but it doesn't seem to be as dynamic as I would need, since the rules have to be hard-coded into the config files. Is there a way to fix this server-side, so that the links will be routed correctly?
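
    For a single, known backend the mod_proxy_html version looks roughly like this sketch; as noted, with a dynamic set of backends these lines would have to be generated per app, which is exactly the limitation in question:

        ProxyPass        /app_one/ http://www.remote_host_running_app_one.com/
        ProxyPassReverse /app_one/ http://www.remote_host_running_app_one.com/

        # rewrite root-relative links in the returned HTML back under the prefix
        ProxyHTMLEnable On
        ProxyHTMLURLMap / /app_one/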

    Read the article

  • How do spambots work?

    - by rlb.usa
    I have a forum that's getting hit a lot by forum spambots, and of course the best way to defeat something is to know thy enemy. I'll worry about defeating those spambots later; right now I'd like to know more about them. Reading around, I was surprised by the lack of thorough information on the subject (or perhaps my ineptness at entering the correct search terms for better Google results). I'm interested in learning all about spambots. I've asked on other forums and gotten brush-off answers like "Spambots are always users registering on your site."

    How do forum spambots work? How do they find the 'new user registration' page? (I'm especially surprised because some forums don't have a dedicated URL for this, e.g. www.forum.com/register.html, but instead use query strings or even other methods invisible to the URL bar.) How do they know what to enter into each 'new user registration' field? How do they determine what's a page they can spam / enter data into and what is not? Do they even 'view' this page at all? If not, then I'd assume they're communicating with the server directly. How is this possible? How do they do it?

    Can forum spambots break CAPTCHAs? Can they solve logic questions (how?)? Math questions? Do they reverse-engineer client-side anti-bot validation scripts? Server-side scripts? What techniques are still valid to prevent them?

    Where do spambots come from? Is someone sitting behind the computer snickering as they watch their bot destroy site after site? Or are they snickering as they simply 'release' it onto the internet somehow? Are spambots 'run' by an infected computer somewhere? Do they replicate themselves? Etc.
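
    To make the "communicating with the server directly" point concrete: a registration form is just an HTTP POST, and a bot can replay it without ever rendering the page or running its scripts. A sketch against a hypothetical forum (every field name here is invented):

        # what a spambot effectively does: no browser, no JavaScript
        curl -s 'http://forum.example.com/register.php' \
             -A 'Mozilla/5.0 (Windows NT 6.1)' \
             -d 'username=cheap_meds' \
             -d 'email=bot@spam.example' \
             -d 'password=hunter2' \
             -d 'submit=Register'

    Which is also why purely client-side validation stops nothing: the bot never executes it.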

    Read the article

  • How to set up Apache 2 to serve only subdirectories

    - by Lynden Shields
    I have 3 sites which need to be hosted on a web server (apache2 from the repo, running on Ubuntu 12.04). They are each in their own subdirectory within /var/www/. I would like Apache to serve files from the relevant directories only if the directory name is given in the URL, but not serve the /var/www/ directory itself. E.g.: http://1.2.3.4/site1/ should work and serve the index from /var/www/site1/index.html, but http://1.2.3.4/ should not serve anything. Currently, I can't get the URL to point to the directory. Either I can get http://1.2.3.4/ to serve everything within /var/www/ (including /var/www/site2/secretstuff/), or I can get the root http://1.2.3.4/ to serve one of the subdirectories (/var/www/site1/). This is unacceptable: site 1 needs Indexes enabled, but the others must not have it. I just want to make site1's config respond only to requests of the form http://1.2.3.4/site1/* and not handle requests of the form http://1.2.3.4/. I do not have a domain name set up, so I can't use subdomains.
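
    A sketch for Apache 2.2 (what Ubuntu 12.04 ships): give the vhost an empty DocumentRoot and expose each site through an Alias, so "/" serves nothing while Indexes is set per directory:

        DocumentRoot /var/www/empty        # an empty directory; "/" returns nothing useful

        Alias /site1 /var/www/site1
        <Directory /var/www/site1>
            Options +Indexes
            Order allow,deny
            Allow from all
        </Directory>

        Alias /site2 /var/www/site2
        <Directory /var/www/site2>
            Options -Indexes
            Order allow,deny
            Allow from all
        </Directory>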

    Read the article

  • curl -I stops responding when the URL contains "="

    - by user1512778
    I ran this command:

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm&requestingHandler=WebNSORDetailHandler&ID=368343543'

    but it hangs. However, if I run this:

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi'
        HTTP/1.1 200 OK
        Content-length: 207
        Content-type: text/html
        Server: Sun-ONE-Web-Server/6.1
        Date: Sat, 15 Dec 2012 08:49:14 GMT
        Via: 1.1 proxy-internet-revproxy
        Proxy-agent: Oracle-iPlanet-Proxy-Server/4.0

    Then I tried shortening it:

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm'

    and it hangs too. I don't know why; it seems like the request stops responding whenever the URL contains "=". So I tried the URL with the "=" removed (serviceName=WebNSOR to serviceNameWebNSOR):

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceNameWebNSOR'
        HTTP/1.1 200 OK
        Content-length: 207
        Content-type: text/html
        Server: Sun-ONE-Web-Server/6.1
        Date: Sat, 15 Dec 2012 08:50:38 GMT
        Via: 1.1 proxy-internet-revproxy
        Proxy-agent: Oracle-iPlanet-Proxy-Server/4.0

    Why can't I use "="? Please assist me.
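
    An "=" is perfectly legal in a query string, so the hang is more likely the CGI handler (or the proxy in front of it) sitting on HEAD requests that would actually run a query. Two things worth trying; a sketch:

        # let curl assemble the query string, add a timeout, and watch the wire
        curl -v --max-time 30 -G \
             --data-urlencode 'serviceName=WebNSOR' \
             --data-urlencode 'templateName=detail.htm' \
             --data-urlencode 'requestingHandler=WebNSORDetailHandler' \
             --data-urlencode 'ID=368343543' \
             -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi'

        # if HEAD (-I) is what hangs, fetch the headers from a normal GET instead
        curl -s -D - -o /dev/null --max-time 30 \
             'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm&requestingHandler=WebNSORDetailHandler&ID=368343543'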

    Read the article

  • Joomla in Windows is catching my access to a Virtual Directory where I placed my MVC application

    - by Romias
    In our Windows hosting we use the root (wwwroot) folder to host a Joomla website as the public website. This is running IIS 7. Then we created a virtual directory called "App" to host an ASP.NET MVC 4 application. When I enter www.mydomain.com it shows the Joomla website correctly. When I enter www.mydomain.com/App/ it somehow accesses my MVC app, as I see the URL changing to www.mydomain.com/App/Account/LogOn?ReturnUrl=%2fApp%2f, BUT it shows a Joomla 404 error, as if Joomla were looking up that URL. BTW, the hosting has 2 ASP.NET IIS setup options: 4.0 Classic and 4.0 Integrated. Using the Integrated one, it displays a blank page; using the Classic one shows the Joomla 404 page. Any idea where to look for this?
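
    If the cause is Joomla's URL-rewrite rules being inherited by the virtual directory (an assumption, but it would match the Joomla-styled 404), a web.config inside /App that clears the inherited rules may help; a sketch:

        <!-- /App/web.config -->
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <clear />
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>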

    Read the article

  • htaccess rewrite domain/folder to domain/

    - by user1678259
    I've been checking all the questions previously asked here, but can't find a solution. We have a nearly finished website at www.example.com/web/ and would like to hide the folder /web/ from the URL. So what is currently shown is www.example.com/web/*, and what we would like is www.example.com/*. If we try a redirect, the URL www.example.com goes to www.example.com/web/, and this is what we don't want. Any help will be very much appreciated. Thank you all.
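
    A sketch of the usual two-rule pattern for this: an external 301 only for requests that literally asked for /web/..., plus a silent internal rewrite for everything else. Matching against THE_REQUEST, rather than the rewritten URL, is what prevents a loop between the two rules:

        RewriteEngine On

        # visitors who type /web/... get bounced to the clean URL
        RewriteCond %{THE_REQUEST} \s/web/(\S*)
        RewriteRule ^ /%1 [R=301,L]

        # everything else is served from /web/ without changing the address bar
        RewriteCond %{REQUEST_URI} !^/web/
        RewriteRule ^(.*)$ /web/$1 [L]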

    Read the article

  • Is there any kind of established architecture for browser-based games?

    - by black_puppydog
    I am beginning the development of a browser-based game in which players take certain actions at any point in time. Big parts of gameplay will happen in real life and just have to be entered into the system. I believe a good comparison might be a platform for managing fantasy football, although I have virtually no experience playing that, so please correct me if I am mistaken here.

    The point is that some events happen in the program (i.e. on the server, out of reach of the players), like pulling new results from some data source, the starting of a new round by a game master, and such. Other events happen in real life (two players closing a deal on the transfer of some team member or whatnot; again, I have never played fantasy football) and have to be entered into the system. The first part is pretty easy, since the game masters will be "staff" and thus can be trusted to a certain degree not to mess with the system. But the second part bothers me quite a lot, especially since the actions may involve multiple steps and interactions with different players, like registering a deal with the system that then has to be approved by the other party, or denied and passed on to a game master to decide.

    I would of course like to separate the game logic as far as possible from the presentation and basic form validation, but am unsure how to do this in a clean fashion. Of course I could (and will) put some effort into making my own architectural decisions and prototyping different ideas. But I am bound to make some stupid mistakes at some point, so I would like to avoid some of that by getting a little "book smart" beforehand. So the question is: is there any kind of architectural work that I can read up on? Papers, blogs, maybe design documents or even source code? Writing this down, this seems more like a business application with business rules, workflows and such... Any good entry points for that?

    EDIT: After reading the first answers, I am under the impression of having made a mistake by including the "MMO" part in the title. The game will not be all fancy (i.e. 3D or such) on the client side, and the logic will exist completely on the server. That is, apart from basic form validation for the user, which will also be mirrored on the server side. So the target toolset will be HTML5, JavaScript, and probably jQuery (UI). My question is more related to the software architecture/design of a system that enforces certain rules.

    Separation of ruleset and presentation: One problem I am having is that I want to separate the game rules from the presentation. The first step would be to make a separate module for the game "engine" that only exposes an interface that allows all actions to be taken in a clean way. If an action fails with regard to some pre/post condition, the engine throws an exception, which is then presented to the user like "you cannot sell something you do not own" or "after that you would end up in a situation which is not a valid game state". The problem here is that I would like to be able to not even present an invalid action in the first place, or grey out the corresponding UI elements. A sketch of this split follows below.

    Changing and tweaking the ruleset: Another big thing is the ruleset. It will probably evolve over time and most definitely must be tweaked. What's more, it should be possible (to a certain extent) to build a ruleset that fits a specific game round, i.e. choosing different kinds of behaviours in different aspects of the game. This would do something like "we play it with extension A today, but we throw out extension B." For me, this screams "architectural/design pattern", but I have no idea who might have published on something like this, not even what to google for.
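
    On the "separation of ruleset and presentation" and "tweakable ruleset" points, the usual vocabulary is the Command pattern for actions plus composable Strategy objects for rules; searching for "business rules engine" and "domain model" also turns up relevant literature. A TypeScript-flavoured sketch (every name here is invented):

        interface GameState { /* players, deals, rounds ... */ }
        interface Action { apply(s: GameState): GameState; }

        // a rule returns an error message, or null when the action is allowed
        interface Rule { check(s: GameState, a: Action): string | null; }

        class Engine {
          constructor(private rules: Rule[]) {}    // compose per round: [core, extensionA]

          perform(s: GameState, a: Action): GameState {
            for (const r of this.rules) {
              const err = r.check(s, a);
              if (err) throw new Error(err);       // surfaced to the player verbatim
            }
            return a.apply(s);
          }

          // lets the UI grey out invalid actions without duplicating any rules
          allowed(s: GameState, candidates: Action[]): Action[] {
            return candidates.filter(a => this.rules.every(r => r.check(s, a) === null));
          }
        }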

    Read the article

  • Apache: Stealth 404 the admin area until authenticated via basic auth, then allow access

    - by Kzqai
    Given an administrative area with URLs like this:

        wp-admin/
        wp-admin/whatever
        wp-admin/another-page
        wp-adminsecretlogin/

    Standard basic-auth coverage would put a username and password prompt on all of these URLs, and return a 403 on all failed auth attempts. This is a pretty obvious signal that something exists there, and thus an invitation to script/brute-force access. I would like instead to require basic auth everywhere, but when not authenticated, not prompt for a username and password, and instead return a 404 Not Found error for all URLs except the wp-adminsecretlogin/ URL. At that individual-to-the-site URL, basic auth could go through and unlock the rest of the administrative functionality (though the standard application login would still be necessary). How would I do that via Apache .htaccess or .conf directives?
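
    A sketch of one attempt at this, with a real caveat: challenge for credentials only at the secret URL, and elsewhere turn requests that carry no Authorization header into a bare 404. Whether the browser keeps sending Basic credentials for the sibling /wp-admin/ paths after logging in at /wp-adminsecretlogin/ is client-dependent, so this needs testing before being relied on:

        # challenge for credentials only at the secret URL
        <Location /wp-adminsecretlogin>
            AuthType Basic
            AuthName "admin"
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Location>

        # under wp-admin, no Authorization header means a plain 404 and no prompt
        RewriteEngine On
        RewriteCond %{HTTP:Authorization} ^$
        RewriteRule ^/?wp-admin - [R=404,L]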

    Read the article

  • htaccess RewriteRule leading slash

    - by Tiddo
    I'm using .htaccess to rewrite my URLs so that I can have nice clean URLs. However, the same .htaccess file does different things on my local server and my remote server. On my local server the URL of the website is like http://localhost/example/, and on my remote server the URL is http://example.com/. For my local server I can use the following rewrite rule:

        RewriteRule ^(.*)$ index.php?page=$1 [L,QSA]

    However, when I use this on my remote server I get an internal server error. Instead I have to use this (note the leading slash):

        RewriteRule ^(.*)$ /index.php?page=$1 [L,QSA]

    Unfortunately this doesn't work on my local server: this rewrite rule requests http://localhost/index.php instead of http://localhost/example/index.php. How can I make this work on both my remote and local servers?
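
    One way to keep a single rule set portable is to anchor it with RewriteBase and keep the substitution relative; the file-exists guard also stops the rule from rewriting its own output, which is a common cause of the 500 error seen here. A sketch:

        RewriteEngine On
        # local:  RewriteBase /example/
        # remote: RewriteBase /
        RewriteBase /example/

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?page=$1 [L,QSA]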

    Read the article

  • Browser Item Caching and URLs

    - by Damon
    Ultimately you want the browser to cache things like Flash components, Silverlight XAP files, and images to avoid users having to download them each time they hit a page. But during development it's very useful to NOT have things cached, so you are always looking at the most up-to-date file. You can always turn off caching in your browser, but if you use your browser for daily browsing then that's not the greatest option. To avoid caching, we would always just slap a randomly generated GUID onto the back of the URL of any items we didn't want cached (e.g. http://someserver.com/images/image.png?15f073f5-45fc-47b2-993b-fbaa781b926d). It worked well, but you had to remember to remove the random GUID when it went to production. However, on a GimmalSoft project we recently implemented, someone showed me a better way that didn't need to be removed from production code: just slap the last-modified date of the file on the end of the URL (or something generated from the modification date). This was kind of a genius approach, because it gives you the best of both worlds. If you modify the file, the browser goes out and gets the newest version. If you don't modify the file, it has the cached copy. Very helpful! The only downside is that you do have to read the modification date from the file, which does technically take some time.
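
    The same idea in a generic PHP sketch (the post's context is ASP.NET; this is an illustration of the technique, not the project's code):

        <?php
        // append the file's last-modified time so the URL only changes when the file does
        function versioned_url($path) {
            $mtime = @filemtime($_SERVER['DOCUMENT_ROOT'] . $path);
            return $path . '?v=' . ($mtime ? $mtime : 0);
        }
        echo '<img src="' . versioned_url('/images/image.png') . '">';
        // e.g. /images/image.png?v=1355561354, cached until the file is touched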

    Read the article
