Search Results

Search found 19155 results on 767 pages for 'url redirection'.

  • NMap 6.01

    - by TATWORTH
    NMap 6.01 has been released; it can be downloaded from http://nmap.org/download.html. "Nmap ("Network Mapper") is a free and open source (license) utility for network discovery and security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but works fine against single hosts. Nmap runs on all major computer operating systems, and official binary packages are available for Linux, Windows, and Mac OS X. In addition to the classic command-line Nmap executable, the Nmap suite includes an advanced GUI and results viewer (Zenmap), a flexible data transfer, redirection, and debugging tool (Ncat), a utility for comparing scan results (Ndiff), and a packet generation and response analysis tool (Nping)." The home page is at http://nmap.org/. Nmap is free to download and use, and you can download the source and compile it yourself if you so require.
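
    For readers new to the tool, a couple of illustrative invocations (the hosts and ranges below are placeholders, not from the announcement):

        nmap -sV -O 192.168.1.0/24     # scan a subnet, detect service versions and guess the OS
        nmap -p 1-65535 example.com    # scan all TCP ports on a single host
        nmap -sn 10.0.0.0/24           # ping scan only: list which hosts are up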

    Read the article

  • No DNS resolving with VPN (RRAS)

    - by Sven
    I have a RRAS server set up on a Windows 2003 machine with two NICs. The VPN works like a charm, and I can ping all the other computers on the network, but it fails when I try to access resources by hostname. I searched for a solution, but the ones I found are about an RRAS setup with a remote DHCP server. In my case it's the RRAS server that hands out the IP addresses (the redirection option for WINS/DNS is ON and set to the LAN NIC). I also heard something about FQDNs, but I don't really understand what that is.

    Read the article

  • Apache Redirect from https to https

    - by Nikolaos Kakouros
    I am trying to redirect without a rewrite rule from e.g. https://www.domain.com to https://www.domain.net. I have a wildcard certificate for *.domain.net. This yields the following warning in my error_log:

        [warn] RSA server certificate wildcard CommonName (CN) `*.domain.net' does NOT match server name!?

    This makes sense and I understand why the warning appears. I would like to ask if there is a way to use the Redirect directive to accomplish the above without the warnings. Here are my virtual hosts in ssl.conf:

        <VirtualHost *:443>
            SSLEngine on
            ServerName www.domain.net
            DocumentRoot /var/www/html/domain
            SSLOptions -FakeBasicAuth -ExportCertData +StrictRequire +OptRenegotiate -StdEnvVars
            SSLStrictSNIVHostCheck off
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine on
            ServerName www.domain.com
            ServerAlias www.domain.info
            Redirect permanent / https://www.domain.net
        </VirtualHost>

    Also, if there is a solution, can it be used for redirection from https://domain.com to https://www.domain.com? Thanks a lot!
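
    A minimal sketch of the bare-domain redirect the question closes with, assuming the certificate served for domain.com actually lists that name (for example via a SAN entry); if it does not, browsers will show a name-mismatch warning before the redirect is ever sent:

        <VirtualHost *:443>
            SSLEngine on
            ServerName domain.com
            # The certificate presented here must cover domain.com, or clients warn first
            Redirect permanent / https://www.domain.com/
        </VirtualHost>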

    Read the article

  • Why would a Facebook application "work" on a profile, but not a page?

    - by edt
    I made a Facebook application that works fine on profiles, but I can't figure out how to get it to show on a Facebook page. For example, after I visit the application canvas URL, allow the application, then edit the application settings and "add" it to box and tab view... I cannot click the "plus" symbol to the left of the tabs in order to add a tab for the application. It does not appear in the list of available applications. Meanwhile, the application is working/showing up on profiles with no issues. I DID check the "Installable to Pages" checkbox in the application settings (authentication tab). What could cause this? Here is the application canvas URL: http://apps.facebook.com/russian_girls/

    Read the article

  • How to block spam site republishing my content

    - by Fo.
    I noticed today that Google search results show some spam copies of one of my sites. The URL looks something like this: http://[subdomain].spamsite.com/www.example.com ...where example.com is my site. In my Apache access logs I'm noticing several lines like the following whenever I load the above URL:

        127.0.0.1 - - [219/Oct/2012:19:27:34 +0000] "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    The spammer's site shows an exact, up-to-date copy of my site, so I think they are pulling in live data. Any idea how I can block this traffic?
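
    One hedged sketch of the usual approach, assuming the scraper fetches your pages from a fixed address (the directory and IP below are placeholders you would replace with your document root and the spam host's address as seen in your access logs):

        # Apache 2.2-style deny of a single scraping host
        <Directory /var/www/html>
            Order Allow,Deny
            Allow from all
            Deny from 203.0.113.45
        </Directory>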

    Read the article

  • Is it possible to use WebMatrix with pure IIS?

    - by Mike Christensen
    I'd like to check out WebMatrix for publishing our site to IIS automatically (right now, I have to zip it up, copy it out, Remote Desktop into the server, unzip it, etc.). However, every example I can find on how to set up WebMatrix involves Azure, or uses a .publishsettings file that you'd get from your hosting provider. I'm curious whether I can publish to a normal, everyday IIS server running on Windows Server 2008. So far, all I've done to the IIS server is install Web Deploy, which I believe is the protocol that WebMatrix uses to publish. When I enter the Remote Site Settings screen, I select Enter settings. I select Web Deploy as the protocol and type in my NT domain credentials (I'm an Admin on that server). I put in the site URL for the Site Name and Destination URL. When I click Validate Connection, I get an error (screenshot not included in this excerpt). Am I doing something wrong, or is this just not possible to do?
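
    For reference, a Web Deploy publish to a plain IIS box can also be exercised from the command line; a rough sketch, assuming the Web Management Service (WMSvc) is listening on its default port 8172 and with placeholder paths and credentials:

        msdeploy.exe -verb:sync ^
          -source:contentPath="C:\dev\MySite" ^
          -dest:contentPath="Default Web Site",computerName="https://webserver:8172/msdeploy.axd?site=Default Web Site",userName="DOMAIN\admin",password="secret",authType="Basic" ^
          -allowUntrusted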

    Read the article

  • Configure IIS 6 to deny access to specific file based upon IP address

    - by victorferreira
    Hello guys, we are using IIS 6 as our web server, and we need to deny access to one specific file, at one specific URL, for everybody OUTSIDE the local network. In other words, if somebody tries to access that file/page from their own computer at home, over the internet, they must not succeed; but if the same person tries from inside the web server's network, it's OK. I am not sure about this, but Apache uses ORDER DENY,ALLOW: you specify the URL, then allow or deny to all or to a range of IPs. Any suggestions? Thanks!
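
    For comparison only, the Apache equivalent the question alludes to would look roughly like the sketch below (the file name and address range are placeholders, and this is not IIS 6 syntax):

        <Files "private-report.html">
            Order Deny,Allow
            Deny from all
            Allow from 192.168.0.0/16
        </Files>

    In IIS 6 itself, the equivalent restriction is normally set per file in IIS Manager (right-click the file, Properties, File Security tab, "IP address and domain name restrictions") rather than in an .htaccess-style config file.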

    Read the article

  • Cookie access within a HTTP Class

    - by James Jeffery
    I have an HTTP class that has a Get and a Post method. It's a simple class I created to encapsulate POST and GET requests so I don't have to repeat the get/post code throughout the application. In C#:

        class HTTP {
            private CookieContainer cookieJar;
            private String userAgent = "...";

            public HTTP() {
                this.cookieJar = new CookieContainer();
            }

            public String get(String url) {
                // Make get request. Return the JSON
            }

            public String post(String url, String postData) {
                // Make post request. Return the JSON
            }
        }

    I've made the CookieJar a property because I want to preserve the cookie values throughout the session: if the user is logged into Twitter with my application, each request I make (be it GET or POST) should use the cookies so they remain logged in. That's the basics of it anyway. But I don't want to return a string in all instances; sometimes I may want the cookie, or a header value, or something else from the request. Ideally I'd like to be able to do this in my code:

        Cookie cookie = http.get("http://google.com").cookie("g_user");
        String g_user = cookie.value;

    or

        String source = http.get("http://google.com").body;

    My question - to do this, would I need to have a Get class and a Post class that are included within the HTTP class and are accessible via accessors? Within the Get and Post classes I would then have the cookie method, the body property, and whatever else is needed. Should I also use an interface, or create a Request class and have Post and Get extend it so that common methods and properties are available to both classes? Or am I thinking totally wrong?
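
    One hedged sketch of the result-object approach the question is circling (names like HttpResult are illustrative, not an established API): have Get and Post return a small wrapper instead of a raw string, so the caller can pick the body, a cookie, or a header as needed.

        using System.IO;
        using System.Net;

        class HttpResult {
            public string Body { get; set; }
            public CookieCollection Cookies { get; set; }
            public WebHeaderCollection Headers { get; set; }
            public Cookie Cookie(string name) { return Cookies[name]; }
        }

        class Http {
            private readonly CookieContainer cookieJar = new CookieContainer();

            public HttpResult Get(string url) {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.CookieContainer = cookieJar;   // reuse session cookies across calls
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream())) {
                    return new HttpResult {
                        Body = reader.ReadToEnd(),
                        Cookies = response.Cookies,
                        Headers = response.Headers
                    };
                }
            }
        }

    Usage would then read as string gUser = http.Get("http://google.com").Cookie("g_user").Value; with no need for separate Get and Post classes.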

    Read the article

  • Should I, and how do I incorporate microdata into my asp.net website with 47 pages?

    - by Jason Weber
    I have an asp.net (vb) site with 47 pages. The problem is that it's in 10 different languages, although 98% of visitors just use English. I have 5 master pages. I've read the Google Webmaster Tools documentation, but I'm still confounded. I'm reading about how microdata is the way to go. Does this mean I should put itemtype and itemprop span and div tags in my master pages, or should I do all of my 47 pages (.resx resource files) separately? The main key phrase I want throughout search results is "machine vision". For instance, the first couple of sentences on my "about.aspx" page are:

        <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company with headquarters in <span itemprop="locality">Detroit, Michigan, USA</span>. We design, engineer, produce, and integrate special machine vision error-proofing products and <a href="http://www.ussvision.com/services/" target="_self" itemprop="url">services</a> that create lean factories by improving the quality of manufactured products, and by significantly reducing manufacturing costs through advanced automation.

    Am I doing this right, or how would I do this if I'm not? Should I use the itemprop="url" or other rich snippets for every link in my website? I mean, do I need to add an itemprop to just about everything, or can I just alter my master pages? Any guidance in this regard to help improve my SEO and SERPs would be greatly appreciated!
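
    For what it's worth, itemprop attributes only count when they sit inside an element that declares itemscope and itemtype; a minimal sketch using the schema.org Organization vocabulary (the wrapper div and the choice of properties are illustrative, not taken from the site):

        <div itemscope itemtype="http://schema.org/Organization">
          <span itemprop="name">USS Vision Inc.</span> is headquartered in
          <span itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
            <span itemprop="addressLocality">Detroit</span>,
            <span itemprop="addressRegion">Michigan</span>
          </span>.
          <a itemprop="url" href="http://www.ussvision.com/">ussvision.com</a>
        </div>

    Because the markup wraps real page content, it generally has to live in the per-page content (the .resx-driven pages or content placeholders) rather than only in the master pages.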

    Read the article

  • once VPNed into pfSense, unable to hit the public URLs of my websites - they are routed to the pfSense box

    - by Sean
    I have a pfSense box set up as the firewall/router/VPN appliance at my colo. Once I VPN into the colo (either PPTP or OpenVPN; PPTP preferred due to multiple clients and ease of configuration), I am able to hit all my servers by their private 10.10.10.x IPs and am able to browse the public internet without issue. When I try to hit the URL of a domain hosted by one of my servers, I am prompted for credentials; if I log in using the pfSense credentials, I'm connected to pfSense as if I'd used its internal IP. If I hack my hosts file to point the URL at the server's private IP it works fine, but this is obviously not a good solution. To recap: not connected to the VPN, www.myurl.com works; connected to the VPN, www.myurl.com never makes it to the correct server, but is sent only to the pfSense box. I'm sure it's something small that I've missed in the pfSense config.

    Read the article

  • Script to check a shared Exchange calendar and then email detail

    - by SJN
    We're running Windows Server 2003 and Exchange 2003 here. There's a shared calendar which HR keeps up to date detailing staff who are on leave. I'm looking for a VBScript (or alternative) which will extract the "appointment" titles of each item for the current day and then email the detail to a mail group, thereby notifying the group of which staff are on leave for the day. The resulting email body should be:

        Staff on leave today:
        Mike Davis
        James Stead

    @Paul Robichaux - ADO is the way I went for this in the end; here are the key components for those interested:

        Dim Rs, Conn, Url, Username, Password, Recipient
        Set Rs = CreateObject("ADODB.Recordset")
        Set Conn = CreateObject("ADODB.Connection")

        ' Configurable variables
        Username = "Domain\username"   ' AD domain\username
        Password = "password"          ' AD password
        Url = "file://./backofficestorage/domain.com/MBX/username/Calendar"  ' path to user's mailbox and folder
        Recipient = "[email protected]"

        Conn.Provider = "ExOLEDB.DataSource"
        Conn.Open Url, Username, Password
        Set Rs.ActiveConnection = Conn
        Rs.Source = "SELECT ""DAV:href"", " & _
                    "       ""urn:schemas:httpmail:subject"", " & _
                    "       ""urn:schemas:calendar:dtstart"", " & _
                    "       ""urn:schemas:calendar:dtend"" " & _
                    "FROM scope('shallow traversal of """"') "
        Rs.Open
        Rs.MoveFirst

        strOutput = ""
        Do Until Rs.EOF
            If DateDiff("s", Rs.Fields("urn:schemas:calendar:dtstart"), date) >= 0 And DateDiff("s", Rs.Fields("urn:schemas:calendar:dtend"), date) < 0 Then
                strOutput = strOutput & "<p><font size='2' color='black' face='verdana'><b>" & Rs.Fields("urn:schemas:httpmail:subject") & "</b><br />" & vbCrLf
                strOutput = strOutput & "<b>From: </b>" & Rs.Fields("urn:schemas:calendar:dtstart") & vbCrLf
                strOutput = strOutput & "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<b>To: </b>" & Rs.Fields("urn:schemas:calendar:dtend") & "<br /><br />" & vbCrLf
            End If
            Rs.MoveNext
        Loop

        Conn.Close
        Set Conn = Nothing
        Set Rs = Nothing

    After that, you can do what you like with strOutput; I happened to use CDO to send an email:

        Set objMessage = CreateObject("CDO.Message")
        objMessage.Subject = "Subject"
        objMessage.From = "[email protected]"
        objMessage.To = Recipient
        objMessage.HTMLBody = strOutput
        objMessage.Send

    Read the article

  • How to Access an AWS Instance with RDC when behind a Private Subnet of a VPC

    - by dalej
    We are implementing a typical Amazon VPC with public and private subnets, with all servers running the Windows platform. The MS SQL instances will be on the private subnet, with all IIS/web servers on the public subnet. We have followed the detailed instructions at "Scenario 2: VPC with Public and Private Subnets" and everything works properly, until the point where you want to set up a Remote Desktop Connection into the SQL server(s) on the private subnet. At this point the instructions assume you are accessing a server on the public subnet, and it is not clear what is required to RDC to a server on a private subnet. It would make sense that some sort of port redirection is necessary; perhaps accessing the EIP of the NAT instance to hit a particular SQL server? Or perhaps using an Elastic Load Balancer (even though that is really for HTTP protocols)? But it is not obvious what additional setup such a Remote Desktop Connection requires.

    Read the article

  • wildcard in httpd conf file?

    - by Joe
    Here is an example httpd config I'm currently using:

        <VirtualHost 123.123.123.123:80>
            ServerName mysite.com
            ServerAlias www.mysite.com
            DocumentRoot /home/folder
        </VirtualHost>

    I'm wondering, is it possible to have a wildcard for the ServerName and ServerAlias variables? The reason for asking is that I have some software that is shared among multiple URLs, all controlled in a CMS, and it's kind of a pain to add new domains via SSH every time. And before someone points out a security hole, the software does check the current URL before serving any pages :)
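
    A hedged sketch of what that usually looks like: ServerAlias accepts * and ? wildcards, so a single virtual host can absorb any name pointed at the IP (names and paths below are placeholders):

        <VirtualHost 123.123.123.123:80>
            ServerName mysite.com
            # match any other hostname pointed at this IP
            ServerAlias *
            DocumentRoot /home/folder
        </VirtualHost>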

    Read the article

  • virtualbox and nginx server_name

    - by Ivan
    I'm trying to configure GitLab running in an Ubuntu 12.04 guest with a Windows 7 host. I can SSH into the guest using port forwarding and access the nginx server using port redirection (8888 on the host maps to 80 in the guest, so localhost:8888 on the host reaches the nginx server in the guest), but the server_name in the nginx configuration file is giving me trouble. What are the correct listen and server_name values that nginx would accept? The guest has the NAT interface at 10.0.2.15 and the Host-Only interface at 192.168.56.101, static. Thanks!
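
    A minimal sketch of a server block that would answer on either guest interface regardless of the hostname the browser sends (the port and names are assumptions to adapt):

        server {
            listen 80 default_server;                  # answer on all guest interfaces
            server_name localhost 192.168.56.101 _;    # "_" is a conventional catch-all name
            # ... rest of the gitlab site configuration ...
        }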

    Read the article

  • iis php internal server error

    - by user1633206
    I developed a website using PHP/MySQL, running on an IIS server with PHP as the CGI server API. Suddenly it started giving me error 500 after two weeks. The site has a lot of scripts; the index.php home page works, but the other scripts that do a header redirection fail. What's wrong with my scripts? The LiveHTTP add-on for Firefox shows:

        GET /allplans.php?lang=ar&cat=1 HTTP/1.1
        Host: www.myhost.com... [edited]
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:14.0) Gecko/20100101 Firefox/14.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive
        Cache-Control: max-age=0

        HTTP/1.1 500 Internal Server Error
        Content-Type: text/html
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        Date: Mon, 03 Sep 2012 08:09:13 GMT
        Content-Length: 1208
        Connection: Keep-Alive
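
    As a general illustration (not a diagnosis of this particular server), a header redirect in PHP only works if nothing has been sent to the client first; a minimal sketch of the safe pattern, with a placeholder target URL:

        <?php
        // No output (not even whitespace or a BOM before <?php) may precede header()
        header('Location: http://www.example.com/allplans.php?lang=ar&cat=1', true, 302);
        exit;   // stop the script so nothing after the redirect runs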

    Read the article

  • Is there some standard way to publish a calendar feed?

    - by CMP
    For a web application I am working on, I would like to be able to give the user a single URL that they can enter into the calendar application of their choosing to have events from our application show up in their calendar. Most other sites I have seen that do similar things offer a one-time download of an .ics file that can be imported. If I have to require my users to download a new file every time the schedule changes, it sort of defeats the purpose of having the feed at all; the calendar can change many times a day. What I would really like is something like RSS, where their calendar program can look up a URL and automatically see the most recent data. Does anything like this exist? Our main target is mobile devices, so it really should be supported by iCal and Google Calendar. Anything else is a bonus.
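
    If it helps to picture it, subscription feeds of this kind are usually just an iCalendar (.ics) document served from a stable URL with Content-Type text/calendar, which calendar clients can poll; a minimal hand-written sketch with placeholder values:

        BEGIN:VCALENDAR
        VERSION:2.0
        PRODID:-//Example App//Schedule Feed//EN
        BEGIN:VEVENT
        UID:event-123@example.com
        DTSTAMP:20121001T120000Z
        DTSTART:20121005T090000Z
        DTEND:20121005T100000Z
        SUMMARY:Example event title
        END:VEVENT
        END:VCALENDAR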

    Read the article

  • Two SSL certs for a domain in DirectAdmin

    - by Bart van Heukelom
    If I were to get 2 SSL certificates, one for example.com and one for www.example.com, is there a way to install them both on the site example.com in DirectAdmin? The default interface only allows installing one for both versions. If not, can I separate the 2 domains into 2 sites? One of them would only be a redirection, so there wouldn't be any duplication of site files. (Please don't answer with "one certificate should work for both". It doesn't always. This is a DirectAdmin question)

    Read the article

  • Want Google to index redirect urls

    - by Dave Goten
    I'm having issues with users who think that Google Search is the address bar. Some of the sites that link to my site use user-friendly addresses with 301 redirects to pages that have less friendly URLs. So, for example, if I enter www.foo.com/bar it goes to www.bar.com/page.php?some-parameters-and-utm-codes-etc. Usually this is done with a 301 redirect in order to keep the SEO from foo.com on bar.com and so on, which I believe is standard practice. However, lately there have been more and more people searching for www.foo.com/bar instead of going to www.foo.com/bar directly, and because the page /bar is nothing more than a redirect it has no SEO that I know of. Things I've thought of but haven't been able to test (because Google takes forever to update :) and I'm lazy like that) include: using Google sitemaps and having the linking sites enter their redirects as entries there (I could see this working if they were the top search entry all the time, and it might appear as a sitelink, but I don't know if that'll make the URL itself show up in searches); using canonical tags on my pages pointing to the redirects they set up, which is a nightmare in itself because of the nature of my pages - one week www.foo.com/bar might go to www.bar.com/pageA.php, the next it might go to www.bar.com/pageB.php, and having to remember to take the canonical tag off of pageA so that it doesn't get confused with pageB would be a pain; and using 302 redirects -.- So I guess the question here is: does anyone have any experience or knowledge about this? What should I do to make www.foo.com/bar show up when someone 'searches' for this redirect URL?

    Read the article

  • WebSVN accept untrusted HTTPS certificate

    - by Laurent
    I am using WebSVN with a remote repository. This repository uses the https protocol. After configuring WebSVN, I get the following on the WebSVN web page:

        svn --non-interactive --config-dir /tmp list --xml --username '***' --password '***' 'https://scm.gforge.....'
        OPTIONS of 'https://scm.gforge.....': Server certificate verification failed: issuer is not trusted

    I don't know how to tell WebSVN to execute the svn command in a way that accepts and stores the certificate. Does someone know how to do it?

    UPDATE: It works! In order to have something which is well organized, I updated the WebSVN config file to relocate the Subversion config directory to /etc/subversion, which is the default path for Debian:

        $config->setSvnConfigDir('/etc/subversion');

    In /etc/subversion/servers I created a group and associated the certificate to trust:

        [groups]
        my_repo = my.repo.url.to.trust

        [global]
        ssl-trust-default-ca = true
        store-plaintext-passwords = no

        [my_repo]
        ssl-authority-files = /etc/apache2/ssl/my.repo.url.to.trust.crt

    Read the article

  • Tumblr custom domain not redirecting properly

    - by Manic
    I decided to host my blog at Tumblr, using their custom domain setup (http://blog.smokingfishgames.com/ instead of http://smokingfishgames.tumblr.com). However, it's been 72 hours and I'm still getting spotty redirection. It works some of the time: I go and see the page and blog, and it's all fine. However, it occasionally just stops working and redirects back to my web host, which is a directory with nothing but a single file called BUGGER.html (which I stuck in to make sure that it was my web host and not some empty Tumblr directory). Clearing the Chrome DNS cache makes the problem go away, for a while. After a few minutes, or an hour, or however long, I'll start seeing BUGGER.html again; I clear the cache, and poof, the blog shows up. The thing that's curious to me is that when I clear the cache and get BUGGER.html again (which happens occasionally), I can look at my Chrome DNS cache and see:

        assets.tumblr.com            UNSPECIFIED
        blog.smokingfishgames.com    UNSPECIFIED
        www.tumblr.com               UNSPECIFIED

    (IP addresses and expiration times omitted for brevity's sake; if they're important I'm sure I can replicate the issue.) This implies, to me anyway, that my browser is reaching Tumblr but getting bounced back to my web host. Any reason why this would be happening, or is this a normal symptom of DNS propagation? If it is a problem, should I be bothering Tumblr or my host with it, or is this something I can fix myself?

    Read the article

  • WCF REST Error Handler

    - by Elton Stoneman
    I’ve put up on GitHub a sample WCF error handler for REST services, which returns proper HTTP status codes in response to service errors.   The code is very simple – a ServiceBehavior implementation which can be specified in config to tag the RestErrorHandler to a service. Any uncaught exceptions will be routed to the error handler, which sets the HTTP status code and description in the response, based on the type of exception.   The sample defines a ClientException which can be thrown in code to indicate a problem with the client’s request, and the response will be a status 400 with a friendly error message:       throw new ClientException("Invalid userId. Must be provided as a positive integer");   - responds:   Request URL http://localhost/Sixeyed.WcfRestErrorHandler.Sample/ErrorProneService.svc/lastLogin?userId=xyz   Error Status Code: 400, Description: Invalid userId. Must be provided as a positive integer   Any other uncaught exceptions are hidden from the client. The full details are logged with a GUID to identify the error, and the response to the client is a status 500 with a generic message giving them the GUID to follow up on:       var iUserId = 0;     var dbz = 1 / iUserId;   - logs the divide-by-zero error and responds:   Request URL http://localhost/Sixeyed.WcfRestErrorHandler.Sample/ErrorProneService.svc/dbz     Error Status Code: 500, Description: Something has gone wrong. Please contact our support team with helpdesk ID: C9C5A968-4AEA-48C7-B90A-DEC986F80DA5   The sample demonstrates two techniques for building the response. For client exceptions, a friendly HTML response is sent in the body as well as the status code and description. Personally I prefer not to do that – it doesn’t make sense to get a 400 error and find text/html when you’re expecting application/json, but it’s easy to do if that’s the functionality you want. The other option is to send an empty response, which the sample does with server exceptions.   The obvious extension is to have multiple exceptions representing all the status codes you want to provide, then your code is as simple as throwing the relevant exception – UnauthorizedException, ForbiddenExeption, NotImplementedException etc – anywhere in the stack, and it will be handled nicely.

    Read the article

  • Nginx location to match query parameters

    - by Dave
    Is it possible in nginx to have a location {} block that matches query parameters? For example, I want to pick up that "preview=true" in this URL and then instruct it to do several different things, all possible in a location block:

        http://192.158.0.1/web/test.php?hello=test&preview=true&another=var

    The problem I'm having is that my test stuff doesn't seem to match; it seems like I can only match the URL path itself? E.g.

        location ~ ^(.*)(preview)(.*)$

    or something along those lines?
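
    A hedged sketch of the usual workaround: location matching only ever sees the request path, so the query string has to be examined separately, for example via the built-in $arg_* variables (the path and the action taken below are placeholders):

        location /web/ {
            # $arg_preview holds the value of the "preview" query parameter, if present
            if ($arg_preview = "true") {
                return 302 /preview-mode$uri;   # or set a variable, add a header, etc.
            }
            try_files $uri $uri/ =404;
        }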

    Read the article

  • How to quickly save what is currently shown in cmd.exe to a file

    - by Zeiga
    I am asking if there is a quick way/command to save the current standard output from cmd.exe or PowerShell to a file. For example, I have run a bunch of commands in cmd.exe which generated hundreds of lines of standard output. Ideally, I am looking for a single command to do "select all" and save to a file automatically. Note: I've read this, but I don't want to change my original commands, so ">" or ">>" redirection cannot be used in this scenario. Thanks.

    Read the article

  • What's the easiest way to create an HTTP proxy which adds basic authentication to requests?

    - by joshdoe
    I am trying to use a service provided by a server which requires basic HTTP authentication, but the application I am using does not support authentication. What I'd like to do is create a proxy that will let my auth-less application connect via the proxy (which adds the authentication information) to the server requiring authentication. I'm sure this can be done, but I'm overwhelmed by the number of proxies out there and couldn't find an answer for how to do this. Basically, it seems all I want is a proxy that serves this URL:

        http://username:password@remoteserver/path

    as this URL:

        http://proxyserver/path

    I can run it on Linux, but it's a plus if I can run it on Windows as well. Open source, or at least free, is a must. A big plus is if it's fairly straightforward to set up.
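
    One hedged sketch of how this is commonly done with a reverse proxy such as nginx, assuming nginx is an acceptable choice; the listen port, upstream host, and credentials are placeholders, and the header value is simply base64("username:password"):

        server {
            listen 8080;
            location / {
                proxy_pass http://remoteserver/;
                # inject the Basic credentials the upstream server expects
                proxy_set_header Authorization "Basic dXNlcm5hbWU6cGFzc3dvcmQ=";
            }
        }

    The application then talks to http://proxyserver:8080/path with no credentials, and nginx forwards each request with the Authorization header added.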

    Read the article
