Search Results

Search found 23346 results on 934 pages for 'clean url'.

  • How to get the IP Address for your Local Area Connection on Windows Server?

    - by Geo
    I want to create a batch or VBS file that will put together a URL and execute it. Part of that URL needs to be the actual IP address of the machine. How am I able to get that IP address into a variable so I can include it in the script?

    EDIT 1: I found out that the command below will give me the IP address, but I still don't know how to get that value into a variable to use in a script.

        c:\> wmic NICCONFIG WHERE IPEnabled=true GET IPAddress /format:csv
        Node,IPAddress
        IP-0AFB,{10.25.5.2}
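    A minimal VBScript sketch of one way to do this: it queries WMI (the same data the wmic command reads) for the first IP-enabled adapter and launches a URL built from the address. The target URL http://example.com/register is a hypothetical placeholder.

        ' Query WMI for the first IP-enabled NIC and grab its first bound address.
        Set wmi = GetObject("winmgmts:\\.\root\cimv2")
        Set nics = wmi.ExecQuery("SELECT IPAddress FROM Win32_NetworkAdapterConfiguration WHERE IPEnabled = TRUE")
        For Each nic In nics
            ip = nic.IPAddress(0)   ' first address bound to this adapter
            Exit For
        Next

        ' Build the URL (placeholder endpoint) and open it with the default handler.
        url = "http://example.com/register?ip=" & ip
        CreateObject("WScript.Shell").Run url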

  • Access Control Service: Programmatically Accessing Identity Provider Information and Redirect URLs

    - by Your DisplayName here!
    In my last post I showed you that different redirect URLs trigger different response behaviors in ACS. Where did I actually get these URLs from? The answer is simple – I asked ACS ;) ACS publishes a JSON encoded feed that contains information about all registered identity providers: their display names, logos and URLs. With that information you can easily write a discovery client which, at the very heart, does this:

        public void GetAsync(string protocol)
        {
            var url = string.Format(
                "https://{0}.{1}/v2/metadata/IdentityProviders.js?protocol={2}&realm={3}&version=1.0",
                AcsNamespace,
                "accesscontrol.windows.net",
                protocol,
                Realm);

            _client.DownloadStringAsync(new Uri(url));
        }

    The protocol can be one of these two values: wsfederation or javascriptnotify. Based on that value, the returned JSON will contain the URLs for either the redirect or the notify method. Now, with the help of some JSON serializer, you can turn that information into CLR objects and display them in some sort of selection dialog. The next post will have a demo and source code.
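    As a rough sketch of that deserialization step, DataContractJsonSerializer (available on the desktop CLR as well as Silverlight) can map the feed onto CLR objects in the WebClient callback. The property names below (Name, LoginUrl, LogoUrl) are assumptions about the feed's shape, not confirmed field names:

        using System.Collections.Generic;
        using System.IO;
        using System.Net;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;
        using System.Text;

        [DataContract]
        public class IdentityProvider
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public string LoginUrl { get; set; }
            [DataMember] public string LogoUrl { get; set; }
        }

        void OnDownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
        {
            // Turn the raw JSON feed into CLR objects for the selection dialog.
            using (var ms = new MemoryStream(Encoding.UTF8.GetBytes(e.Result)))
            {
                var serializer = new DataContractJsonSerializer(typeof(List<IdentityProvider>));
                var providers = (List<IdentityProvider>)serializer.ReadObject(ms);
                // bind 'providers' to whatever selection UI you use
            }
        }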

  • 500 Internal Server Error after moving Joomla installation to new environment

    - by rad
    (This is the first time I've moved a website, so please don't be hard on me.) After moving the website, the homepage shows up properly but other pages do not: I get a 500 Internal Server Error on all other pages. Before the move, Search Engine Friendly URLs and Use URL Rewriting were enabled in the Joomla dashboard. Is this the reason the other pages are not showing up? If so, how do I fix it? I think the homepage shows up because the URL myWebsite.com redirects to myWebsite.com/index.php automatically. Note that I transferred all of the Joomla files through FileZilla, imported the MySQL database properly, and edited configuration.php to set the proper settings for the database.
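    One common cause worth checking (an assumption, since the server configuration isn't shown): with URL rewriting enabled, Joomla depends on its rewrite rules being active on the new host, which usually means the shipped htaccess.txt has been renamed to .htaccess there, and FTP clients often hide or skip dotfiles during a transfer. The heart of that file is roughly:

        RewriteEngine On
        # Hand every request for a non-existent file or directory to Joomla's front controller
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php [L]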

  • Reverse Proxy (mod_rewrite) and Rails (absolute paths)

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of backend Rails apps depending on URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com, such that a URL like http://www.my_host.com/app_one/users will display the contents of http://www.remote_host_running_app_one.com/users. I have a large and ever-expanding number of backends, so they cannot be explicitly listed anywhere other than a database. This is no problem for mod_rewrite using a prg:/ rewrite map reverse proxy. The question is: the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served by mod_rewrite, because links on the proxied page appear absolute to the domain. That is, http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action/ when they need to look like /app_one/controller/action. mod_proxy_html seems like the right idea, but it doesn't seem to be as dynamic as I would need, since the rules have to be hard-coded into the config files. Is there a way to fix this server-side, so that the links will be routed correctly?
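    For a single, statically known backend, the mod_proxy_html approach would look roughly like the sketch below, which rewrites root-relative links inside the proxied HTML back under the /app_one prefix (hostnames are the question's own placeholders). The dynamic, database-driven case would still need blocks like this generated per backend, which is exactly the limitation described above.

        <Location /app_one/>
            ProxyPass        http://www.remote_host_running_app_one.com/
            ProxyPassReverse http://www.remote_host_running_app_one.com/
            # mod_proxy_html rewrites links in the returned HTML
            ProxyHTMLEnable  On
            ProxyHTMLURLMap  /  /app_one/
        </Location>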

  • Accessing server by dedicated IP address

    - by Sherwin Flight
    I'm having an issue with my hosting provider after migrating to a new account. It's taking some time to get the problem sorted out, so I am hoping someone here can shed some light on the situation. The server is running WHM/cPanel, and the site I am trying to access has a dedicated IP address. When I connect to the server like this: http://IP.HERE, instead of showing me the website the way I would expect, it shows the contents of a subfolder. So, while I would expect it to load public_html/, it loads public_html/somefolder/ instead. Any idea why this is happening instead of showing the site's homepage the way I would expect?

    EDIT: It is not redirecting, so the URL is just http://IP.ADDRESS/, but the files listed are from a subfolder. It looks as though I went to http://IP.ADDRESS/subfolder, even though the URL says it should be showing the main folder's contents. When I access the site using the domain name, it works properly, so I assume the document root is set correctly.

  • nginx errors: upstream timed out (110: Connection timed out)

    - by Sparsh Gupta
    I have an nginx server with 5 backend servers. We serve around 400-500 requests/second, and I have started getting a large number of upstream timed out errors (110: Connection timed out). The error string in error.log looks like:

        2011/01/10 21:59:46 [error] 1153#0: *1699246778 upstream timed out (110: Connection timed out)
        while reading response header from upstream, client: {IP}, server: {domain},
        request: "GET {URL} HTTP/1.1", upstream: "http://{backend_server}:80/{url}",
        host: "{domain}", referrer: "{referrer}"

    Any suggestions on how to debug such errors? I am unable to find a Munin plugin to keep a check on the number of upstream errors. Some days the number of errors is far too high, and some days it's a more decent 3-digit figure; a Munin graph would probably help us find a pattern or a correlation with something else. And how can we bring the number of such errors down to zero?
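    As a starting point for tuning (the values here are sketches to adjust, not recommendations), the relevant knobs are the proxy timeout directives, plus proxy_next_upstream so a timed-out request is retried on another of the 5 backends:

        location / {
            proxy_pass http://backends;            # the upstream{} block listing the 5 servers
            proxy_connect_timeout 5s;              # time allowed to establish the TCP connection
            proxy_read_timeout    30s;             # time allowed between successive reads of the response
            proxy_next_upstream   error timeout;   # retry another backend on error or timeout
        }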

  • IIS - Script for repeated hacks on a website

    - by dodegaard
    I currently have a site that uses ELMAH as its reporting mechanism. Each time someone hits an incorrect URL, it notifies me or logs to the system. This is annoying when someone fat-fingers a misspelled URL, but great when a hacker is trying to crack one of my sites. Has anyone ever written a script for IIS 7 on Windows Server 2008 that blocks an IP based on repeated attempts to hit a website? I've looked at Snort and other IDS systems, but if I could get a script that could be linked to my ELMAH system, it might be the perfect thing. PowerShell or similar is what I was thinking. Hints and recommendations are wonderful, and if you think a true intrusion detection system is the way to go, give me your ideas. Thanks in advance.
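    One possible shape for such a script (a sketch; the ELMAH query and the threshold logic are hypothetical and depend on how your logs are stored) is to deny the offending address in IIS 7's ipSecurity section via appcmd:

        # $ip would come from counting repeated 404s per client in the ELMAH log.
        $ip = '203.0.113.7'
        $appcmd = "$env:windir\system32\inetsrv\appcmd.exe"

        # Add a deny rule for this address to the site's IP restrictions
        # (requires the IP and Domain Restrictions feature to be installed).
        & $appcmd set config 'Default Web Site' `
            /section:system.webServer/security/ipSecurity `
            /+"[ipAddress='$ip',allowed='false']" /commit:apphost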

  • Would it be bad to put <span> tags within the <head>, for grouping metadata in schema.org format?

    - by hdavis84
    Alright, I'm currently practicing schema.org microdata and trying to find the best route for every site I build. I have found that I can piggyback itemprops on Open Graph meta tags, and I would like to piggyback more of them. However, schema.org requires you to change itemtypes to define all aspects of a "thing". Say I'm defining a LocalBusiness. Open Graph has street address, locality, and region properties I'd like to piggyback on. I'd have to do something like:

        <html lang="en" itemscope itemtype="http://schema.org/LocalBusiness">
        <head>
        ...
        <meta itemprop="name" content="Business Name" />
        <meta property="og:url" itemprop="url" content="http://example.com" />
        <meta property="og:image" itemprop="image" content="http://example.com/logo.png" />
        <span itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
            <meta property="og:street-address" itemprop="streetAddress" content="1234 Amazing Rd." />
            <meta property="og:locality" itemprop="addressLocality" content="Greenfield" />
            <meta property="og:region" itemprop="addressRegion" content="IN" />
        </span>
        </head>

    Although there's more that could be added, this is enough of an example to show what I'm trying to achieve. I've searched the web to see whether it's a problem to use spans in the head, because I don't want invalid markup. I know I can mark up the address information in the body of the pages, but the route above would be more efficient. Does anyone have an answer for this?

  • OpenWeb(String) method

    - by ybbest
    I guess this is a SharePoint beginner problem; however, it took me a while to figure out what the problem was, so I'll blog it to help me remember. Basically, I wrote the following code to grab some list items from my SharePoint subsite http://win-oirj50igics/RestAPI. However, I got an error stating: "<nativehr>0x80070002</nativehr><nativestack></nativestack>There is no Web named /http://win-oirj50igics/RestAPI". The problem is that the OpenWeb(String) method returns the web site located at the specified server-relative or site-relative URL. It expects a relative URL, so after I changed "http://win-oirj50igics/RestAPI" to "RestAPI", everything worked fine:

        using (SPSite site = new SPSite("http://win-oirj50igics/"))
        {
            // OpenWeb takes a server- or site-relative URL, not an absolute one.
            SPWeb web = site.OpenWeb("RestAPI");

            SPQuery query = new SPQuery();
            query.Query = camlDocument.InnerXml;
            SPListItemCollection items = web.Lists["Songs"].GetItems(query);

            IEnumerable<Song> sortedItems = from item in items.OfType<SPListItem>()
                                            orderby item.Title
                                            select new Song { SongName = item.Title, SongID = item.ID };
            songs.AddRange(sortedItems);
        }

  • WordPress mod_rewrite redirect specific folders

    - by Ps Cjef
    System: Debian Etch, Apache 2.2. I have a WordPress instance with multiple blogs. I would like to redirect some of the folders based on the year and month, while leaving other folders pointing to their actual locations. Example: I have archives for a few years, like 2010, 2011 and 2012:

        http://mydomain.com/wordpress/myblog/2010/02
        http://mydomain.com/wordpress/myblog/2011/01
        http://mydomain.com/wordpress/myblog/2012/01

    I would like to redirect all 2010 and 2011 posts to another blog with the same folder structure:

        http://mydomain.com/wordpress/myotherblog/2010/02
        http://mydomain.com/wordpress/myotherblog/2011/01

    and so on, while 2012 and beyond should go to the actual site (http://mydomain.com/wordpress/myblog/2012/01). I tried mod_rewrite with the following, one rule at a time, to test redirection for just one year (planning to expand later for other years), and none of them worked. Notes: RewriteEngine is already on, since there are some default WordPress rewrites; RewriteBase is set to http://mydomain.com/wordpress/; and I put my rule before all the default WordPress rules are processed.

    Didn't work, solution #1:

        RedirectMatch 301 /myblog/2010/(.*) /myotherblog/2010/$1

    Didn't work, solution #2:

        RewriteRule /myblog/2010/(.*) http://mydomain.com/myotherblog/2010/$1 [R=301]

    Didn't work, solution #3:

        RedirectPermanent /myblog/2010/(.*) http://mydomain.com/myotherblog/2010/$1

    I've also tried the above rules with and without a fully qualified URL for the new location. The rewrite log, with log level set to 9, did not provide any useful information: it shows that the specified pattern is checked against the URL, but what finally happens is a passthrough to http://mydomain.com/myblog/ for all URLs, or a 500 Internal Server Error. Any ideas on where I could be going wrong, or any alternative solutions?
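    Two details stand out, sketched under the assumption that the rules live in a .htaccess file in the /wordpress/ directory: RewriteBase takes a URL path (e.g. /wordpress/), not a full URL, and in per-directory context the pattern is matched against the path relative to that directory, so it should not start with /myblog. Something along these lines, placed above the WordPress block, is closer to what mod_rewrite expects:

        RewriteEngine On
        RewriteBase /wordpress/
        # Send the 2010 and 2011 archives of myblog to myotherblog, keeping the tail of the path.
        RewriteRule ^myblog/(2010|2011)/(.*)$ http://mydomain.com/wordpress/myotherblog/$1/$2 [R=301,L]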

  • getaddrinfo(3) failed

    - by user101289
    I'm trying to connect to a web service using a PHP wrapper (which uses curl under the covers). On my local Linux machine running PHP 5.3 it works perfectly. However, when I move to a remote server (also running PHP 5.3 on Linux), the call to the web service URL returns:

        getaddrinfo(3) failed for http://server.host.com:8080/login

    I get a similar error from a ping on the remote host:

        ping: unknown host http://server.host.com:8080/login

    But when I issue a curl request from the command line, it returns the expected URL. Can anyone shed any light on this issue? Thanks!
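    Both error messages hint that the full URL is being handed to the resolver as if it were a hostname; getaddrinfo(3) and ping want the bare name server.host.com, while only curl itself should receive the complete URL. A minimal sketch of the distinction, reusing the URL from the question:

        <?php
        // curl gets the full URL...
        $ch = curl_init('http://server.host.com:8080/login');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        if ($response === false) {
            echo curl_error($ch);
        }
        curl_close($ch);

        // ...but name resolution only ever sees a bare hostname.
        var_dump(gethostbyname('server.host.com'));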

  • Webmaster Tools is throwing out 404 errors on a link not on the page

    - by plantify
    Webmaster Tools is showing thousands of 404 errors, where pages on the site supposedly refer to another, incorrect URL. For example: URL not found: www.plantify.co.uk/shop/=, linked from http://www.plantify.co.uk/shop/gift-voucher and http://www.plantify.co.uk/shop/special-plant-offers. I have, of course, checked the source and cannot find any reference to this link on any page. The only consistent pattern is that the error is only reported on pages with two path segments: www.plantify.co.uk/shop does not report any error, whilst pages of the form www.plantify.co.uk/shop/xxx (where xxx can be several different pages, such as gift-voucher) all report it. I cannot duplicate this error. I have run a link checker (we use Screaming Frog) and it does not report it. I have fetched these pages as a bot, and that does not report it either. I am at a total loss: I cannot even reproduce the issue, but it is most definitely real, as Webmaster Tools reports new errors every day. Is this perhaps Googlebot doing its own thing?

  • How to reset Chrome's search engines to default?

    - by AndreKR
    I accidentally deleted Google as the default search engine from Chrome. This also caused the "Search Google for this image" item in the context menu of images to disappear. I tried to add it back by adding a search engine with these settings, which I copied from another machine:

        Name:    Google
        Keyword: google.com
        URL:     {google:baseURL}search?q=%s&{google:RLZ}{google:originalQueryForSuggestion}{google:assistedQueryStats}{google:searchFieldtrialParameter}{google:searchClient}{google:sourceId}{google:instantExtendedEnabledParameter}{google:omniboxStartMarginParameter}ie={inputEncoding}

    Unfortunately this does not bring back the "Search Google for this image" menu item, so there must be more to this entry than just Name, Keyword and URL. I don't mind deleting all search engines and resetting the list to its default state, but how can I do this?

  • "Press Tab to search <site>" in Chrome not working

    - by YatharthROCK
    The problem: after partially entering a URL like meta.st, Chrome fills in the rest of the URL, selected in blue, and says "Press Tab to search Meta Stack Overflow". But on pressing Tab, the cursor just moves to the next item, as in a normal text field.

    Other info: I'm on Windows 7 Home Basic. My Chrome version number is 22.0.1229.52 beta-m (beta channel). This was working before.

    What I've tried: I've reported it in Chrome (by going to Options > Tools > Report an issue...). I've also tried deleting all my custom search engines (they might have been interfering) and creating a new profile. I also googled, but to no avail. How do I fix this? Thanks.

  • How to trigger a check for updates in Firefox programatically or from a command line?

    - by Triynko
    Is there a command-line switch for firefox.exe, or an "about:" URL, that will either force an update check or at least display the Help/About dialog, which checks for updates and tells you whether you're running the latest version? One site claimed that the "about:" URL was the same as menu Help > About, but it's not. I built a program to automate the updating of various programs on my machine, and most programs have command-line tools for checking for updates: Windows Update has wuauclt.exe, Java has jucheck.exe. For some applications I can even automate the interface, but that's difficult in Firefox, because the main window title is unpredictable (it depends on which web page is active), and all Firefox windows seem to use the exact same window class name.

  • SEO, IIS 7 and web.config in subfolder issue

    - by tesicg
    We have an ASP.NET application that has a sub-folder with .aspx pages and a separate web.config file in it. The .aspx pages in that sub-folder behave as a separate site. In the web.config file at the application level, I set a rule that removes trailing slashes:

        <rewrite>
          <rules>
            <rule name="RemoveTrailingSlashRule1" stopProcessing="true">
              <match url="(.*)/$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              </conditions>
              <action type="Redirect" redirectType="Permanent" url="{R:1}" />
            </rule>
          </rules>
        </rewrite>

    I expected this rule to propagate downward to the sub-folder as well, so that when we access the site in the sub-folder by typing http://concert.local/elki/ we would get it without the trailing slash, as http://concert.local/elki. But the trailing slash remains. The web.config file in the sub-folder looks like this:

        <configuration>
          <system.webServer>
            <defaultDocument>
              <files>
                <add value="Sections.aspx" />
              </files>
            </defaultDocument>
          </system.webServer>
        </configuration>

  • Reverse proxy with SSL and IP passthrough?

    - by Paul
    It turns out that the IP of a much-needed new website is blocked from inside our organization's network, for reasons that will take weeks to fix. In the meantime, could we set up a reverse proxy on an Internet-based server which will forward SSL traffic, and perhaps client IPs, to the external site? Load will be light. There is no need to terminate SSL on the proxy. We may be able to poison our DNS so the original URL can keep working. How do I find out whether I need URL rewriting? Squid, Apache, nginx, or something else? Setup would be fastest on Windows 2000, but other OSes are OK if that would help. Simple and quick are good, since it's a temporary solution. Thanks for your thoughts!
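    Since SSL is not terminated on the proxy, plain TCP forwarding is enough and no URL rewriting is needed. The sketch below does it with iptables on a Linux box rather than Windows 2000 (203.0.113.10 stands in for the blocked site's real IP; note that the MASQUERADE step hides client IPs from the target):

        # Forward incoming HTTPS connections to the real site, then masquerade the replies.
        iptables -t nat -A PREROUTING  -p tcp --dport 443 -j DNAT --to-destination 203.0.113.10:443
        iptables -t nat -A POSTROUTING -p tcp -d 203.0.113.10 --dport 443 -j MASQUERADE
        echo 1 > /proc/sys/net/ipv4/ip_forward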

  • Optimising news fetching

    - by aceBox
    I have a web scraper for scraping news from different sources on WP7. My current approach is:

    1. Load newspaper information from an XML file.
    2. Go to the specified sections and fetch the URLs of the news items.
    3. Go to each URL and fetch the headline, image and publisher.
    4. Display everything using the MVVM architecture of Windows Phone.

    The whole thing takes place asynchronously: as soon as a URL from a section of a newspaper is fetched, it is added to the queue, and the second stage (fetching the headline, image, etc.) starts; as soon as even one article is fetched, it is displayed, and more articles are added to the list as they arrive. For the fetching I am using a SmartThreadPool (http://www.codeproject.com/Articles/7933/Smart-Thread-Pool) for Windows Phone. My problem is that even fetching around 80 items (in total) from 9 publications takes more than a minute. How can I speed up the procedure? Note: I have a two-stage approach because the images are often not available with the headlines and are only found in the article.

  • Is it okay to use random URLs instead of passwords?

    - by stew
    Is it considered "safe" to use a URL constructed from random characters, like this? http://example.com/EU3uc654/Photos I'd like to put some files/picture galleries on a web server that are only to be accessed by a small group of users. My main concern is that the files should not get picked up by search engines or by curious power users poking around my site. I've set up an .htaccess file, only to notice that clicking on http://user:pass@url/ links doesn't work well with some browsers/email clients, prompting dialogs and warning messages that confuse my not-too-computer-savvy users.

  • Reverse proxy (mod_rewrite) and Rails

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of backend Rails apps depending on URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com, such that a URL like http://www.my_host.com/app_one/users will display the contents of http://www.remote_host_running_app_one.com/users. I have a large and ever-expanding number of backends, so they cannot be explicitly listed anywhere other than a database. This is no problem for mod_rewrite using a prg:/ rewrite map reverse proxy. The question is: the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served by mod_rewrite, because links on the proxied page appear absolute to the domain. That is, http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action/ when they need to look like /app_one/controller/action. Is there a way to fix this server-side, so that the links will be routed correctly?

  • Scripted SOA Diagnostic Dumps for PS6 (11.1.1.7)

    - by ShawnBailey
    When you upgrade to SOA Suite PS6 (11.1.1.7) you acquire a new set of diagnostic dumps in addition to what was available in PS5. With more than a dozen to choose from, and not wanting to run them one at a time, this blog post provides a sample script to collect them all quickly and, hopefully, easily. There are several ways that this collection could be scripted; this is just one example.

    What is included:
    - wlst.properties: Ant properties
    - build.xml
    - soa_diagnostic_script.py: Python script

    What is collected:
    - 5 contextual thread dumps at 5-second intervals
    - Diagnostic log entries from the server
    - WLS image, which includes the domain configuration and WLS runtime data
    - Most of the SOA diagnostic dumps, including those for the BPEL runtime, adapters, and composite information from MDS

    Instructions:
    1. Download the package and extract it to a location of your choosing
    2. Update the properties file 'wlst.properties' to match your environment
    3. Run 'ant' (must be on the path)
    4. Collect the zip package containing the files (by default it will be in the script.output location)

    Properties reference:
    - oracle_common.common.bin: location of oracle_common/common/bin
    - script.home: location where you extracted the script and supporting files
    - script.output: location where you want the collections written
    - username: user name for the server connection
    - pwd: password to connect to the server
    - url: T3 URL for the server connection, '<host>:<port>'
    - dump_interval: interval in seconds between thread dumps
    - log_interval: duration in minutes that you want to go back for diagnostic log information

    Script Package
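    For orientation, a sample wlst.properties using the keys from the reference above; every value here is a purely hypothetical placeholder to adapt to your environment:

        oracle_common.common.bin=/u01/app/oracle/middleware/oracle_common/common/bin
        script.home=/home/oracle/soa-diagnostics
        script.output=/tmp/soa-diagnostics
        username=weblogic
        pwd=welcome1
        url=soahost1.example.com:8001
        dump_interval=5
        log_interval=60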

  • How can I redirect all files in a directory that doesn't conform to a certain filename structure?

    - by user18842
    I have a website where a previous developer updated several web pages. The issue is that the developer gave each new page a new filename and deleted the old ones. I've worked with .htaccess redirects for a few months now and have some understanding of the usage; however, I am stumped by this task. The old pages were named like so:

        www.domain.tld/subdir/file.html

    The new pages are named:

        www.domain.tld/subdir/file-new-name.html

    The first word of each new filename is the exact name of the old file, and all new filenames end with the same two words:

        www.domain.tld/subdir/file1-new-name.html
        www.domain.tld/subdir/file2-new-name.html
        www.domain.tld/subdir/file3-new-name.html
        etc.

    We also need to be able to access the URL www.domain.tld/subdir/ itself. The new files have been indexed by Google (the old URLs cause 404s and need to be redirected to the new ones so that Google stays friendly), and the client wants to keep the new filenames as they are more descriptive. I've attempted the redirect in many different ways without success, but I'll show the one that stumps me the most:

        RewriteBase /
        RewriteCond %{THE_REQUEST} !^subdir/.*\-new\-name\.html
        RewriteCond %{THE_REQUEST} !^subdir/$
        RewriteRule ^subdir/(.*)\.html$ http://www.domain.tld/subdir/$1\-new\-name\.html [R=301,NC]

    When visiting www.domain.tld/subdir/file1.html in the browser, this causes a 403 Forbidden error with a URL like:

        www.domain.tld/subdir/file1-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name.html

    I'm certain it's probably something simple that I'm overlooking; can someone please help me get a proper redirect? Thanks so much in advance!

    EDIT: I've also got all the old filenames saved in a separate document in case I need them, set up like the following example: (file(1|2|3|4|5)|page(1|2|3|4|5)|a(l(l|lowed|ter)|ccept)
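    The repeated -new-name suffix is the signature of a rule that keeps matching its own output: %{THE_REQUEST} contains the whole request line ("GET /subdir/... HTTP/1.1"), so a pattern anchored at ^subdir/ never matches and the exclusion conditions never fire. A sketch that tests the URL path instead and refuses to touch names already carrying the suffix (assuming the rules live in the document root's .htaccess):

        RewriteEngine On
        # Skip URLs that already end in -new-name.html, so the rule cannot loop on its own output.
        RewriteCond %{REQUEST_URI} !-new-name\.html$
        RewriteRule ^subdir/(.+)\.html$ /subdir/$1-new-name.html [R=301,L,NC]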

  • Lighttpd domain redirection

    - by HTF
    I would like to redirect domains on both HTTP and HTTPS:

        http://old.com  -> https://new.com
        https://old.com -> https://new.com

    I have to specify the SSL key/certificate for the old domain, but I'm not sure where to place these directives:

        $SERVER["socket"] == ":443" {
            ssl.engine = "enable"
            ssl.pemfile = "/etc/pki/tls/private/new.com.pem"
            ssl.ca-file = "/etc/pki/tls/certs/new.com.crt"
        }

        $SERVER["socket"] == ":80" {
            $HTTP["host"] =~ "old.com|new.com" {
                url.redirect = ( "^/(.*)" => "https://new.com:443/$1" )
            }
        }

    I was trying to add the code below, but Lighttpd reports configuration errors:

        $SERVER["socket"] == ":443" {
            $HTTP["host"] =~ "old.com" {
                url.redirect = ( "^/(.*)" => "https://new.com:443/$1" )
            }
            ssl.engine = "enable"
            ssl.pemfile = "/etc/pki/tls/private/old.com.pem"
            ssl.ca-file = "/etc/pki/tls/certs/old.com.crt"
        }
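    One possible arrangement (a sketch; it assumes a lighttpd version with SNI support, roughly 1.4.24 or later, and the certificate paths from the question) keeps the new domain's certificate as the socket default and overrides ssl.pemfile inside a host match for the old domain, with the redirect in the same block:

        $SERVER["socket"] == ":443" {
            ssl.engine  = "enable"
            # default certificate: the new domain
            ssl.pemfile = "/etc/pki/tls/private/new.com.pem"
            ssl.ca-file = "/etc/pki/tls/certs/new.com.crt"

            # SNI: present the old domain's certificate, then redirect
            $HTTP["host"] =~ "^old\.com$" {
                ssl.pemfile = "/etc/pki/tls/private/old.com.pem"
                ssl.ca-file = "/etc/pki/tls/certs/old.com.crt"
                url.redirect = ( "^/(.*)" => "https://new.com/$1" )
            }
        }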

  • Possible to redirect from HTTPS to HTTP behind load-balancer?

    - by Derek Hunziker
    I have a basic ASP.NET application that sits behind an F5 load balancer. Incoming SSL requests (over HTTPS) terminate at the load balancer, and all internal communication between the load balancer and my application servers is unsecured (over HTTP). When an unsecured request comes in, my app is able to use Response.Redirect("https://...") to redirect to a secure URL with no problems. However, the other direction appears to be impossible: I cannot redirect from HTTPS to HTTP using Response.Redirect() from my application. The URL remains HTTPS for the client and does not change. Could the F5 be preventing the redirect from ever reaching the client? Is there any special configuration necessary to let this happen?
