Search Results

Search found 20409 results on 817 pages for 'url routing'.

Page 459/817 | < Previous Page | 455 456 457 458 459 460 461 462 463 464 465 466  | Next Page >

  • Release 51 of Sun Rack II capacity calculator available

    - by uwes
    A new release of the Sun Rack II capacity calculator is available on the eSTEP portal. We have just uploaded release 51 of the calculator. The following changes have been integrated: added an LOD date of 30 NOV 2014 for the ST25xx M2 (NEP LOD; the LOD for other customers is 31 MAY 2014); moved the 7420 to EOL HW because its LOD has been met; bug correction: X4-2 and X4-2L weren't working; bug correction: ES1-24 RUs are now shown correctly (two ES1-24 units take only 1 RU). The tool calculates all the necessary data (power requirements, BTU, number of rack units, needed power outlets, etc.) as you insert the many different kinds of HW equipment into a Sun Rack II cabinet (versions 1000 and 1200). It takes into consideration most of the available servers, storage devices, tapes, and Netra products. A couple of third-party products are also taken into account. The spreadsheet can be downloaded from the eSTEP portal. URL: http://launch.oracle.com/ PIN: eSTEP_2011

    Read the article

  • Clean SOAP Calls from iOS - SudzC

    - by Richard Jones
    This is worth another mention. If you need to call SOAP web services from iOS or JavaScript (and let's face it, who doesn't?), http://SudzC.com really delivers. You give it the URL of your WSDL file (or upload a file) and it just spits out a ready-to-go Xcode project. I would point out that to get it to work 100% I changed line 204 in Soap.m (the commented-out line is the old version; mine is below it): //if([child respondsToSelector:@selector(name)] && [[child name] isEqual: name]) { if([child respondsToSelector:@selector(name)] && [[child name] hasSuffix: name]) { I consumed a set of Microsoft Dynamics NAV web-service pages with no problem (and those tend to be fairly complex WSDL definitions).

    Read the article

  • Grub can't find device on boot resulting in Grub Rescue

    - by user1160163
    So I have two hard drives: a 320GB HDD and a 20GB SSD. I previously had Windows 7 on the HDD and Ubuntu on the SSD, but I wanted to get rid of Windows, reinstall a clean Ubuntu on the SSD, and use the HDD for storage. So I deleted everything from the HDD, set up the SSD with an 18GB ext4 partition and 2GB of swap, and installed Ubuntu on the 18GB ext4 partition. Now when I boot up I get "Error: No such device" and am dropped into grub rescue. I have a live USB and I ran Boot Repair following these instructions - grub rescue after install of Ubuntu 12.04 (dual boot) - it says it was successful, but I still have the same problem. This is the URL produced by Boot Repair: http://paste.ubuntu.com/1257988/ Thanks for any help.
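
    For reference, a minimal repair sketch from the live USB looks like the following; the device names /dev/sdb and /dev/sdb1 are assumptions for the SSD and its root partition, so check what sudo fdisk -l actually reports before running anything, and make sure the BIOS boots from the SSD afterwards.

      # Mount the Ubuntu root partition and reinstall GRUB to the SSD's MBR (not to a partition)
      sudo mount /dev/sdb1 /mnt
      for d in /dev /dev/pts /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
      sudo chroot /mnt grub-install /dev/sdb
      sudo chroot /mnt update-grub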

    Read the article

  • 301 redirects - can we not delete old pages?

    - by KBS
    First time here :) We have a page on the site which ranks well for an SEO term (top 5) but contains old information. We have added a new page, but Google doesn't rank it that well. The information on these pages is time sensitive. Old: example.com/2013-related-information.html New: example.com/2014-related-information.html The obvious solution is to delete the old page and 301 redirect it to the new page. Now, can we still keep the old page by giving it a new URL? Step 1: example.com/2013-related-information.html is redirected to example.com/2014-related-information.html Step 2: the old page's content is recreated at a new address such as example.com/new-2013-related-information.html What we are trying to do is send the user to the fresh page while still not losing the archived copy, in case someone wants to go and dig up the old page. Would appreciate help!! Cheers
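
    A minimal sketch of that two-step setup in .htaccess terms (the paths are the ones from the question; mod_alias's Redirect directive is assumed to be enabled):

      # Step 1: the old address permanently points at the fresh page
      Redirect 301 /2013-related-information.html http://example.com/2014-related-information.html

      # Step 2: republish the old content at a new, non-redirected address
      # such as /new-2013-related-information.html and link to it as an archive page.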

    Read the article

  • Will removing unused query string parameters negatively affect SEO?

    - by trm
    Will changing links to remove query string parameters that are no longer used have any negative impact on search engine rankings? Say I have a page about.php on my site, and all of my links to this page are of the form http://www.example.com/about.php?foo=bar and I've made some changes to the script such that the parameter foo is no longer used. I would like to remove the unused parameter from the links so the URL will look cleaner, but I am concerned that this could cause problems with SEO. Is it safe to remove ?foo=bar from my links?
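
    If the old ?foo=bar URLs are already indexed, a rewrite along these lines would consolidate them onto the clean URL (a sketch in mod_rewrite syntax; the trailing "?" in the substitution clears the query string, and on Apache 2.4 the QSD flag does the same thing):

      RewriteEngine On
      # 301 any request for about.php that still carries foo=bar to the clean URL
      RewriteCond %{QUERY_STRING} (^|&)foo=bar($|&)
      RewriteRule ^about\.php$ /about.php? [R=301,L]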

    Read the article

  • VirtualBox Port Forward

    - by john.graves(at)oracle.com
    A great new feature in VirtualBox 4.0 is the ability to use NAT networking and forward ports without needing to use ssh -L/-R tricks. This is great for booting multiple VM domains simultaneously. It is possible to have several instances which map back to the host machine, with different ports on localhost:* automatically forwarding to the correct VM. This avoids the hassle of setting up DNS entries or static IP addresses. In this example, I'm mapping host ports 3xxxx to the VM's well-known server ports. Note: it is important to set up the Frontend HTTP host/port to avoid incorrect URL rewriting. You may also need to set up an HTTP channel to deal with local traffic which uses the network address 10.0.2.15. Happy VMing.
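
    The same forwarding can also be scripted with VBoxManage; a sketch (the VM name "WLS-VM" and the 3xxxx host ports here are made up for illustration):

      # Forward host localhost:37001 to the guest's admin server port 7001 on the NAT adapter
      VBoxManage modifyvm "WLS-VM" --natpf1 "admin,tcp,127.0.0.1,37001,,7001"
      # Forward host localhost:30022 to the guest's SSH port
      VBoxManage modifyvm "WLS-VM" --natpf1 "ssh,tcp,127.0.0.1,30022,,22"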

    Read the article

  • Can SSL Wildcards have multiple/nested levels of wildcard?

    - by Don Faulkner
    I know that an SSL wildcard certificate (*.example.org) can be used to support many names under the domain (a.example.org, b.example.org, c.example.org). I also know that the * is only good for matching a single level of name. That is, *.example.org will not work on a.b.example.org. What if I used a certificate with the name *.*.example.org? I'd like to build a certificate with the following name configuration: CN=example.org subjectAltName=DNS:example.org, DNS:*.example.org, DNS:*.*.example.org, DNS:*.*.*.example.org I've tried building a few like this as self-signed certificates, but I've not had good results. For example, Chrome tells me "Server's certificate does not match the URL." Is it possible to have nested wildcards in a certificate, or do the popular browsers just not support this?
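
    For anyone reproducing the test, a self-signed certificate with exactly those names can be generated like this (a sketch; -addext needs OpenSSL 1.1.1 or later, older releases need the SAN in a config file). Note that browsers follow RFC 6125 here and match a wildcard against only the single left-most label, so the nested entries are generally ignored even if the certificate carries them.

      # Self-signed test certificate with nested wildcard SANs
      openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout test.key -out test.crt \
        -subj "/CN=example.org" \
        -addext "subjectAltName=DNS:example.org,DNS:*.example.org,DNS:*.*.example.org,DNS:*.*.*.example.org"
      # Inspect what actually ended up in the certificate
      openssl x509 -in test.crt -noout -text | grep -A1 "Subject Alternative Name"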

    Read the article

  • help redirecting IP address

    - by Alice
    Google has indexed the IP address of my site rather than the domain, so now I'm trying to set up a 301 redirect that will send the IP address, and all subsequent pages, to the domain. I currently have something like this in my .htaccess file (however, I don't think it's working correctly): RewriteCond %{HTTP_HOST} ^12.34.567.890 RewriteRule (.*) (domain address)/$1 [R=301,L] I've used various redirect checker tools and keep getting the message: "... not redirecting to any URL or the redirect is NOT SEARCH ENGINE FRIENDLY" Am I doing something wrong, or is there something else I should be trying? Thanks! Alice
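
    For comparison, a commonly used form of that rule escapes the dots and simply redirects everything that is not the canonical hostname (a sketch; 12.34.567.890 and www.example.com stand in for the real values, as in the question):

      RewriteEngine On
      # Send any request that arrives by IP (or any other non-canonical host) to the domain
      RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
      RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]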

    Read the article

  • Ubuntu Server Wireless connection issue - replaced router but kept ESSID

    - by Stevo
    I have an Ubuntu Server 12.04 machine which was connected to my wireless network with no problems. I replaced the wireless router but kept the ESSID and password the same. All other devices on the network have reconnected correctly. However, the Ubuntu Server will not route correctly. It connects to the wifi router and gets a DHCP-served IP address, but it will not route anything; I cannot ping the router from the server. The contents of /etc/resolv.conf are updated with the information from the router (the host name has been served), so I know there is nothing wrong with the router, the server, the wireless card, etc. I'm assuming there's some cached setting that associates the old router with the ESSID and is causing the issue. I've got a lot of other devices connected to the router, so I don't want to change the ESSID. How do I fix this? EDIT: outputs (abbreviated, as I've got no cut and paste) netstat -rn: Kernel IP routing table Dest Gateway Genmask Flags MSS Window irtt Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 wlan0 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
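
    A diagnostic sketch worth trying before anything else: the new router presumably answers for 192.168.0.1 with a different MAC address, so a stale (or manually added) ARP entry on the server would produce exactly this "DHCP works, nothing routes" symptom. Clearing it costs nothing:

      arp -n                                 # does 192.168.0.1 show an old or incomplete MAC?
      sudo ip neigh flush dev wlan0          # clear the neighbour (ARP) cache
      sudo ifdown wlan0 && sudo ifup wlan0   # bounce the interface (12.04 uses ifupdown)
      ping -c 3 192.168.0.1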

    Read the article

  • The first non-Latin domain names are working, with URLs in Arabic characters

    Update of 07.05.2010 by Katleen. The first non-Latin domain names are working, with URLs in Arabic characters. A few hours ago, the first three non-Latin domain names were placed in the DNS root zone. They are therefore now in service, and working perfectly. Here is an example of what you may see in your browser's URL field if you visit one of these sites: (image: http://blog.icann.org/wp-content/uploads/2010/05/idn-example-450px.png) These three new domains are السعودية. ("Al-Saudiah"), امارات. ("Emarat") and ...

    Read the article

  • Why not block ICMP?

    - by Agvorth
    I think I almost have my iptables setup complete on my CentOS 5.3 system. Here is my script:

      # Establish a clean slate
      iptables -P INPUT ACCEPT
      iptables -P FORWARD ACCEPT
      iptables -P OUTPUT ACCEPT
      iptables -F    # Flush all rules
      iptables -X    # Delete all chains
      # Disable routing. Drop packets if they reach the end of the chain.
      iptables -P FORWARD DROP
      # Drop all packets with a bad state
      iptables -A INPUT -m state --state INVALID -j DROP
      # Accept any packets that have something to do with ones we've sent on outbound
      iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
      # Accept any packets coming or going on localhost (this can be very important)
      iptables -A INPUT -i lo -j ACCEPT
      # Accept ICMP
      iptables -A INPUT -p icmp -j ACCEPT
      # Allow ssh
      iptables -A INPUT -p tcp --dport 22 -j ACCEPT
      # Allow httpd
      iptables -A INPUT -p tcp --dport 80 -j ACCEPT
      # Allow SSL
      iptables -A INPUT -p tcp --dport 443 -j ACCEPT
      # Block all other traffic
      iptables -A INPUT -j DROP

    For context, this machine is a Virtual Private Server web app host. In a previous question, Lee B said that I should "lock down ICMP a bit more." Why not just block it altogether? What would happen if I did that (what bad thing would happen)? If I shouldn't block ICMP entirely, how could I go about locking it down more?
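
    A sketch of one common way to tighten this without dropping ICMP entirely: replace the blanket accept with the types the stack actually needs, and rate-limit echo requests. These lines would go where the single "Accept ICMP" rule is now.

      # Rate-limited ping, plus the ICMP types needed for path-MTU discovery and traceroute
      iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/s --limit-burst 4 -j ACCEPT
      iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
      iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
      iptables -A INPUT -p icmp -j DROP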

    Read the article

  • How to generate Visa checkout token? [on hold]

    - by Muhammad Junaid
    I am in the process of creating a Visa Checkout plugin but am stuck generating the token. Here are the token requirements: Format: alphanumeric; maximum 100 characters, in the form x:UNIX_UTC_Timestamp:SHA256_hash, where UNIX_UTC_Timestamp is a UNIX Epoch timestamp and SHA256_hash is an SHA256 hash of the following unseparated items: your shared secret; the timestamp from the transaction (exactly the same as UNIX_UTC_Timestamp); the resource path (API name); and this HTTPS request's query string. Note: the query string includes one or more parameters in name-value pair format, whose names are separated from values by equal signs (=); an empty value may be omitted but the name and equal sign must be present. The initial question mark (?) is not included. Note: all parameters must be present. The parameters must be in lexicographic sort order (UTF-8, uppercase hex characters) with parameters separated from each other by an ampersand (&). Note: the query string must be URL encoded (excepting the following characters, per RFC 3986: hyphen, period, underscore, and tilde). You can find the background by searching Google for "visa checkout developer updating 1 px image".
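
    Putting those requirements together, a sketch of the hash step might look like this; the shared secret, resource path, and query string below are made-up placeholders, and the query string must already be sorted and URL-encoded as described above.

      # Hypothetical inputs -- substitute your own values
      SECRET='YOUR_SHARED_SECRET'
      RESOURCE='payment/info'
      QUERY='apikey=ABC123&currencyCode=USD'

      TS=$(date +%s)    # UNIX UTC timestamp
      HASH=$(printf '%s%s%s%s' "$SECRET" "$TS" "$RESOURCE" "$QUERY" \
              | openssl dgst -sha256 | awk '{print $NF}')    # SHA-256 of the unseparated items
      TOKEN="x:${TS}:${HASH}"
      echo "$TOKEN"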

    Read the article

  • Calling a web service through a reverse proxy

    - by Ken
    I had a web service whose WSDL, when I first read it in test, was http, but it needed to be accessed through a reverse proxy over https. Here are the steps: Change the app.config <httpTransport> to <httpsTransport>. Change the app.config and the URL address in the <endpoint> to the reverse proxy address. Add System.Net.ServicePointManager.ServerCertificateValidationCallback = delegate { return true; }; to disable certificate validation. This will treat all certificates as valid (including invalid, expired or self-signed ones).

    Read the article

  • Extracting meta tag attributes using wget [migrated]

    - by Amit
    I have a file with some URLs, one per line. I need to extract the "keywords" present in the meta tags; i.e., if there is a meta tag for "keywords" then I want to get its "content" value. Example: if the web page has a tag like <meta name="keywords" content="wikipedia,encyclopedia">, then for that URL I want "wikipedia,encyclopedia" to be extracted. One approach is to download the web page using wget and then parse it using some standard HTML parser. I was wondering, is there any better way to do this without downloading the entire web page?
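
    For the download-and-grep approach, a per-URL loop like the following is a workable sketch when the tag sits on a single line and uses double quotes (urls.txt is a placeholder file name; a real HTML parser remains the more robust option):

      while read -r url; do
        keywords=$(wget -qO- "$url" \
          | grep -io '<meta[^>]*name="keywords"[^>]*>' \
          | sed -n 's/.*content="\([^"]*\)".*/\1/p' | head -n1)
        printf '%s\t%s\n' "$url" "$keywords"
      done < urls.txt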

    Read the article

  • Postfix auto create Maildir

    - by Eugene
    I've been beating my head against a wall for a while now on this one. Basically, here is the rundown: our MX record points to a frontend SMTP server, which contains aliases for actually routing the mail. No alias, no access to the backend storage server, which is what our clients connect to. I'm upgrading the backend email server. Currently, a user is created for every email user on the server, which creates the mailbox. On the new server, everything authenticates through PAM to an LDAP server (all of which is working properly). My goal is to get Postfix to create the Maildir directory for the user automatically. This works fine when the /home directory has 777 permissions, but for obvious reasons this should be avoided. I would like to do this with 775 permissions on /home and a group owner of whatever user Postfix is running as, but I can't seem to figure out what user to use. With the 777 permissions, the /home/$user/Maildir directory is created on message delivery. Does anybody know how I can do this without 777 permissions? The system I am working on is a 64-bit Debian Lenny 5.07 install. Any advice would be appreciated.
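
    One alternative worth mentioning, since Postfix's local(8) agent delivers as the recipient user and so depends on that user's home directory already existing: take /home out of the picture and let the virtual(8) agent write Maildirs under a tree owned by a single dedicated user. A sketch only; the UID/GID 5000 and the LDAP map file name are placeholders.

      # Dedicated mailbox owner
      groupadd -g 5000 vmail && useradd -g vmail -u 5000 -d /var/mail/vhosts -m vmail

      # Point Postfix's virtual delivery agent at it
      postconf -e 'virtual_mailbox_base = /var/mail/vhosts'
      postconf -e 'virtual_mailbox_maps = ldap:/etc/postfix/ldap-mailbox.cf'
      postconf -e 'virtual_uid_maps = static:5000'
      postconf -e 'virtual_gid_maps = static:5000'
      postfix reload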

    Read the article

  • Stop Google Analytics from appending hostname?

    - by Nick Q.
    I've come across an Analytics profile that is appending the rest of a URL to the end of a page's path. For example, when looking at the page that exists at http://example.com/page, I would expect to see /page, but instead it shows me /page/http://example.com/. The profile has no filters applied to it, and until July it was reporting as expected (/page); in July the site in question switched hosts (and absolutely nothing else, so I'm not sure that's the problem). The analytics code on the site is the standard Google async code with a domain set. All other profiles for the site show /page as expected. Any ideas as to how I can get the profile to function as expected?

    Read the article

  • VMware Workstation, Win7 host, Ubuntu guests with NAT + host-only networks, but they cannot connect to the Internet

    - by Ikon
    I have a Win7 host machine with VMware Workstation. In Workstation I have three Ubuntu guests installed. All three Ubuntu guests have a NAT network - to access the internet without asking the router for a local address - and a host-only network - to connect all Ubuntu guests and the host in a private network for internal communication, without touching the router. When I try to make any of the Ubuntu guests fetch data from the internet - assuming that they would figure out that the NAT-ed interface can access the requested data - they fail and report that there is no route to my query. If I disconnect the second interface (the host-only network) on the Ubuntu guests and restart networking, they start to know the route to the internet. Oddly, during the installation of the guests they asked which of the two given interfaces - NAT and host-only - should be used to get updates during installation, and they managed to get the updates. Not so after the installation finished and the machines rebooted. I have checked in the Virtual Network Editor that the NAT interface uses my real network card to access the net, so there should be no problem there. I do not want to use the router's DHCP service to give the Ubuntu guests an address, and I also don't want the guests to be accessible from the local network directly, but only by the host - that's what the host-only network is for. Any suggestions? Edit: 192.168.189.0 is the NAT network and 192.168.7.0 is the host-only network. $ route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.7.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 192.168.189.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 0.0.0.0 192.168.189.2 0.0.0.0 UG 100 0 0 eth0
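
    A configuration sketch that usually keeps this class of problem away: give the host-only interface a static address with no gateway and no DNS entry, so only the NAT interface can ever supply the default route and resolver. The interface names and the 192.168.7.10 address below are assumptions based on the question.

      # /etc/network/interfaces fragment inside each Ubuntu guest
      auto eth1
      iface eth1 inet static
          address 192.168.7.10
          netmask 255.255.255.0
          # deliberately no "gateway" or "dns-nameservers" line: eth0 (NAT) keeps the default route

      # then: sudo ifdown eth1 && sudo ifup eth1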

    Read the article

  • SEO Benefits of adding a Tumblr feed to site

    - by Paul
    A client of ours has a CMS-driven blog in his hotel site; he would like to use the blog to add depth to his site and gain SEO benefits relating to the blog's content. The current blog is a basic header/text field and doesn't contain any tagging/meta features. Unfortunately we don't have a .NET developer in our team to alter the existing blog and add meta/tagging, and there isn't budget to hire one, so I considered using a Tumblr blog: setting it up externally, giving it a blog.hotelname.com address, and feeding it into the existing page via Tumblr's JS, which basically does a document.write into the page, which we can style. I understand from a previous post (Poor CMS blog vs Tumblr embed) that as a general rule most search engines ignore JS-created content, but will the above approach act as an improvement on the existing system for now, given that the blog will be set up externally with its own URL and also feed into the existing site? Cheers Paul

    Read the article

  • How to tell Google that I have changed my website URLs?

    - by Momen M El Zalabany
    I have done major updates on my website and renamed all my URLs. The problem is: how can I tell Google that I have renamed all the URLs and get it to refresh its index? I have uploaded the sitemap via Google Webmaster Tools many times. My website URL: http://www.pndmasr.com My sitemap: http://www.pndmasr.com/sitemap.xml but still, every time I search Google for "pndmasr" I get results for the old pages. I have waited more than 3 days, yet the problem remains. Any solutions? Is there a problem with my sitemap?
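
    Beyond resubmitting the sitemap, the usual advice is to 301 each old URL to its renamed counterpart, so Google transfers the old entries instead of waiting for them to drop out. A sketch in mod_rewrite terms; the old-section/new-section paths are only an example of such a mapping, not the site's real structure:

      RewriteEngine On
      # Example mapping only: permanently redirect renamed paths to their new location
      RewriteRule ^old-section/(.*)$ /new-section/$1 [R=301,L]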

    Read the article

  • SAML Request / Response decoding.

    - by Shawn Cicoria
    When you're working with Web SSO integration, sometimes it's helpful to be able to decode the tokens that get passed around via the browser from the various participants in the trust - RP, STS, etc. With SAML tokens, sometimes they're simply base64 encoded when they're in the POST body; other times they're part of the query string, in which case they end up being deflated, base64 encoded, then URL encoded. I always end up putting together some simple tool that does this for me, so this is an effort to make it more permanent. It's a simple WinForms application using NetFx 4.0. Download
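
    Until such a tool is to hand, a command-line sketch covers both cases (it assumes xmllint and python3 are available; the redirect-binding one-liner inflates raw DEFLATE by passing -15 as the zlib window size):

      # POST binding: the value is plain base64 -- decode and pretty-print
      printf '%s' "$SAMLResponse" | base64 -d | xmllint --format -

      # Redirect binding: URL-decode, base64-decode, then inflate
      printf '%s' "$SAMLRequest" | python3 -c 'import sys,urllib.parse,base64,zlib; print(zlib.decompress(base64.b64decode(urllib.parse.unquote(sys.stdin.read())), -15).decode())'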

    Read the article

  • Can't reliably ping 6224 router from directly-attached system

    - by David Mackintosh
    OK, here's my situation. This is on the internet. The 6224 is the router in this picture and physically resides in Kanata. Both VLAN 1697 and VLAN 3994 are provided by an internet service provider, delivered over a single 1Gb Ethernet wire. The Kanata hosts are directly attached to the 6224; the other two sites are remote. VLAN 3994 is a single IP address space, so theoretically it shouldn't matter physically where the hosts on that subnet are. Here's the problem. I have a monitoring system which is connected further into the internet, so probes from the monitor come in to this diagram on the 1697 VLAN. When I ping hosts at Albert or Bells Corners from the internet, there is 0 loss; the connection looks perfect. When I ping hosts at Kanata, I lose anywhere from 10 to 40% of the pings. The loss is not predictable, but when I do lose pings, I always lose at least 3 (usually 4, rarely more) in a bunch. I have attached a monitor directly to the 6224 in Kanata on 3994. When the monitor pings the 6224 routing interface, I see exactly the same loss pattern, but NOT at the same time as the loss from the remote system. Ping time is around 1ms. When the monitor pings another system directly attached to the 6224, there is 0 loss. Ping time is about 0.1ms, one-tenth of the time to ping the router. Anyone know what is going on here?

    Read the article

  • Github Feed affecting my WordPress installation? [on hold]

    - by saul
    Any idea how this fork is affecting my site? I went to verify my website's log stats and realized this may be the cause of a strange redirect that keeps happening on my WordPress installation. Here's a line I found in my log: 54.81.91.95 - - [07/May/2014:22:52:08 -0400] "GET /category/selfie/feed/ HTTP/1.1" 200 1826 "-" "feedzirra http://github.com/pauldix/feedzirra/tree/master" And this is the GitHub fork (or whatever these are called): https://github.com/feedjira/feedjira/tree/master Basically, I think every time I update my categories (selfie in this case), I get redirected to install.php, probably by triggering some GET request on that feed. To the best of my knowledge, this feed reader fetches all URLs with this structure, blocking them, kind of like a DDoS attack?? Any ideas how to go about it??
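
    If the immediate goal is just to keep that crawler away while investigating the redirect, here is a sketch of a user-agent block for the site's .htaccess (mod_rewrite assumed; "feedzirra" is the string from the log line above):

      RewriteEngine On
      # Return 403 Forbidden to anything identifying itself as feedzirra
      RewriteCond %{HTTP_USER_AGENT} feedzirra [NC]
      RewriteRule .* - [F,L]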

    Read the article

  • Website .htaccess file for Wordpress sub folder

    - by ubique
    I developed a Flash website for a client and added the following .htaccess file in the root directory, and the non-www to www redirect works perfectly. RewriteEngine On RewriteCond %{HTTP_HOST} ^website.com [NC] RewriteRule ^(.*)$ http://www.website.com/$1 [L,R=301] I was also asked to add a WordPress blog, so I put it in a new directory (as opposed to a subdomain), so the URL is www.website.com/blog. Does Google now see the main site and the blog as two different websites? Do I need to link them together using another .htaccess file in the WordPress root so Google automatically crawls the whole domain? Any help appreciated....

    Read the article

  • "find" command and piping its output through another program

    - by Charbel
    This is not an Ubuntu-specific question; it applies to all Unix/Linux systems. How can I run a command like this: find . -maxdepth 1 -type d -print -exec svn info "{}" | grep URL \; The command above doesn't do what I want; I can't seem to pipe the output of svn info to grep. This works, but the output contains much more than I need: find . -maxdepth 1 -type d -print -exec svn info "{}" \; Any ideas?
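
    Two common sketches for this: run the pipeline inside a shell started by -exec, or pipe the output of the whole find through a single grep.

      # Run the pipe inside a per-directory shell
      find . -maxdepth 1 -type d -exec sh -c 'svn info "$1" | grep URL' _ {} \;

      # Or keep one svn/grep pipeline for all results
      find . -maxdepth 1 -type d -exec svn info "{}" \; | grep URL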

    Read the article

  • HTML5 - check if font has loaded

    - by espais
    At present I load the font for my game with @font-face. For instance: @font-face { font-family: 'Orbitron'; src: url('res/orbitron-medium.ttf'); } and then reference it throughout my JS implementation like this: ctx.font = "12pt Orbitron"; where ctx is my 2D context from the canvas. However, I notice a certain lag while the font is downloaded to the user. Is there a way I can use a default font until it is loaded? Edit - I'll expand the question, because I hadn't taken the first comment into account. What would be the proper way to handle the case where a user has disabled custom fonts?

    Read the article
