Search Results

Search found 19375 results on 775 pages for 'codeigniter url'.


  • How to do a cacheable redirection?

    - by John Doe
    When users enter my website example.com, their "preferred" language is detected and they are redirected (using a 301 Moved Permanently redirection) to example.com/en/ (for English), example.com/it/ (for Italian), etc. It works perfectly, but when I analyzed my website with the Google Page Speed tool it gave me the following advice:

        Many pages, especially mobile pages, redirect users to a different URL, for instance from www.example.com to m.example.com. Making this redirect cacheable by the user's browser can speed up page load times for repeat visitors to a site.

    And later it says:

        We recommend using a 302 redirect with a cache lifetime of one day. The redirect should include a Vary: User-Agent header as well as a Cache-Control: private header.

    So my questions are: how can I do a "cacheable" redirection in PHP? Would the following be enough?

        header("HTTP/1.0 302 Moved Temporarily");
        header("Location: example.com/whatever");
        exit;
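    For reference, a minimal sketch of what the Page Speed advice seems to ask for (the target URL and the one-day lifetime are assumptions; note that Location should carry an absolute URL):

        <?php
        // Hedged sketch: a 302 redirect the browser may cache for one day.
        header("HTTP/1.1 302 Found");
        header("Location: http://example.com/en/");      // absolute URL
        header("Cache-Control: private, max-age=86400"); // cacheable per user, one day
        header("Vary: User-Agent");                      // response depends on the UA
        exit;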

    Read the article

  • How to use MythBuntu to send TV signal to a 2nd frontend

    - by Mark Preston
    I guess that a MythTV or MythBuntu backend acts as a "server" for the frontends. I have MythBuntu installed. It runs fine: I can tune live TV, hear the sound, etc. To get this to work, I had to configure the Wired Network IPv4 settings to Method: Link-Local Only. The Local Backend IP address is 127.0.0.1, and the info (bottom of screen) says that if there is another frontend, this IP address must be changed.

    1. Does this mean changed to the IP address of the 2nd frontend?
    2. What "Method" do I use to make 2 or more frontends work?
    3. I have an Ethernet switch which currently "sees" the TV signal and sends it to the computer's Ethernet port, where MythBuntu makes use of it.
    4. How do I set up Myth to send its output (the TV shows) to both televisions?

    If you know of a how-to or a website, please give the URL or identifying keywords.

    Read the article

  • Apache2 rewrite without htaccess

    - by inorganik
    Reading up on doing URL rewrites in Apache2, I found this: "In general, you should never use .htaccess files unless you don't have access to the main server configuration file." Okay, great. But there is no information anywhere about how to do it in the server configuration file. So before I mess stuff up, can I safely use the same rewrite directives in /etc/apache2/apache2.conf?

        <IfModule mod_rewrite.c>
        Options +FollowSymLinks
        RewriteEngine On
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteRule ^test/(\w+)$ test.php?n=$1 [L]
        </IfModule>

    /etc/apache2/httpd.conf is blank, but I suppose I could do it there too? Another question: should the rewrite rule paths be prefixed with /var/www/, or can I write them relative to the site root? Thanks.
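    For what it's worth, a minimal sketch of the same rules moved into a vhost (the ServerName and paths are assumptions). In server/vhost context the pattern is matched against the full URL path, so it needs the leading slash, and the filesystem checks are usually written against %{DOCUMENT_ROOT}%{REQUEST_URI} because %{REQUEST_FILENAME} may not be resolved to a filesystem path yet at that stage:

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www

            RewriteEngine On
            # Full URL path here, leading slash included, and an
            # absolute substitution to match.
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
            RewriteRule ^/test/(\w+)$ /test.php?n=$1 [L]
        </VirtualHost>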

    Read the article

  • Buck Woody in Adelaide via LiveMeeting

    - by Rob Farley
    The URL for attendees is https://www.livemeeting.com/cc/usergroups/join?id=ADL1005&role=attend. This meeting is with Buck Woody. If you don’t know who he is, then you ought to find out! He’s a Program Manager at Microsoft on the SQL Server team, and anything else I try to say about him will not do him justice. So it’s great to have him present to the Adelaide SQL Server User Group this week. The talk is on the topic of Data-Tier Applications (new in SQL 2008 R2), and I’m sure it will be a great...(read more)

    Read the article

  • Log incoming requests

    - by Maxim Eliseev
    We have Tomcat running on an Ubuntu server. It runs a web service, open to the internet. Sometimes it gets a sudden spike of traffic and goes down. There is nothing unusual in the Tomcat access logs; I guess because some of the requests are so 'heavy' that they never finish and hence are never recorded to the Tomcat access logs. Is there a way to configure Ubuntu to log incoming requests in the following format?

        Date, Time, URL (with query string params), IP address (of client)

    There should be one line per request. Each request should be logged before it is executed. Only incoming requests to ports 80 and 443 should be logged.
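    Since Tomcat's access log is written only after a request completes, one way to get a log line before execution is a servlet filter in the webapp itself. A hedged sketch (class name and log format are assumptions, using the pre-3.0 javax.servlet API); it logs whatever reaches the webapp, so limiting it to ports 80 and 443 is a matter of which connectors the app is exposed on:

        import java.io.IOException;
        import java.util.Date;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletRequest;

        // Logs date, time, URL+query string and client IP *before*
        // handing the request down the chain for execution.
        public class RequestLogFilter implements Filter {
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest r = (HttpServletRequest) req;
                String qs = r.getQueryString();
                System.out.printf("%tF, %<tT, %s%s, %s%n", new Date(),
                        r.getRequestURI(), qs == null ? "" : "?" + qs, r.getRemoteAddr());
                chain.doFilter(req, res); // the request executes after the log line
            }
            public void init(FilterConfig config) {}
            public void destroy() {}
        }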

    Read the article

  • Permanent redirect domain to www subdomain without web.config

    - by Lord Simpson
    I've just set up a site via 1and1 and have run into an issue. I want to accomplish the simple task of redirecting the root domain to the www subdomain, but I can't seem to find a way to get it to work. I'm on a Microsoft (ASP.NET) package, so I can't use .htaccess; the IIS server they provide also doesn't have the URL Rewrite module installed (so I can't use <rewrite> in web.config). They have built-in HTTP forwarding options, but if I set the root domain to redirect to the www subdomain it just redirects infinitely. Hopefully there is some obvious option/method I've missed during the past two days of searching!
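    If application code is an option, one workaround that bypasses both .htaccess and the rewrite module is to issue the 301 from Global.asax. A hedged sketch (the host name is an assumption, and Response.RedirectPermanent needs .NET 4.0+):

        // Global.asax.cs (hypothetical sketch)
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            var url = Request.Url;
            // Only redirect the bare domain, so www requests don't loop.
            if (url.Host.Equals("example.com", StringComparison.OrdinalIgnoreCase))
            {
                Response.RedirectPermanent("http://www.example.com" + url.PathAndQuery);
            }
        }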

    Read the article

  • How to handle possible duplicate content across multiple sites?

    - by ElHaix
    Let's say I have two sites that cover the same vertical/topic, one in the USA and one in Canada. Both sites have local content, which is obviously unique by location, but they will share common news or blog pages. How do I avoid getting hit with duplicate content on both sites for those news/blog pages? If the content is exactly the same, I'm guessing I would have to pick which site's content I want to noindex,nofollow. Is that correct, and if so, is that all I have to add on the URL links to those pages and in the pages' meta tags?
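    For reference, the meta tag the question alludes to looks like the first line below; a cross-domain rel=canonical on the duplicate copies, pointing at whichever site is chosen as the original, is a commonly suggested alternative that keeps the pages crawlable. The URLs are placeholders:

        <!-- on the duplicate site's copy of a shared page -->
        <meta name="robots" content="noindex,nofollow">

        <!-- or, instead of noindex, declare the preferred copy -->
        <link rel="canonical" href="http://example.com/blog/shared-post">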

    Read the article

  • Real-world SignalR example, ditching ghetto long polling

    - by Jeff
    One of the highlights of BUILD last week was the announcement that SignalR, a framework for real-time client to server (or cloud, if you will) communication, would be a real supported thing now with the weight of Microsoft behind it. Love the open source flava! If you aren’t familiar with SignalR, watch this BUILD session with PM Damian Edwards and dev David Fowler. Go ahead, I’ll wait. You’ll be in a happy place within the first ten minutes. If you skip to the end, you’ll see that they plan to ship this as a real first version by the end of the year. Insert slow clap here.

    Writing a few lines of code to move around a box from one browser to the next is a way cool demo, but how about something real-world? When learning new things, I find it difficult to be abstract, and I like real stuff. So I thought about what was in my tool box and then decided to port my crappy long-polling “there are new posts” feature of POP Forums to use SignalR.

    A few versions back, I added a feature where a button would light up while you were pecking out a reply if someone else made a post in the interim. It kind of saves you from that awkward moment where someone else posts some snark before you. While I was proud of the feature, I hated the implementation. When you clicked the reply button, it started polling an MVC URL asking if the last post you had matched the last one on the server, and it did it every second and a half until you either replied or the server told you there was a new post, at which point it would display that button. The code was not glam:

        // in the reply setup
        PopForums.replyInterval = setInterval("PopForums.pollForNewPosts(" + topicID + ")", 1500);

        // called from the reply setup and the handler that fetches more posts
        PopForums.pollForNewPosts = function (topicID) {
            $.ajax({
                url: PopForums.areaPath + "/Forum/IsLastPostInTopic/" + topicID,
                type: "GET",
                dataType: "text",
                data: "lastPostID=" + PopForums.currentTopicState.lastVisiblePost,
                success: function (result) {
                    var lastPostLoaded = result.toLowerCase() == "true";
                    if (lastPostLoaded) {
                        $("#MorePostsBeforeReplyButton").css("visibility", "hidden");
                    } else {
                        $("#MorePostsBeforeReplyButton").css("visibility", "visible");
                        clearInterval(PopForums.replyInterval);
                    }
                },
                error: function () { }
            });
        };

    What’s going on here is the creation of an interval timer to keep calling the server and bugging it about new posts, and setting the visibility of a button appropriately. It looks like this if you’re monitoring requests in FireBug: Gross.

    The SignalR approach was to call a message broker when a reply was made, and have that broker call back to the listening clients, via a SignalR hub, to let them know about the new post. It seemed weird at first, but the server-side hub’s only method is to add the caller to a group, so new post notifications only go to callers viewing the topic where a new post was made. Beyond that, it’s important to remember that the hub is also the means to calling methods at the client end.

    Starting at the server side, here’s the hub:

        using Microsoft.AspNet.SignalR.Hubs;

        namespace PopForums.Messaging
        {
            public class Topics : Hub
            {
                public void ListenTo(int topicID)
                {
                    Groups.Add(Context.ConnectionId, topicID.ToString());
                }
            }
        }

    Have I mentioned how awesomely not complicated this is? The hub acts as the channel between the server and the client, and you’ll see how JavaScript calls the above method in a moment.

    Next, the broker class and its associated interface:

        using Microsoft.AspNet.SignalR;
        using Topic = PopForums.Models.Topic;

        namespace PopForums.Messaging
        {
            public interface IBroker
            {
                void NotifyNewPosts(Topic topic, int lastPostID);
            }

            public class Broker : IBroker
            {
                public void NotifyNewPosts(Topic topic, int lastPostID)
                {
                    var context = GlobalHost.ConnectionManager.GetHubContext<Topics>();
                    context.Clients.Group(topic.TopicID.ToString()).notifyNewPosts(lastPostID);
                }
            }
        }

    The NotifyNewPosts method uses the static GlobalHost.ConnectionManager.GetHubContext<Topics>() method to get a reference to the hub, and then makes a call to clients in the group matched by the topic ID. It’s calling the notifyNewPosts method on the client. The TopicService class, which handles the reply data from the MVC controller, has an instance of the broker new’d up by dependency injection, so it took literally one line of code in the reply action method to get things moving:

        _broker.NotifyNewPosts(topic, post.PostID);

    The JavaScript side of things wasn’t much harder. When you click the reply button (or quote button), the reply window opens up and fires up a connection to the hub:

        var hub = $.connection.topics;
        hub.client.notifyNewPosts = function (lastPostID) {
            PopForums.setReplyMorePosts(lastPostID);
        };
        $.connection.hub.start().done(function () {
            hub.server.listenTo(topicID);
        });

    The important part to look at here is the creation of the notifyNewPosts function. That’s the method that is called from the server in the Broker class above. Conversely, once the connection is done, the script calls the listenTo method on the server, letting it know that this particular connection is listening for new posts on this specific topic ID. This whole experiment enables a lot of ideas that would make the forum more Facebook-like, letting you know when stuff is going on around you.

    Read the article

  • How to remove HTML code from search result page content

    - by Jack Torris
    I have a music website. There are 46 album pages, and each page has a different player and files. I entered one of the album URLs in a search engine and found that Google is displaying the player code in the search result snippet. For example, enter this URL in Google and check the results: each result displays a .mp3 file in the content section. I see this:

        This page contains a demo of and documentation for the new jPlayer Playlist add-on, ... mp3:"http://www.jplayer.org/audio/mp3/Miaow-01-Tempered-song.mp3", ...

    I don't want Google to show the player code and mp3 files in search results. How can I hide the audio files and player code from the search engine? What would be the best solution?
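    One commonly suggested direction (a hedged sketch; the file name is a placeholder) is to move the playlist definition out of the page markup into an external script, since snippets are built from text that appears in the HTML itself:

        <!-- instead of an inline <script> containing mp3:"..." entries,
             leave only a reference in the HTML -->
        <script src="/js/album-playlist.js"></script>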

    Read the article

  • What do I do if a user uploads child pornography?

    - by Tom Marthenal
    If my website allows uploading images (which are not moderated), what action do I take if a user uploads child pornography? I already make it easy to report images, and have never had this problem before, but am wondering what the appropriate response is. My initial thought is to:

    1. Immediately delete (not just make inaccessible) the image.
    2. File a report with the National Center for Missing and Exploited Children with all information I have on the user (IP, URL, user-agent, etc.), identifying myself as the website operator and providing contact information.
    3. Check any other images uploaded by that IP/user and prevent them from uploading in the future (making that stick is impossible, but I can at least block their account).

    This seems like a good way to be responsible in reporting, but does this satisfy all of my legal and moral responsibilities? Would it be better not to delete the image and to just make it inaccessible, so that it can be sent to the National Center for Missing & Exploited Children, the police, FBI, etc.?

    Read the article

  • How do bing-bot (is that the right spider name?) and googlebot interpret 301 redirects?

    - by jbcurtin
    I've been looking for documentation on how the Microsoft and Google bots interpret 301 redirects. It seems that googlebot stores documents in a URL-based index system, but I haven't been able to figure out how Bing works. Should I assume that they are still working towards copying everyone else, and that they use an algorithm close to Google's? Is it best to just forward a page to a new location via JavaScript? I think this might be a blackhat trick, but how would I tell the bots that it's not? Is a 301 redirect my best option, so I just have to bite the bullet because said pages no longer exist? What other options do I have that I might not be aware of?
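    For reference, the 301 that the question weighs against the JavaScript trick is a one-liner if the server is Apache (paths here are placeholders):

        # single page
        Redirect 301 /old-page.html http://www.example.com/new-page.html

        # or pattern-based, via mod_rewrite
        RewriteEngine On
        RewriteRule ^old-section/(.*)$ /new-section/$1 [R=301,L]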

    Read the article

  • Which token from a long User-Agent should I use in robots.txt?

    - by Gaia
    The definition of User-Agent states that several tokens can be included, as deemed necessary by the client. I want to block certain bots via robots.txt, and I am confused as to which part of the User-Agent string to use, especially for more obscure bots. For example:

        Mozilla/5.0 (compatible; uMBot-LN/1.0; mailto: [email protected])
        JS-Kit URL Resolver, http://js-kit.com/
        Mozilla/5.0 (compatible; SEOkicks-Robot +http://www.seokicks.de/robot.html

    Do I use the second token? Can tokens contain spaces, or did the SEOkicks folks forget a semicolon after SEOkicks-Robot? I don't actually intend to make my question specific to a couple of bots - I want to know the guideline: which part of the UA do I place in robots.txt for these exotic bots with UAs as long as a haiku?

        User-agent: uMBot-LN/1.0
        Disallow: /

    PS: Thank you, but I do not need to hear that undesirable bots are better blocked with mod_security. I already have commercial mod_sec rules in place.
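    As a working assumption: well-behaved crawlers usually match the User-agent line case-insensitively as a substring of their product token, without the version number, so the bare name is the usual choice (a sketch; whether an obscure bot actually honors it is up to the bot):

        User-agent: uMBot-LN
        Disallow: /

        User-agent: SEOkicks-Robot
        Disallow: /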

    Read the article

  • Do subdomains need to be defined through the domain registrar?

    - by Johnny
    I have bought a new domain name from GoDaddy. Let's say it is abcd.com. On GoDaddy's DNS management page, I changed the A (Host) record to @ = 74.125.232.215, which is www.google.co.uk's IP address. Now if I type www.abcd.com, it goes directly to www.google.co.uk, but if I type http://test.abcd.com, it cannot be loaded. Do I need to define every subdomain through GoDaddy? Is this how it works?

    P.S. Amazon EC2 automatically generates a subdomain for users to reach their virtual PCs. That cannot be registrar-dependent.

    P.S.2. Same question for using "www2" at the start of a URL.
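    What this usually comes down to is zone records rather than the registrar itself. A hedged sketch; the wildcard line is what lets arbitrary names (like EC2's generated subdomains) resolve without adding a record for each one:

        @      A    74.125.232.215    ; abcd.com itself
        www    A    74.125.232.215    ; www.abcd.com
        test   A    74.125.232.215    ; test.abcd.com needs its own record...
        *      A    74.125.232.215    ; ...or a wildcard covers every name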

    Read the article

  • Make my website's dynamically loaded data available to the Facebook Open Graph Object Scraper

    - by fvaliquette
    Here is the design of my web site:

    1. The user enters myWebsite.com/a/1.
    2. .htaccess rules redirect to myWebsite.com/b.
    3. The JavaScript ExtJS library loads.
    4. The value is extracted from the URL (in this case, "1").
    5. ./xml/1.xml is loaded.
    6. From 1.xml, the Open Graph data is set (title, type, image, etc.).
    7. The data shown to the user is loaded from 1.xml into the website.

    My question is: how can I make the Open Graph data available to Facebook? Facebook does not load my ExtJS JavaScript library before extracting the Open Graph object values from the HTML. Is there an easy solution to this problem? The only solutions I have found are to make static web pages or pages dynamically rendered on the server side, but I would like to avoid these since my web page implementation is already finished and I would like to avoid reworking it.
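    One low-rework compromise (a hedged sketch: the user-agent check, the XML element names, and the id handling are all assumptions) is to render the Open Graph tags server-side only when the request comes from Facebook's scraper, and let everyone else get the existing ExtJS page:

        <?php
        // Hypothetical front controller for myWebsite.com/a/<id>
        $id = preg_replace('/\D/', '', isset($_GET['id']) ? $_GET['id'] : '1');
        $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        if (strpos($ua, 'facebookexternalhit') !== false) {
            // Assumed XML layout: <item><title/><type/><image/></item>
            $xml = simplexml_load_file(__DIR__ . "/xml/{$id}.xml");
            echo '<html><head>',
                 '<meta property="og:title" content="', htmlspecialchars($xml->title), '"/>',
                 '<meta property="og:type" content="', htmlspecialchars($xml->type), '"/>',
                 '<meta property="og:image" content="', htmlspecialchars($xml->image), '"/>',
                 '</head><body></body></html>';
            exit;
        }
        // ...otherwise fall through to the normal ExtJS page...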

    Read the article

  • Firefox keeps crashing

    - by RainThePain
    When I open Firefox it crashes about 5 seconds later, and this is the error:

        Add-ons: globalmenu@ubuntu.com:3.6.4, langpack-en-GB@firefox.mozilla.org:17.0.1,
                 langpack-en-ZA@firefox.mozilla.org:17.0.1, langpack-zh-CN@firefox.mozilla.org:17.0.1,
                 ubufox@ubuntu.com:2.6, {972ce4c6-7e08-4474-a285-3208198ce6fd}:17.0.1
        BuildID: 20121129151842
        CrashTime: 1355583809
        EMCheckCompatibility: true
        FramePoisonBase: 7ffffffff0dea000
        FramePoisonSize: 4096
        InstallTime: 1355581168
        Notes: OpenGL: X.Org -- Gallium 0.4 on AMD RS780 -- 2.1 Mesa 9.0 -- texture_from_pixmap
        ProductID: {ec8030f7-c20a-464f-9b0e-13a3a9e97384}
        ProductName: Firefox
        ReleaseChannel: release
        SecondsSinceLastCrash: 598
        StartupTime: 1355583804
        Theme: classic/1.0
        Throttleable: 1
        URL: http://shop.ubuntu.com/
        Vendor: Mozilla
        Version: 17.0.1

    This report also contains technical information about the state of the application when it crashed.

    Read the article

  • Company Administrators: Stay Alert!

    - by Pete
    Some of our customers choose to use the Themes feature to rebrand their Training and Support Center link, and redirect it to an internal support site. If your company does this, we strongly advise that for your employees that have the Administrator role, you maintain a separate theme that keeps the Administrator's Training and Support link pointed to the CRM On Demand Training and Support Center, and not redirect it to an internal support site. Why? The company administrator needs access to the Training and Support Center because it gives them pod-specific application alerts on the Support tab and pod-specific release information on the Release Info tab. If a customer no longer has access to the Training and Support Center URL because they have already rebranded that link, they can contact Customer Care to request it again.  

    Read the article

  • Share on Facebook does not show thumbnail images

    - by matt_tm
    I have a PHP application which has a "Share on Facebook" button that:

    - On the development server, shows the thumbnail images correctly and allows the user to select between them.
    - On the live server, does NOT show the thumbnail images at all.

    The relevant portion of the .htaccess file is:

        # Set up caching on media files for 2 days
        <FilesMatch "\.(gif|jpg|jpeg|png|flv)$">
        ExpiresDefault A172800
        Header append Cache-Control "public"
        </FilesMatch>

    I'm using the exact same set of PHP files and .htaccess, but the server configuration is different. What could be causing this? Note that the text appears fine.

    Edit 1: We are also doing some URL rewriting related to images in the .htaccess (on both servers):

        ...
        RewriteRule ^.*/content/image/(.*)$ content/image/$1 [L]
        ...
        RewriteRule ^.*/images/(.*)$ images/$1 [L]
        ...

    Would that somehow be making a difference? Images appear fine all throughout the site. (I posted this question earlier as http://stackoverflow.com/questions/4142597/share-on-facebook-does-not-show-thumbnail-images)
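    A hedged aside: Facebook's scraper prefers an explicit og:image tag when one is present, which sidesteps having it guess among the page's images entirely (the URL is a placeholder):

        <!-- pin the thumbnail explicitly so the scraper
             doesn't have to pick one from the page -->
        <meta property="og:image" content="http://example.com/images/thumb.jpg"/>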

    Read the article

  • Strategy for managing actions and controllers in MVC apps

    - by singleton
    Can anyone name a useful strategy or architectural pattern for allocating actions between different controllers when using the MVC pattern to develop a web application? I am now developing a web app using the ASP.NET MVC 3 framework and still can't figure out how to manage actions and controllers. One approach is to create a single-action controller for each URL, but that's not the best choice since too many controllers have to be created. Should I list all available URLs that are supported by my web app, divide them into groups, and create a separate controller for each group, or act in a different manner? It seems like I will come face to face with some kind of mess with no consistent approach to managing actions and controllers.
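    The grouping idea from the question, sketched out (names are placeholders): one controller per resource or noun, with one action per operation on it, which is what ASP.NET MVC's default {controller}/{action}/{id} route encourages:

        // Hypothetical sketch: URLs grouped by resource rather than
        // one controller per URL.
        public class ProductsController : Controller
        {
            public ActionResult Index()          { /* GET /products            */ return View(); }
            public ActionResult Details(int id)  { /* GET /products/details/5  */ return View(); }
            public ActionResult Search(string q) { /* GET /products/search?q=  */ return View(); }
        }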

    Read the article

  • Slow DNS Resolution

    - by user4541
    After a clean install of 10.10 I'm finding DNS resolution takes quite a long time. Hitting any URL takes a good few seconds (10-30) before the site is displayed. I think this is a DNS resolution issue because of the 'waiting' or 'looking up' text displayed in Firefox and Chrome. I do not get this issue with Slackware Linux or Windows 7, so it is not a network- or DNS-server-specific issue; it's something on the client side. Looking around on Google I see there are a few other people with this issue. The ones that have reported a workaround are switching to OpenDNS, disabling IPv6, or dealing with another issue. Any help would be appreciated. My network card is wired: Broadcom Corporation NetLink BCM5906M Fast Ethernet PCI Express. Thanks

    Read the article

  • What is the need for 'discoverability' in a REST API when the clients are not advanced enough to make use of it anyway?

    - by aditya menon
    The various talks I have watched and tutorials I have scanned on REST seem to stress something called 'discoverability'. To my limited understanding, the term seems to mean that a client should be able to go to http://URL and automatically get a list of things it can do. What I am having trouble understanding is that 'software clients' are not human beings. They are just programs that do not have the intuitive knowledge to understand what exactly to do with the links provided. Only people can go to a website, make sense of the text and links presented, and act on them. So what is the point of discoverability, when the client code that accesses such discoverable URLs cannot actually do anything with it unless the human developer of the client actually experiments with the resources presented? This looks like the exact same thing as defining the set of available functions in a documentation manual, just from a different direction, and it actually involves more work for the developer. Why is this second approach of pre-defining what can be done in a document external to the actual REST resources considered inferior?
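    To make the debate concrete, a hedged sketch of what a 'discoverable' response looks like (HAL-style; the field names are illustrative). The client is still coded against the link relation names (self, cancel, next), but not against the URLs, which the server remains free to change:

        {
            "id": 42,
            "status": "open",
            "_links": {
                "self":   { "href": "/orders/42" },
                "cancel": { "href": "/orders/42/cancellation" },
                "next":   { "href": "/orders?page=2" }
            }
        }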

    Read the article

  • Microdata Without Reviews

    - by user36562
    I have not been able to find a clear answer on omitting reviews from Microdata. I understand that Microdata values for reviews will default to a certain number when omitted, but I was wondering if it would be correct/acceptable to omit the review node completely. I can see where reviews and average "star" ratings would be of help to the end user, especially for things like recipes. However, what if there are no reviews for a product or application? To be completely clear, let's isolate this question to software applications or extensions. What if a particular piece of software or an extension is not featured on an "app store" or another site that provides reviews? Wouldn't the formats still be helpful by providing version number, download URL, compatible software, etc.? Sorry for the lengthy background, but I just don't understand why it seems that reviews must be part of a Microdata markup. Or am I wrong in this assumption?
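    For what it's worth, nothing in the schema.org vocabulary itself requires a review property; a hedged markup sketch without one (property names follow the published SoftwareApplication type, values are placeholders):

        <div itemscope itemtype="http://schema.org/SoftwareApplication">
            <span itemprop="name">MyExtension</span>
            <span itemprop="softwareVersion">1.2.0</span>
            <a itemprop="downloadUrl" href="http://example.com/myextension.zip">Download</a>
        </div>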

    Read the article

  • How easily recognized are new TLDs?

    - by Ryan Muller
    I'm interested in purchasing a domain name for a new service I intend to market. I know that .com is instantly recognizable as a domain ending, and if I see stackoverflow.com I know it's a web address. However, I also recognize strings like github.io and mysite.tk as domains, since I've worked with domains like these. To the average member of the public, if one sees an address ending in .io or a similar non-mainstream TLD (e.g. on a billboard or business card), would they immediately know it's a URL and to type it into a browser? Or are these new domains only useful 1) for a technical audience or 2) when you will be primarily promoting your site through links and not print?

    Read the article

  • Three apps going through Apache. How to configure Apache httpd? [migrated]

    - by Chris F.
    I have a quick question but I've been struggling to find the best solution. I have two Java webapps and WordPress (PHP) that I need to serve through my prod website:

    - App #1 should be accessed when pointing to www.example.com/ (this would have other URLs too, such as www.example.com/book)
    - App #2 should be accessed when pointing to www.example.com/manage
    - Finally, WordPress would be accessed at www.example.com/info

    How can I configure Apache to serve all three instances at the same time? So far I have the following, and it's not quite working right. Any suggestions would be much appreciated!

        Listen 8081
        <VirtualHost *:8081>
            DocumentRoot /var/www/html
        </VirtualHost>

        ProxyPass /manage http://127.0.0.1:8080/manage
        ProxyPassReverse /manage http://127.0.0.1:8080/manage
        ProxyPass /info http://127.0.0.1:8081/info
        ProxyPassReverse /info http://127.0.0.1:8081/info
        ProxyPass / http://127.0.0.1:9000/
        ProxyPassReverse / http://127.0.0.1:9000/
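    A hedged sketch of how this is usually arranged (the backend port numbers are assumptions). Two things matter here: ProxyPass directives are consulted in order, so the bare / mapping must come last, and proxying /info back into a port Apache itself listens on (8081 above) creates a request loop, so the PHP app needs its own backend:

        <VirtualHost *:80>
            ServerName www.example.com

            # Most specific paths first, catch-all last.
            ProxyPass        /manage http://127.0.0.1:8080/manage
            ProxyPassReverse /manage http://127.0.0.1:8080/manage
            ProxyPass        /info   http://127.0.0.1:8082/info
            ProxyPassReverse /info   http://127.0.0.1:8082/info
            ProxyPass        /       http://127.0.0.1:9000/
            ProxyPassReverse /       http://127.0.0.1:9000/
        </VirtualHost>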

    Read the article

  • Does having multiple URIs mapping to the same resource help SEO?

    - by Brian Wheeler
    Let's say I have a site with products that have tags. Is it better for SEO if each resource is available at

        GET /products/tagged/:tag_list/:product_permalink

    than at just one permalink? For example, a product tagged "tea" and "coffee" would be available at:

        GET /products/tagged/tea/:product_permalink
        GET /products/tagged/coffee/:product_permalink
        GET /products/tagged/tea/coffee/:product_permalink
        GET /products/tagged/coffee/tea/:product_permalink

    I would imagine that Google would appreciate this because it gives multiple URIs with different levels of detail about the product, but I can't really be certain. Anyone have any direct knowledge on the topic?

    --EDIT-- As John Conde points out, this is a horrible idea. What about having the links on my site point to a route such as GET /products/tagged/:full_tag_list/:product_permalink, and then any time a user changes tags just respond with an HTTP 301 Moved Permanently status pointing at the new URL? Duplicate URLs would then be highly unlikely and mitigated by the proper response. Would this be better?
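    A hedged alternative often used for exactly this tag-permutation problem: let every tagged URL keep serving the page, but declare one preferred copy on each of them, so the permutations consolidate rather than compete (the path is a placeholder):

        <link rel="canonical" href="http://example.com/products/:product_permalink">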

    Read the article

  • Log oddities: 404s for client-garbled image URLs

    - by Chris Adams
    I've noticed some odd 404s which appear to come from broken URL-rewriting code. Our deep zoom view generates image URLs like this:

        /media/204/service/dzi/1/1_files/7/0_0.jpg

    I see some requests - well under 1% - for slightly altered URLs:

        /media/204/s/rvice/d/i/1/1_files/7/0_0.jpg

    These requests come from IP addresses all over the world (US, Canada, China, Russia, India, Israel, etc.), from desktop and mobile users with multiple user-agents (Chrome, IE, Firefox, Mobile Safari, etc.), and there is plenty of normal activity in the same sessions, so I'm assuming this is either widespread malware or some broken proxy service. I have not seen them for anything other than images, which suggests that this may be some sort of content filter. Has anyone else seen this? My CDN logs show the first request on June 8th, ramping up from several dozen to several hundred per day.

    Read the article
