Search Results

Search found 50115 results on 2005 pages for 'http referer'.

  • Question about mod_rewrite rule for redirecting failing pages

    - by SimpleCoder
    I'm setting up a mod_rewrite rule that redirects failing pages to a custom Page Not Found page on a WordPress site, using the guide at http://httpd.apache.org/docs/2.2/rewrite/rewrite_guide_advanced.html#redirect404. My rule so far looks like this:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.+) http://example.com/?page_id=254 [R]

    This works; it seems to be a combination of the first and second suggestions that did it, since the -U flag did nothing. My question, out of curiosity, is why the following happens: when I change REQUEST_FILENAME to REQUEST_URI (as the second example suggests), the page loads, but none of the style sheets load. All of my formatting is gone, and this happens on every page. Can anyone think of why this might happen?
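    Not part of the original question, but the behaviour follows from what the two variables hold; a minimal sketch (the rule and page_id are taken from the question above):

        # %{REQUEST_FILENAME} is the full filesystem path Apache mapped the
        # request to, so -f correctly recognizes real files such as style
        # sheets and lets them through untouched.
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.+) http://example.com/?page_id=254 [R]

    %{REQUEST_URI}, by contrast, is only the URL path (e.g. /wp-content/themes/foo/style.css); testing -f against it asks whether that string exists as an absolute path on the server's filesystem, which is almost never true, so every request, style sheets included, gets redirected to the 404 page.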

  • Best strategy to discover a web service in a local network?

    - by Ucodia
    I am currently doing some research for a project. The setup is simple: I have a computer running a service in my home network, and any device connected to that same network should be able to discover the service automatically and use it. I have no specific technology requirement, whether on the server or the client side. The client knows the service definition. Other than that, I have no idea what strategy to use, what technology to look at, or whether I should go for a SOAP or an HTTP-based service. I think an HTTP service with a REST API is best for targeting all devices, but I am open to any suggestions. Thanks.
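    Not part of the original question, but a minimal sketch of one common strategy, UDP broadcast discovery, using only the Python standard library (the probe string, port, and URL are illustrative):

        import socket

        DISCOVERY_PORT = 50000                         # illustrative port
        SERVICE_URL = "http://192.168.1.10:8080/api"   # what the server advertises

        def serve_discovery():
            # Server side: answer broadcast probes with the service URL.
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", DISCOVERY_PORT))
            while True:
                data, addr = sock.recvfrom(1024)
                if data == b"DISCOVER_SERVICE":
                    sock.sendto(SERVICE_URL.encode(), addr)

        def find_service(timeout=2.0):
            # Client side: broadcast a probe and wait for the first reply.
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.settimeout(timeout)
            sock.sendto(b"DISCOVER_SERVICE", ("<broadcast>", DISCOVERY_PORT))
            try:
                data, _ = sock.recvfrom(1024)
                return data.decode()
            except socket.timeout:
                return None

    The standardized version of the same idea is mDNS/DNS-SD (Zeroconf/Bonjour/Avahi), which most platforms can consume natively.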

  • GitHub feed affecting my WordPress installation? [on hold]

    - by saul
    Any idea how this fork is affecting my site? I went to verify my website log stats and realized this may be the cause of a strange redirect constantly happening on my WordPress installation. Here's a line I found in my log:

        54.81.91.95 - - [07/May/2014:22:52:08 -0400] "GET /category/selfie/feed/ HTTP/1.1" 200 1826 "-" "feedzirra http://github.com/pauldix/feedzirra/tree/master"

    And this is the GitHub fork (or whatever these are called): https://github.com/feedjira/feedjira/tree/master. Basically, I think every time I update my categories (selfie, in this case), I get redirected to install.php, probably by triggering some GET on that feed. To the best of my knowledge, this crawler parses all URLs with this structure, blocking them, kind of like a DDoS attack? Any ideas how to go about it?
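    Not from the original post, but if the immediate goal is simply to keep that crawler away, a minimal sketch of an Apache 2.2 user-agent block for the site's .htaccess (the agent string comes from the log line above):

        # Refuse any request whose User-Agent mentions feedzirra.
        SetEnvIfNoCase User-Agent "feedzirra" bad_bot
        Order Allow,Deny
        Allow from all
        Deny from env=bad_bot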

  • MySQL Workbench 6.2.1 BETA has been released

    - by user12602715
    The MySQL Workbench team is announcing availability of the first beta release of its upcoming major product update, MySQL Workbench 6.2. MySQL Workbench 6.2 focuses on support for innovations released in MySQL 5.6 and MySQL 5.7 DMR (Development Release) as well as MySQL Fabric 1.5, with features such as:
    - A new spatial data viewer, allowing graphical views of result sets containing GEOMETRY data and taking advantage of the new GIS capabilities in MySQL 5.7.
    - Support for new MySQL 5.7.4 SQL syntax and configuration options.
    - A Metadata Locks view that shows the locks connections are blocked or waiting on.
    - MySQL Fabric cluster connectivity - browse, view status, and connect to any MySQL instance in a Fabric cluster.
    - MS Access migration wizard - easily move to MySQL databases.
    Other significant usability improvements were made, aiming to raise productivity for advanced and new users:
    - Direct shortcut buttons to commonly used features in the schema tree.
    - Improved results handling: columns have better auto-sizing and their widths are saved; fonts can also be customized; results can be "pinned" to persist viewing data.
    - A convenient Run SQL Script command to directly execute SQL scripts, without loading them first.
    - Database modeling has been updated to allow changes to the formatting of note objects, and attached SQL scripts can now be included in forward-engineering and synchronization scripts.
    - Integrated Visual Explain within the result-set panel, with drill-down for large to very large explain plans.
    - Shared SQL snippets in the SQL Editor, allowing multiple users to share SQL code by storing it within a MySQL instance.
    And much more. The list of provided binaries was updated, and MySQL Workbench binaries are now available for:
    - Windows 7 or newer
    - Mac OS X Lion or newer
    - Ubuntu 12.04 LTS and Ubuntu 14.04
    - Fedora 20
    - Oracle Linux 6.5
    - Oracle Linux 7
    - Sources for building in other Linux distributions
    For the full list of changes in this revision, visit http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-2.html
    For discussion, join the MySQL Workbench Forums: http://forums.mysql.com/index.php?151
    Download MySQL Workbench 6.2.1 now, for Windows, Mac OS X 10.7+, Oracle Linux 6 and 7, Fedora 20, Ubuntu 12.04 and Ubuntu 14.04, or sources, from: http://dev.mysql.com/downloads/tools/workbench/
    On behalf of the MySQL Workbench and MySQL/Oracle RE teams.

  • What measures can be taken to make sure Google is aware of the existence of a newly created page?

    - by knorv
    Consider a website with a large number of pages, where new pages are published regularly. When publishing a new page, the website operator wants the newly created page indexed in Google as soon as possible, minimizing the time between publication and indexing. Consider the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00. I want to get important-page.html indexed as soon as possible after 12:00, ideally within seconds or minutes. What options are available to try to get Google to index a specific newly created page as soon as possible?
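    Not part of the original question, but one concrete lever from that era was pinging Google with an updated XML sitemap that lists the new page; a minimal sketch (the sitemap URL is illustrative, and Google's ping endpoint has since been deprecated):

        import urllib.parse
        import urllib.request

        # The sitemap must already list important-page.html with a fresh lastmod.
        sitemap = "http://www.example.com/sitemap.xml"
        ping = "http://www.google.com/ping?sitemap=" + urllib.parse.quote(sitemap, safe="")
        with urllib.request.urlopen(ping) as resp:
            print(resp.status)   # 200 means the ping was received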

  • Problem showing my website correctly in search engines

    - by dinbrca
    Hello guys, I have a website which I indexed on Google about 15 days ago. Some of my pages pass arguments, like http://www.bla.com/products.php?pro=bla&page=view. I then read that passing arguments like this isn't good for SEO purposes, so I started using htaccess rewrites and changed the URLs to look like this: http://www.bla.com/products/bla/view/. Google still shows my site with the old URLs, as in link number 1. What should I do? I thought I should wait for the search engine to crawl my site again, but nothing happened. Thanks in advance, Din
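    Not from the original post, but the standard fix is a 301 redirect from the old query-string URLs to the new ones, so Google swaps the indexed URL on its next crawl; a minimal sketch for the .htaccess (parameter names follow the question, everything else is illustrative):

        RewriteEngine On
        # Match against THE_REQUEST (the original request line) so this rule
        # does not loop with the internal rewrite that maps the pretty URLs
        # back onto products.php. The trailing "?" strips the query string.
        RewriteCond %{THE_REQUEST} \ /products\.php\?pro=([^&]+)&page=([^&\ ]+)
        RewriteRule ^products\.php$ /products/%1/%2/? [R=301,L]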

  • Should I Learn C/C++ Even If I Just Want To Do Web Programming?

    - by Daniel
    My goal is to be able to create online apps and dynamic, database-driven websites. For instance, if in the future I get the idea for the next Digg or Facebook, I want to be able to code it myself. To arrive there I think I have basically two paths:

    Path 1: Start at a basic level, learning C, then C++ for OOP, then algorithms and data structures, with the goal of getting a solid grasp of computer programming. Only then move to PHP/MySQL/HTTP and start working on practical programming projects.

    Path 2: Start directly with PHP/MySQL/HTTP and get my hands dirty with practical projects right away.

    What would you guys recommend?

  • Problems after installing Ubuntu LiveUSB on my 2nd HD

    - by user113106
    I decided to create an Ubuntu USB installer using the Universal USB Installer, selecting an Ubuntu 12.10 ISO. I selected my D: drive, the drive NOT carrying Windows 7, as the install target. After this I rebooted my system, and my PC began to run the Ubuntu boot start-screen every time I power up the machine, giving errors like this: Root=Unknown. I used my girlfriend's laptop to create, in exactly the same way, a real USB Ubuntu installer. Booting from that USB and picking the option "Run Ubuntu from this USB", I get the following error: http://postimage.org/image/63qkv98c1/. Let's try installing it from that USB to a hard disk: http://postimage.org/image/usqbwymfx/. As I said, I do not have the option to pick my boot section; at this very moment I can only access the Ubuntu installation and nothing else. I've read about 90% of other questions that could be related, but I could not find a solution. BTW I'm running an Acer Aspire (quad-core, 4 GB DDR2, ATI Radeon HD, 64-bit), and I've set my BIOS OS setting from "Windows" to "Others".

  • Is it possible to track redirects to external sites from our subdomains?

    - by ChaBuku
    I have a handful of subdomains set up as redirects because we are using them for QR codes. I want to be able to track the QR code redirects (which are already set up and printed, so no changing them at this point) and see the effectiveness of each. Here are two examples: http://qr.glorkianwarrior.com and http://ad.glorkianwarrior.com are set up to forward to our iTunes page (later this year they may forward to Google Play or a specific landing page). Is there any way on my server to track the redirect from the subdomain to iTunes and see where the traffic is coming from first? I have the redirects set up through cPanel, presently using subdomains. Edit: from the research I've seen, I can't track a 301 directly. If I redirect to an internal page and then do a timed redirect to the iTunes link, how long will it take for the tracking script to track a hit?
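    Not from the original question, but since Apache serves the 301s, the hits can at least be counted server-side; a minimal sketch for the vhost configuration (log path and format name are illustrative):

        # Log every request to the QR subdomains, including referrer and
        # user agent, before the 301 is issued.
        LogFormat "%h %t \"%r\" %>s \"%{Referer}i\" \"%{User-Agent}i\"" qrlog
        CustomLog /var/log/apache2/qr_redirects.log qrlog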

  • Google Analytics - visitor path to specific site destination setup and monitoring?

    - by Joshc
    I have a website which uses Google Analytics to track visitors and our banner campaigns. We are promoting 'Purchase Ticket' buttons on our website which push visitors to a third-party website that sells and distributes our tickets. The URL on all the 'Purchase Ticket' buttons is the same throughout the site, for example: http://ticketmaestro.com/events/my-event-2012. In the Analytics control panel, is it possible to set something up where I create a path-to-destination using the above example URL, and then, after this is set up, monitor the path visitors take from when they reach the site to when they click the 'Purchase Ticket' button? Graphs would show:

    - start destination
    - path to final destination
    - final destination: http://ticketmaestro.com/events/my-event-2012

    Any help, suggestions or terminology would be great. Thanks, Josh

  • Read data from a folder in main domain folder (cPanel/WHM)

    - by Memphis Raines
    I have defined a host on my cPanel/WHM server and put all my websites under one host account. The host's main domain is domain.com, and all other websites are add-on domains:

        domain.com
          - folder
          - domain1
          - domain2
          - domain3
          ...

    What I need is that when calling domain.com in a browser, the server reads files from another folder. For example, calling http://domain.com would show the content of http://domain.com/folder, BUT I don't mean a redirection; I want the server to do this in the background without showing visitors the real path (see the sketch below). I couldn't do this with Domain WildCard Redirection because it produced an error. How can I do this? With htaccess, or something else?
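    Not from the original question, but a minimal sketch of the usual internal-rewrite answer, placed in the .htaccess at the account's document root (the folder name is taken from the question, the rest is assumed):

        RewriteEngine On
        # Only rewrite requests for the main domain, and only once.
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/folder/
        RewriteRule ^(.*)$ /folder/$1 [L]

    Because there is no R flag, the rewrite happens server-side and the browser's address bar keeps showing domain.com.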

  • Cannot install "ATI/AMD proprietary FGLRX graphics driver" (SystemError)

    - by Fisherman John
    I have just installed a fresh copy of 12.04.1 64-bit. I formatted my PC completely and enabled updates during the installation. After the installation was complete, I went on updating my software. However, when I tried to install the additional drivers using the Additional Drivers tool (namely the "ATI/AMD proprietary FGLRX graphics driver"), it gave me this error:

        SystemError: E:Unable to correct problems, you have held broken packages.

    The same error shows up if I try installing the post-release updates driver. Installing the driver from the terminal results in this output:

        $ sudo apt-get install fglrx
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         fglrx : Depends: lib32gcc1 but it is not going to be installed
                 Depends: libc6-i386 but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    I get this from "sudo apt-get update": http://pastebin.com/AWAtDXjY. But "sudo apt-get install fglrx" still gives me this error: http://pastebin.com/RYM55bVN, and "sudo apt-get -f install fglrx" gives me this error: http://pastebin.com/xxekajvP. Any help would be greatly appreciated. (Please note that I'm new to Linux, coming directly from Windows. I have tried Ubuntu twice or so before, but not for long periods of time. The drivers installed smoothly the few times I tried Ubuntu, but post-release updates never worked for me.) [I am going to work now, so I can only answer from my phone. I can't really test any new solutions you may give me until ~10 hours from now. Maybe more.] @stonedsquirrel: when I try to run that command, I get exactly the same "unmet dependencies" error quoted above. (I am Fisherman John, dunno how to log in & thereby respond to your comment again.)
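    Not from the original thread, but the two unmet dependencies are both 32-bit compatibility packages, so one hedged first step is to try installing them explicitly and let apt name the real conflict (verify against your own sources before running):

        sudo apt-get update
        # The two packages fglrx reports as "not going to be installed":
        sudo apt-get install lib32gcc1 libc6-i386
        # If that succeeds, retry the driver itself:
        sudo apt-get install fglrx

    If the first install fails, its error output usually points at the package that is actually held back or broken.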

  • Partial Function Application

    - by AngelEyes
    This is long overdue... just a short and simple example of Partial Function Application. For some good explanations, which also cover the difference between currying and partial function application, check out http://msmvps.com/blogs/jon_skeet/archive/2012/01/30/currying-vs-partial-function-application.aspx and also this answer on Stack Overflow: http://stackoverflow.com/a/8826698/532517. And here is the example, taken from real code:

        internal void Addfile(string parentDirName, string fileName, long size)
        {
            AddElement(parentDirName, fileName, (name, parent) => new File(name, size, parent));
        }

        public void AddDir(string parentDirName, string dirName)
        {
            // Use the lambda parameter rather than the captured dirName,
            // matching the File case above.
            AddElement(parentDirName, dirName, (name, parent) => new Directory(name, parent));
        }

        private void AddElement(string parentDirName, string name,
                                Func<string, FileSystemElement, FileSystemElement> elementCreator)
        {
            ValidateFileNames(parentDirName, name);
            var parent = (Directory)_index[parentDirName];
            FileSystemElement element = elementCreator(name, parent);
            parent._elements.Add(element);
            _index.Add(name, element);
        }

  • SharePoint Search Problem: The start address sps3://server cannot be crawled.

    - by Clara Oscura
    With this post, I'm going to start a series on problems I have encountered with SharePoint search.

        Error: The start address sps3://luapp105 cannot be crawled.
        Context: Application 'Search_Service_Application', Catalog 'Portal_Content'
        Details: Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Full Read" permissions on the SharePoint Web Application being crawled. (0x80041205) (Event ID: 14, Task Category: Gatherer)

    Solution: give the appropriate permissions to the User Profile Synchronisation Service.
    http://social.technet.microsoft.com/Forums/en-US/sharepoint2010setup/thread/64cdf879-f01e-4595-bc52-15975fefd18d
    http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2010/03/29/how-to-set-up-people-search-in-sharepoint-2010.aspx

  • WordPress Multisite (Subfolders) - Google Analytics Tracking

    - by mmundiff
    I have a WordPress multisite (subfolder) instance that I would like to track via Google Analytics, ideally with a plugin that tracks each site in two places:

    1. a main tracking code that totals all traffic from the multisite instance, and
    2. an individual tracking code per site, to see how each site specifically is doing.

    This plugin might have worked for me if I had a subdomain multisite instance: http://wordpress.org/extend/plugins/google-analytics-multisite-async/installation/. I know I can manually place the dual tracking code (http://www.markinns.com/articles/full/adding_two_google_analytics_accounts_to_one_page), but that would involve editing a theme, and I have multiple sites using the TwentyEleven theme; I don't think I can edit the theme without it wreaking havoc on the rest of the sites using TwentyEleven. So has anyone done this? Is there a technique I'm missing? Is there a plugin available to do this on multisite subfolder installations? Is there a way to manually insert GA codes into themes which are used by multiple sites? Any insight is appreciated.
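    Not from the original post, but the dual-tracking approach in the markinns article comes down to registering a second, named tracker in the classic ga.js snippet; a minimal sketch with placeholder property IDs:

        var _gaq = _gaq || [];
        // Default tracker: the network-wide property.
        _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);
        _gaq.push(['_trackPageview']);
        // Named tracker "site": the individual blog's own property.
        _gaq.push(['site._setAccount', 'UA-XXXXXXX-2']);
        _gaq.push(['site._trackPageview']);

    In a multisite setup, the second ID could be read from a per-site option, so a single parent theme (or a small plugin hooked into wp_head) serves every site without per-theme edits.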

  • TypeScript - A free add-on for Visual Studio 2012

    - by TATWORTH
    At http://www.microsoft.com/en-us/download/details.aspx?id=34790, Microsoft is providing a free add-on for Visual Studio. If you have any version of Visual Studio 2012, it provides an editor for TypeScript. "TypeScript is a language for application-scale JavaScript. TypeScript adds optional types, classes, and modules to JavaScript. TypeScript supports tools for large-scale JavaScript applications for any browser, for any host, on any OS. TypeScript compiles to clean, readable, standards-based JavaScript. Try it out at http://www.typescriptlang.org/playground." I look forward to type-safe JavaScript! There is a tutorial for it at http://www.typescriptlang.org/tutorial/
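    Not part of the announcement, but a minimal sketch of what the optional typing looks like (the identifiers are illustrative):

        // Type annotations and classes compile away to plain JavaScript.
        class Greeter {
            greeting: string;
            constructor(greeting: string) {
                this.greeting = greeting;
            }
            greet(name: string): string {
                return this.greeting + ", " + name + "!";
            }
        }
        var g = new Greeter("Hello");
        console.log(g.greet("Visual Studio"));  // "Hello, Visual Studio!"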

  • Is there a way I can sort traffic by page type based upon URL structure in Google Analytics or Google Webmaster Tools?

    - by Felix
    I have a local business directory site. I'm trying to segment my incoming traffic by page type, so that I can find out what percentage of traffic goes exclusively to zip-code pages and what percentage goes to city/state-level pages. I basically want to filter by URL structure to find out what percentage of total traffic the zip-code pages account for. The reason for doing this is to find out if... Does Google Tag Manager help with this? Here are the two URL paths:

        http://www.example.com/ny/new-york/10011/
        http://www.example.com/ny/new-york

    Thanks all!
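    Not from the original question, but within Google Analytics this kind of split is commonly done with an advanced segment or report filter on the page path; a sketch of the regular expressions involved, derived from the two URL structures above:

        # Zip-code pages: /state/city/5-digit-zip/
        ^/[a-z]{2}/[^/]+/[0-9]{5}/?$

        # City/state pages: /state/city (no zip segment)
        ^/[a-z]{2}/[^/]+/?$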

  • Some post-VS2010 Launch Resources

    Here are some useful links related to the Vermont .NET VS2010 launch meeting on Monday night, with our RECORD-breaking attendance! :)
    - MSDN Visual Studio Developer Center: msdn.microsoft.com/vstudio
    - VS2010 comparison of the various SKUs: http://www.microsoft.com/visualstudio/en-us/products
    - VS2010 trial downloads: http://www.microsoft.com/visualstudio/en-us/download
    - From MicrosoftFeed.com, VS2010 wallpapers for the hardcore: 10+ Beautiful Microsoft Visual Studio 2010 Wallpapers

  • Directory access control with Apache: do I need to use a specific .htaccess?

    - by Mirror51
    I have an Apache webserver, and in the Apache configuration I have:

        Alias /backups "/backups"
        <Directory "/backups">
            AllowOverride None
            Options Indexes
            Order allow,deny
            Allow from all
        </Directory>

    I can access files via http://127.0.0.1/backups. The problem is that everyone can access them. I have a web interface, e.g. http://localhost/adminm, that is protected with htaccess and a password. Now I don't want a separate .htaccess and .htpasswd for /backups, and I don't want a second password prompt when a user clicks on /backups in the web interface. Is there any way to use the same .htaccess and .htpasswd for the backups directory?
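    Not from the original question, but a minimal sketch of one answer: protect the directory in the main configuration with the same AuthName (realm) and the same .htpasswd file the web interface uses; browsers reuse Basic credentials within the same realm, so the user is not prompted a second time (paths and realm name are illustrative):

        <Directory "/backups">
            Options Indexes
            AuthType Basic
            AuthName "Admin Area"               # must match the web interface's realm
            AuthUserFile /etc/httpd/.htpasswd   # same password file as the interface
            Require valid-user
        </Directory>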

  • BleachBit: How to Completely Clear URL History in Firefox?

    - by tSquirrel
    14.04 / Firefox 29.0. I've been using BleachBit to clear usage/file history, and for the most part it works great. However, it doesn't seem to clear the website hostnames out of the URL bar at all. These addresses are not bookmarked; also, the full URL isn't preserved, just the hostname.

    1. Visit http://www.bluesnews.com/some_random_URL_string
    2. Exit Firefox
    3. Run BleachBit, with ALL Firefox options selected
    4. Restart Firefox
    5. Check history: completely empty, other than bookmarked sites (www.bluesnews is NOT bookmarked)
    6. Type "blue", which Firefox automatically completes as "http://www.bluesnews.com/"

    Alternate step 3: use Firefox's built-in "Clear History" and select ALL entries with a time frame of "Everything". Same result as above. My inquiry in the BleachBit forums hasn't been responded to. I found Dan's proposed solution; however, changing autocomplete in about:config only turns the function off, it doesn't actually stop storing URLs.
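    Not from the original post, but Firefox keeps history, typed URLs, and bookmarks together in places.sqlite inside the profile; a hedged last-resort sketch (close Firefox first, and note that bookmarks live in the same file and would be lost too):

        # Profile directory names vary; *.default matches the usual layout.
        rm ~/.mozilla/firefox/*.default/places.sqlite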

  • Failing to install Ubuntu 13.04

    - by Kayven Riese
    I have a new Windows 8 Sony Vaio SVF14A15CXB laptop that has UEFI, and I have been struggling through an Ubuntu installation. I have a bootable Ubuntu DVD+R, and I have managed to adjust my BIOS/UEFI so that it boots. I used Windows 8 to create the desired hard disk partition and installed Ubuntu there, then burned a boot-repair disk (http://sourceforge.net/p/boot-repair-cd/home/Home/) and a rEFInd disk (http://www.rodsbooks.com/refind/getting.html), and neither will boot. I know I should continue googling and struggling, but I am getting frustrated. Thanks to anyone who gives me the time of day.

  • APress Deal of the Day - 19/Nov/2011 - Beginning GIMP

    - by TATWORTH
    Today's $10 Deal of the Day from Apress at http://www.apress.com/9781430210702 is "Beginning GIMP". "In this fully-updated second edition, author and long-time member of the GIMP community Akkana Peck introduces the GIMP and shows you everything about it that you'll want to know—including how to prepare images for display on web pages, touch up digital photos, tap into powerful filters, effects, and plug-ins, and automate tasks using scripts." For those of you unfamiliar with GIMP, it is the GNU Image Manipulation Program, and it is available for free from http://www.gimp.org/downloads/

  • URL slugs: ideal length, and the real SEO effects of these slugs

    - by tattvamasi
    This question is addressed widely on SO and outside it, but for some reason, instead of taking it all as a good load of great advice, the information is confusing me.

    ** Problem **

    On one of my sites I already had "prettified" URLs: I had taken out the query strings and rewritten the URLs, and the links were short enough for me, but there was a problem: the ID of the item or post in the URL isn't good for users. One of the users asked if there's a way to get rid of the numbers, and I thought it was better for users to see a clue of the page content in the URL.

    ** Solution **

    With this in mind, I am trying it on a section of the site. Armed with 301 redirects, some parsing work, and a lot of patience, I have added URL slugs to some blog entries, and the slug of the URL reports the title of the article (something close to http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/).

    ** Problems after Solution **

    The problem, as I see it, is that the URL of those blog articles is now very descriptive for sure, but also impossible to remember. So this brings me back to the same issue I had before: if numbers say nothing and can't be remembered, what's the use of these slugs? I prefer to see http://example.com/my-news/1/ than http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/. To avoid forcing my users to memorize my URLs, I have added a script that finds the closest match to the URL you type and redirects there. This is something I like, because the page now acts as a sort of little search engine, and users can play with the URLs to find articles.

    ** Open questions **

    I still have some open questions and don't seem to be able to find answers, because the answers tend to contradict one another.

    1) How many characters should a URL ideally be? I've read the magic number 115 and am sticking to that, but am not sure.

    2) Is this really good for SEO? One of those blog articles I redirected, with the ID number in the URL and all, ranked second on Google. I've just found this question, and the answer seems to be consistent with what I think: URL slug and SEO - structure (but see this other question with the opposite opinion).

    3) To make the question concrete: would the example URL above risk being penalized? Is it acceptable? Is it too long? StackOverflow seems to have comparably long URLs, but I'm not sure it's a winning strategy in my case. I just wanted to facilitate my users without running afoul of Google's algorithms.

  • How to pass information across domains to ask for newsletter only once?

    - by Michal Stefanow
    Let's assume the following scenario. I have two sites:

        example1.com
        example2.com

    When a user visits site 1, there is a prompt: "please sign up to our newsletter". The same thing happens when a user visits site 2. However, when navigating from 1 to 2, I don't want the signup form to be shown again. My first thought was third-party cookies, but it seems they are blocked / not working:

        http://stackoverflow.com/questions/4701922/how-does-facebook-set-cross-domain-cookies-for-iframes-on-canvas-pages?rq=1
        http://stackoverflow.com/questions/172223/how-do-i-set-cookies-from-outside-domains-inside-iframes-in-safari?rq=1

    Another thought is to append #noshow to each URL, but that would require some work - for instance, a script that would intercept click/tap events and modify the URL depending on the address (but that seems hacky). I wonder if you know a robust, well-established solution to this issue? Thanks
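    Not from the original question, but a minimal sketch of the URL-flag idea: carry the flag across the domain boundary in the link itself, then persist it as a first-party cookie on arrival (the parameter and cookie names, and the showSignupForm helper, are illustrative):

        // Cross-site links carry the flag:
        //   <a href="http://example2.com/?nosignup=1">our other site</a>

        // On page load, persist the flag as a first-party cookie (one year):
        if (location.search.indexOf('nosignup=1') !== -1) {
            document.cookie = 'nosignup=1; max-age=31536000; path=/';
        }

        // Prompt only if the flag was never seen on this domain:
        if (document.cookie.indexOf('nosignup=1') === -1) {
            showSignupForm();   // hypothetical function in the site's own code
        }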

  • Making a public webcam: which protocol, which codec? (Using VLC)

    - by gsedej
    Hi! I want to use my old (1 GHz) PC as a webcam video stream server (like those road cameras you can see online). I thought of using VLC and already tried the HTTP output, but it was not really good: too CPU-hungry, too large a stream (kBps), not stable... I have been reading VLC how-tos, but there is still a question: which output should I use, HTTP, RTSP, or UDP? I want to serve more than one computer at the same time (multicast). Which codec would be good? The PC is not so fast, so it shouldn't be a too CPU-hungry codec: MPEG-2, MPEG-4, Xvid? How much video buffer should I use (vb=?)? What about setting the IP and ports? So I need some help with ideas, but if someone can make a VLC command line, even better :) Oh, the computer has a direct internet connection and its own IP.
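    Not from the original post, but a hedged sketch of the kind of command line involved, assuming a V4L2 webcam on /dev/video0; the codec, bitrate (vb, in kbps), frame size, and port are illustrative starting points to tune against the 1 GHz CPU:

        # Transcode the webcam to MPEG-4 at ~512 kbps and serve it over HTTP
        # as an MPEG-TS stream on port 8080 (clients open http://<server-ip>:8080/).
        cvlc v4l2:///dev/video0 \
          --sout '#transcode{vcodec=mp4v,vb=512,fps=15,width=640,height=480}:std{access=http,mux=ts,dst=:8080/}'

    For true multicast, the std block would instead use access=udp with a multicast address, e.g. dst=239.255.12.42:1234, at the cost of requiring multicast-capable routing between the server and the viewers.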
