Search Results

Search found 20409 results on 817 pages for 'url routing'.


  • How to install latest version of imagick on centos 5.8 64bit using bash

    - by user57221
    How can I download and install the latest version of imagick on CentOS 5.8 64-bit using bash, for PHP 5.4?

        > yum info php
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirror.ellogroup.com
         * epel: mirror01.th.ifl.net
         * extras: mirror.ellogroup.com
         * updates: mirror.ellogroup.com
        Installed Packages
        Name        : php
        Arch        : x86_64
        Version     : 5.4.3
        Release     : 1.el5.remi
        Size        : 8.8 M
        Repo        : installed
        Summary     : The PHP HTML-embedded scripting language. (PHP: Hypertext Preprocessor)
        URL         : http://www.php.net/
        License     : PHP
        Description : PHP is an HTML-embedded scripting language. PHP attempts to make it
                    : easy for developers to write dynamically generated webpages. PHP also
                    : offers built-in database integration for several commercial and
                    : non-commercial database management systems, so writing a
                    : database-enabled webpage with PHP is fairly simple. The most common
                    : use of PHP coding is probably as a replacement for CGI scripts.
                    :
                    : The php package contains the module which adds support for the PHP
                    : language to Apache HTTP Server.
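    A minimal sketch of one way to do this, assuming the ImageMagick development headers and a PECL build chain (php-devel, gcc, make) are available from the configured repositories; the package names and paths below are assumptions and may differ on your system:

        # Assumed package names -- verify against your repositories first
        yum install -y ImageMagick ImageMagick-devel php-devel php-pear gcc make
        # Build the latest imagick extension from PECL against the installed PHP 5.4
        pecl install imagick
        # Enable the extension (the php.ini / conf.d path may differ on your system)
        echo "extension=imagick.so" > /etc/php.d/imagick.ini
        # Verify
        php -m | grep imagick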

    Read the article

  • How to write a blog for SEO purpose

    - by Mathieu Imbert
    I have a photo sharing website which provides very little textual content. Users can add tags and a description to photos, but this creates a lot of duplicate content, because most of the descriptions will be 'wow', 'lol', ... I don't think I should rely on users to build my SEO. I think it would be a great idea to write a blog and use it to describe the best photos, start contests, explain themes; in short, create original content that search engines will love. Our website's main URL is like www.domain.com, and our new blog is hosted on blog.domain.com. From an SEO perspective, is it a good idea to keep the blog separate from the main site? This has the advantage of leaving the original site unchanged, but will it add any PageRank to www.domain.com? If the blog ranks well, it will obviously pass some PageRank to the original site through links. What do you think is the best option from an SEO perspective? Include the blog in www.domain.com? Or leave it on blog.domain.com?

    Read the article

  • firefox addons and their silly news tabs

    - by jettero
    Something like 30% of the addons I have in Firefox update every other week and feel the need to pop open a tab about how awesome they are and all the cool things they changed. I just don't care at all and I'm very annoyed by these news tabs. When Firefox opens, I want to see my home page. I've been looking for an addon to disable or kill these tabs before I even have to look at them, rather like an adblock for addons. Short of finding a plugin that disables them, I'm seeking information about common interfaces so I can try to figure it out on my own. I'm wondering if I could do it in Greasemonkey somehow. For example, is there something common about the URL of these tabs?

    Read the article

  • haproxy access list using path_dir having issues with firefox

    - by user11243
    I'm trying to route all requests containing a path directory of /socket.io/ to a separate port with HAProxy. Here is my config file:

        global
            maxconn 4096   # Total Max Connections. This is dependent on ulimit
            nbproc 2

        defaults
            mode http

        frontend all 0.0.0.0:80
            timeout client 86400000
            default_backend web_servers
            acl is_stream path_dir socket.io
            use_backend stream_servers if is_stream

        backend web_servers
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout server 30000
            timeout connect 4000
            server web1 127.0.0.1:4000 weight 1 maxconn 1024 check

        backend stream_servers
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout queue 5000
            timeout server 86400000
            timeout connect 86400000
            server stream1 127.0.0.1:5100 weight 1 maxconn 1024 check

    URL paths containing /socket.io/ get correctly directed to port 5100 in Chrome and Safari, but not in Firefox. I'm running HAProxy locally on my Mac for development, not sure if that has anything to do with it. I'm using HAProxy 1.4.8 and Firefox 3.6.15. I've tried clearing the cache in Firefox and it didn't help, so I'm thinking there's something wrong with the way HAProxy parses the Firefox request headers.
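    One thing worth trying, purely as a sketch (not verified against this setup): match on the path prefix instead of a path component, in case Firefox sends the socket.io requests in a form that path_dir does not catch:

        # Alternative ACL: match any request whose path starts with /socket.io/
        acl is_stream path_beg /socket.io/
        use_backend stream_servers if is_stream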

    Read the article

  • Why was my site rejected for Google Adsense?

    - by hyuun jjang
    I have a 3-year-old blog with around 16 articles/tutorials about some programming problems and solutions. It has been getting a lot of views lately, so I decided to apply for a Google AdSense account. When I first applied via Blogger, Google replied with the following statement:

        Page Type: In order to participate in Google AdSense, publishers' websites and application information must satisfy the following guidelines:
        - Your website must be your own top-level domain (www.example.com and not www.example.com/mysite).
        - You must provide accurate personal information with your application that matches the information on your domain registration.
        - Your website must contain substantial, original content...

    So, as I understood it, I decided to buy a domain and point my Blogger blog to that new naked domain. Here is the newly bought domain where all the contents of my old blog reside: http://icodeya.com/ I reapplied, hoping that this time I would make the cut. But then I got this reply:

        Further detail: Unable to review your site: While reviewing http://www.icodeya.com/, we found that your site was down or unavailable. We suggest you check whether there was a typo in the URL submitted. When your site is operational, you can resubmit your application with the correct site by following the directions below.

    I'm a bit disappointed. Maybe I did something wrong with the DNS configuration or something, but you can clearly see that my site is fully functional. I heard that Google sends robots to crawl the site, etc. It's just sad because I invested in a domain name, and now I can't even find ways to earn from it. Any tips?
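    As a quick sanity check (a sketch, not AdSense-specific guidance), you can confirm from another network that both the bare domain and the www host resolve and answer over HTTP, since the rejection mentions the www URL specifically:

        # Check that both hostnames resolve
        dig +short icodeya.com
        dig +short www.icodeya.com
        # Check that both answer with a 200 (or a sensible redirect), not an error
        curl -I http://icodeya.com/
        curl -I http://www.icodeya.com/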

    Read the article

  • Collabnet Subversion and Self Signed Certificates

    - by Robert May
    We installed Collabnet as our Subversion server recently.  This is the first time that we've used it.  In general, it seems pretty good, but we ran into a problem with it.  People were getting the following error in Tortoise:

        OPTIONS of 'https://xxxx.xxxxxxxx.xxxx/svn/xxxxx': SSL handshake failed: SSL error code - 1/1/336032856 (https://xxxx.xxxxxxxx.xxxx)

    The odd thing is that for some people it worked, and for others it didn't!  I also couldn't find anything useful out on the internet. We had checked the "Subversion Server should serve via https" option in the settings, and all of the ports were open, etc. This option causes a self-signed certificate to be used. What we discovered: Tortoise must use the same URL as is in the Hostname field on the General settings for Collabnet, or you'll get this error.  Basically, some people were using https://svn.xxxxxxx.xxxxx and others were using https://computername.xxxxxxxx.xxxx.  Because the Hostname field used the computer-name version, the whole thing broke.  By changing the hostname to the svn version, which is what they should be using, the problem went away.  The users do get the "Accept Certificate" prompt, but we can live with that! Technorati Tags: Subversion, Collabnet
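    A quick way to see which name a self-signed certificate was actually issued for (a sketch; the hostname is a placeholder) is to inspect the subject the server presents:

        # Show the subject (CN) of the certificate the SVN server presents
        openssl s_client -connect svn.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject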

    Read the article

  • JSCompress fails to compress my js file - why?

    - by Renso
    Issue: you use the online compression utility jscompress.com to compress your js file, but it fails with an error. Why this may be happening and how to fix it. Possible causes: apparently not using opening and closing curly brackets in an IF statement could cause this. Well, it turns out that is not the case. Look at the following example and see if you can figure out what the issue is :-)

        function SetupDeliveredVPRecontactNotes($item, id) {
            var theData;
            $.ajax({
                data: { deliveredVPId: id },
                url: $('#ajaxGetDeliveredVPRecontactNotesUrl').val(),
                type: "GET",
                async: false,
                dataType: "html",
                success: function(data, result) {
                    $item.empty();
                    var input = '<textarea class="recontactNote" rows="4" name="DeliveredVPRecontactNotes_' + id + '" id="DeliveredVPRecontactNotes_' + id + '" cols="115">' + data + '</textarea>';
                    $item.append(input);
                    theData = data;
                },
                error: function(XMLHttpRequest, textStatus, errorThrown) {
                    $item.empty();
                    alert("An error occurred: The operation to retrieve the DeliveredVP's Recontact Notes has failed");
                }
            }); // ajax
            return theData;
        }

    Solution: the text of the ALERT message, with its spaces removed, matched the name of the function: "DeliveredVP Recontact Notes" becomes "DeliveredVPRecontactNotes", which is the function's name. Changing the message to "DeliveredVP's Recontact Notes" fixed the problem.

    Read the article

  • .htaccess folder rewrite

    - by Lisa
    I have 3 URLs all pointing to the same site: www.abc.co.uk, www.xyz.com, www.123abc.org. I have a folder /foo/bar which has lots of subfolders and files in it. I want to rewrite this to /bar. So if I have www.abc.co.uk/foo/bar/sheep/page.html, I want it to redirect to www.abc.co.uk/bar/sheep/page.html. Is this possible? Sometimes I may have a URL like www.abc.co.uk/foo/bar/foo/page.html, so this would become www.abc.co.uk/bar/foo/page.html; only the first instance of foo would be rewritten.
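    A minimal mod_rewrite sketch of the redirect described above, assuming it lives in the .htaccess at the document root (per-directory rules see the path without its leading slash); because the pattern is anchored at the start, only the first /foo is touched:

        RewriteEngine On
        # Redirect /foo/bar/... to /bar/... on whichever of the three hosts was requested
        RewriteRule ^foo/bar/(.*)$ /bar/$1 [R=301,L]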

    Read the article

  • Install-SPSolution : This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application

    - by Josh
    I have a PowerShell script that deploys about 12 web parts. They have all been created through Visual Studio 2010 and are being deployed to SharePoint 2010. I am getting the following error when running Install-SPSolution for one of my web parts:

        Install-SPSolution : This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application.

    Can someone help me debug this? Every other Install-SPSolution command uses -AllWebApplications, and I do not want to specify the web application directly using -URL. Here is the command that is breaking (it is the same command used to successfully deploy all 11 other web parts):

        Install-SPSolution -Identity PortalSelector.wsp -AllWebApplications -GACDeployment

    Read the article

  • nginx: how to correct the path in a back-end server redirect response under a virtual directory

    - by noname
    The following is my deployment:

        client ------ nginx proxy (example.com) ------ back-end server (192.168.1.20)

    The nginx proxy's external URL is configured under a virtual directory, http://example.com/demo/, and the back-end server is configured as http://192.168.1.20:8080/. The following is the relevant part of the nginx configuration file:

        location /demo {
            proxy_pass http://192.168.1.20:8080/;
            proxy_redirect default;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    When the back-end server sends a redirect response (HTTP code 302) with a Location header field of "http://192.168.1.20/subdir/", nginx maps this Location header to "http://example.com/subdir/", not the desired "http://example.com/demo/subdir/".
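    As far as I understand it, "proxy_redirect default" only rewrites Location headers that begin with the exact proxy_pass URL (http://192.168.1.20:8080/); since the back end answers with http://192.168.1.20/subdir/ (no port), the default rule never matches. A sketch with explicit mappings, assuming the same upstream address:

        location /demo/ {
            proxy_pass http://192.168.1.20:8080/;
            # Map both forms of the back end's absolute Location headers into /demo/
            proxy_redirect http://192.168.1.20:8080/ /demo/;
            proxy_redirect http://192.168.1.20/ /demo/;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }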

    Read the article

  • SEO title tag and earning a high rank on search engines [closed]

    - by Josh White
    Possible Duplicate: What are the best ways to increase your site's position in Google? One of the most basic SEO techniques is including an accurate description of under 64 characters in the title tags of each page. I was wondering if it is considered ethical SEO to set up the contents based on a search keyword, for example. So if the user searches for 'apple pictures', for example, then the title of the webpage would be 'apple pictures'. Note that the search keywords accurately describe my website contents, because the title will always relate to the body of the webpage, and 85-90% of the terms searched for will return corresponding results. Is this considered good SEO practice, and is it ethical? Also, can someone explain the idea behind "linking"? I read somewhere that it is good SEO practice to link to other websites, and that it is good when other websites link to you. Does this mean that I should include as many links to other websites as possible (that are somehow relevant to my website's goal)? Also, if I joined forums/services and posted my website URL in the signature, would that still be considered other websites linking to me?

    Read the article

  • Curl authentication

    - by Jack Humphries
    I am trying to download a file with cURL from a password-protected directory on my site. It is not working: instead of downloading the requested file, it downloads an HTML file that says "Authentication Required!" I'm not sure what the problem is. I've tried both of these (with the same result). The username and password are correct (and if the link below is used in a web browser, the file downloads successfully).

    1) The username and password are included as part of the URL:

        curl https://username:wordpass.1@www.example.com/auth/file.dmg --O /file.dmg;

    2) The username and password are included as an option:

        curl -u username:wordpass.1 https://www.example.com/auth/file.dmg --O /file.dmg;
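    One thing to check, as a sketch with placeholder credentials: curl uses Basic authentication by default, so if the directory is protected with Digest (or another scheme), the request is refused even with correct credentials. --anyauth lets curl negotiate the scheme, -L follows any redirect the server issues, and -o names the output file:

        curl --anyauth -u username:password -L -o file.dmg https://www.example.com/auth/file.dmg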

    Read the article

  • FreeBSD Server .htaccess issues

    - by Will Ayers
    Server details: FreeBSD, PHP 4.3.11, Apache. Apache modules: mod_throttle, mod_php4, mod_speedycgi, mod_ssl, mod_setenvif, mod_so, mod_unique_id, mod_headers, mod_expires, mod_auth_db, mod_auth_anon, mod_auth, mod_access, mod_rewrite, mod_alias, mod_actions, mod_cgi, mod_dir, mod_autoindex, mod_include, mod_info, mod_status, mod_negotiation, mod_mime, mod_mime_magic, mod_log_config, mod_define, mod_env, mod_vhost_alias, mod_mmap_static, http_core.

    The issue I am having is that whenever I write any kind of code in the .htaccess file, it throws a 500 Internal Server Error. I am simply trying to rewrite URLs and am using the exact code that WordPress creates for me; I have even tried custom code used before on previous servers and it still does not work. WordPress-created code:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /lobster-tail-blog/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /lobster-tail-blog/index.php [L]
        </IfModule>
        # END WordPress

    And even a simple thing like this throws the error:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        </IfModule>

    Does anyone know of any fixes, or why this is causing the error? I have the mod_rewrite module loaded.
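    A common cause of a 500 on every .htaccess directive is the AllowOverride setting for the document root not including FileInfo, which mod_rewrite directives need; in that case the error log shows a "... not allowed here" entry. A sketch of the httpd.conf change, assuming a typical FreeBSD layout (the actual directory and log paths will differ):

        # In httpd.conf, for the directory that serves the site
        <Directory "/usr/local/www/data">
            AllowOverride All
        </Directory>

        # After restarting Apache, check the exact reason in the error log, e.g.
        #   tail /var/log/httpd-error.log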

    Read the article

  • How to determine most stable Cisco IOS release?

    - by Chris J
    This post is about a Catalyst 4948E switch. I was looking on the download page and realized that there are no "GD" versions available. Are the "ED" versions stable? Even if you change "ED" to "GD" in the URL, the IOS images are still the same. http://www.cisco.com/cisco/software/release.html?mdfid=283027810&flowid=3592&softwareid=280805680&release=15.1.1-SG2&relind=AVAILABLE&rellifecycle=ED&reltype=latest Is 15.1 as reliable as 15.0? My devices are currently on the 12.2 train. Is there anything special about upgrading to one of the 15.x trains? Are the configurations compatible?

    Read the article

  • apache mod_proxy or mod_rewrite to hide the root of a webserver behind a path

    - by Giovanni Nervi
    I have two Apache 2.2.21 servers, one external and one internal. I need to map the internal Apache behind a path on the external Apache, but I have some problems with absolute URLs. I tried these configurations:

        RewriteEngine on
        RewriteRule ^/externalpath/(.*)$ http://internal-apache.test.com/$1 [L,P,QSA]
        ProxyPassReverse /externalpath/ http://internal-apache.test.com/

    or

        <Location /externalpath/>
            ProxyPass http://internal-apache.test.com/
            ProxyPassReverse http://internal-apache.test.com/
        </Location>

    My internal Apache uses absolute paths to reference resources such as images, CSS and HTML, and I can't change that now. Any suggestions? Thank you
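    If the problem is absolute paths inside the HTML bodies themselves (not just redirect headers), one option to sketch, assuming the third-party mod_proxy_html module is installed on the external Apache 2.2, is to rewrite the links in proxied responses; the directives below come from that module and should be checked against its documentation:

        <Location /externalpath/>
            ProxyPass http://internal-apache.test.com/
            ProxyPassReverse http://internal-apache.test.com/
            # Rewrite absolute links in returned HTML (requires mod_proxy_html)
            SetOutputFilter proxy-html
            ProxyHTMLURLMap http://internal-apache.test.com/ /externalpath/
            ProxyHTMLURLMap / /externalpath/
        </Location>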

    Read the article

  • Installation problems with Gimp 2.8 on Kubuntu 12

    - by Martyn
    I've just upgraded to Kubuntu 12.04 and am having problems installing GIMP 2.8; I was wondering if anyone can help me. I've followed these instructions:

        sudo add-apt-repository ppa:otto-kesselgulasch/gimp
        sudo apt-get update
        sudo apt-get install gimp

    but get this error:

        The following packages have unmet dependencies.
         gimp : Depends: libwebkitgtk-1.0-0 (>= 1.3.10) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    I've tried running these and trying again, with the same problems:

        sudo apt-get clean
        sudo apt-get autoremove
        sudo apt-get update
        sudo apt-get upgrade
        sudo apt-get -f install

    Also, running sudo apt-get install libwebkitgtk-1.0-0 gives me this error:

        The following packages have unmet dependencies.
         libwebkitgtk-1.0-0 : Depends: libgail18 (>= 1.18.0) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    and then running sudo apt-get install libgail18 gives me this error:

        The following packages have unmet dependencies.
         libgail18 : Depends: libgtk2.0-0 (= 2.24.10-0ubuntu6) but 2.24.10-1oneiric6~ppa is to be installed
        E: Unable to correct problems, you have held broken packages.

    The bit that caught my attention was "but 2.24.10-1oneiric6~ppa is to be installed", but I don't know what to do with this. I've rebooted and the error messages are the same. Can anyone help?

    ** EDIT ** I've found someone with the same problem; unfortunately the link is in German, so I can't completely understand what the solution (last post) is. Here's the Google-translated link: http://translate.google.com/translate?sl=auto&tl=en&js=n&prev=_t&hl=en&ie=UTF-8&layout=2&eotf=1&u=http%3A%2F%2Fwebcache.googleusercontent.com%2Fsearch%3Fq%3Dcache%3A1U2Uat6XqUsJ%3Aforum.ubuntuusers.de%2Ftopic%2Fprobleme-nach-update-fehlerhafte-pakete-aus-on%2F%2B%26cd%3D3%26hl%3Den%26ct%3Dclnk%26gl%3Duk&act=url
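    The "2.24.10-1oneiric6~ppa" version suggests libgtk2.0-0 was pulled from an oneiric build of a PPA, which is what blocks the precise dependency chain. A common way to back out of that state is ppa-purge, which downgrades a PPA's packages to the stock Ubuntu versions before you retry; a sketch, using the PPA name from the commands above (if the oneiric package came from a different PPA left over from the upgrade, purge that one instead):

        sudo apt-get install ppa-purge
        # Revert all packages from this PPA to the official precise versions
        sudo ppa-purge ppa:otto-kesselgulasch/gimp
        # Then re-add the PPA and try the install again
        sudo add-apt-repository ppa:otto-kesselgulasch/gimp
        sudo apt-get update
        sudo apt-get install gimp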

    Read the article

  • The application attempted to perform an operation not allowed by the security policy

    - by user16521
    Following http://support.microsoft.com/kb/320268, I ran this command on the server that holds the share of code that my local IIS site points to (via UNC to that share):

        Drive:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1 -url "file:////\\computername\sharename\*" FullTrust -exclusive on

    (Obviously I replaced Drive with C, and the actual computer name and share name with the one I'm sharing out.) But when I run the ASP.NET site, I am still getting this runtime exception:

        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.
        Exception Details: System.Security.SecurityException: Request for the permission of type 'System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
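    Another angle worth sketching (an assumption, not a confirmed fix for this setup): the error text itself points at the trust level in the configuration file, which for ASP.NET 2.0 can be raised in web.config, provided the machine-level configuration does not lock it:

        <!-- web.config: grant the application full trust -->
        <configuration>
          <system.web>
            <trust level="Full" originUrl="" />
          </system.web>
        </configuration>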

    Read the article

  • Dynamic libraries are not allowed on iOS but what about this?

    - by tapirath
    I'm currently using LuaJIT and its FFI interface to call C functions from Lua scripts. What FFI does is look at a dynamic library's exported symbols and let the developer use them directly from Lua, kind of like Python ctypes. Obviously using dynamic libraries is not permitted on iOS for security reasons. So in order to come up with a solution I found the following snippet:

        /*
        (c) 2012 +++ Filip Stoklas, aka FipS, http://www.4FipS.com +++
        THIS CODE IS FREE - LICENSED UNDER THE MIT LICENSE
        ARTICLE URL: http://forums.4fips.com/viewtopic.php?f=3&t=589
        */

        extern "C" {
        #include <lua.h>
        #include <lualib.h>
        #include <lauxlib.h>
        } // extern "C"

        #include <cassert>

        // Please note that despite the fact that we build this code as a regular
        // executable (exe), we still use __declspec(dllexport) to export
        // symbols. Without doing that FFI wouldn't be able to locate them!
        extern "C" __declspec(dllexport) void __cdecl hello_from_lua(const char *msg)
        {
            printf("A message from LUA: %s\n", msg);
        }

        const char *lua_code =
            "local ffi = require('ffi') \n"
            "ffi.cdef[[ \n"
            "const char * hello_from_lua(const char *); \n"  // matches the C prototype
            "]] \n"
            "ffi.C.hello_from_lua('Hello from LUA!') \n"      // do actual C call
        ;

        int main()
        {
            lua_State *lua = luaL_newstate();
            assert(lua);
            luaL_openlibs(lua);
            const int status = luaL_dostring(lua, lua_code);
            if (status)
                printf("Couldn't execute LUA code: %s\n", lua_tostring(lua, -1));
            lua_close(lua);
            return 0;
        }

        // output:
        // A message from LUA: Hello from LUA!

    Basically, instead of using a dynamic library, the symbols are exported directly inside the executable file. The question is: is this permitted by Apple? Thanks.

    Read the article

  • Streaming video file to iPhone

    - by user34157
    I have an HTTP streaming link which gives me an .flv streaming feed. I want to convert that and access it in my iPhone program. How can I do that? I want to have desktop software like VLC that takes this streaming feed URL, converts it, and streams it again to the iPhone in a supported format. I tried VLC with H.264 and MPEG-1 audio, but it doesn't seem to give a supported format, so the iPhone program doesn't play the video. Could someone please guide me on how I can set up desktop software which can stream an iPhone-supported file?

    Read the article

  • Questions about Domains and DNS

    - by ShoX
    Hi, I am totally new to the DNS and server hosting world and not quite sure what I need. I want to get a domain and forward it to my own server, so that the user sees example.com in the URL bar and example.com/foo/bar will work. Depending on the subdomain, it should do different things (another base directory on the webserver, FTP, etc.). Also, my email should be able to be sent to and received by that server. What confuses me is the fact that in the A record I can only list IP addresses and no ports. So do I have to set up a nameserver on my own server? Or do I accomplish this via vhosts on my webserver? I would appreciate any help or a link to a tutorial. I know how DNS works and know some basic Apache stuff, etc., so no need to explain that. Thanks
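    DNS only maps names to IP addresses; which directory or service answers is decided by the server listening on that address, which is why ports never appear in A records. For the web part this is usually handled with name-based virtual hosts, roughly like this Apache sketch (domain names and paths are placeholders):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot /var/www/main
        </VirtualHost>

        <VirtualHost *:80>
            ServerName foo.example.com
            DocumentRoot /var/www/foo
        </VirtualHost>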

    Read the article

  • mplayer dumpstream sometimes fails

    - by User1
    I'm trying to rip the video at http://videolectures.net/ecml07%5Fgetoor%5Fisr/ so I can play it at a faster speed. If I paste http://193.2.4.216/2007/pascal/ecml07%5Fwarsaw/getoor%5Flise/ecml07%5Fgetoor%5Fisr%5F01.wmv into Firefox on Windows, MediaPlayer plays the thing. However, if I try mplayer -dumpstream, it gets stuck in an infinite loop trying to play the file. If I use wget to download the link, I get a small text file which basically points to the same URL. How can I get mplayer to download this stream?

    Read the article

  • Reg Expression htaccess RewriteRule

    - by Rick
    I am new to using regular expressions for rewriting URLs in .htaccess. I need to redirect mysite.com/123 to mysite.com/, IF a cookie named 'ref' is set. My current .htaccess is:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_COOKIE} ref=true [NC]
        RewriteRule ^/([0-9]+)/$ http://www.mysite.com
        </IfModule>

    The goal is that when someone enters the site with mysite.com/111 (some number), they are redirected to the home page of the site after the cookie is set. Be nice... I'm new! ;o)
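    One likely catch, as a sketch (not tested against this site): in a per-directory .htaccess the leading slash is stripped before the pattern is applied, and the request may arrive without a trailing slash, so ^/([0-9]+)/$ never matches. Something along these lines should fire for mysite.com/111 when the cookie is present:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_COOKIE} ref=true [NC]
        # No leading slash in per-directory context; trailing slash optional
        RewriteRule ^([0-9]+)/?$ http://www.mysite.com/ [R=302,L]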

    Read the article

  • One site being on a subdirectory of another. Does google count this against you?

    - by Mick
    I have created two similar websites (relating to monetary systems). So far, one appears to be loved by Google and the other hated. I'm struggling to work out why. This is a mystery to me because both sites were created by me with the same design philosophy, both in pure HTML. Both are packed to the rafters with references to, and information about, their respective subjects. One issue I'm worried may be the cause is to do with the location of the sites. I got a web hosting package from hostmonster.com for the successful one, but the less liked one is just an "add-on" which sits in a subdirectory of the successful one. I wonder if Google somehow detects this and treats it as a less significant website? EDIT: Just to clarify, even though one site is an add-on that sits in a subdirectory of the other, the URL is arranged to look like it is a root. I.e. the unpopular site can be accessed directly with a simple www.myunpopularsite.com name, without specifying any subdirectory. EDIT: Just in case it's important... say the popular site is called pop.com and the unpopular one unpop.com. In the webspace I've purchased, there is a directory called public_html. This is where I put the index.htm and all the other files of my popular site. When I purchased the add-on unpop.com, I made a subdirectory of public_html called unpop. It is within this "public_html\unpop\" that I place the index.htm and all the other files of my unpopular site. Typing www.unpop.com into the address bar of a browser links directly to the contents of "public_html\unpop\" and the user is not aware that this site is sitting in a subdirectory of another site. BUT if you type "www.pop.com/unpop" into the address bar of a browser you DO see the unpopular site.

    Read the article

  • Source of Unexplained Requests in Server Logs

    - by Synetech inc.
    Hi, I am baffled by some entries in my server logs, specifically the web-server logs. Other than normal, expected traffic, I have noticed three types of request errors (e.g. 404, etc.):

    1. Broken links, i.e. links from old, external pages that point to pages that are no longer here
    2. Sequences of probes, i.e. some jerk trying to hack in by scanning my server for a series of exploitable admin-type pages and such
    3. What appear to be completely random requests for things that have never existed on the server or even have anything to do with the server, and that appear by themselves (i.e. not in a series of requests like the probes)

    Could it somehow be a mistyped URL or IP? That's about the only thing that I can think of, but still, how could I get a request on, say, foobar.dyndns.org (12.34.56.78) for something like www.wantsfly.com/prx2.php or /MNG/LIVE or http://ant.dsabuse.com/abc.php?auth=45V456b09m&strPassword=X%5BMTR__CBZ%40VA&nLoginId=43. (Those are a few actual requests from my logs.) Can someone please explain scenario three to me? Thanks.

    Read the article

  • Keeping files that are often changed in sync between desktop and laptop

    - by N.N.
    I'm looking for a way to keep a desktop and a laptop in sync. What I want to keep in sync are some folders, mainly ~/Documents, that are changed often when working on them. If it matters, I can connect to my desktop from anywhere via a URL, but my laptop is harder to access since it might be behind NAT and such. I have been looking at Ubuntu One, but it does not seem to go well with working on documents written in LaTeX. If I work on a .tex file in the Ubuntu One directory and compile it (with pdflatex) every now and then (as often as every 10 seconds when working), it will create several new files, including a PDF, which are uploaded to Ubuntu One. This seems wasteful, since it causes continuous uploads while I am working on .tex files. I also usually keep .tex documents version controlled with git, and then every commit (which can also happen frequently) causes an upload (from changes in ./.git), so it happens continuously when working. Another example is editing images that are saved often. What I think would be best is for the sync to happen every tenth minute, or at the end of every working session (but there might be some other way to handle this?).
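    If syncing at the end of each working session is acceptable, a plain rsync over SSH from the laptop to the desktop is a simple sketch (hostnames and paths are placeholders); it also sidesteps the per-compile uploads, since build artefacts can be excluded explicitly:

        # Push ~/Documents to the desktop, skipping LaTeX build output
        rsync -avz --delete \
              --exclude '*.aux' --exclude '*.log' --exclude '*.pdf' \
              ~/Documents/ user@desktop.example.com:Documents/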

    Read the article
