Search Results

Search found 45245 results on 1810 pages for 'html content extraction'.

  • How to analyze data

    - by Subhash Dike
    We are working on an application that lets users search and read content in a particular domain. We want to add a capability that suggests content to the user based on their usage patterns (analyzing the data for frequency and relevance). Currently, every time a user searches for or reads something, we store that information in a backend database, and we would like to use this data to present additional content to the user. Could someone explain what kind of tools would be required for such a job, ideally with an example? And what is this concept called: data analysis, data mining, business intelligence, or something else?

    Update: Sorry for being too broad. Here is an example SQL schema (just to give an idea; the actual database is a little different, with normalization and so on):

    Table: UserArticles
    Fields: UserName | ArticleId | ArticleTitle | DateVisited | ArticleCategory

    Table: CategoryArticles
    Fields: Category | ArticleTitle | Author | etc.

    One category may have one or more articles, and one user may have read the same article multiple times (in that case we place an additional entry in the UserArticles table).

    Task: use the information available in the UserArticles table to rank categories in the order in which they should be presented to the user automatically in another part of the application. The factors to consider are frequency and recency. This might be possible through simple queries, or it may require specialized tools; either way, the task is as described above. I am not sure which route to take, hence the question. Thoughts?
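
    A minimal SQL sketch of one way to combine frequency and recency (SQL Server syntax; the double weighting for recent visits is an illustrative choice, and @UserName is a placeholder parameter):

        -- Rank this user's categories: every visit scores 1 point,
        -- visits in the last 30 days score 2, ties broken by the
        -- most recent visit.
        SELECT ArticleCategory,
               COUNT(*) AS Visits,
               SUM(CASE WHEN DateVisited >= DATEADD(day, -30, GETDATE())
                        THEN 2 ELSE 1 END) AS Score
        FROM UserArticles
        WHERE UserName = @UserName
        GROUP BY ArticleCategory
        ORDER BY Score DESC, MAX(DateVisited) DESC;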

  • SQL restore from single file db to filegroup

    - by Mauro
    I have a 180 GB MOSS 2007 database whose maintenance (i.e. backups and restores) is becoming a problem. Part of the issue can be resolved by splitting the three content sites into their own site collections; however, that will likely still leave me with a 100 GB database to deal with. While this isn't entirely problematic for SQL Server, it does mean that backups and restores take far too long. My idea is to split each database into 30 GB files and then import the content into them, which should distribute the content across the filegroups, making backup and restore much easier and faster. Is there a way to back up from a single file and restore to a filegroup? If I have the wrong understanding of filegroups, then I'm more than happy to learn about other methods of managing database size.
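
    For what it's worth, a restore has to preserve the filegroup layout the backup was taken with (WITH MOVE only relocates files), so the usual route is to add filegroups to the live database and then move data onto them, e.g. by rebuilding clustered indexes. A hedged T-SQL sketch (database name, file path, and table are placeholders; check vendor support implications before restructuring a SharePoint content database):

        -- Add a new filegroup with a 30 GB file (repeat per filegroup).
        ALTER DATABASE WSS_Content ADD FILEGROUP FG1;
        ALTER DATABASE WSS_Content
            ADD FILE (NAME = WSS_Content_FG1,
                      FILENAME = 'D:\Data\WSS_Content_FG1.ndf',
                      SIZE = 30GB)
            TO FILEGROUP FG1;

        -- Move a table by rebuilding its clustered index on the
        -- new filegroup.
        CREATE CLUSTERED INDEX IX_Example
            ON dbo.ExampleTable (Id)
            WITH (DROP_EXISTING = ON)
            ON FG1;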

  • How to rotate html5 canvas as page background?

    - by Sebastian P.R. Gingter
    Hi, I want to achieve the following: imagine a white sheet of paper on a black desk. Then rotate the paper a little to the left (say, 25 degrees). Now you still have the black desk, with a rotated white box on it. In this rotated white box I want to place non-rotated, normal HTML content like text, tables, divs, etc. I already have a problem at the very first step: rotating a rectangle. This is my code so far:

        <html>
        <head>
        <script>
            function draw() {
                var canvas = document.getElementById("myCanvas");
                var c = canvas.getContext("2d");
                c.fillStyle = '#00';
                c.fillRect(100, 100, 100, 100);
                c.rotate(20);
                c.fillStyle = '#ff0000';
                c.fillRect(150, 150, 10, 10);
            }
        </script>
        </head>
        <body onload="draw()">
            <canvas id="myCanvas" width="500" height="500"></canvas>
        </body>
        </html>

    With this, I see only a normal black box and nothing else. I assume there should be a rotated red box too, but there's nothing. What is the best approach to achieve this and to have it as a (scaling) background for my web page?
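
    A likely culprit: rotate() expects radians, not degrees (rotate(20) is roughly 1146 degrees), and it rotates the whole coordinate system around the canvas origin, so the red square ends up off-canvas. (Incidentally, '#00' is not a valid color; the first square only renders black because black is the default fillStyle.) A minimal corrected sketch of the draw() function:

        function draw() {
            var canvas = document.getElementById("myCanvas");
            var c = canvas.getContext("2d");

            c.fillStyle = '#000';
            c.fillRect(100, 100, 100, 100);   // untransformed black square

            // Move the origin to the intended pivot, then rotate there.
            c.save();
            c.translate(150, 150);
            c.rotate(25 * Math.PI / 180);     // degrees -> radians
            c.fillStyle = '#ff0000';
            c.fillRect(-50, -50, 100, 100);   // centered on the pivot
            c.restore();
        }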

  • How to structure git repositories for project?

    - by littledynamo
    I'm working on a content synchronisation module for Drupal. There is a server module, which sits on one website and exposes content via a web service, and a client module, which sits on a different site and fetches and imports the content at regular intervals. The server is built on Drupal 6; the client is built on Drupal 7. There is going to be a need for a Drupal 7 version of the server, and then for Drupal 8 versions of both the client and the server once it is released next year. I'm fairly new to git and source control, so I was wondering what is the best way to set up the git repositories. Would it be a case of having a separate repository for each instance, i.e.: Drupal 6 server = 1 repository, Drupal 6 client = 1 repository, Drupal 7 server = 1 repository, Drupal 7 client = 1 repository, etc.? Or would it make more sense to have one repository for the server and another for the client, then create branches for each Drupal version? Currently I have two repositories: one for the client and another for the server.
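
    For reference, drupal.org itself keeps one repository per module with a branch per core version (6.x-1.x, 7.x-1.x, and so on), which matches the second option. A sketch of that layout (the repository URL is a placeholder):

        # One repository for the server module, one branch per core version.
        git clone git@example.com:sync-server.git
        cd sync-server
        git branch 6.x-1.x              # Drupal 6 line
        git checkout -b 7.x-1.x         # Drupal 7 line
        # later, when Drupal 8 lands:
        git checkout -b 8.x-1.x master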

  • Exim: redirect all non-existent accounts for local domains to a specific account

    - by tntu
    I want to route all incoming email for local domains to a single account when no account is set up for that user. I would also like each email to be written to its own file in the user's folder. I have a catchall user with the path /home/catchall/, where I have made a mail folder for this, but so far emails either fail to deliver (so my rule did not work) or they are delivered into the single file /etc/mail/catchall. I have been trying to put something together from the Exim configuration documentation, but so far nothing seems to work. http://exim.org/exim-html-current/doc/html/spec_html/ch20.html
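
    A rough sketch of one way to do this in Exim (names and paths are illustrative; the router goes after the routers that handle real local users, and the transport lives in the transports section). Per-message files come from maildir format:

        # Router: anything at a local domain that no earlier router
        # claimed gets redirected to the catchall user.
        catchall_redirect:
          driver = redirect
          domains = +local_domains
          data = catchall@localhost

        # Transport: deliver each message as its own file, maildir-style.
        # Point the local delivery router's "transport" option at this.
        maildir_delivery:
          driver = appendfile
          directory = /home/catchall/mail
          maildir_format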

  • Saving dragged/dropped items' position on postback in ASP.NET [closed]

    - by Deeptechtons
    OK, I have seen many posts on how to serialize the dragged items to get a hash, and they explain how to save it. The question now is: how do I restore the dragged items' positions the next time the user logs in, using the hash value that I got? E.g.:

        <ul class="list">
            <li id="id_1"><div class="item ui-corner-all ui-widget ui-widget-content"></div></li>
            <li id="id_2"><div class="item ui-corner-all ui-widget ui-widget-content"></div></li>
            <li id="id_3"><div class="item ui-corner-all ui-widget ui-widget-content"></div></li>
            <li id="id_4"><div class="item ui-corner-all ui-widget"></div></li>
        </ul>

    On serialize this gives "id[]=1&id[]=2&id[]=3&id[]=4". Now suppose I saved that to a SQL Server database in a single field called SortOrder. How do I get the items back into this order again? The code that makes these items sortable and serializable is below, without which people wouldn't know which library I used:

        <script type="text/javascript">
            $(document).ready(function() {
                $(".list li").css("cursor", "move");
                $(".list").sortable();
            });
        </script>
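
    One way to restore the order: render the saved SortOrder string into the page and re-append the <li> elements in that order before making the list sortable (appending an existing element moves it). A sketch, where the saved variable stands in for whatever your server-side code emits:

        <script type="text/javascript">
            // Hypothetical: written out by the server from the SortOrder field.
            var saved = "id[]=3&id[]=1&id[]=4&id[]=2";

            $(document).ready(function () {
                var list = $(".list");
                $.each(saved.split("&"), function (i, pair) {
                    list.append($("#id_" + pair.split("=")[1]));
                });
                $(".list li").css("cursor", "move");
                $(".list").sortable();
            });
        </script>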

  • First requests are painfully slow

    - by winSharp93
    I am running Redmine under IIS using Zoo. Installation was done using the Web Platform Installer, and the default configuration has not been touched. However, when using the application, the first requests take very long to complete (sometimes more than one minute). During that time, ruby.exe causes some CPU load (about 15%). According to the log files, it's mainly the views that take so long to render:

        Started GET "/redmine/login" for IP at 2012-09-04 09:54:08 +0200
        Processing by AccountController#login as HTML
        Rendered account/login.html.erb within layouts/base (42150.5ms)
        Completed 200 OK in 43508ms (Views: 43008.5ms | ActiveRecord: 0.0ms)
        Rendered account/login.html.erb within layouts/base (42435.1ms)
        Completed 200 OK in 44100ms (Views: 43523.3ms | ActiveRecord: 0.0ms)

    After the initial delay, further request times are totally acceptable. Any ideas on how to speed up the warmup time?
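
    If the delay is mostly the Ruby process spinning up and recompiling views after an idle shutdown, one blunt workaround is a scheduled warm-up request so the process never goes cold. A sketch, assuming curl is available on the server:

        REM Hit the login page every 10 minutes to keep the app warm.
        schtasks /create /tn "RedmineWarmup" /sc minute /mo 10 ^
            /tr "curl -s -o NUL http://localhost/redmine/login"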

  • What should a developer know before building a public web site?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web site address before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also: I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that: which standards? In what circumstances, and why? Provide a link to the standard's specification. This question is community wiki, so please feel free to edit the answer to add links to good articles that will help explain or teach each particular point. To search in only the answers from this question, use the inquestion:this option.

  • Recover a deleted webpage

    - by rc
    Suppose a blog or a nice article was hosted on a website and it got deleted, or worse, the website was taken down. How do you view that web page? I tried searching for the cached version in Google, but it looks like the content was deleted long ago and is not listed in the search results directly. There are links to it from many other sites, but the actual content is still not fully available. Now, can anybody help me see this page? I am actually looking for http://effectize.com/become-coolest-programmer :) Moreover, in addition to bookmarking a favorite link, is it possible to cache the content of the link as well, for later reference in case it gets deleted? EDIT: Looks like a URL can be cached for future reference. Try: http://backupurl.com/

  • Varnish doesn't seem to be caching

    - by Charlie Somerville
    I've set up a Varnish cache mirror to sit in front of a file server, but it seems to be endlessly re-downloading data from the file server. There's about 100 GB of data in total, but so far Varnish has downloaded 800 GB from the file server. I'm using the default VCL file that comes with Varnish, and the response headers for files served by the file server are similar to the following:

        HTTP/1.1 200 OK
        Cache-Control: max-age=290304000, public
        Content-Type: image/jpeg
        Expires: Wed, 29 Dec 2010 21:38:33 GMT
        Server: Microsoft-IIS/7.0
        E-Tag: "8b4723296ab697530768f18b1378b269"
        Content-Disposition: inline; filename=image046.jpg;
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 23 Dec 2010 05:38:33 GMT
        Content-Length: 100592

    I'm starting varnishd with the following options:

        varnish/sbin/varnishd -a 0.0.0.0:80 -f varnish/etc/varnish/default.vcl -s file,varnish/var/lib/varnish/varnish_storage.bin,100G
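
    One hedged guess: the stock vcl_recv passes any request that carries a Cookie header straight to the backend, so analytics cookies alone can defeat caching even when the response itself is cacheable. A Varnish 2.1-style sketch that strips cookies for image requests:

        sub vcl_recv {
            if (req.url ~ "\.(jpg|jpeg|png|gif)$") {
                unset req.http.Cookie;   # make image requests cacheable
            }
        }

        sub vcl_fetch {
            if (req.url ~ "\.(jpg|jpeg|png|gif)$") {
                unset beresp.http.Set-Cookie;
                return (deliver);
            }
        }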

  • Can't get MultiViews to work on Apache 2.2 - negotiation problem

    - by Doe
    Hi, I can't get MultiViews set up properly on my Apache 2.2. When I go to filtered.com/something, I expect it to execute something.pl, but it doesn't; I get a 404 error page. My error log says:

        [Fri Apr 16 13:04:20 2010] [error] [client 78.85.152.94] Negotiation: discovered file(s) matching request: /var/www/html/filtered.net/translate-english (None could be negotiated)., referer: http://filtered.net/

    Would anyone kindly help me get MultiViews working properly on my server?

        ServerAdmin [email protected]
        ServerAlias *.filtered.net
        DocumentRoot /var/www/html/filtered.net
        ServerName filtered.net
        ErrorLog logs/filtered.net-error_log
        CustomLog logs/filtered.net-access_log common
        Options ExecCGI +Indexes +IncludesNoExec +MultiViews +ExecCGI
        AllowOverride None
        Order allow,deny
        Allow from all
        <IfModule mod_dir.c>
            DirectoryIndex index.php index.html index.pl
        </IfModule>
        </Directory>
        </VirtualHost>
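
    Two hedged observations: MultiViews can only negotiate among variants whose media type or handler Apache can determine, so .pl files need a handler; and an Options line must either prefix every keyword with +/- or none of them (the original mixes both, and lists ExecCGI twice, which can behave unpredictably). A sketch of the relevant directives:

        <Directory /var/www/html/filtered.net>
            Options +ExecCGI +Indexes +IncludesNoExec +MultiViews
            AddHandler cgi-script .pl
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>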

  • I want to be a programmer, work in corporate environment, earn well, learn fast and eventually become a great programmer [on hold]

    - by Shin San
    I'll try to keep this simple: I'm 29, have been dabbling with computers for the past 10 years, have had entry-level jobs in tech support for different apps, have been fixing computers for a while, and now want to specialize in something. I'm not a complete stranger to programming, but I haven't gone past if/then/else with anything: a bit of JavaScript, PHP, Python, and currently checking out the SELECT statement in SQL :)) I'm curious about programming, I enjoy it, and I'm thinking of making a living out of it. And while I'm at it, why not earn a bit more than the average Joe? So that's why I'm checking what the best solution, the best learning path, and the most useful languages are, considering: a) how easily/quickly you can find a job by knowing it; b) how much I would be able to earn; c) how fast I can learn it. By reading 10-20 articles online I've come up with an example, but I'm here for some expert advice. Example (rated from the a) and b) point of view): #1 SQL; #2 Java; #3 HTML (please don't start the markup language debate); #4 JavaScript. From these ratings, I'd say a good way to go is to learn HTML/CSS/(JavaScript or PHP) for the web part of apps, some SQL/MySQL for holding data, and loads of Java for the program itself. Please let me know if this is a good idea, and if so, what the order for learning all of the above should be. Otherwise, please let me know a better way and why it would be better. Many thanks for taking the time to read my question. Best wishes to you guys. Edit: if Java + SQL + HTML/JavaScript is the way to go, does the order I learn them in matter? Or can I try to learn them all at once?

  • Java Embedded @ JavaOne Call for Papers

    - by arungupta
    Do you care about the Internet of Things? Interested in sharing your experience at JavaOne about how you are using Java Embedded technology to realize this vision? At Java Embedded @ JavaOne, C-level executives, architects, business leaders, and decision makers from around the globe will come together to learn how Java Embedded technologies and solutions offer compelling value and a clear path forward to business efficiency and agility. The conference will feature dedicated business-focused content from Oracle discussing how Java Embedded delivers a secure, optimized environment ideal for multiple network-based devices, as well as meaningful industry-focused sessions from peers who are already successfully utilizing Java Embedded. Submit your papers here for business-track or technical content related to Embedded Java to be presented at JavaOne. Speakers for accepted sessions will receive a complimentary pass to the event for which their session is submitted. Note: the CFP for the main JavaOne conference is over, speakers have been notified, and the content catalog has been published; this CFP is only for Java Embedded @ JavaOne. Some key dates:

    Jul 8: Call for Papers closes
    Week of Jul 29: Notifications sent
    Oct 3-4, 2012: Conference

    The main conference website is oracle.com/javaone/embedded.

  • Why doesn't Firefox cache my images and CSS

    - by Richard A
    I am using IIS7 and have already set up the following, but when I run Firefox it seems not to cache any of my images, even with "remember history" set:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
            <system.webServer>
                <staticContent>
                    <clientCache cacheControlCustom="public"
                                 cacheControlMode="UseMaxAge"
                                 cacheControlMaxAge="7.00:00:00" />
                </staticContent>
            </system.webServer>
        </configuration>

    However, Firebug still shows Firefox not caching images and CSS:

        Response Headers
        Cache-Control: public, max-age=604800
        Content-Type: text/css
        Content-Encoding: gzip
        Last-Modified: Mon, 27 Jun 2011 03:53:22 GMT
        Accept-Ranges: bytes
        Etag: "507968c27d34cc1:0"
        Vary: Accept-Encoding
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        Date: Mon, 27 Jun 2011 13:06:41 GMT
        Content-Length: 5067

        Request Headers
        Host: www.xx.com
        User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
        Accept: text/css,*/*;q=0.1
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.xx.com/
        Cookie: __utma=62996397.135679654.1309106351.1309159743.1309164158.8; __utmz=62996397.1309106351.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmc=62996397

  • Creating IIS Rewrite Rules

    - by Tom Bell
    I'm having a hard time converting old .htaccess rewrite rules to new IIS ones, so I was wondering if anyone could point me in the right direction. Below are some example URLs I would like rewritten:

        http://example.org.uk/about/                ->  http://example.org.uk/about/about.html
        http://example.org.uk/blog/events/          ->  http://example.org.uk/blog/events.html
        http://example.org.uk/blog/2010/11/foo-bar  ->  http://example.org.uk/blog/2010/11/foo-bar.html

    The directories and file names are generic and could be anything. Any help would be greatly appreciated.
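
    A hedged web.config sketch for the IIS URL Rewrite module; the three rules mirror the examples above and will likely need tightening for real content:

        <rewrite>
          <rules>
            <!-- /about/  ->  /about/about.html -->
            <rule name="SectionIndex" stopProcessing="true">
              <match url="^([^/]+)/$" />
              <action type="Rewrite" url="{R:1}/{R:1}.html" />
            </rule>
            <!-- /blog/events/  ->  /blog/events.html -->
            <rule name="TrailingDirectory" stopProcessing="true">
              <match url="^(.+)/([^/]+)/$" />
              <action type="Rewrite" url="{R:1}/{R:2}.html" />
            </rule>
            <!-- /blog/2010/11/foo-bar  ->  /blog/2010/11/foo-bar.html -->
            <rule name="ExtensionlessPage" stopProcessing="true">
              <match url="^(.+[^/])$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
              </conditions>
              <action type="Rewrite" url="{R:1}.html" />
            </rule>
          </rules>
        </rewrite>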

  • Best practices for setting lm-factor in Squid refresh patterns

    - by Mpentecost
    I am running a Squid (3.1) cache in front of Django. The content of the site does not change very often, so Squid gives our backend much-needed breathing room. Currently, this is the refresh pattern we are using to cache the content:

        refresh_pattern . 60 100% 60

    We basically want to cache everything for at least an hour (and only an hour) before Squid re-validates the content. My question is about the "100%" parameter, which sets the lm-factor. I'm not sure that setting it to 100% does what we want. The assumption was that 100% would ensure that objects stay in the cache for the maximum cache time. Is this an incorrect assumption? What best practices should one follow when setting up a refresh pattern like this?
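
    For reference, a hedged annotation of the fields (Squid 3.1 semantics): lm-factor only comes into play for responses that carry a Last-Modified header but no explicit expiry, and the freshness lifetime it computes is clamped between min and max. With min and max both 60 minutes, every such object is fresh for exactly an hour, so the 100% is effectively a no-op here:

        #                <regex> <min> <lm-factor> <max>   (min/max in minutes)
        refresh_pattern  .       60    100%        60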

  • RewriteRule Works With "Match Everything" Pattern But Not Directory Pattern

    - by kgrote
    I'm trying to redirect newsletter URLs from my local server to an Amazon S3 bucket, so I want to redirect from:

        https://mysite.com/assets/img/newsletter/Jan12_Newsletter.html

    to:

        https://s3.amazonaws.com/mybucket/newsletters/legacy/Jan12_Newsletter.html

    Here's the first part of my rule:

        RewriteEngine On
        RewriteBase /
        # Is it in the newsletters directory?
        RewriteCond %{REQUEST_URI} ^(/assets/img/newsletter/)(.+) [NC]
        # Is it not a 2008-2011 newsletter?
        RewriteCond %{REQUEST_URI} !(.+)(11|10|09|08)_Newsletter.html$ [NC]
        ## -> RewriteRule to S3 here <- ##

    If I use this RewriteRule to point to the new subdirectory on S3, it will NOT redirect:

        RewriteRule ^(/assets/img/newsletter/)(.+) https://s3.amazonaws.com/mybucket/newsletters/legacy/$2 [R=301,L]

    However, if I use a blanket expression to capture the entire file path, it WILL redirect:

        RewriteRule ^(.*)$ https://s3.amazonaws.com/mybucket/newsletters/legacy/$1 [R=301,L]

    Why does it only work with a "match everything" expression and not a more specific one?
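
    The presence of RewriteBase suggests these rules live in an .htaccess file, and in per-directory context mod_rewrite strips the directory prefix, including the leading slash, before matching the pattern. A pattern anchored at ^(/assets/... can therefore never match, while ^(.*)$ matches anything. A sketch of the fix (assuming the rules sit in the document root's .htaccess):

        RewriteCond %{REQUEST_URI} !(.+)(11|10|09|08)_Newsletter\.html$ [NC]
        RewriteRule ^assets/img/newsletter/(.+)$ https://s3.amazonaws.com/mybucket/newsletters/legacy/$1 [R=301,L]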

  • Wordpress blog penalized by Google search - what's wrong?

    - by pawelbrodzinski
    I have a blog (http://blog.brodzinski.com), which is a wordpress.org blog running the popular Thesis theme with almost no other customizations. Some time ago it was penalized by Google search: it simply stopped appearing in search results, even for search terms where it used to be the top result, like my name, Pawel Brodzinski, which isn't anything close to a popular search term. To be exact, the site was penalized on Nov 18. It started popping up in search results again on Dec 23, but only for a few days; since Dec 27 it has been out again. I know the Google guidelines, and I'm not aware of breaking any of them. I submitted a reconsideration request after I noticed the penalty. It was processed, and there was no change whatsoever (no surprise, as it seems the site was penalized again). I checked diagnostics in Webmaster Tools: no malware was detected, and no strange search terms popped up. I read related threads on the Google webmasters forum but found none of the solutions working for me. I posted a thread there (http://www.google.com/support/forum/p/Webmasters/thread?tid=546339f49d4a03bc&hl=en) and the only answer I got was to check for duplicate content. Well, there is some duplicate content published on the web, but that is true for the vast majority of blogs, and it doesn't seem to be a reason for a penalty. Also, before Dec 27 I was able to remove duplicate content from a couple of sites which were republishing my feed, but this didn't change the situation; the site was penalized again. The problem is I have no idea what could be wrong with the website or how to find out. To make matters worse, I'm no webmaster; I just run a wordpress blog, which is supposed to be easy.

  • Is there a way to automate changing filenames in <link>, <script> tags

    - by nepsdotin
    When we use an Expires header for text files like JS and CSS, the contents are cached in the browser; to get new content we need to change the file names in the <link> and <script> tags in the HTML files whenever we make changes. How can we automate this? I may have a bunch of HTML files in multiple folders, and in subdirectories. There would be a text file, filelist.txt:

        OldName              NewName
        oldfile1-ver-1.0.js  oldfile1-ver-2.0.js
        oldfile2-ver-1.0.js  oldfile2-ver-2.0.js
        oldfile3-ver-1.0.js  oldfile3-ver-2.0.js
        oldfile4-ver-1.0.js  oldfile4-ver-2.0.js

    The script should change all occurrences of oldfile1-ver-1.0.js to oldfile1-ver-2.0.js in the HTML and PHP files. I would run this script before I start uploading. Finally, the script could create a list of the files and line numbers where it made an update. The solution can be in Perl/PHP/batch or anything that's nice and elegant.
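
    A minimal Python sketch of that workflow (assuming the filelist.txt layout above, with one header line; run it from the top of the site tree):

        import os

        # Load "OldName NewName" pairs, skipping the header line.
        with open("filelist.txt") as fh:
            pairs = [line.split() for line in fh.readlines()[1:] if line.strip()]

        report = []
        for root, _dirs, files in os.walk("."):
            for name in files:
                if not name.endswith((".html", ".php")):
                    continue
                path = os.path.join(root, name)
                with open(path) as fh:
                    lines = fh.readlines()
                dirty = False
                for i, line in enumerate(lines):
                    for old, new in pairs:
                        if old in line:
                            lines[i] = line = line.replace(old, new)
                            report.append("%s:%d  %s -> %s" % (path, i + 1, old, new))
                            dirty = True
                # Rewrite the file only if something actually changed.
                if dirty:
                    with open(path, "w") as fh:
                        fh.writelines(lines)

        print("\n".join(report))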

  • Tumblr custom domain not redirecting properly

    - by Manic
    I decided to host my blog at Tumblr using their custom domain setup (http://blog.smokingfishgames.com/ instead of http://smokingfishgames.tumblr.com). However, it's been 72 hours and I'm still getting spotty redirection. It works some of the time: I see the page and blog, and all is fine. However, it occasionally just stops working and redirects back to my web host, which is a directory with nothing but a single file called BUGGER.html (which I put there to make sure it was my web host and not some empty Tumblr directory). Clearing the Chrome DNS cache makes the problem go away for a while; after a few minutes, or an hour, or however long, I'll start seeing BUGGER.html again. I clear the cache and, poof, the blog shows up. The curious thing is that when I clear the cache and get BUGGER.html again (which happens occasionally), I can look at my Chrome DNS cache and see:

        assets.tumblr.com            UNSPECIFIED
        blog.smokingfishgames.com    UNSPECIFIED
        www.tumblr.com               UNSPECIFIED

    (IP addresses and expiration times omitted for brevity's sake; if they're important I'm sure I can replicate the issue.) This implies, to me anyway, that my browser is reaching Tumblr but getting bounced back to my web host. Any reason why this would be happening, or is this a normal symptom of DNS propagation? If it is a problem, should I be bothering Tumblr or my host with it, or is this something I can fix myself?
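
    One way to take local caching out of the picture is to query a specific resolver directly and compare answers; if the record still flips between Tumblr and the old host, the zone itself (or its TTLs) is the problem rather than the browser. A sketch (8.8.8.8 is just an example resolver):

        dig @8.8.8.8 blog.smokingfishgames.com +noall +answer
        dig blog.smokingfishgames.com +noall +answer   # via your default resolver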

  • Set nginx.conf to deny all connections except to certain files or directories

    - by Ben
    I am trying to set up nginx so that all connections to my numeric IP are denied, with the exception of a few arbitrary directories and files. So if someone goes to my IP, they are allowed to access the index.php file and the phpmyadmin directory, for example, but should they try to access any other directory, they will be denied. This is my server block from nginx.conf:

        server {
            listen 80;
            server_name localhost;

            location / {
                root html;
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                root html;
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /srv/http/nginx/$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    How would I proceed? Thanks very much!
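
    A hedged sketch of one whitelist approach: exact-match and more specific prefix locations win over the catch-all, so allow the index page and phpmyadmin explicitly and deny the rest (PHP handling inside /phpmyadmin/ is omitted for brevity):

        location = / { root html; index index.php; }
        location = /index.php {
            root html;
            fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /srv/http/nginx/$fastcgi_script_name;
            include fastcgi_params;
        }
        location /phpmyadmin/ { root html; }
        location / { deny all; }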

  • Meta refresh tag not working in (my) firefox?

    - by mplungjan
    Code like on this page does not work in (my) Firefox 3.6, and also not in Fx4 (WinXP SP3), although it works in IE8, Safari 5, Opera 11, Mozilla 1.7, and Chrome 9:

        <meta http-equiv=refresh content="12; URL=meta2.htm">
        <meta http-equiv="refresh" content="1; URL=http://fully_qualified_url.com/page2.html">

    Both are completely ignored. Not that I use such back-button-killing things myself, but a LOT of sites do, possibly including my Linux Apache server, it seems, when it wants to show a 503 error page. If I inspect with Firebug or look at the generated content, I do not see the refresh tag changed in any way, so I am really curious what kind of plugin/addon could be blocking me, which is why I googled (in vain) for a known bug. In about:config I have accessibility.blockautorefresh = false, so that is not it. I ran in safe mode and OH MY GOD, STACKEXCHANGE IS FULL OF ADS, but still no redirect.

  • Server refusing access from every host except itself

    - by mezamashiman
    I have media content on a hosted server that I want to be accessible from another domain. Yet even if I "Allow from all" in the configuration file, all hosts except the server itself fetch the hosting company's generic landing page, which puzzles me. I test it with curl:

        curl -H "Host: anything.com" http://mydomain.com

    and it just shows the hosting company's page. If I do:

        curl -H "Host: mydomain.com" http://mydomain.com

    it shows my content. How do I allow other hosts to access my content? I thought it would work with "Allow" in .htaccess, but it doesn't.
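
    A hedged reading of the symptom: this looks like name-based virtual hosting rather than access control. The server picks a site by the Host header, and an unrecognized name falls through to the default (landing-page) vhost, which Allow/Deny rules never touch. The fix is to register the extra name for your site; an Apache-style sketch (paths are placeholders, and on shared hosting this usually means adding the domain as an alias or parked domain in the control panel):

        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias anything.com
            DocumentRoot /home/user/public_html
        </VirtualHost>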

  • How to make Google recognize language for a multilingual website?

    - by Julien Fouilhé
    A few weeks ago I implemented translation functionality for my company's website. The website is now available in French and English, and I looked on the internet for the best way to do this without losing any ranking and keeping our pages on Google. Here is what I did:

    I set a response header: Content-Language: en and Content-Language: fr.
    My URLs are formatted as http://www.website.com/en/... and http://www.website.com/fr/...
    My html tag is set with a lang attribute: <html lang="en"> and <html lang="fr">.
    There is a <link rel="alternate" hreflang="en" href="EnglishPageUrl"> on French pages and a <link rel="alternate" hreflang="en" href="frenchPageUrl"> on English pages.

    But Google keeps returning some English pages when I search on the French engine, knowing that the website was at first only available in English. Is that normal? Do I just have to wait? It has been almost one month now; I thought it would be okay by now. Thank you.
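
    Worth double-checking the last point: hreflang should state the language of the page being linked to, so the tag on English pages pointing at the French URL should presumably say hreflang="fr" (the version quoted above says "en" for both, which alone could confuse indexing). A hedged sketch of the usual reciprocal pair:

        <!-- On http://www.website.com/en/page : -->
        <link rel="alternate" hreflang="fr" href="http://www.website.com/fr/page">
        <!-- On http://www.website.com/fr/page : -->
        <link rel="alternate" hreflang="en" href="http://www.website.com/en/page">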
