Search Results

Search found 56144 results on 2246 pages for 'web search'.


  • Looking for reading material on application architecture with web UI

    - by toong
    I'm looking for articles (or other reading material) on the topic of fat-client applications with a web UI layer. Open-source projects that use this architecture would be very interesting too. Such an application would embed one (or more) browser window(s) (Chromium Embedded Framework, for example). You would need bidirectional communication between your web UI and your domain model/services. I think this allows quick prototyping of the UI, a clean separation between logic and UI, and potentially easier portability across platforms (compared to WinForms, for example). But that is just my view; I was looking for the views of people who have been down that road. An example of an application using a web-UI layer is Light Table. Unfortunately it is not open source (at this point?).
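
    As a rough illustration only (the endpoint name and payload below are invented for the example), the bridge such an application needs can be as small as a local HTTP endpoint that the embedded browser window calls from its JavaScript, with the domain model living in the same Python process:

        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class DomainBridge(BaseHTTPRequestHandler):
            """Local endpoint the embedded web UI can call with fetch()/XHR."""
            def do_GET(self):
                if self.path == "/api/invoice":            # hypothetical domain-service call
                    body = json.dumps({"id": 42, "total": 99.5}).encode()
                    self.send_response(200)
                    self.send_header("Content-Type", "application/json")
                    self.end_headers()
                    self.wfile.write(body)
                else:
                    self.send_error(404)

        # point the embedded browser at http://127.0.0.1:8800/ and call /api/... from the UI
        HTTPServer(("127.0.0.1", 8800), DomainBridge).serve_forever()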


  • How do I encrypt the source code on the webserver?

    - by Ashin k n
    I have a web application developed using Python, HTML, CSS & JavaScript. The customer installs it on one of their own machines and uses it over their LAN; in short, the customer sets up the web server on a machine of their own. Since it's a web application, all the source code is open to the customer in the document root directory of the web server. I want to encrypt the whole source code in the document root directory in such a way that it does not affect the working of the web application. Is there any way to encrypt the Python, HTML, CSS & JavaScript for this purpose?
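
    There is no way to truly encrypt code that the server and the browser must be able to read, but as a hedged sketch of the usual partial mitigation for the Python side, the sources can be shipped as compiled bytecode only (the directory name below is hypothetical; the HTML, CSS and JavaScript sent to the browser can at best be minified, not hidden):

        import compileall

        # compile every .py under the document root; legacy=True writes the .pyc
        # files next to the sources instead of into __pycache__/
        compileall.compile_dir("webapp", force=True, legacy=True)
        # after confirming the application still runs from the .pyc files alone,
        # the original .py files could be removed from the customer's machine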


  • Why are Facebook profiles Google-searchable?

    - by Jose
    Facebook has around 1B user profiles. They can be found by searching in Google. However, I don't think these profiles are linked from anywhere, so how could Google discover them? As far as I know, sitemaps are not enough for that (http://webmasters.stackexchange.com/a/5151), as all URLs should be crawlable anyway. I ask the question as I also have a site with user profiles and would like to make them discoverable.


  • Does Google treat AWS IP addresses as related?

    - by ElHaix
    We are hosting several websites on one of our servers and are wondering whether they have somehow been penalized because they are on the same subnet. We are not inter-linking between the websites. However, in an attempt to have everything hosted in AWS, we will have some sites that we do want to be interlinked. If those sites resided on the same subnet, this could be bad. However, with AWS we can allocate multiple Elastic IP addresses that do reside on different subnets. How does Google deal with this?


  • Narrowing down my large keyword list for new PPC campaign

    - by gijoemike
    If I have a list of 100 keywords that are candidates for a PPC campaign (my list is actually 1000+), what is the best approach to narrowing it down to the top 5-10 keywords I should start with? I'm also wondering whether my top chosen keywords for the PPC campaign should also be my main keywords for on-site SEO and organic traffic. I also have another question on this site asking: how does one estimate where a competitor is getting most of their traffic from? Thanks. The website isn't created yet, but will be up in January.
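
    As a rough way to do the triage (a sketch only; the column names and the scoring formula are assumptions, not a standard), a keyword-tool export can be scored and sorted in a few lines:

        import csv

        # keywords.csv is assumed to have columns: keyword, monthly_searches, competition, max_cpc
        rows = list(csv.DictReader(open("keywords.csv")))
        for r in rows:
            volume = float(r["monthly_searches"])
            competition = float(r["competition"])             # 0..1, from the keyword tool
            cpc = float(r["max_cpc"]) or 1.0
            r["score"] = volume * (1.0 - competition) / cpc   # crude "traffic per dollar" proxy
        for r in sorted(rows, key=lambda r: r["score"], reverse=True)[:10]:
            print(r["keyword"], round(r["score"], 1))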


  • How does the GPL work in regards to languages like Dart which compile to other languages?

    - by Peter-W
    Google's Dart language is not supported by any web browsers other than a special build of Chromium known as Dartium. To use Dart for production code you need to run it through a Dart-to-JavaScript compiler/translator and then use the outputted JavaScript in your web application. Because JavaScript is an interpreted language, everyone who receives the "binary" (i.e., the .js file) has also received the source code. Now, the GNU General Public License v3.0 states that: "The “source code” for a work means the preferred form of the work for making modifications to it." This would imply that the original Dart code, in addition to the JavaScript code, must also be provided to the end user. Does this mean that any web applications written in Dart must also provide the original Dart code to all visitors of their website, even though a copy of the source code has already been provided in a human readable/writable/modifiable form?


  • My blog not even ranking for exact title match [on hold]

    - by Akshay Hallur
    I have original, in-depth blog posts related to blogging and SEO. This domain has been dropped (expired) twice before my acquisition; I am the 3rd owner of the domain and have held it for 143 days. Blog posts are not ranking even for exact titles; Google+ or LinkedIn shares show up instead of my content. Some blog posts are not even indexed. I am getting hardly 7 organic visits a day. Example 1: http://www.infoflame.com/offer-pdf-of-blog-posts-for-likes-and-shares/ (title: "Offer Readers PDF of Blog Posts for Their Likes and Shares") is not indexed at all. Example 2: http://www.infoflame.com/anchor-text-for-seo/ is indexed but not coming up for the exact title. Suspect: a dropped domain, probably not used for spam (the Wayback Machine shows 3 captures since 2004 across the 2 drops; I don't know whether there was email spam), but there are no manual actions in WMT, so no reconsideration request. What's the reason for this? Should I wait? How can I tell Google that ownership has changed and the domain is now spam-free? Or should I de-index it and start a new blog? Thank you for any advice.


  • Reset / Remove - Google Keywords

    - by Herr Kaleun
    Summary: my site is ranking for filthy keywords and I would like to remove them from Google's rankings/keywords. Background: my server was hacked using the timthumb exploit/security vulnerability; apparently I was the last person on earth to read the news about the exploit, several months after it appeared. Anyway, the "hacker" was friendly enough to modify the index.php file in such a fashion that it generated random sexually oriented keywords whenever the site was fetched as Googlebot. So if you fetched it as Googlebot (and when it got indexed), you would get randomly generated keywords like: sex videos teenager teen sex adult sex preteen A LINK TO A RANDOM CONTENT OF MY WEBPAGE anime sex videos - a rough list, something similar to that, about 180-200 per page. I discovered it far too late, so Google had me indexed for the word "sex" and roughly 2000 other adult-oriented keywords. I removed all the content, took the site down, replaced the index.php with a static HTML page, and added an "ERROR 410" title to the website to signal that the content is no longer here and has been removed permanently. I also applied for a manual review of my website about 1.5 months ago, but the keywords are still there, and, very strangely, some of the keyword rankings actually "improve" over time. Here are some screenshots from Webmaster Tools: Question: how can I remove these filthy keywords and re-rank my website as a "normal" website as quickly as possible? I want to "REMOVE" the keywords if possible. Please help me or point me in a direction. Thank you
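
    One detail worth double-checking (sketch below, with a placeholder URL): a static page whose title says "ERROR 410" is not the same as an HTTP 410 response, and Google only treats a URL as permanently gone when the status code itself is 410 (or 404):

        import urllib.request, urllib.error

        try:
            resp = urllib.request.urlopen("http://example.com/some-removed-page")
            print("status", resp.status)     # 200 here means crawlers still see a live page
        except urllib.error.HTTPError as err:
            print("status", err.code)        # 410 (or 404) is what gets the URL dropped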


  • What are the most common AI systems implemented in Tower Defense Games

    - by the_Dan
    I'm currently in the middle of researching the various types of AI techniques used in tower-defense games. I would appreciate help understanding the different types of techniques and their associated advantages. Using Google, I have already found several techniques: random map traversal, and path finding (e.g. cost-based traversal algorithms such as A*). I have already found a great answer to this type of question at the link below, but I feel that that answer is tailored to FPS games. If anyone could add to it and make it specific to tower-defense games, I would be truly grateful. How is AI most commonly implemented in popular games? Examples of such games would be: Radiant Defense, Plants vs. Zombies (not truly intelligent, but there must be some AI system in use, right?), and Fieldrunners. Edit: After further research I found an interesting book that may be useful: http://www.amazon.com/dp/0123747317/?tag=stackoverfl08-20
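
    For reference, a minimal sketch of the cost-based path finding (A*) mentioned above, on a small 2D grid; creep movement in most tower-defense games is essentially this plus re-planning whenever a newly placed tower changes the map:

        import heapq

        def a_star(grid, start, goal):
            """A* on a grid of 0 = walkable / 1 = blocked; returns a list of cells or None."""
            def h(a, b):                              # Manhattan-distance heuristic
                return abs(a[0] - b[0]) + abs(a[1] - b[1])
            open_set = [(h(start, goal), 0, start, None)]
            came_from, best_g = {}, {start: 0}
            while open_set:
                _, g, cell, parent = heapq.heappop(open_set)
                if cell in came_from:                 # already expanded via a cheaper route
                    continue
                came_from[cell] = parent
                if cell == goal:                      # walk the parent links back to start
                    path = []
                    while cell is not None:
                        path.append(cell)
                        cell = came_from[cell]
                    return path[::-1]
                r, c = cell
                for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    nr, nc = nxt
                    if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                        ng = g + 1
                        if ng < best_g.get(nxt, float("inf")):
                            best_g[nxt] = ng
                            heapq.heappush(open_set, (ng + h(nxt, goal), ng, nxt, cell))
            return None

        grid = [[0, 0, 0],
                [1, 1, 0],
                [0, 0, 0]]
        print(a_star(grid, (0, 0), (2, 0)))   # [(0,0), (0,1), (0,2), (1,2), (2,2), (2,1), (2,0)]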


  • Robots.txt Disallow command [on hold]

    - by Saahil Sinha
    How do I disallow, through robots.txt, folders that are being crawled because of a wrong URL structure and therefore cause duplicate-page errors? The URL being crawled incorrectly by Google, leading to the duplicate-page error, is www.abc.com/forum/index.php?option=com_forum. The actual correct page, however, is www.abc.com/index.php?option=com_forum. Is excluding the wrong URLs through robots.txt the correct approach? To exclude www.abc.com/forum/index.php?option=com_forum, the directive would be: Disallow: /forum/ Will it not also block the legitimate 'Forum' component folder of the site?
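
    A quick way to sanity-check that rule before relying on it (a sketch using Python's standard-library robots.txt parser, with the URLs from the question):

        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser()
        rp.parse(["User-agent: *", "Disallow: /forum/"])
        print(rp.can_fetch("*", "http://www.abc.com/forum/index.php?option=com_forum"))  # False: blocked
        print(rp.can_fetch("*", "http://www.abc.com/index.php?option=com_forum"))        # True: still crawlable

    Because Disallow matches on the URL path prefix /forum/, it does not affect pages whose paths do not start with /forum/.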


  • How to remove an old robots.txt from Google, as the old file blocks the whole site

    - by KnowledgeSeeker
    I have a website which still shows the old robots.txt in Google Webmaster Tools:

        User-agent: *
        Disallow: /

    which blocks Googlebot entirely. I removed the old file and uploaded a new robots.txt yesterday that allows almost full access, but Webmaster Tools still shows me the old version. The contents of the latest copy are:

        User-agent: *
        Disallow: /flipbook/
        Disallow: /SliderImage/
        Disallow: /UserControls/
        Disallow: /Scripts/
        Disallow: /PDF/
        Disallow: /dropdown/

    I submitted a request to remove this file using Google Webmaster Tools, but my request was denied. I would appreciate it if someone could tell me how I can clear it from Google's cache and make Google read the latest version of the robots.txt file.
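
    Two things are worth verifying here (sketch with a placeholder domain): first, that the server is really sending the new file rather than an old copy cached by the server or a CDN; second, Google itself typically keeps a cached robots.txt for up to about a day before re-fetching it, so a short delay is normal.

        import urllib.request

        # print exactly what the web server is serving for robots.txt right now
        with urllib.request.urlopen("http://www.example.com/robots.txt") as resp:
            print(resp.read().decode())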


  • Should I ditch AJAX in client side web development when I've got a web-socket open?

    - by jt0dd
    I was thinking that maybe I should forget AJAX (HTTP) requests when I've got a web-socket open between client and server, but I decided I should ask here to check if this could be a bad practice for some reason that I'm not thinking of. Once the socket is open, there's less syntax (often meaning simpler error handling) involved in passing information between client and server with Socket.io (just one example of a web-socket). Is there some obvious situation where a web-socket (Socket.io for example) isn't going to be capable of handling a functionality that an AJAX request could do easily?
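
    For what it's worth, the request/response exchanges AJAX is used for can be layered on top of a socket easily enough; here is a minimal server-side sketch (using the third-party Python websockets package rather than Socket.io, with made-up message names):

        import asyncio, json
        import websockets

        async def handler(ws):
            async for raw in ws:                      # every incoming message is treated as a "request"
                msg = json.loads(raw)
                if msg.get("type") == "get_user":     # the same job an AJAX GET would normally do
                    await ws.send(json.dumps({"type": "user", "id": msg["id"], "name": "Ada"}))

        async def main():
            async with websockets.serve(handler, "localhost", 8765):
                await asyncio.Future()                # run forever

        asyncio.run(main())

    Where plain HTTP requests still tend to win is caching, simpler retry/error semantics, and endpoints that only ever need a single request and response.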


  • How to Avoid Duplicate Content in Wordpress Ecommerce Store

    - by Bhanuprakash Moturu
    Hi, I run a WordPress eCommerce store powered by WooCommerce. I have a large inventory of products, and most of the product description is the same for all of them, yet it is mandatory to include it, which creates a lot of duplicate content on the site. Each category has 6 products. I have thought of two solutions; can you suggest which one is good? (1) noindex, follow the product pages and point them to the category page using the canonical tag; (2) index, nofollow the product pages and point them to the category page using the canonical tag. Which is the better solution, and is it good practice to use the canonical tag to point to the category page?


  • How to register properly to the most famous SEOs? [closed]

    - by Olivier Pons
    I know it may have been asked many times, but here's my question: I'm about to launch my website, which I'm more than proud of (I'll talk about its capabilities on my blog). Anyway, I want it to be registered with all the major search engines and to be fetched often, because it may grow quickly. I know that a lot of people may already have asked this question, but nevertheless I didn't find anything relevant. I just want to know where I should register when I release a website so that the major search engines pick it up. Maybe this is a wiki question, but I didn't find anything helpful on the subject. Any advice welcome.


  • Webmaster Tools: root and subdirectories?

    - by nick
    We have all our international sites on our .com domain like this: site.com/uk, site.com/us, etc. When creating the sites in Webmaster Tools I've created different sites and submitted sitemaps for each directory so that we can appropriately geotarget the site. Is it also recommended to add the root .com with its geotargeting set to international? If so, should I also add all the separate sitemaps (like the /us/sitemap.xml) even though they have been added to the directory-level sites?


  • Creating Google sitemap.xml, is it okay for the images to be wrapped in url tags?

    - by AzizAG
    I'm using a tool to generate the sitemap.xml file for me. It crawled my website and picked up the pages and all images, but when exporting I reviewed the XML (to make sure nothing was wrong) and noticed that the images on my website are wrapped in url tags (I think they should be in image tags). See this: <url><loc>http://mywebsite.com/images/12.jpg</loc><lastmod>2012-05-23T13:39:02+00:00</lastmod><changefreq>weekly</changefreq><priority>0.50</priority></url> Shouldn't it be wrapped in an image tag (just like videos are wrapped in a video tag)? Thanks.
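
    For comparison, Google's image sitemap extension nests the image under the page that displays it instead of giving the .jpg its own <url> entry; a rough sketch of the intended shape, built with plain string formatting (the page URL is hypothetical, the image URL is the one from the question):

        page = "http://mywebsite.com/gallery.html"     # hypothetical page that shows the image
        image = "http://mywebsite.com/images/12.jpg"
        entry = (
            "<url>\n"
            f"  <loc>{page}</loc>\n"
            "  <image:image>\n"
            f"    <image:loc>{image}</image:loc>\n"
            "  </image:image>\n"
            "</url>"
        )
        print(entry)
        # the <urlset> root element also needs
        # xmlns:image="http://www.google.com/schemas/sitemap-image/1.1"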


  • How long does it take for Google to re-index pages or update the link titles?

    - by ElHaix
    On one of our classified sites, when doing site:[mysite.com] in Google, the link text is simply [product name] - [mysite.com], whereas it should read [product name] classifieds for sale in... I suspect that the sitemap may have been submitted when we just had [product name], and that the page titles were updated later. However, it has been a couple of weeks since I confirmed the longer page titles, and still they appear shortened in organic results. How can I get this looking right in Google's organic results?


  • Linear Search in Python? [closed]

    - by POTUS
    def find_interval(mesh,x):
        '''This function finds the interval containing x according to the following rules,
        mesh is an ordered list with n numbers
        return 0 if x < mesh[0]
        return n if mesh[n-1] < x
        return k if mesh[k-1] <= x < mesh[k]
        return n-1 if mesh[n-2] <= x <= mesh[n-1]
        This function does a Linear search. 08/29/2012
        '''
        for n in range(len(mesh)):
            for k in range(len(mesh)):
                if x == mesh[n]:
                    print "Found x at index:"
                    return n
                elif x < mesh[n]:
                    return 0
                elif mesh[n-1] < x:
                    return n
                elif mesh[n-2] <= x <= mesh[n-1]:
                    return n-1
                elif mesh[k-1] <= x < mesh[k]:
                    return k

    mesh = [0, 0.1, 0.25, 0.5, 0.6, 0.75, 0.9, 1]
    print mesh
    print find_interval(mesh, -1)
    print find_interval(mesh, 0)
    print find_interval(mesh, 0.1)
    print find_interval(mesh, 0.8)
    print find_interval(mesh, 0.9)
    print find_interval(mesh, 1)
    print find_interval(mesh, 1.01)

    Output:

    [0, 0.100000000000000, 0.250000000000000, 0.500000000000000, 0.600000000000000, 0.750000000000000, 0.900000000000000, 1]
    0
    Found x at index:
    0
    2
    6
    -1
    -1
    0

    I don't think the output is correct. Can anyone help me fix it? Thanks.
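
    The nested loops above return before most of the docstring's rules can ever be checked, which is why the reported output is wrong (-1 is not a valid interval index under those rules). A corrected sketch of what the docstring asks for, as a single linear pass (written for Python 3):

        def find_interval(mesh, x):
            """Linear search for the interval containing x, following the rules
            in the original docstring (mesh is an ordered list of n numbers)."""
            n = len(mesh)
            if x < mesh[0]:
                return 0
            if mesh[n - 1] < x:
                return n
            for k in range(1, n):
                if mesh[k - 1] <= x < mesh[k]:
                    return k
            return n - 1                       # only reached when x == mesh[n-1]

        mesh = [0, 0.1, 0.25, 0.5, 0.6, 0.75, 0.9, 1]
        for x in (-1, 0, 0.1, 0.8, 0.9, 1, 1.01):
            print(x, find_interval(mesh, x))   # 0, 1, 2, 6, 7, 7, 8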


  • How can I find files quicker than find or locate?

    - by Chaitanya
    I have been using the find command to find files on my 1 TB hard disk, and it takes very long. Then I used locate, which proved to be faster thanks to the index regularly updated by updatedb. But the limitation of locate is that I cannot find files by size or by modified/created time. Can you suggest ideas on how to find files faster, or, failing that, how to pipe the output of the locate command so that all the other information like size, time, etc. can be displayed or redirected to a file?
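
    One way to keep the speed of locate and still filter on size and modification time is to run the stat calls only on locate's candidate list; a rough sketch (the search pattern and the size/age thresholds are arbitrary examples):

        import os, subprocess, time

        # ask locate for candidate paths (fast, from the updatedb index), then stat only those
        out = subprocess.run(["locate", "log"], capture_output=True, text=True).stdout
        week_ago = time.time() - 7 * 86400
        for path in out.splitlines():
            try:
                st = os.stat(path)
            except OSError:
                continue                               # file deleted since the last updatedb run
            if st.st_size > 100 * 1024 * 1024 and st.st_mtime > week_ago:
                print(path, st.st_size, time.ctime(st.st_mtime))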


  • Free ASP.NET, Ajax, and IIS Web Camps

    My colleague James Senior (@jsenior) has organized some new Microsoft Web Camps. These are free, two-day events that allow you to learn and build on the Microsoft Web Platform. At camp, you will hear from Microsoft experts on the latest components of the platform, including ASP.NET Web Forms, ASP.NET MVC, ASP.NET Ajax Library, Entity Framework, IIS and much more. Camps also provide the opportunity to get hands-on with labs and get creative by building apps in teams. All this with Microsoft...


  • Assign subdomains to separate ports on web server

    - by Michael Frank
    I have set up an Abyss web server as a little experiment, and I want to know if it is possible to assign subdomains to different ports on the machine the web server is running on. I have a couple of web UIs that I'd like to assign subdomains to: 192.168.1.1:8000 becomes example.com/webui1/ and 192.168.1.1:8001 becomes example.com/webui2/. The web UIs are currently available by accessing their ports directly, e.g. example.com:8000. I have tried using a reverse proxy, but it seems that this is only usable on one internal IP at a time. What other options do I have? The suggested answer is good, but my current setup doesn't meet its requirements: Abyss Web Server X2 is required to use Virtual Hosts with Abyss.


  • PHP and performance

    - by Naif
    I always hear that PHP is for medium and small websites, whereas .NET and Java are for enterprise applications. My question is about PHP: why is PHP not a good option for enterprise web applications? Is it because, if the web application becomes bigger, PHP will be slower since it is an interpreted language? I know that the corporate world will choose .NET or J2EE because of the integration with their products and because of back-end services, etc. However, if we just have PHP for building sites and web applications, how can we make it perform well on big sites? In short, is there a relationship between the performance of PHP and the size of the website? What are the factors that make PHP an inappropriate option for big sites?


  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It’s been about two months since we finished the reverse proxy migration. It appears that everything is technically working as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated on Google Analytics. While Google has been indexing the new URL locations, they’re ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren’t showing meta titles and descriptions due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs, and is considering the new URLs to be duplicate content, since it’s being told not to crawl the subdomain and therefore can’t see the rel canonicals we have in place. To resolve this, we’ve updated the subdomain’s robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate content issues. In the meantime, we were wondering if anyone would have any other ideas. We are very concerned that we’ll be losing valuable traffic, as we’re entering our on-season at the moment.


  • Passing Parameters Between Web-Services and JSF Pages

    - by shay.shmeltzer
    This is another quick demo that shows a common scenario combining several demos I did in the past. The scenario: we have two web services; one returns a list of objects, and the other allows us to update an object. We want to build a page flow where the first page shows the list of objects and lets us select one, and the next page lets us edit that instance and call the second web service to update our data source. The demo shows how to select a row and save the object value in a pageFlowScope (using setPropertyListener), and how to create a page that lets me modify the value of the pageFlowScope object and pass the object as a parameter to the second web service. Check it out here:


  • Blog not even ranking for exact title match, after domain has been dropped twice [on hold]

    - by Akshay Hallur
    Consider a blog related to blogging and SEO. The domain has been dropped (expired) twice before acquisition. The current owner is the 3rd owner of the domain and has held it for 5 months. Blog posts are not ranking, even for exact titles; Google+ or other shares show up instead of the content. Some blog posts are not even indexed. Let us say that it gets around 7 organic visits a day. Suspect: a dropped domain, probably not used for spam (the Wayback Machine shows 3 captures since 2004 across the 2 drops; it is unknown whether there was email spam), but there are no manual actions in WMT, so no reconsideration request. What could be the reason for this? How can Google be told that ownership has changed and the domain is now spam-free? Would this domain be salvageable, or does this only change after relocating to another domain?

