Search Results

Search found 9728 results on 390 pages for 'zee pro'.

  • How to parse JSON data from the web faster [closed]

    - by Kaidul Islam Sazal
    I have a JSON inventory file, inventory.json, on the server, like this:

        [
          {
            "body" : "SUV",
            "color" : { "ext" : "White diamond pearl", "int" : "Taupe" },
            "id" : "276181",
            "make" : "Acura",
            "miles" : 35949,
            "model" : "RDX",
            "pic" : [ { "full" : "http://images1.dealercp.com/90961/000JNBD/001_0292.jpg" } ],
            "power" : { "drive" : "Front wheel drive", "eng" : "2.3L DOHC PGM-FI 16-VALVE", "trans" : "Automatic" },
            "price" : { "net" : 29488 },
            "stock" : "6942",
            "trim" : "AWD 4dr Tech Pkg SUV",
            "vin" : "5J8TB2H53BA000334",
            "year" : 2011
          },
          {
            "body" : "Sedan",
            "color" : { "ext" : "Premium white pearl", "int" : "Taupe" },
            "id" : "275622",
            "make" : "Acura",
            "miles" : 40923,
            "model" : "TSX",
            "pic" : [ { "full" : "http://images1.dealercp.com/90961/000JMC6/001_1765.jpg" } ],
            "power" : { "drive" : "Front wheel drive", "eng" : "2.4L L4 MPI DOHC 16V", "trans" : "Automatic" },
            "price" : { "net" : 22288 },
            "stock" : "6945",
            "trim" : "4dr Sdn I4 Auto Sedan",
            "vin" : "JH4CU2F66AC011933",
            "year" : 2010
          }
        ]

    Those are two entries; there are almost 5,000 like this. I parse the JSON like this:

        var url = "inventory/inventory.json";
        $.getJSON(url, function (data) {
            $.each(data, function (index, item) { // straightforward loop
                if (item.year == 2012) {
                    $('#desc').append(item.make + ' ' + item.model + '<br/>' +
                        item.price.net + '<br/>' + item.pic[0].full);
                }
            });
        });

    This works fine, but the searching and fetching is a little slow: there are already almost 5,000 entries, and the count increases day by day. It is a straightforward, brute-force loop over the whole array. Is there a more time-efficient way to search this data, faster than the straightforward loop?
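    One option, sketched below on the same data, is to pay the scan cost once and index the inventory by year when it loads; every later query is then a constant-time lookup instead of a 5,000-item scan. (The byYear name and the render helper are illustrative, not part of the original code.)

        // Build a year -> items index once, right after the JSON arrives.
        var byYear = {};
        $.getJSON('inventory/inventory.json', function (data) {
            $.each(data, function (i, item) {
                (byYear[item.year] = byYear[item.year] || []).push(item);
            });
            render(2012); // constant-time lookup from here on
        });

        function render(year) {
            $.each(byYear[year] || [], function (i, item) {
                $('#desc').append(item.make + ' ' + item.model + '<br/>' +
                    item.price.net + '<br/>' + item.pic[0].full);
            });
        }

    If the file keeps growing, the bigger win is filtering on the server (for example, a hypothetical inventory.php?year=2012 endpoint) so the browser never downloads records it won't display.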

  • Purchase existing domain and transfer to new registrar

    - by Kiefer
    I am purchasing an existing domain from the owner who has it registered with GoDaddy. I want to transfer the domain to another registrar and of course have it under my name. If they update the registrant info to my name then it will lock down for 60 days. That's no good. If they simply transfer it to my registrar, how will they update the registrant info? I know about escrow services, but I don't feel I need one because I trust the seller and the amount is (relatively) small. Advice? Thanks!

  • Changing the RSS and Dynamic Views layout when using Blogger as a Podcast index

    - by Stuart
    I'm trying to set up a podcast service at present. This is just a 'spare time' task, so I wanted a quick, easy way to do it. To get this working, I've ripped (with owner permission) some YouTube content across to MP3 and hosted it on Azure Blob Storage, posted blog posts with the MP3 content linked on a Blogger website, and registered the RSS feed with iTunes. This all seems to be working OK - http://dotnetmobilepodcast.blogspot.co.uk/ - but a couple of final touches are causing problems.

    RSS: I would like to add iTunes metadata to the RSS feed, but I can't find any way to do this inside the Blogger system. To work around this I've tried using FeedBurner with its StreamCast plugin; however, the output from FeedBurner doesn't seem to be accepted by iTunes - e.g. http://feeds.feedburner.com/MobileAppCSharpPodcasts leads to a very unhelpful "11111" message. Is there any other way I can get this iTunes metadata into the Blogger RSS feed - e.g. an alternative service, or a Yahoo! Pipe?

    Showing the MP3 files in the blog: I'm trying to work out how to automatically display the linked enclosures inside the blog posts, but the Blogger Dynamic Views don't seem to have any way of doing this, and I've found the HTML in those views very difficult to follow. If necessary I can work around this with manual entries in each blog post... but I'd prefer to do it programmatically if I can.
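    For reference, the iTunes-specific metadata lives in the itunes: namespace of the feed, so whatever service ends up generating it would need to emit tags roughly like this (channel and item values here are placeholders, not taken from the feed above):

        <rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
          <channel>
            <title>Mobile App C# Podcasts</title>
            <itunes:author>Author Name</itunes:author>
            <itunes:summary>One-paragraph description of the show.</itunes:summary>
            <itunes:image href="http://example.com/artwork.jpg"/>
            <itunes:category text="Technology"/>
            <itunes:explicit>no</itunes:explicit>
            <item>
              <title>Episode 1</title>
              <enclosure url="http://example.com/episode1.mp3" length="12345678" type="audio/mpeg"/>
              <itunes:duration>42:10</itunes:duration>
            </item>
          </channel>
        </rss>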

  • My First robots.txt

    - by Whitechapel
    I'm creating my first robots.txt and wanted to get a second opinion on it. Basically I have an FTP setup on my board for some special users to transfer files between each other, and I do NOT want that included in the search by the bots. I also want to point to my sitemap, which gets auto-generated by a PHP page (it links to xmlsitemap.php because that generates the sitemap when called). My goal is to allow any search bot to crawl the forums and grab the metadata. Here is what I have; what else should I include, and do I need to fix anything?

        User-agent: *
        Disallow: /admin/
        Disallow: /ali/
        Disallow: /benny/
        Disallow: /cgi-bin/
        Disallow: /ders/
        Disallow: /empire/
        Disallow: /komodo_117/
        Disallow: /xanxan/
        Disallow: /zeroordie/
        Disallow: /tmp/
        Sitemap: http://www.vivalanation.com/forums/xmlsitemap.php

    Edit: I'm not sure how to handle all the users' folders under /public_html/, since the robots.txt will be going in /public_html.

  • Too many access denied errors showing in Google Webmaster Tools every day

    - by user2255733
    I get 18,000 access denied errors showing in Google Webmaster Tools every day! Strangely, they show for URLs with www but not for non-www. Fetch as Google works perfectly for the pages that got that error. Google has started to downgrade my website: impressions have dropped from 35,000 to 18,000. I am using the CloudFlare CDN and .htaccess mod_rewrite. Any help will be extremely appreciated, as I am really losing control.

  • Why are the custom campaign parameters in Google Analytics so long?

    - by Baumr
    Adding several Google Analytics custom campaign parameters can make URLs very long. For example, in Google's own examples:

        http://www.example.com/?utm_campaign=spring&utm_medium=referral&utm_source=exampleblog
        http://www.example.com/?utm_campaign=spring&utm_medium=email&utm_source=newsletter1
        http://www.example.com/?utm_campaign=spring&utm_medium=email&utm_source=newsletter1&utm_content=toplink

    Are there shorter alternatives that GA will pick up?

  • Use a custom domain and point to Tumblr blog

    - by jskye
    My domain mydomain.com is registered with GoDaddy. I wish to host my Tumblr blog on this domain, with Nearly Free Speech hosting. My active nameservers at GoDaddy already point to my authoritative ones at Nearly Free Speech, which is working. However, I'm baffled as to how to get the configuration to point to my Tumblr. My preferences, in order:

    (A) http://mydomain.com hosts the blog, and http://www.mydomain.com redirects to http://mydomain.com.
    (B) http://www.mydomain.com hosts the blog, and http://mydomain.com redirects to http://www.mydomain.com.
    (C) A sub-domain like http://tumblr.mydomain.com hosts the blog, and I guess http://mydomain.com and http://www.mydomain.com both redirect to it.

    I've tried having two aliases, mydomain.com and www.mydomain.com, pointing to my permanent Nearly Free Speech IP at mydomain.nfshost.com, and when I try to add:

    (1) an A record pointing mydomain.com to the IP 66.6.44.4, as per Tumblr's instructions, it tells me I already have the bare domain as an alias, so I can't do that.
    (2) the A record on the www.mydomain.com alias: I can do this whether www.mydomain.com is set as an alias or not. But when I tried this with mydomain.com set as the canonical name, visiting either mydomain.com or www.mydomain.com made the two redirect to each other in a loop until an error was thrown.

    So I was wondering if there is a ninja that could save me some hair-pulling and tell me the correct way to configure A, or else B, or else C.
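    For preference (A), the target state in plain zone-file notation would look something like the sketch below. The apex IP is the one from Tumblr's instructions above; domains.tumblr.com is, as far as I recall, the CNAME target Tumblr documents for custom domains, so treat it as an assumption to verify. Note this points the apex at Tumblr, not at Nearly Free Speech, so it cannot coexist with the bare-domain alias there:

        mydomain.com.      IN A      66.6.44.4
        www.mydomain.com.  IN CNAME  domains.tumblr.com.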

  • Why don't title attributes get indexed by Google?

    - by Sam
    Hi folks, a question: when I search "Ride On" + my site's name, I see it's indexed. But when I search "Green Horse" + my site's name, I don't see my site appearing in the results anywhere!

        <td><a href="#" title="Green Horse Ride">Ride On</a></td>

    Question 1: Does this mean that title="" attributes are not indexed/shown by Google at all?
    Question 2: What is better to use? alt? What are my other alternatives besides title and alt?
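    As far as I know, search engines treat the title attribute as a tooltip hint rather than indexable body text, so the reliable route is to put the phrase in the visible anchor text (or, for images, in alt). A sketch of the difference:

        <!-- visible anchor text: reliably indexed -->
        <td><a href="#">Green Horse Ride</a></td>

        <!-- title attribute only: generally not treated as page text -->
        <td><a href="#" title="Green Horse Ride">Ride On</a></td>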

  • Consolidating multiple domain names

    - by Mike
    I have a client that has three separately hosted copies of their website, each on a separate domain name. The websites are all essentially the same, bar a few discrepancies caused by badly managed updates in the past. I will soon be launching a completely new website for them, at which point all three domain names are to resolve to the same web server. One domain name will become the default domain name that they refer to in all their literature, and the other two will simply be used as catch-alls for old links, bookmarks, and so on. I would like to know what people consider the best route to achieve this. My plan so far:

    1. Get the new site up and running on the new webserver.
    2. Change the relevant A record of the default domain name to point to the new webserver.
    3. Either:
       a) Keep the existing hosting accounts in operation. Create a list of 301 redirects from old page names on the old site to new page names on the new site; or
       b) Configure CNAME records for the non-default domain names, each pointing to the new webserver. Create a list of 301 redirects on the new site that redirect from old page names to new page names.

    If my understanding is correct, 3a will help to maintain whatever search engine rankings the sites already have (I know it's not going to be perfect), while at the same time informing search engines that the old domain names are no longer in use. What's a good approach to take here?
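    Whichever variant wins, the redirect layer itself is small. A sketch for Apache using mod_alias (domain names are placeholders), gathering the old names into one catch-all virtual host:

        <VirtualHost *:80>
            ServerName old-domain-one.example
            ServerAlias www.old-domain-one.example old-domain-two.example www.old-domain-two.example
            # Specific old pages first, so they win over the catch-all...
            Redirect permanent /about-us.html http://default-domain.example/about/
            # ...then everything else lands on the new homepage.
            Redirect permanent / http://default-domain.example/
        </VirtualHost>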

  • WordPress queue like Tumblr?

    - by Michael Hopkins
    Hi. Is there a way to give WordPress the queue functionality that Tumblr has? Tumblr's queue, for those who don't know, is a way to space posts out without assigning specific post dates. For example, a Tumblr queue might be set to post every four hours between 9am and 5pm. Tumblr would drop the front post in the queue at 9am, 1pm and 5pm every day. Posts are added to the queue by clicking "add to queue" instead of "publish." It's quite simple. How can this feature be added to WordPress?
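    WordPress's built-in scheduled posts can emulate this: save each queued post with post_status "future" and a post_date in the next free slot, and WordPress publishes it automatically at that time. A minimal sketch (the slot bookkeeping via options and the 9am/1pm/5pm hours are assumptions for illustration, not a Tumblr-exact clone):

        <?php
        /* Queue a post into the next free 9am/1pm/5pm slot (sketch). */
        function queue_post( $title, $content ) {
            $slots = array( 9, 13, 17 );               // publishing hours
            $now   = current_time( 'timestamp' );
            for ( $day = 0; $day < 365; $day++ ) {
                foreach ( $slots as $hour ) {
                    $when = strtotime( date( 'Y-m-d ', $now + $day * 86400 ) . $hour . ':00' );
                    if ( $when <= $now || get_option( 'queue_slot_' . $when ) ) {
                        continue;                       // in the past, or already taken
                    }
                    update_option( 'queue_slot_' . $when, 1 );
                    return wp_insert_post( array(
                        'post_title'   => $title,
                        'post_content' => $content,
                        'post_status'  => 'future',     // WordPress publishes it at post_date
                        'post_date'    => date( 'Y-m-d H:i:s', $when ),
                    ) );
                }
            }
            return 0; // no free slot found
        }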

  • Synchronise databases between servers via PHP [closed]

    - by Emmanuel
    Hi guys, I need to synchronise two MySQL databases between different servers on a regular basis, via a client-initiated interface. I've been doing it over a remote MySQL connection, adding the IPs of the servers to the whitelist for remote MySQL connections. The problem, however, is that the client has a dynamic IP, so as soon as it changes they can no longer sync. So I'm trying to find an alternative way of synchronising the two databases via some sort of secure PHP script.
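    One sketch of such a script: authenticate with a shared secret instead of an IP address, so the client's address can change freely. The table and column names below are placeholders; the client would POST a JSON body plus an X-Signature header computed the same way.

        <?php
        // sync.php (server side): accept rows signed with a shared secret.
        $secret  = 'long-random-shared-secret';   // same value on both machines, kept out of the web root
        $payload = file_get_contents('php://input');
        $sig     = isset($_SERVER['HTTP_X_SIGNATURE']) ? $_SERVER['HTTP_X_SIGNATURE'] : '';

        if (hash_hmac('sha256', $payload, $secret) !== $sig) {
            header('HTTP/1.1 403 Forbidden');
            exit;
        }

        $rows = json_decode($payload, true);
        $db   = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');
        $stmt = $db->prepare('REPLACE INTO items (id, name, updated_at) VALUES (?, ?, ?)');
        foreach ($rows as $r) {
            $stmt->bind_param('iss', $r['id'], $r['name'], $r['updated_at']);
            $stmt->execute();
        }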

  • Link shortener with advanced reporting?

    - by Qualcuno
    I am searching for a script (preferably in PHP) or an external solution that lets me create a URL shortener with advanced reports. We have been using Google Short Links for a while: it works really well, but it lacks reporting (it only displays a counter with the total number of redirects). Our setup is as follows: "go.mydomain.com" points to the web service, and we can create links such as "go.mydomain.com/product1". What I'm looking for is a similar service (or self-hosted solution) but with advanced reports, so we can track redirects by day, month, etc., distinguish between mobile and desktop users (very important!), and so on.
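    Self-hosted shorteners with stats exist (YOURLS is a common PHP one), but the core is small enough to sketch: resolve the slug, log one row per hit, redirect. The schema and the mobile test below are assumptions; reports then become plain GROUP BY queries over the hits table.

        <?php
        // go.php?s=product1 - resolve a slug, log the hit, redirect.
        $db   = new mysqli('localhost', 'dbuser', 'dbpass', 'shortener');
        $slug = isset($_GET['s']) ? $_GET['s'] : '';

        $stmt = $db->prepare('SELECT target FROM links WHERE slug = ?');
        $stmt->bind_param('s', $slug);
        $stmt->execute();
        $stmt->bind_result($target);
        if (!$stmt->fetch()) {
            header('HTTP/1.1 404 Not Found');
            exit;
        }
        $stmt->close();

        // Crude device split; a real deployment would use a UA library.
        $ua     = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        $mobile = preg_match('/Mobile|Android|iPhone|iPad/i', $ua) ? 1 : 0;

        $log = $db->prepare('INSERT INTO hits (slug, hit_at, is_mobile) VALUES (?, NOW(), ?)');
        $log->bind_param('si', $slug, $mobile);
        $log->execute();

        header('Location: ' . $target, true, 301);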

  • Mail Hosting That Will Allow Outbound Bulk Mail?

    - by user249493
    No, I'm not a spammer! I do volunteer work for a non-profit social services agency. They send out daily emails with several hundred recipients on each message. Their web hosting company has been flagging the email as spam due to the volume, so I'm looking for an email hosting provider that won't do that. (I can separate out the web hosting function; we just need mail hosting right now.) They can't use something like MailChimp, Constant Contact, or Vertical Response, because some of the mail is just inbound email they aggregate and send out, and they don't want the overhead of "rebuilding" it in a "newsletter" service. I think Google Apps for Business might be a good solution, but the pricing is just too high for this under-funded non-profit; I've applied for the non-profit discount but haven't heard back yet. Is there a mail hosting service that might fit their needs? Thanks in advance.

  • Download monitoring for a movie/music portal

    - by VenomVipes
    Our portal is targeted at mobile users. We have music (MP3) and video (3GP) files for download, and I expect 300 parallel downloads. I want a way to control my downloads: kicking/banning an IP or a download, statistics of downloads, bandwidth consumed, and so on. I have root/admin access to my server. My question is: is there a way I can monitor and control the ongoing downloads that visitors are doing from my site?
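    If downloads are routed through a small PHP gateway instead of being served as static files, every transfer passes through code you control, which gives you per-IP logging and banning. A sketch (the ban list, log format, and media directory are assumptions; at 300 parallel streams you would want the web server's X-Sendfile/X-Accel-Redirect mechanism rather than readfile()):

        <?php
        // dl.php?f=song.mp3 - gatekeep downloads of files stored outside the web root.
        $banned = is_file('banned_ips.txt')
            ? file('banned_ips.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES)
            : array();
        $ip = $_SERVER['REMOTE_ADDR'];
        if (in_array($ip, $banned)) {
            header('HTTP/1.1 403 Forbidden');
            exit;
        }

        $file = basename(isset($_GET['f']) ? $_GET['f'] : ''); // basename() blocks ../ tricks
        $path = '/var/media/' . $file;
        if ($file === '' || !is_file($path)) {
            header('HTTP/1.1 404 Not Found');
            exit;
        }

        // One log line per download: time, IP, file, size.
        file_put_contents('/var/log/downloads.log',
            sprintf("%s %s %s %d\n", date('c'), $ip, $file, filesize($path)),
            FILE_APPEND);

        header('Content-Type: application/octet-stream');
        header('Content-Length: ' . filesize($path));
        header('Content-Disposition: attachment; filename="' . $file . '"');
        readfile($path);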

  • HTML5 media loading sometimes suspends or aborts: misconfigured Apache?

    - by Joan Botella
    Recently, some code that had been working fine for months started behaving unexpectedly. The code is just a JavaScript media-loading function that uses jQuery. It's pretty long, but in essence it is like this:

        var $audio = $('<audio>');
        $audio.on('canplaythrough', function (e) {
            $audio[0].play();
        });
        $audio.attr('src', 'song.ogg');

    Basically, the file only loads sometimes, and sometimes stops loading with a suspend or even an abort event. I have uploaded a little testing HTML to http://www.joanbotella.com/tests/loading , where you can see what's happening. You can download the test files from http://www.joanbotella.com/tests/loading/loadingTest.zip for local testing. I have just checked that opening the test index.html file directly in Firefox, and not through my localhost Apache server, makes the audio files perfectly playable. So, I assume, my hosting and I have the Apache server misconfigured for serving media files. My software versions are: Apache 2.2.22-1ubuntu1.7, Mozilla Firefox 31.0, Chromium 36.0.1985.125 and jQuery 1.11.0. Can you help me? Thanks in advance!
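    Two Apache-side settings commonly cause symptoms like this: a missing MIME type for .ogg (browsers may abort a media fetch served as text/plain) and compression interfering with the byte-range behaviour media elements rely on. A sketch of the relevant directives (an assumption about the cause, not a confirmed diagnosis):

        # httpd.conf or .htaccess
        AddType audio/ogg .ogg .oga
        AddType video/ogg .ogv

        # Media is already compressed; letting mod_deflate touch it breaks
        # the Content-Length/range behaviour some browsers depend on.
        SetEnvIfNoCase Request_URI \.(ogg|oga|ogv|mp3)$ no-gzip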

  • I need a recommendation for a CMS application with ECommerce

    - by Griff
    Does anyone have a recommendation for an open source solution for a robust CMS application that has a fully featured ECommerce module? I have been looking into Drupal with Ubercart, but it looks like Ubercart is not fully up to speed with Drupal 7, and the other ECommerce modules don't look as robust. The CMS system should support CMIS as both client and server, and be able to run in a cloud computing environment. The system could be written in any standard web programming language, although Java would be my preference. I'm posting this question here because it seems that all CMS systems provide ECommerce as an afterthought, rather than a core feature.

  • Open source PHP-based secure file download script?

    - by SiddharthP
    Basically I need a self-hosted solution where I, as the admin, can create client areas (which can be simple folders), upload files to them, and secure them with a username/password. A client page would then be automatically generated, from which the client can use the username/password to download the files. It's a relatively simple script, but I'm having a hard time finding an open source solution that accomplishes what I need. Any help would be appreciated.
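    While looking for a full script, note that the folder-plus-password part alone is standard Apache; a PHP admin page would only need to write these two artifacts per client. A sketch (paths and names are placeholders):

        # /var/www/clients/acme/.htaccess
        AuthType Basic
        AuthName "Client file area"
        AuthUserFile /etc/apache2/passwords/acme.htpasswd
        Require valid-user

        # run once per client to create the credentials file:
        #   htpasswd -c /etc/apache2/passwords/acme.htpasswd acmeuser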

  • Google suddenly only indexes HTTPS and not HTTP

    - by spender
    All of a sudden, searches for our site "radiotuna" return the result as an HTTPS link: https://www.google.com/?q=radiotuna#hl=en&safe=off&output=search&sclient=psy-ab&q=radiotuna&oq=radiotuna&gs_l=hp.12...0.0.0.3499.0.0.0.0.0.0.0.0..0.0.les%3B..0.0...1c.LnOvBvgDOBk&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.&fp=177c7ff705652ec3&biw=1366&bih=602 We only use HTTPS for the download of two specific files (these URLs are resources used for the autoupdate functionality of an app we distribute). All other parts of the site should be served over HTTP. We wouldn't like to see any other traffic over HTTPS, nor any of our site links appearing in search engines as HTTPS. I'd like to address this issue. It seems the following solutions are available:

    1. Hand out an HTTPS-specific robots.txt:

        User-agent: *
        Disallow: /

    2. At app level, 301 permanent redirect all requests (except the two above) to HTTP if they come in as HTTPS.

    My concern with the robots method is that if, say (for some reason), Google decided not to index the HTTP pages, disallowing the HTTPS pages might mean Google has nothing left to index, with disastrous consequences for our ranking. This means I'm inclined to go with the 301 redirect. Any thoughts?
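    The redirect can also live in the web server config rather than the app. A sketch for Apache mod_rewrite (the two excluded autoupdate paths are placeholders):

        RewriteEngine On
        # Keep the two autoupdate resources on HTTPS...
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/updates/(manifest\.xml|package\.zip)$
        # ...and permanently send every other HTTPS request back to plain HTTP.
        RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]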

  • Is it possible to make CSS-added text searchable by a browser?

    - by Andrew Stacey
    I run a website that uses CSS pseudo-classes to insert text here and there. One of them inserts the value of a CSS counter (whereupon it would require considerable re-engineering of the system to do this without CSS text injection). The specific CSS rule is:

        .num_defn .theorem_label:after {
            content: " " counter(definition, decimal);
            counter-increment: definition;
        }

    and this converts "Definition" to "Definition 1" (say). However, the injected text is not searchable by the browser. It doesn't see the 1: if I search for "Definition 1" then it doesn't find it, and if I search for "Definition. Whatever the definition text was" then the browser happily highlights the line except for the inserted 1. So if you imagine the bold text as the highlighting, it would look like:

        Definition 1 . Whatever the definition text was

    This is not ideal! People like to refer to definitions by their number and to say "Look at Definition 1 on the page XYZ" (and in contexts where hyperlinks are not available - strange, I know, but it does happen). Thus: Is there any way that I, on the server end, can designate the injected text as "searchable"? If not, is there a simple way at the browser end that this can be enabled?
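    On the server end I don't believe there is: CSS-generated content never becomes part of the document text. At the browser end, though, a few lines of script can convert the counter into a real text node once the page loads, which find-in-page then sees. A sketch (it assumes labels appear in document order, and that the :after rule is dropped so numbers aren't doubled):

        // Replace the CSS counter with real, searchable text.
        document.addEventListener('DOMContentLoaded', function () {
            var labels = document.querySelectorAll('.num_defn .theorem_label');
            for (var i = 0; i < labels.length; i++) {
                labels[i].appendChild(document.createTextNode(' ' + (i + 1)));
            }
        });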

  • How to let Google know about dynamic content?

    - by Yaniv
    I'm looking for the best practice for letting Google know about a vast number of dynamically created pages. Let's say (I mean, dream) that I'm Facebook and I want to let Google index all the users' posts. I see two options:

    1. Sitemap.xml may be the answer, but sitemaps are limited to 50,000 URLs each. I know I can create 500 sitemaps and a sitemap of sitemaps, but those are limited too. 25,000,000 URLs sounds like quite enough at the moment, but it could cause problems in the future; e.g. Stack Overflow already has 3 million posts, so sitemaps are probably not the solution for them.
    2. Creating a page with paging, and links to all the dynamic data. I guess this is what Stack Overflow did by creating this page: http://stackoverflow.com/questions

    So I think option 2 is the answer, but it seems to me that sitemaps might have some added value. What should I do?
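    For scale on option 1: a sitemap index is a small XML file in the standard sitemaps.org format, so generating the 50,000-URL chunks is the only real work (URLs below are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap>
            <loc>http://www.example.com/sitemaps/posts-00001.xml</loc>
            <lastmod>2012-09-01</lastmod>
          </sitemap>
          <sitemap>
            <loc>http://www.example.com/sitemaps/posts-00002.xml</loc>
          </sitemap>
        </sitemapindex>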

  • Will Google crawl a session-based website?

    - by DonShwep
    I have a website split into 3 categories, but, using PHP, it's an all-in-one kind of style. When a user chooses a category on the home page, a session is set; this is then used to set the style and contents of the website. Would Googlebot and other bots still be able to scan my website? If a page is accessed and no session is set, the user is sent back to the home page. I have created special links that set a session but go straight to the contact page. Even that page doesn't seem to be showing up. Any ideas if a sitemap with specially crafted links (to set the session) will help Google?
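    Probably not by itself: crawlers don't keep cookies, so every bot request arrives without a session and gets bounced to the home page. A common fix is to let a URL parameter select the category, making every page reachable through a plain link. A sketch (the parameter and variable names are assumptions):

        <?php
        session_start();

        // A link like /page.php?cat=bikes works for bots and humans alike;
        // the session is just a convenience layered on top.
        if (isset($_GET['cat'])) {
            $_SESSION['cat'] = $_GET['cat'];
        }

        if (!isset($_SESSION['cat'])) {
            header('Location: /');   // no category known: back to the home page
            exit;
        }

        $category = $_SESSION['cat'];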

  • Business not showing up on right-hand side of Google search

    - by Chris
    The business I work for currently has a verified Google+ business page, and in the past this page has shown up as a thumbnail during Google searches. The thumbnail brings up basic information such as our picture, operating hours, etc. However, since verifying the business page, the thumbnail overview of our business does not seem to show up anymore. I have tried Google-searching our business on several computers and it still just brings up the normal search results. Is there a setting I need to activate in order for the thumbnail to appear? Thanks

  • How can I create an SPF record on my 1and1.com hosted domain?

    - by tnorthcutt
    Emails from my domain (hosted at 1and1, and using Google Apps Premier edition) have sporadically been going to recipients' spam folders lately. I did some research, tested, and found out that I do not have an SPF record for my domain. According to this Google Support page, I need to create one. Following the steps on that page is easy until I get to #3, "Create a TXT record containing this text":

        v=spf1 include:_spf.google.com ~all

    I see no way to create a TXT record. [Screenshot of the admin panel omitted.]
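    For reference, on DNS hosts that expose raw records, the SPF entry is a single TXT record at the zone apex; in zone-file notation it would look something like:

        mydomain.com.  IN  TXT  "v=spf1 include:_spf.google.com ~all"

    If the 1and1 panel doesn't offer TXT records on your plan, the usual workaround is to point the domain's nameservers at a DNS provider that does.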

  • Update Google Sitemap for Mobile

    - by dimo414
    I have a series of utilities to generate Google sitemaps for my whole site. These files are massive and slow to build. We want to start telling Google that these pages are mobile-crawlable too, by adding them to mobile sitemaps, but the documentation is unclear on whether I need to specify physically different files for my mobile URLs than for my normal ones. If this is my current sitemap:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
          </url>
        </urlset>

    can I simply change it to:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
            <mobile:mobile/>
          </url>
        </urlset>

    Or do I need to create new files with the additional markup, alongside my existing files?

  • Letting search engines know that different links to identical pages stress different parts of the page

    - by balpha
    When you follow a permalink to a chat message in the Stack Exchange chat, you get a view of the transcript page for the day that contains the particular message. This message is highlighted in yellow, and the page is scrolled to its position. Sometimes - admittedly rarely, but it happens - a web search will result in such a transcript link. Here's a (constructed, obviously) example: a Google search for strange behavior of the \bibliography command site:chat.stackexchange.com gives me a link to a chat message that is obviously unrelated to my query, but the transcript page does indeed contain my search terms, just in a totally different spot. Both links lead to the same content, and Google knows this, since both pages have

        <link rel="canonical" href="/transcript/41/2012/4/9/0-24" />

    in their <head>. The only difference between the two links is which message has the "highlight" CSS class. Is there a way to let Google know that while the links have the same content, they put an emphasis on a different part of the content? Note that the permalinks on the transcript page already have a #12345 hash to "point" to the relevant chat message, but Google appears to drop it.
