Search Results

Search found 20088 results on 804 pages for 'binary search trees'.


  • Should data structures be integrated into the language (as in Python) or be provided in the standard library (as in Java)?

    - by Anto
    In Python, and most likely in many other programming languages, common data structures are an integrated part of the core language with their own dedicated syntax. Setting LISP's integrated list syntax aside, I can't think of any other language I know that provides some kind of data structure above the array as an integrated part of its syntax, though all of them (except C, I guess) seem to provide them in the standard library. From a language design perspective, what are your opinions on having specific syntax for data structures in the core language? Is it a good idea, and does the purpose of the language (etc.) change how good a choice this is? Edit: I'm sorry for (apparently) causing some confusion about which data structures I mean. I mean the basic and commonly used ones, but still not the most basic ones. This excludes trees (too complex, uncommon), stacks (too seldom used), and arrays (too simple), but includes e.g. sets, lists and hashmaps.
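
    As an illustration of the kind of dedicated syntax the question refers to, here is a small Python sketch contrasting the built-in literals with the equivalent constructor calls (roughly what a library-only approach looks like); the names are made up for the example:

        # Literal syntax built into the language:
        numbers = [3, 1, 4, 1, 5]              # list
        unique = {3, 1, 4}                     # set
        ages = {"alice": 30, "bob": 25}        # hashmap / dict

        # The same structures via ordinary constructor calls, which is
        # roughly how a standard-library-only language exposes them:
        numbers = list((3, 1, 4, 1, 5))
        unique = set((3, 1, 4))
        ages = dict(alice=30, bob=25)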

    Read the article

  • Interconnect nodes in a Java distributed infrastructure for tweet processing

    - by David Moreno García
    I'm working on a new version of an old project that I used to download and process user statuses from Twitter. The main problem with that project was its infrastructure. I used multiple instances of a Java application (trackers) to download from Twitter given a specific task (basically terms to search for), connected to a central node (a web application) that had to process all tweets once per day and generate a new task for each tracker every 15 minutes. The central node also had to monitor all trackers and enable/disable them on user request. This, as I said, was too slow because I had multiple bottlenecks, so in this new version I want to improve the infrastructure and isolate each piece of functionality in a specific node. I also need a good notification system to receive notifications from any node. So, in the next diagram I show the components that I'll need in this new version: As you can see, there are more nodes. Here are some notes about them: Dashboard: Controls tracker statuses and sends a single task to each of them (on user request). The trackers will use this task until it is replaced with a new one (when needed, not every 15 minutes like before). Search engine: I need to store all the tweets. They are first stored in a local database for each tracker, but after that I'm thinking of using something like Elasticsearch to be able to do fast searches. Tweet processor: Just an isolated component with its own database (maybe something like the search engine, to have fast access to info generated by the module). In the future more could be added. Application UI: A web application with a database shared with the Dashboard (mainly to store user information and preferences). Indeed, both could be merged into a single web application. The main difference from the previous version of the project is that now they will be isolated and will only show information and send requests; I will not do any heavy work in them (like processing tweets as I did before). So, having these components, my main headache is how to structure everything so that I don't have to rewrite a lot of code every time I need to access any new data. Another headache is how to interconnect the nodes. I could use sockets but that is a pain in the ass. Maybe a REST layer? And finally, if all the nodes are isolated, how could I generate notifications for each user whose info is only in the database used by the Application UI? I'm programming this using Java and Spring (at least I used them in the last version), but I have no problem changing the language if I can take advantage of a tool/library/engine to make my life easier and have a better platform. Any comment will be appreciated.

    Read the article

  • BizTalk 2009 - Service Instances: Last 100

    - by StuartBrierley
    Having previously talked about the lack of the traditional HAT in BizTalk 2009, the question then becomes how do you replicate some of the functionality that was previously relied on? I have already covered the Last 100 Messages Received, the Last 100 Messages Sent, and the Last 50 Suspended Messages queries, so what about service instances? The BizTalk 2009 Group Hub allows you to search for suspended service instances and also running service instances, but not the two together. In BizTalk 2004 we had a query in HAT to return the last 100 service instances.  Let's create a direct replacement in the BizTalk 2009 Hatless environment. Basically we are creating a query to search for the last one hundred tracked service instances:

    Read the article

  • Picking the Right Keywords For SEO Success

    It is important to realize that picking the right keywords is crucial to your SEO success. Always remember that for search engine optimization, your end goal is to rank high in the search engines for the keywords most relevant and valuable to your web site. For example, if you run a pet dog business, you naturally want to rank high for keywords such as 'pet dogs', 'dogs for sale', and 'pet dogs for sale'. Better yet, you can narrow down the keywords to target very specific niches such as 'chihuahua pet dogs', 'pet dogs for sale in Brooklyn', etc.

    Read the article

  • Copy only folders, not files?

    - by Shannon
    Is there a way to copy an entire directory, but only the folders? I have a corrupt file somewhere in my directory which is causing my hard disks to fail. So instead of copying the corrupt file to another hard disk, I want to copy just the folders, because I have scripts that search for hundreds of folders and I don't want to have to create them all manually. I did search the cp manual, but couldn't see anything (I may have missed it). Say I have this structure on my failed HDD: dir1 files dir2 files files dir4 dir3 files All I want is the directory structure, not any files at all. So I'd end up with this on the new HDD: dir1 dir2 dir4 dir3 Hoping someone knows some tricks!
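
    One way to do this, as a minimal Python sketch (the source and destination mount points are assumptions for the example): walk the failing disk and recreate only the directories on the new one. rsync can do the same thing with --include='*/' --exclude='*'.

        import os

        SRC = "/mnt/failed_disk"   # assumed mount point of the failing drive
        DST = "/mnt/new_disk"      # assumed mount point of the new drive

        # Recreate every directory from SRC under DST, but copy no files.
        for root, dirs, files in os.walk(SRC):
            relative = os.path.relpath(root, SRC)
            os.makedirs(os.path.join(DST, relative), exist_ok=True)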

    Read the article

  • Google Cache showing wrong URL

    - by Sathiya Kumar
    I searched the cache details of the URL http://property.sulekha.com/pune-properties but Google Cache shows details for property.sulekha.com. I don't know why it's showing like this. It's not only http://property.sulekha.com/pune-properties but also all the Indian city-related URLs like http://property.sulekha.com/chennai-properties , http://property.sulekha.com/mumbai-properties , http://property.sulekha.com/kolkata-properties etc. I don't even find these URLs in the Google search results. If I search for Chennai properties in Google, I find property.sulekha.com and not http://property.sulekha.com/chennai-properties . Why is it happening like this? Please let me know.

    Read the article

  • VPN disconnected: resolv.conf not refreshed

    - by cwall
    I connect to VPN using vpnc. When VPN disconnects, either via timeout or because the session limit is reached, VPN is terminated, but resolv.conf continues to contain references to my VPN network. resolv.conf before VPN is connected: nameserver 127.0.0.1 search mylocalnetwork resolv.conf after VPN is connected, which remains once VPN is lost: nameserver X.X.X.X nameserver X.X.X.Z nameserver 127.0.0.1 search internal.mycompany.com mylocalnetwork In 10.04, when VPN was lost, I'd run this script to refresh resolv.conf: 7$ cat bin/refreshResolvconf.sh #!/bin/bash #if [ -e /etc/resolvconf/run/interface/tun0 -a "`pidof vpnc`" == "" ]; then /sbin/resolvconf -d tun0; fi if [ -e /etc/resolvconf/run/interface/tun0 -a "`pidof vpnc`" == "" ] then /sbin/resolvconf -d tun0; echo "Refreshed resolv.conf" fi But resolvconf changed in 12.04, so this script is no longer applicable. To work around it, I manually edit resolv.conf or turn my connection off and on via "gnome-control-center network". Does anyone else have the same problem? How can resolv.conf be updated after a VPN disconnect?

    Read the article

  • "Popular searches for this page" links with links to the same page, SEO difference?

    - by Rory McCann
    I've seen a few pages that have a section with "Popular searches for this page" and then list the search terms as links pointing back to the same page (e.g. http://theenglishchillicompany.co.uk/the-complete-chilli-pepper-book-a-gardeners-guide-to-choosing-growing-preserving-and-cooking/). I assume they are doing it for SEO purposes (to get more links to the page containing the desired search terms). Does this make a difference? It seems strange that a link on page A to page A would be counted! Am I wrong?

    Read the article

  • What is the name of this tree?

    - by Daniel
    It has a single root and each node has 0..N ordered sub-nodes. The keys represent a distinct set of paths. Two trees can only be merged if they share a common root. It needs to support, at minimum: insert, merge, and enumerate paths. For this tree (flattened ASCII diagram: the root is "The", with children "cat", "cow", and "dog"; "cat" leads to "drinks" and then "milk", "cow" leads to "jumps" and "moos", and "dog" leads to "barks"), the paths would be: "The cat drinks milk", "The cow jumps", "The cow moos", "The dog barks". It's a bit like a trie. What is it?
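
    For illustration only, a minimal Python sketch of the structure being described: a dict-of-dicts prefix tree keyed on words, with insert, merge, and path enumeration (the representation is an assumption, not the poster's code):

        def insert(tree, path):
            # path is a list of words, e.g. ["The", "cat", "drinks", "milk"]
            node = tree
            for word in path:
                node = node.setdefault(word, {})

        def merge(a, b):
            # Merge tree b into tree a; meaningful only if they share a root.
            for key, child in b.items():
                merge(a.setdefault(key, {}), child)

        def paths(tree, prefix=()):
            # Enumerate every root-to-leaf path as a sentence.
            if not tree:
                yield " ".join(prefix)
            for word, child in tree.items():
                yield from paths(child, prefix + (word,))

        root = {}
        for sentence in ["The cat drinks milk", "The cow jumps",
                         "The cow moos", "The dog barks"]:
            insert(root, sentence.split())
        print(list(paths(root)))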

    Read the article

  • How to perform efficient 2D picking in HTML5?

    - by jSepia
    I'm currently using an R-Tree for both picking and collision testing. Each entity on screen has a bounding box for collisions and a separate one for picking. Since entities may change position very frequently, both trees must be updated/reordered once per frame. While this is very efficient for collisions, because the tree is used in hundreds of collision queries every frame, I'm finding it too costly for picking, because it only gets queried when the user clicks, thus leading to a lot of wasted tree updates. What would be a more efficient way to implement picking without as much overhead?
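
    One commonly suggested alternative, sketched below in Python under the assumption that clicks are rare compared to frames: keep the R-Tree for collisions only, and resolve picks with a brute-force scan of the picking boxes at click time, so there is no per-frame maintenance cost for a second tree.

        from dataclasses import dataclass

        @dataclass
        class Entity:
            x: float   # picking box: top-left corner plus width/height
            y: float
            w: float
            h: float

        def pick(entities, px, py):
            # Linear hit test, run only when the user clicks.
            return [e for e in entities
                    if e.x <= px <= e.x + e.w and e.y <= py <= e.y + e.h]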

    Read the article

  • SEO and Spelling mistakes in keyword

    - by Sushil
    I am about to register a domain name, say someone.com (with the proper spelling), targeting the keyword "SOMEONE". But then I discovered in the Google keyword research tool that not this but a typo, "SOME1", seems to be more popular, and people search for it significantly more often than the properly spelled keyword. Luckily, someone.com and some1.com are both available. I understand that I can register both domains, but I don't know which one I should keep my website on and which one to redirect. Should I make the typo, some1.com, my base site? But that's a typo. P.S. my site has totally relevant content, not just keyword-targeted worthless pages. What do you guys suggest? I am confused. How would this affect my SEO ranking? Edit: Because the competition for the keyword I am targeting is fairly low, I think that whichever domain I choose, it will appear on the first page of search results.

    Read the article

  • JavaOne Content Catalog Live!

    - by programmarketingOTN
    The JavaOne Content Catalog—the central repository for information on sessions, demos, labs, user groups, exhibitors, and more for San Francisco 2012—is live! In the Content Catalog you can search on tracks, session types, session categories, keywords, and tags. Or, you can search for your favorite speakers to see what they’re presenting this year. And, directly from the catalog, you can share sessions you’re interested in with friends and colleagues through a broad array of social media channels. Start checking out JavaOne content now to plan your week at the conference. Then you’ll be ready to sign up for all of your sessions in mid-July when the scheduling tool goes live. Happy browsing!

    Read the article

  • Restricting crawler activity to certain directories with robots.txt

    - by neimad
    I would like to use robots.txt to prevent indexing of some parts of my website. I want search engines to index only the / directory and not crawl inside my controllers. In my robots.txt, I have this: User-Agent: * Disallow: /compagnies/ Disallow: /floors/ Disallow: /spaces/ Disallow: /buildings/ Disallow: /users/ Disallow: / I put this file in /mysite/public. I tested the file with a robots.txt validator and got no errors. However, Google still returns results from my site. For testing, I added Disallow: /, but again, Google indexed all the pages. floors, spaces, buildings, etc. are not physical directories. Is this a bug? How can I work around it?
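
    For comparison, a sketch of a robots.txt that permits only the root URL, assuming the crawler supports the Allow directive and the $ end-of-URL anchor (Googlebot and Bingbot do; support elsewhere varies). Keep in mind that robots.txt blocks crawling, not indexing, so URLs that are already indexed can keep appearing in results for some time:

        User-agent: *
        Allow: /$
        Disallow: /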

    Read the article

  • Webapps don't open correctly when using Chromium

    - by Alex
    I have just installed Ubuntu 12.10 completely fresh; the old version of Ubuntu was discarded or overwritten (or whatever you call it). I want to use the Ubuntu webapps with Chromium but I've had several problems. The first problem is that Chromium won't ask me if I want to install a webapp when I go to a supported site (and I don't already have the webapp installed). The second problem is that when I install the webapp by visiting the site in Firefox and then try to open it in Chromium, Ubuntu opens a completely new Chromium icon and window in the Launcher, the icon is labeled "Untitled", and there is no search bar in the new window, only the tab at the top. I've tried using several webapps with Firefox set as the default browser and they work as expected: once the webapp icon is clicked, a Firefox window opens on the Firefox launcher icon, and the window has a 'new tab' button and a search bar.

    Read the article

  • My First robots.txt

    - by Whitechapel
    I'm creating my first robots.txt and wanted to get a second opinion on it. Basically I have an FTP setup on my board for some special users to transfer files between each other, and I do NOT want that included in searches by the bots. I also want to point to my sitemap, which gets auto-generated by a PHP page. So here is what I have; what else should I include, and do I need to fix anything? Also, it links to xmlsitemap.php because that generates the sitemap when called. My goal is to allow any search bot to crawl the forums to grab metadata. User-agent: * Disallow: /admin/ Disallow: /ali/ Disallow: /benny/ Disallow: /cgi-bin/ Disallow: /ders/ Disallow: /empire/ Disallow: /komodo_117/ Disallow: /xanxan/ Disallow: /zeroordie/ Disallow: /tmp/ Sitemap: http://www.vivalanation.com/forums/xmlsitemap.php Edit: I'm not sure how to handle all the users' folders under /public_html/ since the robots.txt will be going in /public_html.

    Read the article

  • Is my robots.txt working as it should?

    - by TigerBlood
    I want crawlers to have access to http://www.example.com but not http://www.example.com/ My robots.txt is as follows: User-agent: * Allow: /$ Disallow: / My site is in Google search results, but I am not coming up in Bing, Yahoo, etc. I have had the same robots.txt since last year; I initially requested inclusion about a year ago and have resubmitted the URL to those search engines several times since. Is my robots.txt blocking those other crawlers? And if so, why not Google as well? Thanks in advance!

    Read the article

  • Are Clojure, Scala, and others restrained by the JVM vs. the CLR?

    - by jia93
    The Java implementors seem slow to adopt language improvements; for example, compare C# with full closures, expression trees, LINQ, etc. to Java, and even the features pushed back to Java 8 will still leave it behind the current implementation of C#. However, since I don't intend to use either Java or C#, that particular language war isn't of much interest to me; I'm more concerned with the JVM vs. the CLR. Does this lagging behind also apply to the JVM? Will Scala, Clojure, etc. be able to continue to innovate or achieve optimal performance in the face of a slowly progressing underlying VM such as the JVM? Is Clojure/Scala restrained at present by JVM limitations?

    Read the article

  • Blogging & SEO - They Go Hand in Hand

    You write a blog loyally every day or so. You provide informative, fascinating content for your faithful readers. You've even got a number of member links in there, too. But is that enough to get great search engine results for your hard work? In all probability, not. Sure, you'll get listed with the search engines effortlessly. But without a top-twenty listing at one of the majors (Google, Yahoo! or MSN), you will not have traffic literally banging down your door....

    Read the article

  • Is this Anti-Scraping technique viable with Crawl-Delay?

    - by skibulk
    I want to prevent web scrapers from abusing the 1,000,000 pages on my website. I'd like to do this by returning a "503 Service Unavailable" error code to users that access an abnormal number of pages per minute. I don't want search engine spiders to ever receive the error. My inclination is to set a robots.txt crawl-delay which will ensure that spiders access a number of pages per minute below my 503 threshold. Is this an appropriate solution? Do all major search engines support the directive? Could it negatively affect SEO? Are there any other solutions or recommendations?
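
    For reference, the directive being considered looks like the sketch below; the value is the number of seconds between requests for crawlers that honor it. Note that Googlebot ignores Crawl-delay (its crawl rate is managed through Webmaster Tools instead), so this alone will not guarantee Google's spider stays below the 503 threshold:

        User-agent: *
        Crawl-delay: 10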

    Read the article

  • Installing Cairo to get FastRWeb working for R gWidgetsWWW2 -pkg

    - by hhh
    I want to install FastRWeb for R but it requires Cairo. How can I install Cairo? compilation terminated. make: *** [xlib-backend.o] Error 1 ERROR: compilation failed for package ‘Cairo’ * removing ‘/home/xfz/R/i686-pc-linux-gnu-library/2.13/Cairo’ ERROR: dependency ‘Cairo’ is not available for package ‘FastRWeb’ * removing ‘/home/xfz/R/i686-pc-linux-gnu-library/2.13/FastRWeb’ The downloaded packages are in ‘/tmp/Rtmpno8hhF/downloaded_packages’ Warning messages: 1: In install.packages("FastRWeb", , "http://rforge.net/", type = "source") : installation of package 'Cairo' had non-zero exit status 2: In install.packages("FastRWeb", , "http://rforge.net/", type = "source") : installation of package 'FastRWeb' had non-zero exit status I cannot find what Cairo is here; the search term below gives 16 entries. It is apparently some library. $ apt-cache search libcairo|wc 16 132 996 Perhaps related: http://stackoverflow.com/questions/9826128/r-making-r-rook-program-into-rscript-program-r http://stackoverflow.com/questions/9812547/r-gui-vizualiser-with-command-line-access-browser-based-letting-users-to-s Some related packages: FastRWeb and RServe for the gWidgetsWWW2 pkg.

    Read the article

  • AndEngine Foreground Sprite

    - by McGrey
    I'm developing an Android game and have run into some trouble: I want to add some foreground sprites that must obstruct my player. See the following example: it's a screenshot from "Shinobi 3". We can see the player, the enemy, the background, and two foreground trees that hide the player's arm and part of the enemy. I'm using AndEngine GLES2 Anchor Center and I am trying to add a new layer to my scene. Sprite Forest = new Sprite(getWidth() * 0.5f, textureHeightForest * 0.5f + 100, ResourcesManager.getInstance().foreground_forest_region, vbom); Entity foregroundLayer = new Entity(); foregroundLayer.attachChild(hillFurthest); attachChild(foregroundLayer); But it still shows behind my player sprite. I have tried to find something in the HUD class (it's always shown in the foreground), but got no results. Can anyone help please?

    Read the article

  • Why are we being twitter spammed?

    - by Tom Gullen
    This is a search relating to us: https://twitter.com/#!/search/realtime/scirra We're getting a lot of new accounts tweeting: The Layers Bar - Scirra.com Firstly, this is not us doing it, as we're quite proud of doing everything completely whitehat. Also, this tweet doesn't make any sense; "The Layers Bar" seems to be referring to a manual entry of ours. They all seem to be new accounts with no followers and no prior tweets, coming in like clockwork every hour. Does anyone know why this could be happening? Could this harm us? Is it possible to find out the source of this? I should mention I'm hesitant to report them all as spam because it could look like we are the culprits.

    Read the article

  • Is it common to prototype in a higher level language?

    - by Mark Canlas
    I'm currently toying with the idea of embarking on a project that far exceeds my current programming ability in a language I have very little real world experience in (C). Would it be valuable to prototype in a higher level language that I'm more familiar with (like Perl/Python/Ruby/C#) just so I can get the overall design going? Ultimately, the final product is performance sensitive, hence the choice of C, but I'm afraid not knowing C well will make me lose the forest for the trees. While searching for similar questions, I noticed one fellow mention that programmers used to prototype in Prolog, then crank it out in assembler.

    Read the article

  • Warm Up Your Desktop with the Caribbean Shores Theme for Windows 7 & 8

    - by Asian Angel
    Are you in the mood for some tropical scenery? Then enjoy a view of quiet coves, clear water, palm trees, and gently rolling surf with the Caribbean Shores Theme for Windows 7 and 8. The theme comes with twelve awesome images to provide the perfect relaxing environment on your desktop. Download the Caribbean Shores Theme [Windows 7 & 8 Personalization Gallery]

    Read the article

  • Cleaning a dataset of song data - what sort of problem is this?

    - by Rob Lourens
    I have a set of data about songs. Each entry is a line of text which includes the artist name, song title, and some extra text. Some entries are only "extra text". My goal is to resolve as many of these as possible to songs on Spotify using their web API. My strategy so far has been to search for the entry via the API - if there are no results, apply a transformation such as "remove all text between ( )" and search again. I have a list of heuristics and I've had reasonable success with this but as the code gets more and more convoluted I keep thinking there must be a more generic and consistent way. I don't know where to look - any suggestions for what to try, topics to study, buzzwords to google?
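
    One way to keep the heuristics from turning into convoluted code is to express them as an ordered list of transformation functions and fold them into a single retry loop. A rough Python sketch, where search_spotify(query) is a hypothetical stand-in for the real web API call:

        import re

        def strip_parentheses(s):
            return re.sub(r"\(.*?\)", "", s)

        def strip_featuring(s):
            return re.sub(r"\s+(feat\.|ft\.|featuring)\s.*", "", s, flags=re.IGNORECASE)

        def collapse_whitespace(s):
            return re.sub(r"\s+", " ", s).strip()

        # Ordered, cumulative normalization steps; add, remove, or reorder freely.
        TRANSFORMS = [strip_parentheses, strip_featuring, collapse_whitespace]

        def resolve(entry, search_spotify):
            query = entry
            for transform in [lambda s: s] + TRANSFORMS:
                query = transform(query)
                results = search_spotify(query)   # hypothetical API wrapper
                if results:
                    return results[0]
            return None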

    Read the article
