Search Results

Search found 16082 results on 644 pages for 'faceted search'.


  • How to create robots.txt for a domain that contains international websites in subfolders?

    - by aaandre
    Hi, I am working on a site that has the following structure:

        site.com/us - US version
        site.com/uk - UK version
        site.com/jp - Japanese version
        etc.

    I would like to create a robots.txt that points the local search engines to a localized sitemap page and has them exclude everything else from the local listings. So, google.com (US) will index ONLY site.com/us and take site.com/us/sitemap.html into consideration, google.co.uk will index only site.com/uk and site.com/uk/sitemap.html, and the same goes for the rest of the search engines, including Yahoo, Bing, etc. Any idea on how to achieve this? Thank you!
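    For reference, a minimal robots.txt sketch (assuming XML sitemaps at hypothetical per-locale paths, rather than the HTML sitemap pages mentioned above). One caveat: robots.txt is fetched once per host, so every engine sees the same file; it can advertise the per-locale sitemaps, but it cannot by itself vary indexing rules per country-level engine:

        User-agent: *
        Disallow:

        Sitemap: http://site.com/us/sitemap.xml
        Sitemap: http://site.com/uk/sitemap.xml
        Sitemap: http://site.com/jp/sitemap.xml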

    Read the article

  • Migrate Thunderbird 3 Saved Searches Between Accounts

    - by UltraNurd
    Long story short, the sysadmins have moved me to a new mail server. In the process, they needed to create a separate account in Thunderbird and disable my old account. They took care of all of the mail migration. However, my saved search folders didn't go along for the ride. I have over 20 complex searches that I'd rather not have to re-enter by hand. You can't drag saved searches between accounts like other folders. I tried closing Thunderbird, doing a find/replace in virtualFolders.dat in my Thunderbird profile folder, saving that file, and reopening Thunderbird, but that didn't appear to do anything. I'm assuming the search folders are also saved in one of the SQLite databases... does anyone know where to look?

    Read the article

  • Android Card Game Database for Deck Building

    - by Singularity222
    I am making a card game for Android where a player can choose from a selection of cards to build a deck that would contain around 60 cards. Currently, I have the entire database of cards created, which the user can browse. The next step is allowing the user to select cards and create a deck with whatever cards they would like. I have a form where the user can search for specific cards based on a few different attributes. The search results are displayed in a ListActivity. My thought about deck creation is to add the primary key of each card the user selects to a SQLite database table, along with the quantity they would like in the deck. This way, as the user performs searches for cards, they can see the state of the deck. Once the user decides to save the deck, I'll export the card list to XML and wipe the contents of the table. If the user wanted to make changes to the deck, they would load it, and it would be parsed back into the table so they could make the changes. A similar situation would occur when they eventually load the deck to play a game. I'm just curious what the rest of you may think of this method. Currently, this is a personal project and I am the only one working on it. If I can figure out the best implementation before I even begin coding, I'm hoping to save myself some time and trouble.
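    As a sketch of the working-deck table described above (hypothetical names, SQLite syntax): one row per distinct card in the deck being built, keyed on the card's primary key:

        CREATE TABLE IF NOT EXISTS deck_in_progress (
            card_id  INTEGER PRIMARY KEY REFERENCES cards(card_id),
            quantity INTEGER NOT NULL DEFAULT 1
        );

        -- record a card picked from the search results (replaces the count if already present)
        INSERT OR REPLACE INTO deck_in_progress (card_id, quantity) VALUES (?, ?);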

    Read the article

  • Batch file to uninstall all Sun Java versions?

    - by Ricket
    I'm setting up a system to keep Java in our office up to date. Everyone has all different versions of Java, many of them old and insecure, and some dating back as far as 1.4. I have a System Center Essentials server which can push out and silently run a .msi file, and I've already tested that it can install the latest Java. But old versions (such as 1.4) aren't removed by the installer, so I need to uninstall them. Everyone is running Windows XP. The neat coincidence is that Sun just got bought by Oracle, and Oracle has now changed all the instances of "Sun" to "Oracle" in Java. So, conveniently, I don't have to worry about uninstalling the latest Java, because I can just do a search and uninstall all Sun Java programs. I found the following batch script on a forum post which looked promising:

        @echo off & cls
        Rem List all Installation subkeys from uninstall key.
        echo Searching Registry for Java Installs
        for /f %%I in ('reg query HKLM\SOFTWARE\microsoft\windows\currentversion\uninstall') do echo %%I | find "{" > nul && call :All-Installations %%I
        echo Search Complete..
        goto :EOF

        :All-Installations
        Rem Filter out all but the Sun Installations
        for /f "tokens=2*" %%T in ('reg query %1 /v Publisher 2^> nul') do echo %%U | find "Sun" > nul && call :Sun-Installations %1
        goto :EOF

        :Sun-Installations
        Rem Filter out all but the Sun-Java Installations. Note the tilde + n, which drops all the subkeys from the path
        for /f "tokens=2*" %%T in ('reg query %1 /v DisplayName 2^> nul') do echo . Uninstalling - %%U: | find "Java" && call :Sun-Java-Installs %~n1
        goto :EOF

        :Sun-Java-Installs
        Rem Run Uninstaller for the installation
        MsiExec.exe /x%1 /qb
        echo . Uninstall Complete, Resuming Search..
        goto :EOF

    However, when I run the script, I get the following output:

        Searching Registry for Java Installs
        'DEV_24x6' is not recognized as an internal or external command, operable program or batch file.
        'SUBSYS_542214F1' is not recognized as an internal or external command, operable program or batch file.

    And then it appears to hang, and I Ctrl-C to stop it. Reading through the script, I don't understand everything, but I don't know why it is trying to run pieces of registry keys as programs. What is wrong with the batch script? How can I fix it, so that I can move on to somehow turning it into an MSI and deploying it to everyone to clean up this office? Or alternatively, can you suggest a better solution or existing MSI file to do what I need? I just want to make sure to get all the old versions of Java off of everyone's computers, since I've heard of exploits that cause web pages to load using old versions of Java and I want to avoid those.
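    The output itself hints at the likely cause: names under the Uninstall key are being expanded unquoted, and driver entries embed PCI IDs such as ...&DEV_24x6&SUBSYS_..., so cmd parses everything after an unquoted "&" as a new command. A hedged, untested rewrite of the first loop along those lines:

        for /f "delims=" %%I in ('reg query HKLM\SOFTWARE\microsoft\windows\currentversion\uninstall') do (
            rem "delims=" keeps the whole key name in %%I even if it contains spaces,
            rem and quoting the expansion stops "&" from acting as a command separator
            echo "%%I" | find "{" > nul && call :All-Installations "%%I"
        )
        rem the called blocks would then need %~1 (quotes stripped) when building reg arguments:
        rem     for /f "tokens=2*" %%T in ('reg query "%~1" /v Publisher 2^> nul') do ...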

    Read the article

  • Domain Computers Not Listed In Network

    - by Giawa
    Our network computers are all connected to a domain, and I can see them if I search the active directory (I can click 'search active directory' and then select 'computers' and then Find Now, and all of the computers will appear). However, the computers are not listed in the network browser on any of our computers (Win XP, Win7, Linux, etc) which are connected to the domain. DC is running Windows Server 2008 (Windows Server Standard) with a configured DNS and DHCP server. All of the IPs on our local network are static IPs, although I can't see how that would make a difference. I can still connect to computers on the network via \\computer_name, but I cannot browse them in 'network' or in 'my network places'. The computer browser service is not started on the DC, but I tried starting that and it had no effect. DC currently has the firewall configured as 'off' to try to debug this problem. Thanks in advance

    Read the article

  • Unity Dashboard won't find local files, rearrange icons on two computers

    - by Stanton.Sculpture
    Suddenly I can't move icons around my Unity launcher, and the Dash won't search for my local files and folders. Both worked when I first installed 13.10, but now the Dash won't search for local files, and it won't let me rearrange the icons in any way. I've tried turning all the scopes (lenses?) on and off in multiple combinations, but it won't find any files unless I use Nautilus to find them; the Dash is mostly unresponsive. I can't see my recently used files, or the files and folders scope, at all. Dragging and dropping the icons on the side dock doesn't work; they only stick to my mouse until I put them back where they were. I cannot unlock any icons from the launcher; it just doesn't do anything when I click it. I tried rebooting both of my computers and it still won't function normally. I used ubuntu-bug -w to report a bug; no one has gotten back to me. Is there some option that I changed to cause this? This is a problem on both my laptop and desktop. Please help, Alex

    Read the article

  • Rankings dropping after small URL-change WITH 301-redirect

    - by David
    Two weeks ago, we attempted to make the URLs of ca. 12 pages more search-engine friendly. We changed three things.

    1. Made the URLs more SEF

       from: /????-????/brandname.html (meaning: /aircon-price/daikin.html)
       to: /????-brandnameinenglish-brandnameinthai.html

       We set up 301-redirects from the old to the new URLs. You can find an example and the link to our page here: http://bit.ly/XRoTOK There are no direct external links to the old URLs.

    2. Added text to img-links from the homepage to the brand pages

       Before those changes, we only linked to those brands with a picture, so we added some text under the picture. You can see that here, in the left submenu: http://bit.ly/XRpfoF

    3. Minor changes to Title, h1-Tags, Meta Description, etc.

       Only minor changes, to better match the on-site optimization with targeted keywords. For example, before we used full brand names, after we used what was really searched for:

       from: Mitsubishi Electric Mr. Slim
       to: ???? Mitsubishi (means: Aircon Mitsubishi)

    Three days after these changes, we noticed a heavy drop (80% loss in non-paid search traffic) in rankings and traffic for those pages, and also for all pages which are sub-categorized. Rankings for all keywords not affected by the changes stayed the same. Any ideas what happened, and how we can regain our old rankings? What we already did was submit a new sitemap. Help much appreciated. Best regards, David
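    For reference, a sketch of the kind of 301 described in (1), with a hypothetical replacement path (the real URLs contain Thai segments omitted above; assuming Apache's mod_alias):

        Redirect 301 /aircon-price/daikin.html /aircon-daikin-newname.html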

    Read the article

  • How to remove settings from a Microsoft account in Windows 8?

    - by Stevie G
    When installing Windows 8 for the first time, I did not create a Microsoft account and just installed as the local user. However, I recently updated to Windows 8.1, and it forces you to use a Microsoft account. I did not want to create an account, so one of my friends used his and I logged in. After logging in, all the friend's details came along: apps, wallpaper, lock screen, search mechanism; when I use search I see the friend's Facebook friends popping up. It is really annoying. How can I remove all of this excess? I have logged out of the Microsoft account and am just using the local user, but these problems have persisted. Thanks

    Read the article

  • How to install Expect for Windows 64-bit?

    - by Master James
    1. Downloaded ActiveTcl from http://www.activestate.com/activetcl/downloads/
    2. Installed @ c:/Tcl/
    3. Went to the bin directory in a command prompt (Start > Run > cmd, then cd c:\Tcl\bin)
    4. To install Expect, executed the command: teacup install Expect

    It appears as:

        Resolving Expect ... Not found in the archives.

        While a more fuzzy search disregarding letter case and accepting substrings was done, we are sorry to say that it yielded no possible candidates for installation either.

        Questions to consider: Have you spelled the name correctly? Including the proper case of characters? Note that teacup's 'search' command allows you to locate packages by subject, categories, and the like.

        Aborting installation, was not able to locate the requested entity.

    How to install Expect for Windows 7, 64-bit?

    Read the article

  • Consolidating multiple domain names

    - by Mike
    I have a client that has three separately hosted copies of their website, each on a separate domain name. The websites are all essentially the same, bar a few discrepancies caused by badly managed updates in the past. I will soon be launching a completely new website for them, at which point all three domain names are to resolve to the same web server. One domain name will become the default domain name that they refer to in all their literature, and the other two will simply be used as catch-alls for old links, bookmarks, and so on. I would like to know what people consider the best route to achieve this. My plan so far is:

    1. Get the new site up and running on the new webserver.
    2. Change the relevant A record of the default domain name to point to the new webserver.
    3. a) Keep the existing hosting accounts in operation. Create a list of 301 redirects from old page names on the old site to new page names on the new site.
       or
       b) Configure CNAME records for the non-default domain names, each pointing to the new webserver. Create a list of 301 redirects on the new site that redirect from old page names to new page names (a sketch of this option follows below).

    If my understanding is correct, 3a will help to maintain whatever search engine rankings the sites already have (I know it's not going to be perfect), while at the same time informing search engines that the old domain names are no longer in use. What's a good approach to take here?
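    For option (b), a sketch of the catch-all redirect on the new server (hypothetical domain names, assuming Apache):

        <VirtualHost *:80>
            ServerName www.second-domain.example
            ServerAlias www.third-domain.example
            # send everything to the default name with a permanent (301) redirect
            Redirect permanent / http://www.default-domain.example/
        </VirtualHost>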

    Read the article

  • Recovering from an incorrectly deployed robots.txt?

    - by Doug T.
    We accidentally deployed a robots.txt from our development site that disallowed all crawling. This has caused traffic to dip dramatically, and Google results to report:

        A description for this result is not available because of this site's robots.txt – learn more.

    We corrected the robots.txt about 1.5 weeks ago, and you can see our robots.txt here. However, search results still report the same robots.txt message. The same appears to be true for Bing. We've taken the following action:

    1. Submitted the site to be recrawled through Google Webmaster Tools
    2. Submitted a sitemap to Google (basically doing everything possible to say "Hey, we're here! And we're crawlable!")

    Indeed, a lot of crawl activity seems to be happening lately, but still no description is crawled. I noticed this question where the problem was specific to a 303 redirect back to a disallowed path. We are 301 redirecting to /blog, but crawling is allowed here. This redirect is due to a site redesign; WordPress paths for posts such as /2012/02/12/yadda yadda have been moved to /blog/2012/02/12. We 301 redirect to WordPress for /blog to keep our Google juice. However, the sitemap we submitted might have /blog URLs. I'm not sure how much this matters. We clearly want to preserve Google juice for URLs linked to us from before our redesign with the /2012/02/... URLs. So perhaps this has prevented some content from getting recrawled? How can we get all of our content, with links pointed to our site from both before and after the redesign, reporting descriptions? How can we resolve this problem and get our search traffic back to where it used to be?

    Read the article

  • Google Reader keyboard shortcuts not working in Firefox 3.6

    - by Jj
    I just upgraded to Ubuntu 10.04, which comes with Firefox 3.6.3. Now some Google Reader keyboard shortcuts have stopped working. The j/k keys are OK, but 'v', 'Shift+x' and others don't work and instead start the Search As You Type functionality I've always used. The JavaScript console only shows this warning:

        Warning: The 'charCode' property of a keyup event should not be used. The value is meaningless.
        Source File: http://www.google.com/reader/view/?tab=my#overview-page Line: 0

    This did not happen with Firefox 3.5.x, even though I've always had the Search As You Type option enabled.

    Read the article

  • A terminal emulator for ex-Windows users

    - by Dan
    There are several things I would like to be better in the Ubuntu terminal emulator:

    - Coloring, like in source code
    - The copy and paste keyboard shortcuts that I used all the time in Windows: Ctrl-C and Ctrl-V (most people here in Ubuntu use Ctrl+C and Ctrl+V to copy and paste everywhere except the terminal! I think it's annoying for newcomers, and I don't worry about historical reasons)
    - A feature to save all the output to a log file (a possible stopgap is sketched below)

    UPDATE: Can the terminal be a powerful, feature-full, user-friendly tool like a modern IDE? A Linux user can spend 30% of their time in the terminal. Programmers no longer code in a notepad. Can I see a history pane? Suggestions? A directory pane? A commands list? Search for words in the output? Contextual behavior? "Search in Google" on a mouse right-click? Tips-and-tricks learning? Time is money! Please, people, give me a link to the 21st-century terminal.
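    On the logging wish, one possible stopgap (assuming the util-linux script tool, which ships with stock Ubuntu):

        # record everything printed to the terminal into a (hypothetical) log file;
        # type "exit" to stop recording
        script -a ~/terminal-session.log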

    Read the article

  • GoToMeeting MSI needs elevated privs?

    - by DrZaiusApeLord
    Typically I can deploy MSIs with no issue, but the GoToMeeting one refuses to install. SCE lists it as pending, and AD just attempts to install it, gives up, and never tries again. When I tried running it by double-clicking its icon, it told me it "needs to run with elevated privs." I don't see how I can get AD or SCE to run it with these higher privs. I can run it by using an elevated command prompt and running msiexec from there. The MSI is the one labeled "GoToMeeting MSI Installer (ZIP)" from here: http://support.citrixonline.com/GoToMeeting/search?search=msi Any ideas? I run an environment where the users are non-admins and would love to be able to upgrade this centrally.

    Read the article

  • How do I change until the next underscore in Vim?

    - by Nathan Long
    If I have this text in vim, and my cursor is at the first character:

        www.foo.com

    I know that I can do:

    - cw to change up to the first period, because a word (lowercase w) ends at any punctuation or white space
    - cW to change the whole address, because a Word (uppercase w) ends only at whitespace

    Now, what if I have this:

        stupid_method_name

    and want to change it to this?

        awesome_method_name

    Both cw and cW change the whole thing, but I just want to change the fragment before the underscore. My fallback technique is c/_, meaning 'change until you hit the next underscore in a search,' but for me, that also causes all underscores to be highlighted as search terms, which is slightly annoying. Is there a specifier like w or W that doesn't include underscores?
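    For reference, the "till" motion seems to fit here; a sketch (normal-mode commands):

        " with the cursor on the 's' of stupid_method_name:
        ct_awesome<Esc>
        " ct_ deletes up to (but not including) the next underscore on the line
        " and enters insert mode; typing the replacement and <Esc> finishes the change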

    Read the article

  • Removing past searches from Google Chrome's omnibar

    - by Ram Rachum
    One time I searched for Orange Juice in Chrome's Omnibar. Now, every time I start typing Orange, I get 'orange juice' offered as a search suggestion. How do I get Chrome to stop offering me this search suggestion? If I need to edit some config file, I can do that. Please don't post answers if you haven't ensured they work first. (This is intended to prevent people from answering "Press Shift-Delete.") Clarification: I'd prefer a solution in which I can selectively delete entries, not just by time segment. I also prefer a solution that does not involve cancelling any Chrome functionality.

    Read the article

  • Should this folder called Data be indexed?

    - by panny
    In the indexing options of Windows 7, there is a folder called Data which is excluded from indexing for the C:\ drive by default. Can someone confirm this, please? I was not able to locate that folder on my drive, nor include it in the search index. The difference in the number of indexed files is unsatisfying: the Windows 7 native indexing service indexes 377,703 files on six drives; a third-party desktop search indexing service indexes 698,654 files on the same number of drives. Files under UAC control seem not to be indexed without the proper privileges. How can this be circumvented?

    Read the article

  • Finding the shortest path through a digraph that visits all nodes

    - by Boluc Papuccuoglu
    I am trying to find the shortest possible path that visits every node of a graph (a node may be visited more than once; the solution may pick any node as the starting node). The graph is directed, meaning that being able to travel from node A to node B does not mean one can travel from node B to node A. All distances between nodes are equal. I was able to code a brute-force search that found a path of only 27 nodes when I had 27 nodes and each node had a connection to 1 or 2 other nodes. However, the actual problem that I am trying to solve consists of 256 nodes, with each node connecting to either 3 or 4 other nodes. The brute-force algorithm that solved the 27-node graph can produce a 415-node solution (not optimal) within a few seconds, but using the processing power I have at my disposal takes about 6 hours to arrive at a 402-node solution. What approach should I use to arrive at a solution that I can be certain is the optimal one? For example, use an optimizer algorithm to shorten a non-optimal solution? Or somehow adapt a brute-force search that discards paths that are not optimal? EDIT (copying a comment to an answer here to better clarify the question): To clarify, I am not saying that there is a Hamiltonian path and I need to find it; I am trying to find the shortest path in the 256-node graph that visits each node AT LEAST once. With the 27-node run, I was able to find a Hamiltonian path, which assured me that it was an optimal solution. I want to find a solution for the 256-node graph which is the shortest.
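    Since all edges have equal weight, here is a heuristic sketch (Python; the graph format {node: [successors]} is an assumption) that computes pairwise hop distances with BFS and then walks greedily over that metric closure. It gives a quick upper bound to compare against the brute-force results, not a certified optimum:

        from collections import deque

        def bfs_distances(graph, source):
            """Hop count from source to every reachable node; graph: {node: [succ, ...]}."""
            dist = {source: 0}
            queue = deque([source])
            while queue:
                node = queue.popleft()
                for succ in graph[node]:
                    if succ not in dist:
                        dist[succ] = dist[node] + 1
                        queue.append(succ)
            return dist

        def greedy_cover_route(graph):
            # 256 BFS runs are cheap; dist[a][b] = shortest walk length a -> b
            dist = {v: bfs_distances(graph, v) for v in graph}
            best = None
            for start in graph:                     # try every start node
                todo = set(graph) - {start}
                here, length = start, 0
                while todo:
                    # hop to the closest not-yet-visited node (ties broken arbitrarily)
                    reachable = [v for v in todo if v in dist[here]]
                    if not reachable:
                        break                       # some node unreachable from here
                    nxt = min(reachable, key=lambda v: dist[here][v])
                    length += dist[here][nxt]
                    here = nxt
                    todo.discard(nxt)
                if not todo and (best is None or length < best):
                    best = length
            return best                             # None if no covering walk was found

    Certifying a true optimum essentially means solving an asymmetric TSP over this distance matrix, which is where the real cost lives.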

    Read the article

  • How do I remove a URL from Google without having to have a Google E-mail Account

    - by PP
    Really simple question. I do not want a Google account. I just want Google to stop making requests every 2 minutes for a URL it should never have known about (apparently Google harvests URLs from search requests as well as private e-mails, not just from actual web pages). But when I search Google help for removing URLs it appears I have to use their "webmaster tools" which require logging into a GMail account! How do I tell Google not to index my URL without becoming a customer? Note: I already return 404 for the URLs in question using a rewrite rule - this appears to make zero difference to the crawler which continually attempts to fetch the page every 2 minutes.
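    One hedged option beyond the 404: a 410 Gone is generally a stronger "permanently removed" signal to crawlers than a 404. A sketch assuming Apache with mod_alias (the path is hypothetical):

        Redirect gone /the-url-google-keeps-requesting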

    Read the article

  • Dell Multi-Monitor Hub: true DisplayPort splitting?

    - by thepurplepixel
    In my search for a new display, I came across the Dell Multi-Monitor Hub MMH11, which seemed to be an alternative to my search for daisy-chainable DisplayPort displays. However, before I cave and spend $179 on this device, I am wondering if this will be similar to other splitting devices where it appears to the computer as one big monitor and the device does the splitting (which I don't want). Or, does this use the packet-based nature of DisplayPort to present two/three separate displays to the computer? Also, would this device work on my MacBook Pro? (I know the Dell site says it's for Windows, but it also says that no driver installation is required. I'd assume since the MBP supports DP 1.2 it would work, but it's better to ask). Thanks!

    Read the article

  • Searching for entity awareness in 3D space algorithm and data structure

    - by Khanser
    I'm trying to build a huge AI system just for fun, and I've come to this problem: how can I let the AI entities know about each other without having the CPU perform redundant and costly work? Every entity has a spatial awareness zone, and it has to know what's inside that zone when it has to decide what to do. First thought: for every entity, test whether every other entity is inside the first's reach. OK, so that was the first try, and yep, it is redundant and costly. We are working with real-time AI over 10,000+ entities, so this is not a solution. Second try: calculate some grid over the awareness zone of every entity and test whether entities are inside these zones (we are working with 3D entities with float x, y, z location coordinates), testing every point in the grid against the indexed-by-coordinate entities. Well, I don't like this because it is also costly, but not as much as the first one. Third: create some multi-linked lists over the x- and y-indexed positions of the entities, so that when we search for an interval between the x,y and z,w positions (this interval defines the square over the spatial awareness zone) over the multi-linked list, we won't have 'voids'. This has the problem of finding the nearest proximity value if there isn't one at the position where we start the search. I'm not convinced by any of these ideas, so I'm searching for some enlightenment. Do you people have any better ideas?
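    For comparison, a sketch of a common alternative (Python; the cell size is an assumed stand-in for the real awareness radius): a uniform grid, or "spatial hash", keyed by integer cell coordinates, sized so each query scans only the 27 surrounding cells instead of all 10,000+ entities:

        from collections import defaultdict
        from math import floor

        CELL = 10.0  # assumed awareness radius; tune to the real zone size

        def cell_of(x, y, z):
            return (floor(x / CELL), floor(y / CELL), floor(z / CELL))

        class SpatialHash:
            def __init__(self):
                self.cells = defaultdict(set)

            def insert(self, entity, x, y, z):
                self.cells[cell_of(x, y, z)].add(entity)

            def remove(self, entity, x, y, z):
                self.cells[cell_of(x, y, z)].discard(entity)

            def nearby(self, x, y, z):
                # candidates within one cell in every direction; filter by
                # exact distance afterwards if the zone is a sphere
                cx, cy, cz = cell_of(x, y, z)
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for dz in (-1, 0, 1):
                            yield from self.cells.get((cx + dx, cy + dy, cz + dz), ())

    Updating an entity's cell as it moves is O(1), so the per-tick cost depends only on local density rather than the total entity count.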

    Read the article

  • command line find/replace help

    - by Chrisbloom7
    I've got a set of 5000+ files that I need to do a simple search and replace in. I have been doing it in a text editor (EditPlus) by opening 500 files at a time, doing a global search/replace, saving all, closing, etc. But that's taking literally hours to do, and it's boring and tedious, and I already have done it once today and need to do it again because all the files got refreshed. Is there a way to do this via the Bash command line? Here are the details:

    Find:

        onchange="document.location ='/products/view.html/view/'+this.value"

    Replace it with:

        onchange="alert('Not implemented')" style="display: none"

    All of the files have a .HTM extension, but they are nested in several subdirectories.
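    A sketch of the usual one-liner for this (assuming GNU sed; -iname catches the uppercase .HTM extension, and | is used as the sed delimiter because the pattern contains slashes):

        # make backups first if the files matter; sed -i edits in place
        find . -type f -iname '*.htm' -exec sed -i \
          "s|onchange=\"document\.location ='/products/view\.html/view/'+this\.value\"|onchange=\"alert('Not implemented')\" style=\"display: none\"|g" \
          {} +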

    Read the article

  • XPath & XML EDI B2B

    - by PearlFactory
    GoodToGo :) The best XML editor is Altova XMLSpy 2011 http://www.torrenthound.com/hash/bfdbf55baa4ca6f8e93464c9a42cbd66450bb950/torrent-info/Altova-XMLSpy-Enterprise-Edition-SP1-2011-v13-0-1-0-h33t-com-Full For whatever reason Piratebay has trojans and other nasties; search on torrent.eu for Altova XMLSpy Enterprise Edition SP1 2011 v13.0.1.0. Also, if you like the product, purchase it in a commercial environment.

    Any well-structured/complex XML can be parsed @ the speed of light using XPath queries, not the C# objects XPathNodeIterator and the others. Never do loops or generics or whatever high-level language technology; use the power of XPath. I'll use a simple do-while as an example (we could have many different techniques all achieving the same result). Instead of:

        xmlNI2 = xmlNav.Select("/p:BookShop");
        if (xmlNI2.Count != 0)
        {
            while (xmlNI2.MoveNext())
            {
                string aNode = xmlNI2.Current.SelectSingleNode("Book", nsmgr).Value;
                if (aNode == "The Book I am after")
                    Console.WriteLine("Found My Book");
            }
        }

    this lengthy, cumbersome task can be achieved with a simple XPath query:

        Console.WriteLine((xmlNav.SelectSingleNode("/p:BookShop/Book[.='The Book I am after']", nsmgr)).Value.ToString());

    Use the power of the parser and eliminate the middleman (C#/MSIL/JIT etc.).

    Get started fast and use the parser as outlined:

    1) Open the XML and go to Grid Mode
    2) Select the XPath tab on the bottom viewer/window

    From here you get IntelliSense and can quickly learn how to navigate/find the data using XPath. A key component of navigation with XPath is the "../" step, which basically says: from where I am now, go up one level. With XPath all steps are cumulative, i.e. you can search for a book title @ the 2nd level of the XML and from there traverse 15 layers down to paragraphs or words on a page, with expression validation occurring throughout (so in essence you may have arrived @ a node within the XML having met 15 conditions along the way).

    Given 1-2 days with XMLSpy and XPath, you unlock a technology that is super fast and simple to use. XML is a core component of what lies under the hood of so many techs, so it is no wonder that you want to be able to go to the atomic level to achieve the result you want.

    Justin

    P.S. For a long time I saw XML as slow and a bit boring, but now I'm converted.

    Read the article

  • No Obvious Answer - Query-Strings and Javascript

    - by nchaud
    Say I have this main page, /my-site/all-my-bath-soaps, which lists all my products. It has a search-filter text box that uses JavaScript to filter the products the visitor wants to see on that page (the URL doesn't change as they filter). Now, from many other parts of the site, I want to navigate to this products page and see specific products. E.g., <a href="/my-site/all-my-bath-soaps?filter='Nivea-Soap'"> will go to /all-my-bath-soaps and apply JavaScript filtering to show just that product and hide the DOM nodes for all the other products. The problem is, if the user changes the text in the filter from 'Nivea-Soap' to 'Lynx', the JavaScript will work fine and show the new products, but the URL stays at ?filter='Nivea-Soap'. Is there anything I can do about this? Of course, I don't want to reload the page with a new query string every time they change the search criteria. Somehow it'd be great to move the ?filter=... criteria into POST data instead - but how to do this with a link, I don't know...
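    One hedged sketch in the page's own JavaScript, using the HTML5 history API where the browser supports it: rewrite the query string in place as the filter changes, with no reload (the function name and page path are hypothetical, taken from the example above):

        // keep the URL in sync with the current filter text, without reloading
        function onFilterChanged(value) {
            var url = '/my-site/all-my-bath-soaps?filter=' + encodeURIComponent(value);
            if (window.history && window.history.replaceState) {
                window.history.replaceState(null, '', url);
            }
        }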

    Read the article

  • Ubuntu getting wrong hostname from DHCP

    - by sam
    When provisioning new Ubuntu Precise (12.04) servers, the hostname they're getting seems to be generated from the DNS search path, not a reverse lookup on the hostname. Take the following configuration. BIND is configured with the hostname and reverse name.

    Normal zone:

        $TTL 600
        $ORIGIN srv.local.net.
        @              IN SOA  ns0.local.net. hostmaster.local.net. (
                               2014082101 10800 3600 604800 600 )
        @              IN NS   ns0.local.net.
        @              IN MX 5 mail.local.net.
        my-new-server  IN A    10.32.2.30

    And reverse:

        @     IN SOA ns0.local.net. hostmaster.local.net. (
                     2014082101 10800 3600 604800 600 )
        @     IN NS  ns0.local.net.
        $ORIGIN 32.10.in-addr.arpa.
        30.2  IN PTR my-new-server.srv.local.net.

    Then DHCPD is configured to hand out static leases based on MAC addresses like so:

        subnet 10.32.2.0 netmask 255.255.254.0 {
            option subnet-mask 255.255.254.0;
            option routers 10.32.2.1;
            option domain-name-servers 10.32.2.1;
            option domain-name "util.of1.local.net of1.local.net srv.local.net";
            site-option-space "pxelinux";
            option pxelinux.magic f1:00:74:7e;
            if exists dhcp-parameter-request-list {
                option dhcp-parameter-request-list = concat(option dhcp-parameter-request-list,d0,d1,d2,d3);
            }
            group {
                option pxelinux.configfile "pxelinux.cfg/pxeboot";
                host my-new-server {
                    fixed-address my-new-server.srv.local.net;
                    hardware ethernet aa:aa:aa:bb:bb:bb;
                }
            }
        }

    So the hostname should be my-new-server.srv.local.net; however, when building an Ubuntu 12.04 node, the hostname ends up as my-new-server.util.of1.local.net. When building Lucid (10.04) hosts, the hostname is correct; it's only on Precise/12.04 nodes that we have the problem. Doing a normal and reverse lookup on the host and IP returns the correct result:

        Sams-MacBook-Pro:~ sam$ host my-new-server
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host my-new-server.srv.local.net
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host 10.32.2.30
        30.2.32.10.in-addr.arpa domain name pointer my-new-server.srv.local.net.

    The contents of the hosts file are incorrect too:

        127.0.0.1 localhost
        127.0.1.1 my-new-server.util.of1.local.net of1.local.net srv.local.net my-new-server

    So it looks like when it creates the hosts file, it puts the entire contents of the DNS search path into the local address, so the FQDN according to the server is the short hostname as defined, then the first domain in the search path. Is there a way to get around this behaviour, or fix this so it gets the hostname correctly?
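    A hedged guess at a config-side fix: the multi-domain string above lives in option domain-name, which clients may treat as a single domain (taking just the first word) when composing the FQDN. ISC dhcpd has a separate domain-search option (RFC 3397) for the search list, so something like the following might keep the FQDN clean while preserving the search path:

        option domain-name "srv.local.net";
        option domain-search "util.of1.local.net", "of1.local.net", "srv.local.net";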

    Read the article
