Search Results

Search found 21434 results on 858 pages for 'query master'.


  • PowerDNS: multiple supermasters and transferring a domain

    - by blauwblaatje
    Hi, I've got a setup with multiple supermasters (BIND) and multiple superslaves (PowerDNS). It all seems to work just fine: pdns is updated when I add or change a domain. But when I want to migrate a domain from one master to another, pdns doesn't like it. It tells me the new server isn't a master for this domain, although I deleted the domain on the old server. I think part of the problem is that pdns doesn't get an update when a domain is deleted, which would also explain a lot of dead domains in my pdns. It looks like the slave is constantly polling a server and getting RCODE=5 (REFUSED) back: the master isn't aware of the domain, but the slave thinks the master still serves it. Anyone familiar with this problem?
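
    For reference, a hypothetical cleanup sketch (not from the post), assuming the superslaves use the gmysql backend with its default domains/records schema: removing the stale slave zone by hand stops pdns from polling the old master for it.

        -- Hypothetical cleanup, assuming the gmysql backend's default schema;
        -- the zone name is a placeholder.
        DELETE FROM records WHERE domain_id = (SELECT id FROM domains WHERE name = 'filial.example.org');
        DELETE FROM domains WHERE name = 'filial.example.org';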

    Read the article

  • Executing JavaScript after a page is loaded via AJAX - doesn't work

    - by Deukalion
    I'm trying to load a page with AJAX, but when the page I fetch includes JavaScript code, it doesn't execute. Why? Simple code in my AJAX page:

        <script type="text/javascript">
        alert("Hello");
        </script>

    ...and it doesn't execute. I'm trying to use the Google Maps API and add markers with AJAX, so whenever I add one I execute an AJAX page that gets the new marker, stores it in a database and should add the marker "dynamically" to the map. But since I can't execute a single JavaScript function this way, what do I do? Are the functions that I've defined on the page beforehand protected or private?

    ** UPDATED WITH AJAX FUNCTION **

        function ajaxExecute(id, link, query) {
            if (query != null) {
                query = query.replace("amp;", "");
            }
            if (window.XMLHttpRequest) {
                // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else {
                // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    if (id != null) {
                        document.getElementById(id).innerHTML = xmlhttp.responseText;
                    }
                }
            }
            if (query == null) {
                xmlhttp.open("GET", link, true);
            } else {
                if (query.substr(0, 1) != "?") {
                    xmlhttp.open("GET", link + "?" + query, true);
                } else {
                    xmlhttp.open("GET", link + query, true);
                }
            }
            xmlhttp.send();
        }
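
    A minimal workaround sketch (my own illustration, not from the post): scripts injected via innerHTML are never executed by the browser, so one common approach is to pull them back out of the injected markup and evaluate them by hand.

        // Hypothetical helper; call it right after the line
        // document.getElementById(id).innerHTML = xmlhttp.responseText;
        function runInlineScripts(id) {
            var scripts = document.getElementById(id).getElementsByTagName("script");
            for (var i = 0; i < scripts.length; i++) {
                // evaluate in the global scope so functions defined earlier on the page stay reachable
                window.eval(scripts[i].text || scripts[i].innerHTML);
            }
        }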

    Read the article

  • Executing sequential stored procedures; works in query analyzer, doesn't in my .NET application

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDB, which has a stored procedure called 'CreateAudit', which is a passthrough stored procedure to another database on the same machine called MyOtherDB, with a stored procedure called 'CreateAudit' as well. In other words, in MyDB I have CreateAudit, which does the following: EXEC MyOtherDB.dbo.CreateAudit. I call the MyDB CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode):

        Result = CreateAudit(recordId, "Opened")

    One line after that, I call:

        Result2 = CreateAudit(recordId, "Closed")

    The second call is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times or so the record is successfully marked closed - the other 19 times nothing happens, but I do not get an error! Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning - any ideas why this is happening? Thanks for any responses; very interested in hearing how this can happen.
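
    One thing worth ruling out (a sketch of mine, not from the thread; the parameter names are hypothetical): extra "N rows affected" messages from nested procedures sometimes trip up data layers, so suppressing them with SET NOCOUNT ON can make the behaviour deterministic.

        ALTER PROCEDURE dbo.CreateAudit
            @recordId INT,
            @status   VARCHAR(20)
        AS
        BEGIN
            SET NOCOUNT ON;  -- suppress DONE_IN_PROC messages before the passthrough call
            EXEC MyOtherDB.dbo.CreateAudit @recordId, @status;
        END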

    Read the article

  • Why does my Doctrine DBAL query return no results when quoted?

    - by braveterry
    I'm using the Doctrine DataBase Abstraction Layer (DBAL) to perform some queries. For some reason, when I quote a parameter before passing it to the query, I get back no rows. When I pass it unquoted, it works fine. Here's the relevant snippet of code I'm using:

        public function get($game) {
            load::helper('doctrinehelper');
            $conn = doctrinehelper::getconnection();
            $statement = $conn->prepare('SELECT games.id as id, games.name as name, games.link_url, games.link_text, services.name as service_name, image_url
                                         FROM games, services
                                         WHERE games.name = ? AND services.key = games.service_key');
            $quotedGame = $conn->quote($game);
            load::helper('loghelper');
            $logger = loghelper::getLogger();
            $logger->debug("Quoted Game: $quotedGame");
            $logger->debug("Unquoted Game: $game");
            $statement->execute(array($quotedGame));
            $resultsArray = $statement->fetchAll();
            $logger->debug("Number of rows returned: " . count($resultsArray));
            return $resultsArray;
        }

    Here's what the log shows:

        01/01/11 17:00:13,269 [2112] DEBUG root - Quoted Game: 'Diablo II Lord of Destruction'
        01/01/11 17:00:13,269 [2112] DEBUG root - Unquoted Game: Diablo II Lord of Destruction
        01/01/11 17:00:13,270 [2112] DEBUG root - Number of rows returned: 0

    If I change this line:

        $statement->execute(array($quotedGame));

    to this:

        $statement->execute(array($game));

    I get this in the log:

        01/01/11 16:51:42,934 [2112] DEBUG root - Quoted Game: 'Diablo II Lord of Destruction'
        01/01/11 16:51:42,935 [2112] DEBUG root - Unquoted Game: Diablo II Lord of Destruction
        01/01/11 16:51:42,936 [2112] DEBUG root - Number of rows returned: 1

    Have I fat-fingered something?
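
    A likely explanation, added for context (my note, not the accepted answer): a bound parameter is escaped by the driver itself, so quoting it first makes the literal quote characters part of the value being compared - the query then looks for a name that actually contains quotes. A minimal sketch:

        // Pass the raw value; the prepared statement takes care of escaping.
        $statement = $conn->prepare('SELECT games.id FROM games WHERE games.name = ?');
        $statement->execute(array($game));   // not array($conn->quote($game))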

    Read the article

  • How to query JDO persistent objects in unowned relationship model?

    - by Paul B
    Hello, I'm trying to migrate my app from PHP and an RDBMS (MySQL) to Google App Engine and am having a hard time figuring out the data model and relationships in JDO. In my current app I use a lot of JOIN queries like:

        SELECT users.name, comments.comment
        FROM users, comments
        WHERE users.user_id = comments.user_id AND users.email = '[email protected]'

    As I understand it, JOIN queries are not supported in this way, so the only(?) way to store data is using unowned relationships and "foreign" keys. There is documentation on this, but no useful examples. So far I have something like this:

        @PersistenceCapable
        public class Users {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            @Persistent
            private String name;

            @Persistent
            private String email;

            @Persistent
            private Set<Key> commentKeys;

            // Accessors...
        }

        @PersistenceCapable
        public class Comments {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            @Persistent
            private String comment;

            @Persistent
            private Date commentDate;

            @Persistent
            private Key userKey;

            // Accessors...
        }

    So, how do I get a list with the commenter's name, comment and date in one query? I see how I could probably get away with 3 queries, but that seems wrong and would create unnecessary overhead. Please help me out with some code examples. -- Paul.
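
    For illustration, a rough sketch of mine (not from the post) of the usual two-step pattern: fetch the user by email, then query comments by the stored userKey. Joins simply aren't available in the datastore, so a small fixed number of queries replaces them.

        // Needs javax.jdo.PersistenceManager, javax.jdo.Query, java.util.List.
        // Hypothetical fragment: pm is an already-open PersistenceManager and the
        // email value is a placeholder.
        Query userQuery = pm.newQuery(Users.class, "email == e");
        userQuery.declareParameters("String e");
        List<Users> users = (List<Users>) userQuery.execute("someone@example.com");
        Users user = users.get(0);

        Query commentQuery = pm.newQuery(Comments.class, "userKey == k");
        commentQuery.declareParameters("com.google.appengine.api.datastore.Key k");
        List<Comments> comments = (List<Comments>) commentQuery.execute(user.getKey());
        // user.getName() plus each comment's getComment()/getCommentDate() give the
        // rows that the SQL JOIN used to produce.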

    Read the article

  • How to recover from failed Mysql schema update, with replication?

    - by OmerGertel
    I have two MySQL servers configured with master-slave replication. Before we deploy a new application version we: 1) STOP SLAVE, 2) take a MySQL dump of the slave. However, if a mistake is made during the deployment of the new schema version (a table is dropped by mistake, for example), having the slave intact doesn't help. Our service is write-intensive, so we can't turn it back on until we have a working master. If we now load the MySQL dump back into the master, it will take a long time, during which our service remains down. What is the best practice for recovering from such a mistake? How can I set up the system so I can easily promote the slave, turn our service back on, and only then tend to the broken database? Mainly, I'm worried about re-syncing the slave and the master after changes are made on the slave.
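
    A rough promotion outline (my own sketch, not from the thread), assuming the slave has applied everything up to the bad deploy and the application can be repointed at it:

        -- On the slave being promoted:
        STOP SLAVE;
        RESET SLAVE;                 -- forget the old master's coordinates
        SET GLOBAL read_only = 0;
        -- Repoint the application here, bring the service back up, then rebuild the
        -- broken ex-master later from a fresh dump of this server and re-attach it
        -- as a slave with CHANGE MASTER TO ... so the pair is symmetric again.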

    Read the article

  • How do I add ROW_NUMBER to a LINQ query or Entity?

    - by Whozumommy
    I'm stumped by this easy data problem. I'm using the Entity Framework and have a database of products. My results page returns a paginated list of these products. Right now my results are ordered by the number of sales of each product, so my code looks like this:

        return Products.OrderByDescending(u => u.Sales.Count());

    This returns an IQueryable dataset of my entities, sorted by the number of sales. I want my results page to show the rank of each product (in the dataset). My results should look like this:

        Page #1
        1. Bananas
        2. Apples
        3. Coffee

        Page #2
        4. Cookies
        5. Ice Cream
        6. Lettuce

    I'm expecting that I just need to add a column to my results using the SQL ROW_NUMBER function... but I don't know how to add this column to my results datatable. My results page does contain a foreach loop, but since I'm using a paginated set I'm guessing that using that loop counter to fake a ranking number would NOT be the best approach. So my question is, how do I add a ROW_NUMBER column to my query results in this case?
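
    One possible approach (a sketch of mine, not the accepted answer; pageIndex and pageSize are hypothetical paging parameters): fetch the page with SQL, then number it in memory with the indexed Select overload, offset by where the page starts.

        var ranked = Products
            .OrderByDescending(p => p.Sales.Count())
            .Skip(pageIndex * pageSize)
            .Take(pageSize)
            .AsEnumerable()                 // finish in memory; EF can't translate the indexed Select
            .Select((p, i) => new { Rank = pageIndex * pageSize + i + 1, Product = p });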

    Read the article

  • How to set up Heartbeat to run a service only at one node

    - by Jon Skarpeteig
    I have two Ubuntu 12.04 servers which run MySQL in a master-master setup, with mmm as the manager. How can I set up Heartbeat to make sure that mmm only runs on one node at a time?

    Edit to explain more clearly. My setup:

            ---------VIP (10.0.0.123)------
            |                             |
          Node1                         Node2

    where both Node1 and Node2 run:

        MySQL
        Multi-Master Replication Manager for MySQL (mmm)
        Heartbeat

    I only want a single write-enabled MySQL node, and I can only have one mmm running at a time, or I'll get collisions between the managers.
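
    A minimal v1-style Heartbeat sketch of mine (not from the thread), assuming the mmm monitor ships an init script named mysql-mmm-monitor: listing it in haresources ties it to the VIP, so Heartbeat only ever starts it on the node that currently holds the address.

        # /etc/ha.d/haresources (identical on both nodes)
        # node1 is the preferred owner; on failover the whole resource group moves to node2.
        node1 IPaddr::10.0.0.123/24/eth0 mysql-mmm-monitor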

    Read the article

  • SQL Query: How to determine "Seen during N hour" if given two DateTime time stamps?

    - by efess
    Hello all. I'm writing a statistics application based on a SQLite database. There is a table which records when users log in and log out (SessionStart, SessionEnd DateTimes). What I'm looking for is a query that can show which hours users have been logged in, in sort of a line-graph way - so between the hours of 12:00 and 1:00AM there were 60 users logged in (at any point), between the hours of 1:00 and 2:00AM there were 54 users logged in, etc. And I want to be able to run a SUM of this, which is why I can't bring the records into .NET and iterate through them that way. I've come up with a rather primitive approach, a subquery for each hour of the day, but this approach has proved to be very slow. I need to be able to calculate this for a couple hundred thousand records in a split second.

        SELECT
            case when (strftime('%s', datetime(date(sessionstart), '+0 hours')) > strftime('%s', sessionstart)
                   AND strftime('%s', datetime(date(sessionstart), '+0 hours')) < strftime('%s', sessionend))
                   OR (strftime('%s', datetime(date(sessionstart), '+1 hours')) > strftime('%s', sessionstart)
                   AND strftime('%s', datetime(date(sessionstart), '+1 hours')) < strftime('%s', sessionend))
                   OR (strftime('%s', datetime(date(sessionstart), '+0 hours')) < strftime('%s', sessionstart)
                   AND strftime('%s', datetime(date(sessionstart), '+1 hours')) > strftime('%s', sessionend))
                 then 1 else 0 end as hour_zero,
            ... hour_one,
            ... hour_two,
            ........ hour_twentythree
        FROM UserSession

    I'm wondering what a better way is to determine whether a session was seen during a particular hour (best case, how many times it crossed that hour if the user was logged in on multiple days, but that's not necessary). The only other idea I had is to have an "hour" table specific to this, and just tally up the hours the user has been seen at runtime, but I feel like this is more of a hack than the previous SQL. Any help would be greatly appreciated!
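
    For what it's worth, a sketch of mine (not from the post, untested against the real schema): with a small helper table holding one row per hour, the standard interval-overlap test replaces the 24 CASE expressions, and grouping by the hour gives one output row per hour.

        -- hours(hour) is a hypothetical helper table holding the integers 0 through 23.
        SELECT h.hour,
               COUNT(*) AS sessions_seen
        FROM hours h
        JOIN UserSession s
          ON strftime('%s', s.SessionStart) < strftime('%s', datetime(date(s.SessionStart), '+' || (h.hour + 1) || ' hours'))
         AND strftime('%s', s.SessionEnd)   > strftime('%s', datetime(date(s.SessionStart), '+' || h.hour || ' hours'))
        GROUP BY h.hour
        ORDER BY h.hour;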

    Read the article

  • Subdocument in Word won't save

    - by ChrisW
    Because I know Word has a history of not liking very large documents (my supervisor specifically told me not to use LaTeX... grr), I decided to learn the Master document / subdocument feature of Word when writing my PhD thesis. I have the title page / table of contents etc. in the master document, and each chapter as a separate document. However, when I save the master document, it appears to save all the chapter documents apart from one (Chapter 4), for which it brings up the Save Document dialog box, helpfully with "Chapter4.docx" in the "Save as" box (n.b. Chapter4.docx is not open). Clicking Save does nothing, and doesn't make the dialog box go away. Saving as a different document means that my changes aren't reflected in the same document. There must be some reason Word doesn't like this particular document, but I've got no idea why - there's nothing special in it that isn't in any of the other chapters. I have tried closing all documents, renaming Chapter4.docx, opening the master document, expanding all documents, OKing the warning that Chapter4.docx does not exist, and inserting the 'new' document, but even when I save the master document it still won't save the new Chapter4 document. If anyone knows any reason why Word is acting like this (or if I'm doing anything stupid), I'll be eternally grateful (p.s. sorry for the long rambling message. It's late; I've been working on my PhD 4.5 years, I really really want to throw this computer out the window, and I hope people are kind enough not to downvote this question because of its rambling nature!) Update: With Word closed, I've tried to delete Chapter4.docx (having made a backup!) - but I get a warning that it can't be deleted because it's open in Microsoft Word... these files are on a network drive and the same problems are happening on 2 different computers. I could log in to the filestore through ssh and force the file to be deleted, but I'm curious to know why this is happening!

    Read the article

  • Rebasing a branch which is public

    - by Dror
    I'm failing to understand how to use git-rebase, so consider the following example. Let's start a repository in ~/tmp/repo:

        $ git init

    Then add a file foo:

        $ echo "hello world" > foo

    which is then added and committed:

        $ git add foo
        $ git commit -m "Added foo"

    Next, I started a remote repository. In ~/tmp/bare.git I ran:

        $ git init --bare

    In order to link repo to bare.git I ran:

        $ git remote add origin ../bare.git/
        $ git push --set-upstream origin master

    Next, let's branch, add a file and set an upstream for the new branch b1:

        $ git checkout -b b1
        $ echo "bar" > foo2
        $ git add foo2
        $ git commit -m "add foo2 in b1"
        $ git push --set-upstream origin b1

    Now it is time to switch back to master and change something there:

        $ echo "change foo" > foo
        $ git commit -a -m "changed foo in master"
        $ git push

    At this point in master the file foo contains change foo, while in b1 it is still hello world. Finally, I want to sync b1 with the progress made in master:

        $ git checkout b1
        $ git fetch origin
        $ git rebase origin/master

    At this point git st returns:

        # On branch b1
        # Your branch and 'origin/b1' have diverged,
        # and have 2 and 1 different commit each, respectively.
        #   (use "git pull" to merge the remote branch into yours)
        # nothing to commit, working directory clean

    At this point the content of foo in the branch b1 is change foo as well. So what does this warning mean? I expected I should do a git push, but git suggests doing git pull... According to this answer, this is more or less it, and in his comment @FrerichRaabe explicitly says that I don't need to do a pull. What's going on here? What is the danger, and how should one proceed? How should the history be kept consistent? What is the interplay between the case described above and the following citation:

        Do not rebase commits that you have pushed to a public repository.

    taken from the Pro Git book. I guess it is somehow related, and if not I would love to know why. What's the relation between the above scenario and the procedure I described in this post?
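
    For context, a short sketch (mine, not part of the question): after the rebase, origin/b1 still holds the pre-rebase commit, so the "diverged" message is expected; the two usual ways out are:

        # If nobody else builds on origin/b1, replace it with the rebased history
        # (--force-with-lease needs a reasonably recent git; plain -f otherwise):
        $ git push --force-with-lease origin b1

        # If b1 is genuinely shared, don't rebase it; merge master in instead, so
        # published commits are never rewritten:
        $ git checkout b1
        $ git merge origin/master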

    Read the article

  • Python Twisted Client Connection Lost

    - by MovieYoda
    I have this twisted client, which connects with a twisted server having an index. I ran this client from the command line. It worked fine. Now I modified it to run in a loop (see main()) so that I can keep querying. But the client runs only once. The next time it simply says connection lost \n Connection lost - goodbye!. What am I doing wrong? In the loop I am reconnecting to the server, is that wrong?

        from twisted.internet import reactor
        from twisted.internet import protocol
        from settings import AS_SERVER_HOST, AS_SERVER_PORT

        # a client protocol
        class Spell_client(protocol.Protocol):
            """Once connected, send a message, then print the result."""

            def connectionMade(self):
                self.transport.write(self.factory.query)

            def dataReceived(self, data):
                "As soon as any data is received, write it back."
                if data == '!':
                    self.factory.results = ''
                else:
                    self.factory.results = data
                self.transport.loseConnection()

            def connectionLost(self, reason):
                print "\tconnection lost"

        class Spell_Factory(protocol.ClientFactory):
            protocol = Spell_client

            def __init__(self, query):
                self.query = query
                self.results = ''

            def clientConnectionFailed(self, connector, reason):
                print "\tConnection failed - goodbye!"
                reactor.stop()

            def clientConnectionLost(self, connector, reason):
                print "\tConnection lost - goodbye!"
                reactor.stop()

        # this connects the protocol to a server running on port 8090
        def main():
            print 'Connecting to %s:%d' % (AS_SERVER_HOST, AS_SERVER_PORT)
            while True:
                print
                query = raw_input("Query:")
                if query == '':
                    return
                f = Spell_Factory(query)
                reactor.connectTCP(AS_SERVER_HOST, AS_SERVER_PORT, f)
                reactor.run()
                print f.results
            return

        if __name__ == '__main__':
            main()
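
    A possible restructuring, sketched by me (not from the post): reactor.run() can only be called once per process, so instead of restarting it for each query, keep it running and reconnect from clientConnectionLost, stopping the reactor only when the user enters an empty query.

        from twisted.internet import reactor, protocol

        class SpellClient(protocol.Protocol):
            def connectionMade(self):
                self.transport.write(self.factory.query)

            def dataReceived(self, data):
                print "\tresult:", data
                self.transport.loseConnection()

        class SpellFactory(protocol.ClientFactory):
            protocol = SpellClient

            def __init__(self, host, port):
                self.host, self.port = host, port
                self.query = None

            def ask(self):
                # raw_input blocks the reactor, which is acceptable here because
                # nothing else needs to run while we wait for the user.
                self.query = raw_input("Query: ")
                if self.query == '':
                    reactor.stop()          # only stop when the user is done
                else:
                    reactor.connectTCP(self.host, self.port, self)

            def clientConnectionLost(self, connector, reason):
                self.ask()                  # previous query finished; ask for the next one

            def clientConnectionFailed(self, connector, reason):
                print "\tconnection failed"
                reactor.stop()

        if __name__ == '__main__':
            factory = SpellFactory('localhost', 8090)   # placeholder host/port
            reactor.callWhenRunning(factory.ask)
            reactor.run()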

    Read the article

  • Dye Sub printer with specific prints remaining - can I command-line query this?

    - by Jason N
    Hey, I've got a Sony dye-sub printer that holds ink/paper sets - i.e. a fixed amount of ink and paper for ~200 prints. This information is available to me from within Control Panel > Printers > Preferences > Printer Device Information (e.g. currently 189 remaining prints). Is there any way I can get this information from the command line? I'd like to write a little program to tell me when the number of prints gets low (i.e. < 20), rather than suffer the annoying Windows "run out of paper" popup. I've found the Windows VBScript print utilities, but can't seem to find the request I need for this. Any suggestions? Jason

    Read the article

  • DNS Replication on Server 2008 R2

    - by Aaron
    Hi there, I have been trying out public-facing-only DNS servers with Server 2008 R2 Web - I want to set up at least 2 in master/slave replication. Using Microsoft DNS I am able to add the domains into the primary zone on the master DNS server (ns1), add the records OK, and have them visible publicly. On ns2 I can then add the same domain as a secondary zone and get it to replicate / zone transfer fine. Is there a way inside of Windows to have the slave(s) automatically synchronise all the changes from the master? For example, it's OK if I have manually added the domains onto each of the NSes, but if I add a new zone on the master I have to add it on the slave before it replicates. I installed Simple DNS and it has a 'Super Master/Slave' feature which takes care of exactly this: if you add a new domain into the primary zone it is automatically created and kept in sync on ns2, but I would have to buy a licence. All this is non-Active Directory, if that helps. Can anyone advise if it is possible to do this using Microsoft DNS? Many thanks in advance!
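
    One workaround sketch of mine (not from the thread; the server name, zone and IP are placeholders): Microsoft DNS has no supermaster/superslave-style notification, but secondary zone creation can at least be scripted with dnscmd whenever a zone is added on the master.

        rem Run whenever a zone is added on ns1; 192.0.2.10 stands in for ns1's address.
        dnscmd ns2.example.com /ZoneAdd newdomain.com /Secondary 192.0.2.10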

    Read the article

  • Ganglia: divide colors by roles

    - by com
    Sorry for a silly question, I am still a newbie to Ganglia. In Ganglia I monitor a few important metrics for MySQL (seconds behind master, etc.). In addition, I have a few groups of MySQL servers (every group has its own tasks, but all of the groups should be checked for seconds behind master). I would like to know whether it is possible to show all metrics on one page, with different colors for different groups. Right now, for the metric "seconds behind master", I see all MySQL servers with colors for the different states (red is critical, gray is OK). Can I set the color of a graph according to its group? Thanks!

    Read the article

  • A few tables are still out of sync after running mk-table-sync

    - by smusumeche
    I have 1 master and 2 slaves. I am using MySQL 5.1.42 on all servers. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this:

        mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So no locking is needed, because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master:

        mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

        mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but one slave always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB   TBL                CHUNK  CNT_DIFF  CRC_DIFF  BOUNDARIES
        IRC  product            10     0         1         product_id = 147377 AND product_id < 162085
        IRC  post_order_survey  0      0         1         1=1
        IRC  mk_heartbeat       0      0         1         1=1
        IRC  mailing_list       0      0         1         1=1
        IRC  honey_pot_log      0      0         1         1=1
        IRC  product            12     0         1         product_id = 176793 AND product_id < 191501
        IRC  product            18     0         1         product_id = 265041
        IRC  orders             26     0         1         order_id = 694472
        IRC  orders_product     6      0         1         op_id = 935375

    Read the article

  • T-SQL for autogrowth of multiple data files

    - by ddono25
    I can't seem to figure out the problems with my script to alter a SQL Server 2008 database and its file growth. There are two data files and a log file, all of which need to have autogrowth ON. Does this look completely wrong? Thanks!

        USE MASTER
        GO
        ALTER DATABASE BigDB
        MODIFY FILE
        ( NAME = BIGDBPPE,
          FILENAME = "H:\MSSQL\Data\BigDB.mdf",
          MAXSIZE = UNLIMITED,
          FILEGROWTH = 2000MB)

        USE MASTER
        GO
        ALTER DATABASE BigDB
        MODIFY FILE
        ( NAME = BIGDBPPE1,
          FILENAME = "K:\MSSQL\Data\BigDB_data1.ndf",
          MAXSIZE = UNLIMITED,
          FILEGROWTH = 2000MB)

        USE MASTER
        GO
        ALTER DATABASE BigDB
        MODIFY FILE
        ( NAME = BIGDBPPE_log,
          FILENAME = "O:\MSSQL\Data\BigDB_log.ldf",
          MAXSIZE = UNLIMITED,
          FILEGROWTH = 200MB)
        GO
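
    As a point of comparison, a sketch of mine (not from the thread): when only the growth settings are changing, the FILENAME clause can be left out entirely, and T-SQL string literals normally take single quotes (double quotes are treated as identifiers unless QUOTED_IDENTIFIER is off). The logical names are assumed to match sys.database_files.

        USE master;
        GO
        ALTER DATABASE BigDB MODIFY FILE (NAME = BIGDBPPE,     MAXSIZE = UNLIMITED, FILEGROWTH = 2000MB);
        ALTER DATABASE BigDB MODIFY FILE (NAME = BIGDBPPE1,    MAXSIZE = UNLIMITED, FILEGROWTH = 2000MB);
        ALTER DATABASE BigDB MODIFY FILE (NAME = BIGDBPPE_log, MAXSIZE = UNLIMITED, FILEGROWTH = 200MB);
        GO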

    Read the article

  • Puppet claims to be unable to resolve domains even if domain properly resolves

    - by gparent
    I have a fairly simple Puppet setup, one master and one node, both running Debian Squeeze 6.0.4. I have DNS entries for the two machines, client and master respectively. Both the client's and the master's DNS entries resolve correctly on both machines to the right IPs. On my client, I have this configuration:

        [main]
        server = master.example.org
        logdir=/var/log/puppet
        vardir=/var/lib/puppet
        ssldir=/var/lib/puppet/ssl
        rundir=/var/run/puppet
        factpath=$vardir/lib/facter
        pluginsync=true
        templatedir=/var/lib/puppet/templates

    Key exchange seems to fail, according to this message in /var/log/syslog:

        localhost puppet-agent[11364]: Could not request certificate: getaddrinfo: Name or service not known

    Why is resolution not working only for Puppet?

    Read the article

  • LogParser query to grab only external IP addresses from IIS logs?

    - by Josh
    I'm working on a public website that is used by both external visitors and internal employees. I'm after the external visitor hits, but I can't think of a good way to filter out the internal IP ranges. Using LogParser, what is the best way to filter IISW3C logs by IP range? This is all I've come up with so far, which can't possibly be the best or most efficient way. WHERE [c-ip] NOT LIKE (10.10.%, 10.11.%) Any help is appreciated.
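
    A sketch of one way to express this (my own, untested against real logs): LogParser's dialect accepts multiple NOT LIKE predicates, so the internal ranges can simply be chained with AND.

        logparser -i:IISW3C -o:CSV "SELECT date, c-ip, cs-uri-stem INTO ExternalHits.csv FROM ex*.log WHERE c-ip NOT LIKE '10.10.%' AND c-ip NOT LIKE '10.11.%'"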

    Read the article

  • Mysql replication, Slow resyncing of slave after an error

    - by James Hackett
    I have a slave that got an error about a month or so ago and got way behind the master. I fixed the error and it is now playing catch-up with the master, but it's going very slowly - about 1.3x real time. I was using less than 10% of the DB resources when these writes were first happening, so the speed of the server shouldn't be an issue. Are there any settings I can switch to help the slave catch up with the master?
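
    For reference, a sketch of mine (not from the thread): replication applies the binlog in a single thread on 5.x, so the usual lever is to relax durability on the slave just for the duration of the catch-up; both variables below are dynamic and should be restored once Seconds_Behind_Master reaches 0.

        SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- flush the InnoDB log once per second instead of per commit
        SET GLOBAL sync_binlog = 0;                     -- let the OS decide when to sync the binlog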

    Read the article

  • Can I join two tables whereby the joined table is sorted by a certain column?

    - by Ferdy
    I'm not much of a database guru, so I need some help on a query I'm working on. In my photo community project I want to richly visualize tags by not only showing the tag name and counter (# of images inside them); I also want to show a thumb of the most popular image inside the tag (most karma). The table setup is as follows:

        Image table holds basic image metadata, important is the karma field
        Imagefile table holds multiple entries per image, one for each format
        Tag table holds tag definitions
        Tag_map table maps tags to images

    In my usual trial-and-error query authoring I have come this far:

        SELECT *
        FROM (SELECT tag.name, tag.id, COUNT(tag_map.tag_id) as cnt
              FROM tag
              INNER JOIN tag_map ON (tag.id = tag_map.tag_id)
              INNER JOIN image ON tag_map.image_id = image.id
              INNER JOIN imagefile on image.id = imagefile.image_id
              WHERE imagefile.type = 'smallthumb'
              GROUP BY tag.name
              ORDER BY cnt DESC) as T1
        WHERE cnt > 0
        ORDER BY cnt DESC

    [column clause of inner query snipped for the sake of simplicity]

    This query gives me somewhat what I need. The outer query makes sure that only tags are returned for which there is at least 1 image. The inner query returns the tag details, such as its name, count (# of images) and the thumb. In addition, I can sort the inner query as I want (by most images, alphabetically, most recent, etc). So far so good. The problem, however, is that this query does not match the most popular image (most karma) of the tag; it seems to always take the most recent one in the tag. How can I make sure that the most popular image is matched with the tag?
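
    One way to get there, sketched by me (not the accepted answer, and assuming image_url lives in the imagefile table): pick the top-karma image per tag with a correlated subquery and fetch its small thumbnail alongside the per-tag count.

        SELECT tag.id,
               tag.name,
               (SELECT COUNT(*) FROM tag_map tm WHERE tm.tag_id = tag.id) AS cnt,
               (SELECT f.image_url
                  FROM imagefile f
                  INNER JOIN image i    ON i.id = f.image_id
                  INNER JOIN tag_map tm ON tm.image_id = i.id
                 WHERE tm.tag_id = tag.id
                   AND f.type = 'smallthumb'
                 ORDER BY i.karma DESC
                 LIMIT 1) AS top_thumb
        FROM tag
        WHERE EXISTS (SELECT 1 FROM tag_map tm WHERE tm.tag_id = tag.id)
        ORDER BY cnt DESC;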

    Read the article

  • How to take mysql replication backup

    - by user53864
    I have a MySQL master-master replication setup with a slave for each master (only one master is used for reads/writes at a time) on Ubuntu Server. I'm wondering what would be the best way to schedule backups of the replicated databases with mysqldump. I have the following questions, which I could not resolve on my own:

        Is scheduling a mysqldump backup on the masters safe for replication?
        Is it safe to connect to the masters with GUI applications (Workbench) for database manipulation (reads and writes by developers)?

    Any inputs are welcome.
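
    For what it's worth, a dump command sketch of mine (not from the thread), assuming InnoDB tables; MyISAM would need --lock-all-tables instead of --single-transaction.

        # --master-data=2 records the binlog coordinates as a comment, so the dump
        # can later seed a new slave without guessing the position.
        mysqldump --single-transaction --master-data=2 --routines --all-databases \
            -u backup -p > /backups/$(date +%F)-active-master.sql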

    Read the article

  • Something confusing about FormsOf (Sql server Full-Text searching)

    - by AspOnMyNet
    Hi, I'm using SQL Server 2008.

    1) "A given <simple_term> within a <generation_term> will not match both nouns and verbs." If I understand the above text correctly, then the query

        SELECT * FROM someTable WHERE CONTAINS ( *, ' FORMSOF ( INFLECTIONAL, park ) ' )

    should search for either nouns or verbs derived from the root word "park", but not for both? Thus, out of two rows, one containing the noun parks and the other the verb parking, the above query should return just one of the two rows? But as it turns out, the query returns both rows, so are my assumptions a bit off, or is the above quote wrong?!

    2) From MSDN: "If freetext_string is enclosed in double quotation marks, a phrase match is instead performed; stemming and thesaurus are not performed." According to the above quote, the following query shouldn't return rows containing the strings surfing (due to the query not performing stemming), surf (due to the query performing phrase matching and not individual word matching) or surfing with suzy's sister (due to the query not performing stemming and performing phrase matching rather than word matching), but it does. Thus, it appears that even when freetext_string is enclosed in double quotation marks, stemming is still performed, while phrase matching is not:

        SELECT * FROM someTable WHERE FREETEXT( *, ' "surf sister" ' )

    So is the above quote wrong, or...? Thanks
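
    For comparison, a quick sketch of mine (not from the post) of two queries whose behaviour is less ambiguous: CONTAINS with a quoted string really is an exact phrase match with no stemming, and stemming can be requested per word explicitly.

        -- Exact phrase only; will not match "surfing" or the words separately.
        SELECT * FROM someTable WHERE CONTAINS ( *, ' "surf sister" ' );

        -- Explicit stemming on each word; both inflected forms must appear somewhere.
        SELECT * FROM someTable
        WHERE CONTAINS ( *, ' FORMSOF ( INFLECTIONAL, surf ) AND FORMSOF ( INFLECTIONAL, sister ) ' );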

    Read the article

  • BIND: forward 1st level zone

    - by raven
    First of all: sorry for the language, English is not my primary language. I have a star-like DNS structure with many filials (more than 2):

        filialNS_1.filial_1.city.local <---- ns.main.city.local <---- filialNS_2.filial_2.city.local

    ns.main.city.local is a slave of all the filials' zones
    filialNS_1 is master of filial_1.city.local
    filialNS_2 is master of filial_2.city.local
    filialNS_N is master of filial_N.city.local

    I want to:

        serve DNS queries for xxx.filial_N.city.local with filialNS_N.filial_N.city.local
        forward all queries for xxx.xxx.xxx.local from filialNS_N to ns.main.city.local
        forward other queries to our provider's DNS on the filial (or google-public-dns or anything else)

    FILIAL CONFIG (named.conf):

        zone "filial_1.city.local" {
            type master;
            file "/etc/namedb/dynamic/filial_1.city.local";
            allow-update { key DHCP_UPDATER; };
            allow-transfer { <ns.main.city.local IP address> };
        };

        zone "2.76.10.in-addr.arpa" {
            type master;
            file "/etc/namedb/dynamic/2.76.10.in-addr.arpa";
            allow-update { key DHCP_UPDATER; };
            allow-transfer { <ns.main.city.local IP address> };
        };

        zone "local." {
            type forward;
            forward only;
            forwarders { <ns.main.city.local IP address> };
        };

    nslookup server.filial_1.city.local - works fine

        nslookup server.main.city.local
        Server:   127.0.0.1
        Address:  127.0.0.1#53
        ** server can't find server.main.city.local: NXDOMAIN

    Where am I going wrong?

    Read the article
