Search Results

Search found 5233 results on 210 pages for 'records'.

Page 114/210 | < Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >

  • Can one have multiple name servers that don't all belong to the same TLD/provider?

    - by Simon
    In light of the GoDaddy outage we updated the name server list for our domain to include an additional name server provider. The list looks something like this: ns61.domaincontrol.com ns54.domaincontrol.com ns1.dreamhost.com ns2.dreamhost.com Both GoDaddy and DreamHost have zone entries to handle the A and MX records. The idea is that if one provider goes down, the other will be a fallback. However, when I tested my config with http://www.intodns.com/ I got a warning that the SOA serials do not agree. Have I misunderstood some fundamentals of name-server config? What can I do to prevent future problems?
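
    As a rough way to confirm what intodns is warning about, one could query the SOA serial at each provider directly and compare the values; a sketch using the name servers from the question (example.com stands in for the real domain):

        dig +short SOA example.com @ns61.domaincontrol.com
        dig +short SOA example.com @ns1.dreamhost.com

    If the third field (the serial) differs between providers, the two copies of the zone are being edited independently rather than transferred from a single master, which is usually what triggers that warning.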

  • Oracle LogMiner (Unsupported Operation)

    - by Sarith
    Hi all, I'm currently using Oracle 11g on RHEL5. I'm digging into what is in the archived log files. When I query v$logmnr_contents, I see many transactions with the UNSUPPORTED operation. What do these unsupported transactions mean? I suspect they are the reason my database generates so many archived logs. Moreover, I'm using global temporary tables for generating reports, and I have discovered that inserts and deletes on those temporary tables are also recorded in the archived log files. What can I do to reduce these recorded transactions? Regards, Sarith
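
    One way to see which operations dominate the redo volume is to aggregate the LogMiner view by operation and segment; a minimal sketch (it assumes the relevant logs are already loaded into the LogMiner session):

        SELECT operation, seg_owner, seg_name, COUNT(*) AS cnt
          FROM v$logmnr_contents
         GROUP BY operation, seg_owner, seg_name
         ORDER BY cnt DESC;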

  • The Boss Answer: What is a relational database?

    - by kce
    I'm mostly a system administrator and I don't directly work with databases other than installing them, setting up accounts, granting privileges, and so on. I realized that if The Boss walked up to me and asked, "What is a relational database?" I probably couldn't give a satisfactory answer... I'd maybe mumble something about data being stored and organized by categories which you can query with a special programming language (i.e., SQL). So could someone give a good "Boss Answer" for what a relational database is? And maybe how it's different from just storing data on a file server? Bonus points for clever but accessible analogies and explaining tables, columns, records and fields. I'd define a "Boss Answer" as a quick one- (maybe two-) paragraph explanation for non-technical folks... mostly your Boss, on those rare occasions they actually ask you what it is you do all day.
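
    If a tiny concrete example helps the explanation along, a relational table is just a named grid: each column is a field, each row is a record, and queries pull out the rows that match; a hypothetical sketch:

        CREATE TABLE employees (
            id         INT PRIMARY KEY,   -- each row is one record
            name       VARCHAR(100),      -- each column is one field
            department VARCHAR(50)
        );

        SELECT name FROM employees WHERE department = 'Accounting';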

  • Web based library management (FOSS) application

    - by pgb
    I'm looking for a FOSS web-based library management application. I want to manage the few books we have at our company, and I'm looking for applications that provide a simple web interface to: Add new books. Search for books. (Not Required) Place holds on books. My ideal would be something like Delicious Library, but web based and open source. I've looked at Koha (which I could not install), Emilda, which seems to be a dead project (last emails are from 2006 and I could not install it) and VuFind which looks awesome but doesn't provide (or I could not find) an interface to add new records to the library. Any suggestions?

  • In MySQL 5.1 InnoDB, does the maximum length of a VARCHAR affect secondary index size?

    - by e_tothe_ipi
    Assuming the data is the same either way, does the maximum length of the VARCHAR affect the space usage of a secondary index? Does InnoDB use fixed-length records for indexes? Assume that we're talking about MySQL 5.1, with the InnoDB COMPRESSED table format and that the field in question is defined as a VARCHAR with some length less than or equal to 255 (so that it uses only one byte for the offset). Here is the use case: I have a server with a very large table (several gigabytes). One of the fields is currently VARCHAR(7). We need it a little longer and we are thinking of making it VARCHAR(255), but we are worried that it will bloat the index.
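
    One hedged way to answer this empirically is on a copy of the table: note the index size, widen the column, and compare (schema, table, and column names below are placeholders):

        SELECT table_name, data_length, index_length
          FROM information_schema.TABLES
         WHERE table_schema = 'mydb' AND table_name = 'mytable';

        ALTER TABLE mytable MODIFY mycolumn VARCHAR(255);

        -- re-run the first query and compare index_length before and after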

  • MySQL Query GROUP_CONCAT Over Multiple Rows

    - by PeteGO
    I'm getting name and address data out of generic question / answer data to create some kind of normalised reporting database. The query I've got uses group_concat and works for individual sets of questions but not for multiple sets. I've tried to simplify what I'm doing by using just forename and surname and just 3 records, 2 for 1 person and 1 for another. In reality though there are more than 300,000 records. Example of results with qs.Id = 1. QuestionSetId Forename Surname ------------------------------------------------------- 1 Bob Jones Example of results with qs.Id IN (1, 2, 3). QuestionSetId Forename Surname ------------------------------------------------------- 3 Bob,Bob,Frank Jones,Jones,Smith What I would like to see for qs.Id IN (1, 2, 3). QuestionSetId Forename Surname ------------------------------------------------------- 1 Bob Jones 2 Bob Jones 3 Frank Smith So how can I make the 2nd example return a separate row for each set of name and address information? I realise the current way the data is stored is "questionable" but I cannot change the way the data is stored. I can get sets of individual answers but not sure how to combine the others. My simplified Schema that I cannot change: CREATE TABLE StaticQuestion ( Id INT NOT NULL, StaticText VARCHAR(500) NOT NULL); CREATE TABLE Question ( Id INT NOT NULL, Text VARCHAR(500) NOT NULL); CREATE TABLE StaticQuestionQuestionLink ( Id INT NOT NULL, StaticQuestionId INT NOT NULL, QuestionId INT NOT NULL, DateEffective DATETIME NOT NULL); CREATE TABLE Answer ( Id INT NOT NULL, Text VARCHAR(500) NOT NULL); CREATE TABLE QuestionSet ( Id INT NOT NULL, DateEffective DATETIME NOT NULL); CREATE TABLE QuestionAnswerLink ( Id INT NOT NULL, QuestionSetId INT NOT NULL, QuestionId INT NOT NULL, AnswerId INT NOT NULL, StaticQuestionId INT NOT NULL); Some example data for only forename and surname. INSERT INTO StaticQuestion (Id, StaticText) VALUES (1, 'FirstName'), (2, 'LastName'); INSERT INTO Question (Id, Text) VALUES (1, 'What is your first name?'), (2, 'What is your forename?'), (3, 'What is your Surname?'); INSERT INTO StaticQuestionQuestionLink (Id, StaticQuestionId, QuestionId, DateEffective) VALUES (1, 1, 1, '2001-01-01'), (2, 1, 2, '2008-08-08'), (3, 2, 3, '2001-01-01'); INSERT INTO Answer (Id, Text) VALUES (1, 'Bob'), (2, 'Jones'), (3, 'Bob'), (4, 'Jones'), (5, 'Frank'), (6, 'Smith'); INSERT INTO QuestionSet (Id, DateEffective) VALUES (1, '2002-03-25'), (2, '2009-05-05'), (3, '2009-08-06'); INSERT INTO QuestionAnswerLink (Id, QuestionSetId, QuestionId, AnswerId, StaticQuestionId) VALUES (1, 1, 1, 1, 1), (2, 1, 3, 2, 2), (3, 2, 2, 3, 1), (4, 2, 3, 4, 2), (5, 3, 2, 5, 1), (6, 3, 3, 6, 2); Just in case SQLFiddle is down here are the 3 queries from the examples I've linked to: 1: - working query but only on 1 set of data. 
SELECT MAX(QuestionSetId) AS QuestionSetId, GROUP_CONCAT(Forename) AS Forename, GROUP_CONCAT(Surname) AS Surname FROM (SELECT x.QuestionSetId, CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename, CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname FROM (SELECT (SELECT link.StaticQuestionId FROM StaticQuestionQuestionLink link WHERE link.Id = qa.QuestionId AND link.DateEffective <= qs.DateEffective AND link.StaticQuestionId IN (1, 2) ORDER BY link.DateEffective DESC LIMIT 1) AS StaticQuestionId, a.Text, qa.QuestionSetId FROM QuestionSet qs INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId INNER JOIN Answer a ON qa.AnswerId = a.Id WHERE qs.Id IN (1)) x) y 2: - working query but undesired results on multiple sets of data. SELECT MAX(QuestionSetId) AS QuestionSetId, GROUP_CONCAT(Forename) AS Forename, GROUP_CONCAT(Surname) AS Surname FROM (SELECT x.QuestionSetId, CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename, CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname FROM (SELECT (SELECT link.StaticQuestionId FROM StaticQuestionQuestionLink link WHERE link.Id = qa.QuestionId AND link.DateEffective <= qs.DateEffective AND link.StaticQuestionId IN (1, 2) ORDER BY link.DateEffective DESC LIMIT 1) AS StaticQuestionId, a.Text, qa.QuestionSetId FROM QuestionSet qs INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId INNER JOIN Answer a ON qa.AnswerId = a.Id WHERE qs.Id IN (1, 2, 3)) x) y 3: - working query on multiple sets of data only on 1 field (answer) though. SELECT qs.Id AS QuestionSet, a.Text AS Answer FROM QuestionSet qs INNER JOIN QuestionAnswerLink qalink ON qs.Id = qalink.QuestionSetId INNER JOIN StaticQuestionQuestionLink sqqlink ON qalink.QuestionId = sqqlink.QuestionId INNER JOIN Answer a ON qalink.AnswerId = a.Id WHERE sqqlink.StaticQuestionId = 1 /* FirstName */ AND sqqlink.DateEffective = (SELECT DateEffective FROM StaticQuestionQuestionLink WHERE StaticQuestionId = 1 AND DateEffective <= qs.DateEffective ORDER BY DateEffective DESC LIMIT 1)
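
    For what it is worth, one possible fix (an untested sketch) is to stop collapsing everything with MAX and GROUP_CONCAT and instead group the outer query by QuestionSetId, pivoting each static question into its own column:

        SELECT x.QuestionSetId,
               MAX(CASE x.StaticQuestionId WHEN 1 THEN x.Text END) AS Forename,
               MAX(CASE x.StaticQuestionId WHEN 2 THEN x.Text END) AS Surname
        FROM (
            -- the same inner subquery used in query 2 above, which returns
            -- StaticQuestionId, Text and QuestionSetId for qs.Id IN (1, 2, 3)
            ...
        ) x
        GROUP BY x.QuestionSetId;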

  • Do I need to buy Mysql cluster enterprise edition?

    - by Arman
    Hello, we have MS SQL 2008 Standard Edition. The db has become huge, about 8×10^9 records, and the db files are about 4.5 TB each. We cannot afford Enterprise Edition just to partition the database, but we need partitioning. So the idea is to use MySQL Cluster with many data nodes. We have already started to move data. I wondered, do we need to buy a license for MySQL Cluster? Are there performance differences between the community edition and the commercial one? Thanks, Arman.

  • Cannot create a new domain in an existing active directory forest

    - by Mackenzie Carr
    I have a domain controller set up on Windows Server 2008 R2 (the forest) and another Windows Server 2008 R2 machine (the new domain), and I want to create a new domain in the existing forest. I get the following error: "An Active Directory domain controller for the domain mackdev.mackenziecarr.com could not be contacted. The error was 'no records found for the given DNS query'. The query was for the SRV record for: _ldap._tcp.dc._msdcs.mackdev.mackenziecarr.com". I seem to have tried everything, including adding this record to the DNS server of the primary forest. I even joined this server to the domain without any issues, but I have no luck trying to create a new domain under the existing forest. The primary forest IP address is 192.168.2.20; the server I am using to try to make a child domain is 192.168.2.21. My ipconfig output is as follows: IP Address: 192.168.2.21, Subnet mask: 255.255.255.0, Gateway: 192.168.2.1, Primary DNS: 192.168.2.20
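
    A quick sanity check from the new server, assuming its DNS client really does point at 192.168.2.20, is to ask that server directly for the SRV record the wizard is looking for:

        nslookup -type=SRV _ldap._tcp.dc._msdcs.mackdev.mackenziecarr.com 192.168.2.20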

  • www a-record vs cname-record

    - by Sorin Buturugeanu
    Hi! I have a website that I will be hosting DNS for (testing purposes at first and then it will have some limited traffic). I have set up DNS so that site.tld has an A record to the actual IP but I don't know what to do about www.site.tld. Both site.tld and www.site.tld will point to the same server / application so my logic tells me to add a cname record so that www.site.tld becomes an alias for site.tld, BUT, I've been checking my settings with intodns.com and if I only add a CNAME for the www.site.tld it gives me the following error: ERROR: I could not get any A records for www.cexa.ro! (the error clears once I do an A record for www.site.tld to point to the actual IP) I don't know if there is a "rule" that "www." should always be an A record even though it's actually pointing to the same IP / application. Thanks for helping me understand this!
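
    For reference, the two layouts being compared look roughly like this in a zone file (placeholder IP; both make www.site.tld reach the same address, the difference is only whether the name resolves directly or via the alias):

        ; option 1: an explicit A record for www
        www.site.tld.    IN  A      203.0.113.10

        ; option 2: a CNAME alias pointing www at the bare domain
        www.site.tld.    IN  CNAME  site.tld.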

  • Sync a specific folder of contacts to iPhone

    - by colemanm
    Is there a simple way to organize contacts in Outlook/Entourage and only have a subset of them synchronize with the iPhone over Exchange ActiveSync? Our CEO has thousands of contacts in his mailbox, but would prefer if only a small portion of them synched to his phone over the air... The iPhone's performance takes a huge hit keeping that massive dataset in order. If he could put some of the records in subfolders or something and only sync the top level, I think that would work for him. Does anyone know if this is possible?

  • Why, after deleting a 110+ GB collection, does my /var/lib/mongodb directory still have the same size?

    - by tunnuz
    I am having some trouble with MongoDB and space usage. In particular, I used to have a large collection of about 600 million records totaling 110+ GB on disk. Recently I decided to drop it because the data was outdated; to do so I dropped the collection through rockmongo's web interface. Accordingly, rockmongo doesn't show me the collection anymore; however, my disk usage hasn't changed at all. Is there some cleanup operation I am not aware of that must be run in order to synchronize the database with the database files on disk? I have tried to perform a "repair", but the system complains that there's not enough space on disk... because it is all used by MongoDB.
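
    In the mongo shell, comparing what the database thinks it holds with what the files occupy usually confirms that the space is just pre-allocated data files MongoDB has kept; a sketch (database name is a placeholder):

        use mydb
        db.stats()           // compare dataSize/storageSize with fileSize
        db.repairDatabase()  // rewrites the data files compactly, but needs roughly
                             // as much free disk space as the data it will keep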

  • How do I set up failover for a single web server using two ISPs?

    - by Travis
    I have one web server and two WAN connections (1 cable, 1 DSL). DNS is run offsite, and points to the IP address assigned by one of the ISPs. How can I have the second connection take over when the primary one fails? I have seen that it is possible to have two A records, each pointing to a different IP, but it has several problems. What's the real solution to this? I imagine this is a very common issue. Thanks in advance!
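
    For reference, the "two A records" approach mentioned above is just round-robin DNS with a short TTL; a sketch with placeholder addresses:

        www.example.com.  300  IN  A  203.0.113.10   ; cable connection
        www.example.com.  300  IN  A  198.51.100.10  ; DSL connection

    Resolvers keep handing out the dead address until their cached copy expires, which is one of the problems alluded to in the question.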

  • PowerDNS 3+ - Recursive queries for subdomains

    - by PDNS Troubles
    We are trying to find functionality in PDNS 3.x that existed in PDNS < 2.9.2.5: if we have a domain with records in the database backend and a query is unable to resolve a subdomain, it would then query the recursor set up in the pdns.conf file. We have found that on CentOS 6.x the RPM packages are the latest version of PDNS, whereas on 5.x the available package was pdns-2.9.22-4.el5. The pdns-2.9.22-4.el5 package works as expected, but when upgrading servers to CentOS 6.x we lose this required functionality. pdns-backend-mysql-2.9.22-4.el5.rpm fails to install on CentOS 6.x due to MySQL libs that aren't available; this is caused by an upgrade in the MySQL version, whereby the PDNS MySQL backend requires older MySQL libs than what is available on CentOS 6.x. Installing from source is also troublesome, with the following errors: http://pastebin.com/B5cUuD08

  • Failed reverse DNS and SPF only when using Thunderbird!

    - by TruMan1
    I have reverse DNS and SPF records correctly set up for my mail server. Sending webmail from it works perfectly. The problem is that when Thunderbird sends out emails, it uses the client's IP address for the hostname. I have SMTP authentication and have specified my mail server as the outgoing SMTP server. Mail is being sent, but it is not "signing" the email with the mail server's IP address... it is using the client's. Is there any way to fix this? This is the spam error I get when sending from Thunderbird: Spam: Reverse DNS Lookup, SPF_SoftFail

  • Setting up Ubuntu Server on Amazon EC2 for hosting multiple domains with wildcard subdomains

    - by Ashish Kumar
    I'm trying to set up multiple domains on my Amazon EC2 micro instance running Ubuntu Server 12.04. I installed Apache correctly and set up virtual hosts, but I'm having problems with wildcard subdomains. This is what my httpd.conf file looks like: NameVirtualHost *:80 <VirtualHost *:80> UseCanonicalName Off VirtualDocumentRoot /home/username/domains/%0/html/ </VirtualHost> My DNS records (on Amazon Route 53) are: domain.tld A 1.2.3.4 *.domain.tld A 1.2.3.4 If I create a test.domain.tld directory with the html subdirectory, it works fine. But what I want is to redirect *.domain.tld to domain.tld when there is no directory for the accessed subdomain. I would also like www.domain.tld to redirect to domain.tld. The system should also keep working if I decide to host another website, example.com, on the server. I tried Googling a lot but without any luck. Suggestions?
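
    One possible approach (an untested sketch, assuming mod_rewrite is enabled alongside mod_vhost_alias) is to keep the VirtualDocumentRoot and redirect any hostname whose directory does not exist, which also catches www:

        <VirtualHost *:80>
            UseCanonicalName Off
            VirtualDocumentRoot /home/username/domains/%0/html/

            RewriteEngine On
            # no directory for this hostname -> send the visitor to the bare domain
            RewriteCond /home/username/domains/%{HTTP_HOST}/html !-d
            RewriteRule ^ http://domain.tld%{REQUEST_URI} [L,R=302]
        </VirtualHost>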

  • Python: Improving long cumulative sum

    - by Bo102010
    I have a program that operates on a large set of experimental data. The data is stored as a list of objects that are instances of a class with the following attributes: time_point - the time of the sample cluster - the name of the cluster of nodes from which the sample was taken code - the name of the node from which the sample was taken qty1 = the value of the sample for the first quantity qty2 = the value of the sample for the second quantity I need to derive some values from the data set, grouped in three ways - once for the sample as a whole, once for each cluster of nodes, and once for each node. The values I need to derive depend on the (time sorted) cumulative sums of qty1 and qty2: the maximum value of the element-wise sum of the cumulative sums of qty1 and qty2, the time point at which that maximum value occurred, and the values of qty1 and qty2 at that time point. I came up with the following solution: dataset.sort(key=operator.attrgetter('time_point')) # For the whole set sys_qty1 = 0 sys_qty2 = 0 sys_combo = 0 sys_max = 0 # For the cluster grouping cluster_qty1 = defaultdict(int) cluster_qty2 = defaultdict(int) cluster_combo = defaultdict(int) cluster_max = defaultdict(int) cluster_peak = defaultdict(int) # For the node grouping node_qty1 = defaultdict(int) node_qty2 = defaultdict(int) node_combo = defaultdict(int) node_max = defaultdict(int) node_peak = defaultdict(int) for t in dataset: # For the whole system ###################################################### sys_qty1 += t.qty1 sys_qty2 += t.qty2 sys_combo = sys_qty1 + sys_qty2 if sys_combo > sys_max: sys_max = sys_combo # The Peak class is to record the time point and the cumulative quantities system_peak = Peak(time_point=t.time_point, qty1=sys_qty1, qty2=sys_qty2) # For the cluster grouping ################################################## cluster_qty1[t.cluster] += t.qty1 cluster_qty2[t.cluster] += t.qty2 cluster_combo[t.cluster] = cluster_qty1[t.cluster] + cluster_qty2[t.cluster] if cluster_combo[t.cluster] > cluster_max[t.cluster]: cluster_max[t.cluster] = cluster_combo[t.cluster] cluster_peak[t.cluster] = Peak(time_point=t.time_point, qty1=cluster_qty1[t.cluster], qty2=cluster_qty2[t.cluster]) # For the node grouping ##################################################### node_qty1[t.node] += t.qty1 node_qty2[t.node] += t.qty2 node_combo[t.node] = node_qty1[t.node] + node_qty2[t.node] if node_combo[t.node] > node_max[t.node]: node_max[t.node] = node_combo[t.node] node_peak[t.node] = Peak(time_point=t.time_point, qty1=node_qty1[t.node], qty2=node_qty2[t.node]) This produces the correct output, but I'm wondering if it can be made more readable/Pythonic, and/or faster/more scalable. The above is attractive in that it only loops through the (large) dataset once, but unattractive in that I've essentially copied/pasted three copies of the same algorithm. 
To avoid the copy/paste issues of the above, I tried this also: def find_peaks(level, dataset): def grouping(object, attr_name): if attr_name == 'system': return attr_name else: return object.__dict__[attrname] cuml_qty1 = defaultdict(int) cuml_qty2 = defaultdict(int) cuml_combo = defaultdict(int) level_max = defaultdict(int) level_peak = defaultdict(int) for t in dataset: cuml_qty1[grouping(t, level)] += t.qty1 cuml_qty2[grouping(t, level)] += t.qty2 cuml_combo[grouping(t, level)] = (cuml_qty1[grouping(t, level)] + cuml_qty2[grouping(t, level)]) if cuml_combo[grouping(t, level)] > level_max[grouping(t, level)]: level_max[grouping(t, level)] = cuml_combo[grouping(t, level)] level_peak[grouping(t, level)] = Peak(time_point=t.time_point, qty1=node_qty1[grouping(t, level)], qty2=node_qty2[grouping(t, level)]) return level_peak system_peak = find_peaks('system', dataset) cluster_peak = find_peaks('cluster', dataset) node_peak = find_peaks('node', dataset) For the (non-grouped) system-level calculations, I also came up with this, which is pretty: dataset.sort(key=operator.attrgetter('time_point')) def cuml_sum(seq): rseq = [] t = 0 for i in seq: t += i rseq.append(t) return rseq time_get = operator.attrgetter('time_point') q1_get = operator.attrgetter('qty1') q2_get = operator.attrgetter('qty2') timeline = [time_get(t) for t in dataset] cuml_qty1 = cuml_sum([q1_get(t) for t in dataset]) cuml_qty2 = cuml_sum([q2_get(t) for t in dataset]) cuml_combo = [q1 + q2 for q1, q2 in zip(cuml_qty1, cuml_qty2)] combo_max = max(cuml_combo) time_max = timeline.index(combo_max) q1_at_max = cuml_qty1.index(time_max) q2_at_max = cuml_qty2.index(time_max) However, despite this version's cool use of list comprehensions and zip(), it loops through the dataset three times just for the system-level calculations, and I can't think of a good way to do the cluster-level and node-level calaculations without doing something slow like: timeline = defaultdict(int) cuml_qty1 = defaultdict(int) #...etc. for c in cluster_list: timeline[c] = [time_get(t) for t in dataset if t.cluster == c] cuml_qty1[c] = [q1_get(t) for t in dataset if t.cluster == c] #...etc. Does anyone here at Stack Overflow have suggestions for improvements? The first snippet above runs well for my initial dataset (on the order of a million records), but later datasets will have more records and clusters/nodes, so scalability is a concern. This is my first non-trivial use of Python, and I want to make sure I'm taking proper advantage of the language (this is replacing a very convoluted set of SQL queries, and earlier versions of the Python version were essentially very ineffecient straight transalations of what that did). I don't normally do much programming, so I may be missing something elementary. Many thanks!
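
    For what it is worth, a rough sketch of keeping the single pass while collapsing the three copies into one loop: treat each grouping as a key-extraction function (names below are made up; the node attribute is called code in the description but node in the posted code, so adjust as needed):

        from collections import defaultdict
        from operator import attrgetter

        def find_peaks(dataset, groupings):
            """groupings maps a label to a function that extracts the group key."""
            sums = {g: defaultdict(lambda: [0, 0]) for g in groupings}   # key -> [qty1, qty2]
            peaks = {g: {} for g in groupings}                           # key -> (combo, time, qty1, qty2)
            for t in sorted(dataset, key=attrgetter('time_point')):
                for g, key_of in groupings.items():
                    k = key_of(t)
                    s = sums[g][k]
                    s[0] += t.qty1
                    s[1] += t.qty2
                    combo = s[0] + s[1]
                    if k not in peaks[g] or combo > peaks[g][k][0]:
                        peaks[g][k] = (combo, t.time_point, s[0], s[1])
            return peaks

        peaks = find_peaks(dataset, {
            'system': lambda t: 'all',
            'cluster': attrgetter('cluster'),
            'node': attrgetter('code'),
        })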

  • DNS management for WHMCS / cPanel?

    - by RD.
    I currently have WHMCS (www.whmcs.com), which I use as the billing system for my hosting company. It integrates with cPanel and WHM. I want to be able to allow clients to: (1) change MX records, and (2) change DNS info (CNAME, A record, etc.). Item 1 can be done in cPanel, but item 2 cannot. So my questions are: 1. Is there a program/application available that will give clients access to this, i.e. so they can change their DNS info? 2. How do other companies do it?

  • Predictive vs Least Connection Load Balancing Techniques

    - by Mani
    I have a Windows-based desktop application that communicates via TCP to the application servers (Windows 2003). There are no sticky sessions between client calls. We have exactly 2 servers to load balance, and we are thinking of using an F5 hardware load balancer. The application is a heavy-load type: not much business logic in the services, but it retrieves quite a large amount of data most of the time, maybe 5,000 to 10,000 records on average. It is used mainly for storing and retrieving data, with no special processing or calculations running on the server side. I am favouring 'predictive', considering my services sometimes take a while to return data, so tracking the feedback should yield better routing. I am not sure if the given information is sufficient to suggest ideas, but considering all this, what would be some suggestions / things to consider when choosing between Predictive and Least Connections? Thanks.

  • Linux DNS server

    - by Clear.Cache
    Can someone explain to me how to easily set up a CentOS 5 (64-bit) DNS server? I want to use this strictly for DNS for my clients who require rDNS (PTR) for their domains. I do have IP delegation/authority from the data center and IPs allocated directly from ARIN. I just want to set up a CentOS 5 box to use strictly as a DNS server, perhaps with redundancy via a secondary, clustered (or not) DNS server: Server 1 = dns1.mycompany.com, Server 2 = dns2.mycompany.com. Then I simply need instructions on how to create rDNS records for clients upon request, especially in bulk. Thank you.
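
    Once BIND is installed, a reverse zone is just an ordinary zone whose records are PTRs; a minimal sketch assuming a delegated 192.0.2.0/24 block (addresses and names are placeholders):

        // named.conf
        zone "2.0.192.in-addr.arpa" IN {
            type master;
            file "/var/named/db.192.0.2";
        };

        ; /var/named/db.192.0.2
        $TTL 1D
        @    IN  SOA  dns1.mycompany.com. hostmaster.mycompany.com. (
                      1 ; serial
                      8H 2H 4W 1D )
             IN  NS   dns1.mycompany.com.
             IN  NS   dns2.mycompany.com.
        10   IN  PTR  mail.client-example.com.
        11   IN  PTR  www.client-example.com.

    Bulk additions are then just more PTR lines (or a scripted append), followed by a serial bump and a reload.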

  • SQL Server Snapshot Replication Subscriber (Editable or Read-Only)

    - by NateReid
    I need to create a copy of my SQL 2008 R2 Enterprise database and have it located on the same server as the original. I will be using this second copy of the database as the target of a mostly read-only website. I understand that if I create this copy using snapshot replication, all data changes in the subscriber database will be overwritten at the next replication. The web application will try to write to this database to record login attempts, etc., and will fail if the database is read-only. In my case I do not need to keep these auditing records, so they can be overwritten each time a new snapshot is applied. My question is whether SQL Server forces the subscriber database to be read-only, and is there any way around this? Thank you, Nate
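
    Whether the subscriber really ends up read-only is easy to verify once a snapshot has been applied; a one-line sketch (database name is a placeholder):

        SELECT DATABASEPROPERTYEX('MyReportingDb', 'Updateability');  -- READ_ONLY or READ_WRITE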

  • Load balancing SMTP in a way that doesn't hide the source IP address

    - by makerofthings7
    I need to load balance SMTP to handle some applications that don't know how to use MX records. I set up a NetScaler using the TCP option on port 25, and now Exchange sees the source IP of every connection as the NetScaler's DMZ address, not the client's. Obviously this causes RBLs, whitelists, and all other IP-based reputation checks to fail. It also makes it impossible to whitelist a trusted IP for anonymous relay. Question: How should I configure the NetScaler (or Windows load balancing) so that I can load balance yet still maintain visibility of the source IP?

  • Send and receive email with a custom domain address, using Gmail

    - by ahmad598
    I've registered a domain and went to create a Google Apps account with it, but Google told me they don't accept .ir domains, possibly trying to prevent us from building bombs (which completely makes sense), or simply out of lack of interest. Anyway, now I'm seeking a way to use Gmail, but with my own domain name, all without using Google Apps. I've added my domain to a free dnspark.net account; it doesn't have mail forwarding, but I can set MX records. Is there any way I could accomplish my task, or is it completely impossible?

  • BIND - zone not loaded due to errors

    - by Johan Barelds
    After upgrading from Ubuntu 8.04 to 10.04 my DNS isn't working properly anymore. I keep getting this error when I run named-checkzone example.com /var/cache/bind/example.com.zone.db zone example.com/IN: NS 'mx002a.example.com' has no address records (A or AAAA) zone example.com/IN: not loaded due to errors. in /var/cached/bind/example.com.db $TTL 3D @ IN SOA mx002a.example.com. chantra.example.com. ( 200608081 ; serial, todays date + todays serial # 8H ; refresh, seconds 2H ; retry, seconds 4W ; expire, seconds 1D ) ; minimum, seconds ; ; mx002a.example.com IN A 192.168.85.19 example.com. IN NS mx002a.example.com. mx001 60 IN A 192.168.85.17 mx001 60 IN A 192.168.85.18
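
    The likely culprit is the missing trailing dot on the A record's owner name: without it, BIND appends the zone origin, so the record actually defines mx002a.example.com.example.com and the NS target is left with no address. A sketch of the corrected line (either form works):

        mx002a              IN A 192.168.85.19
        ; or, fully qualified:
        mx002a.example.com. IN A 192.168.85.19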

  • Software available for singing "lessons" via computer microphone?

    - by drozzy
    Looking for software to help my friend learn to sing. I can't seem to find anything on Google. Does there exist software (preferably downloadable and from this century!) that records one's voice and then analyzes it to see how "accurate" it was? It would be great if it also had some kind of "lessons", and not simply a sound recorder that shows waveforms. I can't imagine it would be so hard to implement, and there probably is one out there - I just can't find it. Any recommendations are welcome. Thanks.

  • SQL Server Column Level Encryption - Rotating Keys

    - by BarDev
    We are thinking about using SQL Server column (cell) level encryption for sensitive data. There should be no problem when we initially encrypt the column, but we have a requirement that the encryption key change every year. It seems that this requirement may be a problem. Assumption: the table that includes the column with sensitive data will have 500 million records. The steps we have thought about implementing are: initially encrypt the column; then each new year, decrypt the column and re-encrypt it with the new key. Question: while the column is being decrypted/encrypted, is the data online (available to be queried), and how long would this process take? Does SQL Server provide a feature that allows key changes while the data is online? BarDev
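
    For the yearly rotation step, one common pattern (sketch only: key, certificate, table, and column names are made up, and on 500 million rows this would normally be done in batches) is to open both keys and re-encrypt in place; the table stays online during the UPDATE, but the rows being rewritten are locked:

        OPEN SYMMETRIC KEY OldYearKey DECRYPTION BY CERTIFICATE MyCert;
        OPEN SYMMETRIC KEY NewYearKey DECRYPTION BY CERTIFICATE MyCert;

        UPDATE dbo.SensitiveData
           SET SsnEncrypted = EncryptByKey(Key_GUID('NewYearKey'),
                                           DecryptByKey(SsnEncrypted));

        CLOSE SYMMETRIC KEY NewYearKey;
        CLOSE SYMMETRIC KEY OldYearKey;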
