Search Results

Search found 5380 results on 216 pages for 'primary'.

  • SPF record for Gmail?

    - by Chris
    I have DNS, with an SPF TXT record, configured for a domain name. The primary user of the domain now needs to be able to send both from our SMTP servers and from her Gmail account. I've seen all the information about adding "include:_spf.google.com" to the SPF TXT record but, as I look into it, it appears that record is outdated. In particular, I had the user send me a test message, and noted that it was:

        Received: from mail-la0-f50.google.com (mail-la0-f50.google.com [209.85.215.50])

    However, _spf.google.com doesn't list that IP address:

        $ dig +short _spf.google.com txt
        "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all"

    (Note that a 209.85.218.0 network is listed, but not 209.85.215.0.) Is there a better way to enable sending from Gmail? This user sends to at least one recipient with a strict SPF policy that bounces mail not from a designated host... Many thanks!
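
    A minimal sketch of the combined record, assuming a hypothetical outbound server at 203.0.113.10 (substitute your real SMTP hosts):

        example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.10 include:_spf.google.com ~all"

    The include: mechanism pulls in whatever Google currently publishes under _spf.google.com at evaluation time, so the individual netblocks never need to be tracked by hand.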

  • Installation of Active Directory on separate VM from DNS does not entirely work - not sure why

    - by René Kåbis
    Not sure what I am doing wrong here. I have a moderately midrange server (16 cores, 2GHz, 32GB ECC REG RAM, 6TB storage, nothing too extreme) where I am running Hyper-V (Server 2012 R2 Enterprise) in order to provision virtual machines. So why an AD separate from DNS? I want redundancy. I want to be able to move VMs and back them up individually, and not have too many services on any one VM. I have already provisioned a VM with DNS, and have set it up right -- essentially, I have:

    1. Set up static IPs for everyone involved.
    2. Installed the DNS service on the DNS VM.
    3. Created a forward lookup zone and a reverse lookup zone (primary zone) for xyz.ca.
    4. Configured the zones to use nonsecure and secure dynamic updates (I will change this to secure later, after the domain controller is online).
    5. Created an A record for the DC in the forward lookup zone (and a reverse PTR).
    6. Changed the DC's DNS server (network settings) to the new DNS server.
    7. Checked that I can ping the DNS server from the new DC by hostname.

    When I went ahead and ran dcpromo on the DC, and unchecked the "install DNS" option, everything seemed to go well (no error messages), but I saw no changes on the DNS server whatsoever (no additional settings). Plus, the DNS server seems to be unable to join the domain, as it claims that the domain is not discoverable.

    As a final note, I do run Symantec Endpoint Protection, which includes a firewall, with most settings at their defaults. I have not yet tried turning this off, but my experience has been that if a service would open up a port on the Windows firewall, it would do the same through Symantec. There is pretty tight integration these days between corporate-class AV and Windows. I have a template VHDX fully set up (just short of any special roles and features) that I can use to replace the current AD VM, so doing this all over again is not too much skin off of my nose.
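
    A quick way to confirm whether the promotion ever registered its locator records on the DNS VM (zone name as in the question; the DNS server address is a placeholder):

        nslookup -type=SRV _ldap._tcp.dc._msdcs.xyz.ca 192.0.2.10
        dcdiag /test:DNS /v

    If the SRV records are missing, the dynamic-update path from the new DC to the DNS VM is the first thing to suspect -- the "domain is not discoverable" symptom matches a domain whose SRV records never landed in the zone.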

  • Repairing inconsistent pages in database

    - by Raj
    We have a SQL 2000 DB. The server crashed due to a RAID array failure. Now when we run DBCC CHECKDB, we get an error that there are 27 consistency errors in 9 pages. When we run DBCC PAGE on these pages, we get this:

        Msg 8939, Level 16, State 106, Line 1
        Table error: Object ID 1397580017, index ID 2, page (1:8404521). Test (m_freeCnt == freeCnt) failed. Values are 2 and 19.
        Msg 8939, Level 16, State 108, Line 1
        Table error: Object ID 1397580017, index ID 2, page (1:8404521). Test (emptySlotCnt == 0) failed. Values are 1 and 0.

    Since the indicated index is non-clustered and is created by a unique constraint that includes 2 columns, we tried dropping and recreating the index. This resulted in the following error:

        CREATE UNIQUE INDEX terminated because a duplicate key was found for index ID 2. Most significant primary key is '3280'.
        The statement has been terminated.

    However, running

        SELECT var_id, result_on FROM tests GROUP BY var_id, result_on HAVING COUNT(*) > 1

    returns 0 rows. Here is what we are planning to do:

    1. Restore a pre-crash copy of the DB and run DBCC CHECKDB.
    2. If that returns clean, restore again with NORECOVERY.
    3. Apply all subsequent TLOG backups.
    4. Stop the production app, take a tail-log backup, and apply that too.
    5. Drop the prod DB and rename the freshly restored DB to make it prod.
    6. Start the prod app.

    Could someone please punch holes in this approach? Maybe suggest a different approach? What we need is minimum downtime. SQL 2000 DB, size 94 GB. The table that has corrupt pages has 460 million+ rows of data. Thanks for the help. Raj
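
    A rough T-SQL sketch of steps 2-4 (database, file names and paths are hypothetical):

        RESTORE DATABASE ProdDB_New FROM DISK = 'D:\backup\prod_full.bak' WITH NORECOVERY
        RESTORE LOG ProdDB_New FROM DISK = 'D:\backup\prod_log_01.trn' WITH NORECOVERY
        -- ...apply the remaining TLOG backups in sequence, all WITH NORECOVERY...
        -- app stopped; capture the tail of the log on the damaged database:
        BACKUP LOG ProdDB TO DISK = 'D:\backup\prod_tail.trn' WITH NO_TRUNCATE
        RESTORE LOG ProdDB_New FROM DISK = 'D:\backup\prod_tail.trn' WITH RECOVERY

    The WITH NO_TRUNCATE option is what lets the tail-log backup succeed even though the data files contain damaged pages.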

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

        D0: 500Gb (System, Boot)
        D1: 1Tb
        D2: 500Gb
        D3: 250Gb

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows software RAID to mirror the OS. Partitions will be created as follows:

        D0 P1: 100Mb: System-Reserved Boot
        D0 P2: 50Gb: Virtual Machine VMDKs for OS
        D0 P3: 350Gb: Data
        D1 P1: 100Mb: System-Reserved Boot
        D1 P2: 50Gb: Virtual Machine VMDKs for OS
        D1 P3: 800Gb: Data
        D2 P1: 450Gb: Data
        D3 P1: 200Gb: Data

    This will result in:

        - Mirrored boot partition
        - Mirrored operating system
        - Mirrored virtual machine OS disks
        - Four partitions for data

    In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

        SYSTEM  100Mb  the small boot partition created by the Windows 7 installer (RAID-1)
        HOST    50Gb   the Windows 7 partition (RAID-1)
        GUESTS  50Gb   virtual machine guest VMDKs (RAID-1)
        VG1     900Gb  volume group consisting of a RAID-5 and two RAID-1
        VG2     300Gb  volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for my data that is not critical, and easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.
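
    Under the hood OpenFiler builds its arrays with Linux md, so the data-partition arrays would look something like this inside the VM (device names are hypothetical -- they depend on how the VMDKs are presented):

        # three roughly equal block devices -> one RAID-5 set
        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
        # two equal devices -> a RAID-1 mirror
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde /dev/sdf

    One design note: each md member sitting on a VMDK from a distinct physical disk is what actually buys the resilience, so it's worth double-checking that no array ends up with two members on the same spindle.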

  • Using 1920x1200 mode on SyncMaster T260HD in Linux

    - by dagorym
    I just got a Samsung SyncMaster T260HD monitor. It works straight out of the box with Windows, but I can't seem to get it to work with Linux, which is my primary OS for day-to-day work. The computer boots up, but when going into graphical mode on Linux the monitor gives me a "Mode not supported" error and doesn't display anything. I booted up Windows and, using PowerStrip, grabbed the exact modeline that should be used to get the equivalent setting in Linux, and added it to my xorg config file, but it doesn't seem to help. The modeline is:

        ModeLine "1920x1200" 153.9 1920 1984 2016 2080 1200 1203 1209 1235 +hsync -vsync

    This is the modeline for the working display settings in Windows, but it doesn't seem to work in Linux. My complete entry in the xorg.conf file for the monitor is:

        Section "Monitor"
            Identifier "Monitor0"
            ModelName "SyncMaster"
            DisplaySize 518 324
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 75.0
            Option "dpms"
            ModeLine "1920x1200" 153.9 1920 1984 2016 2080 1200 1203 1209 1235 +hsync -vsync
        EndSection

    I'm running Scientific Linux 5.4 (a clone of Red Hat Enterprise Linux 5.4), but I've tried booting with a recent Linux Mint distro as well as Ubuntu 9.04 and had the same problem. Any suggestions on other things I should try or might be missing? If anyone's gotten this to work I'd love to know. Thanks.
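
    One hedged thing to try: generate a standard reduced-blanking modeline with cvt (shipped with Xorg) and make sure the Screen section actually selects it, since a modeline that is merely defined is never used unless referenced:

        $ cvt -r 1920 1200
        Modeline "1920x1200R"  154.00  1920 1968 2000 2080  1200 1203 1209 1235 +hsync -vsync

    Then, in the Display subsection of the Screen section, add a line such as Modes "1920x1200R". If cvt isn't present on an older distro, gtf 1920 1200 60 produces a (non-reduced-blanking) equivalent.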

  • How do I set up DNS with nic.io to point to an AWS EC2 server?

    - by Chad Johnson
    I purchased a domain one week ago via nic.io. I have elected to provide my own DNS (because they provided no other option). I'm trying to point my .io domain at my EC2 server instance. I've allocated an Elastic IP and associated it with the instance. I can SSH into the instance and access port 80 via the IP address just fine. The IP is 54.235.201.241. nic.io support said the following: "You have selected to provide your own DNS and therefore if there is an issue with the set-up of the name servers you will need to contact your DNS provider."

    So, I created a hosted zone via Route 53 in AWS. This created NS and SOA records. I then set the primary and secondary servers at nic.io's domain admin page to the SOA record domains. Additionally, I set the optional servers to the NS domains. I did this two days ago, and I can't access the server via the domain. I ran a DNS check here, and I'm still not sure what I need to do: http://mydnscheck.com/?domain=chadjohnson.io&ns1=&ns2=&ns3=&ns4=&ns5=&ns6=. I have no idea what I'm supposed to do. Does anyone have any ideas?
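
    Two checks that narrow down where the chain breaks (the Route 53 name-server hostname below is a placeholder -- use the four NS values shown in your hosted zone):

        dig +trace chadjohnson.io A
        dig @ns-123.awsdns-45.com chadjohnson.io A

    If the delegation from the .io servers looks right but the second query returns nothing, note that Route 53 only creates the NS and SOA records automatically -- an A record for chadjohnson.io pointing at 54.235.201.241 still has to be added to the zone before the name will resolve.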

  • Windows 7 Multi-NIC woes

    - by Eric
    I have Comcast business Internet here. It gives me 5 static IPs. Most of the machines in my house connect to a router like every other household. It has a 192.168.117.x subnet, DHCP server, etc., and all is well. However, I have a second machine on MY desk that has a live Internet IP. Up until yesterday, this machine was running XP Pro. The primary NIC was manually set to 192.168.117.241 with no gateway, and the secondary NIC was manually set to 173.x.x.171 with a gateway of 173.x.x.174. This worked just fine for years.

    Yesterday I replaced that XP machine with a brand new Windows 7 x64 box. Again, I configured it the same way. The onboard NIC was given a static 192.168.117.x address with no gateway, and the secondary NIC was given a live Internet IP address with the proper router, etc.

    Two problems. First, the internal network (192.168.117.x) is listed as a public network because there's no gateway, so that means no homegroup, no file sharing, none of that. And I can't change it, from what I'm reading... Second, the machine reports the "router" IP address as its address, and not the address that it's supposed to. I'm ready to tear my hair out over this. Any ideas?
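
    A quick way to see which of the two addresses Windows treats as primary for outbound traffic (a generic check, not specific to this setup):

        route print
        netsh interface ipv4 show addresses

    The adapter holding the 0.0.0.0/0 route is the one whose address the rest of the world sees, which may be why the machine appears to identify itself by the Internet-side IP.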

  • BGP Multihomed/Multi-location best practice

    - by Tom O'Connor
    We're in the process of designing a new iteration of our network where we improve resiliency by adding a second datacentre, with an identical configuration of servers to our primary location. To achieve network connectivity, we're looking into a couple of possible methods. See earlier questions http://serverfault.com/questions/86736/best-way-to-improve-resilience and http://serverfault.com/questions/101582/dns-round-robin-failover-and-load-balancing. I'm pretty convinced that BGP is the right way to go about this, and this question is not about RRDNS.

    1) If we have 2 locations, do we announce the same IP address block from both locations?

    2) If we did this, but had a management SSH interface on x.x.x.50 from datacentre A, while it was on x.x.x.150 in datacentre B, what is the best practice mechanism for achieving this? Because if I were nearest to A, then all my traffic would go to x.50, but if I attempted to connect to x.150, I'd not be able to connect, because this address wouldn't be valid at A, but only at B.

    Is the best solution to announce 2 different netblocks, one at each location, facilitating the need for RRDNS, or to announce a single block and run some form of VPN between the two sites for management traffic?
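
    For the single-block option, the origination itself is simple -- the same prefix announced from both edges (an IOS-style sketch; AS numbers and prefix are hypothetical):

        router bgp 64512
         network 198.51.100.0 mask 255.255.255.0
         neighbor 203.0.113.1 remote-as 65001

    Run identically at both sites, this behaves like anycast: each source lands at whichever datacentre is topologically closer, which is exactly why a host-specific address such as x.x.x.150 would only be reachable from the parts of the Internet that route toward site B.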

  • Linux Mint 13 is not booting on dual boot computer

    - by Brian
    Thanks in advance for your time. I have 2 hard drives in my computer: a 300 GB drive, which is my primary drive for Windows 7, and a 1.5 TB drive that I'd used for storage. When I got it, I partitioned 500 GB for use in Linux. So, I created a bootable USB and clicked the "Install by Current Operating System" option from Mint. It installed to the free 500 GB like I'd hoped it would. Now, I can't get it to boot, though. I've tried using EasyBCD to create the boot entry, and it hangs on a black screen. Thanks.

    EDIT @ Ryhuk: It presents a menu with two options: 1) Windows and 2) Mint. This was a menu I created with EasyBCD. When I select option 1, it boots to Windows fine. When I select option 2, it hangs on a black screen with just a white bar flashing (can't remember what it's called; it marks the current cursor location on a text field) and won't respond to any key presses but Alt-Ctrl-Del.
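
    If EasyBCD's entry is chaining to a partition that GRUB was never actually written to, one possible fix from the Mint live USB is to reinstall GRUB onto the Linux drive (device names here are only a guess -- the 1.5 TB disk might be /dev/sdb and the Mint root partition /dev/sdb2; confirm with sudo fdisk -l first):

        sudo mount /dev/sdb2 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sdb

    EasyBCD's Linux entry (GRUB 2 type) can then be pointed at that drive.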

  • Zabbix Proxy not collecting data

    - by syntaxcollector
    Hi all. I have a working Zabbix 1.8.2 server collecting data for our office and our colo facility. However, the link between the colo and the office is flaky. What I'm trying to do is set up a proxy on the colo side with a 1-hour cache that relays the data to our primary server at the office. Our Zabbix server is compiled from source and uses a MySQL database. I've followed the instructions in the Zabbix documentation to compile the proxy using a SQLite3 database. I add the proxy to Zabbix under Administration > DM > Proxies. The Zabbix server "sees" the proxy, because the "last seen" field is always under 60s. However, when I assign a colo host to the proxy, I stop receiving data from it. The colo host's zabbix_agentd.log file says this:

        29343:20100622:124847 Timeout while answering request
        29343:20100622:124847 Getting list of active checks failed. Will retry after 60 seconds

    The zabbix_proxy.log says this:

        2041:20100622:123131.760 Deleted 0 records from history [0.000994 seconds]
        2028:20100622:124131.671 Error while receiving answer from server [ZBX_TCP_READ() failed

    I am also unable to receive any SNMP data, which is more important to me than the Zabbix agent data. Has anyone had this problem before?

        Zabbix Server OS: CentOS 5.4
        Zabbix Server Build: 1.8.2 from source
        Zabbix Proxy OS: CentOS 5.4
        Zabbix Proxy Build: 1.8.2 from source

    P.S. The SQLite database on the Zabbix proxy never gets any data written to it; it is identical to when I created it from the blank schema in zabbix-1.8.2/create/schema. (Yes, I've checked the permissions.)
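
    For reference, the proxy-side settings that matter for a store-and-forward setup look roughly like this in zabbix_proxy.conf (values are examples; Hostname must match the proxy name entered under Administration > DM > Proxies):

        # the primary Zabbix server the proxy forwards to
        Server=office-zabbix.example.com
        Hostname=colo-proxy
        # hours of collected data to keep while the server link is down
        ProxyOfflineBuffer=1

    The ZBX_TCP_READ error points at the proxy-to-server channel, so confirming that the office server's trapper port (10051 by default) is reachable from the colo box would be a sensible first step.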

  • FreeBSD Secondary Group not allowing folder deletion

    - by Jarrod Juleff
    TL;DR: I have a user that is a member of a group as a secondary group. This user can delete files with 664 perms as a secondary user, but not directories with perms of 775.

    Details: I have a user; let's call him ftpuser. I use him to upload and download files to my dev box. The user's primary group is "ftp", and he is also in the group "www" as a secondary group. My web server runs as user www and group www, and I have proftpd (running as www and www) configured to drop all files into the needed directories as www and www (for file ownership), with perms 664 on files and 775 on directories. My problem is (tried with 2 FTP clients) that the FTP client can delete the files, but not the folders. FileZilla returns 550 Permission denied. The owner-only-delete flag is not set, and I've triple-checked that the permissions are indeed 775. It's driving me nuts to have to log into my server to manually delete folders every time. Some of the folders and files are created by one of my PHP scripts, but the permissions are getting set properly when I check the files' properties. Directory and file creation works phenomenally; I can delete files, just not directories.

        FreeBSD 9.0 running in VirtualBox (32-bit all the way around)
        proftpd (running as www and www) as FTP server (tried both Dreamweaver and FileZilla as clients)
        Basic AMP setup (Apache, MySQL, and PHP)
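
    A few shell checks that usually surface the culprit in this pattern (paths are examples):

        id ftpuser                                      # confirm www really shows up as a secondary group
        ls -ldo /usr/local/www /usr/local/www/somedir   # -o also prints FreeBSD file flags
        ls -la /usr/local/www/somedir                   # leftover dotfiles will block the delete

    Removing a directory needs write+execute on its parent plus an empty target, so a stray hidden file inside the folder, or a file flag such as schg set on it, would produce exactly this files-delete-but-directories-don't behaviour.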

  • Secondary backup server

    - by verdy
    I've been given the task of implementing a backup solution for the event that our website goes down. It is a dedicated server running CentOS 6. From what I've experienced, our server may go down because of a PHP application crash or a hardware failure. I have a couple of questions:

    1. In the first case, is it possible to have the server restart PHP automatically, and how can I do that? In my mind, if it is only the application that goes down, I can probably still make use of the server itself.

    2. In the second case, can I redirect requests to a secondary server? How can I do that, and what do I need other than another server? For now it is going to be a simple server that shows the user a static landing page; the system would then notify us via email that the primary server went down so that we can restart it manually. Is it possible to set up just a VPS, or even a shared server, for the secondary server, given that there is only going to be a static page?

    Thanks. Any help would be much appreciated.
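
    For the first question, a process supervisor is the usual answer. A minimal monit sketch for stock CentOS 6 paths, assuming Apache with mod_php (adjust if PHP runs another way):

        check process httpd with pidfile /var/run/httpd/httpd.pid
            start program = "/sbin/service httpd start"
            stop program  = "/sbin/service httpd stop"
            if failed host 127.0.0.1 port 80 protocol http then restart
            if 3 restarts within 5 cycles then alert

    monit polls the HTTP port and restarts the daemon when the check fails, which covers the application-crashed-but-hardware-is-fine case.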

  • Can't access my accelerated hard disk from msdos after installing linux on ssd cache

    - by Chibueze Opata
    I mistakenly installed Linux Mint on my SSD (I forgot my PC actually came with one). When the installer detected a ~31GiB disk that it wanted to install to, I was a bit confused, since I had freed up 30GB on my primary disk for it, but I clicked continue. After installation, I tried to boot back into Windows, and it brought up an Intel RAID Disk Utility message saying I should disable acceleration because a disk couldn't be found. I cancelled it, but whatever I tried -- recovery tools, setups, etc. -- I just couldn't access the drive, which was apparently using the SSD as cache. Since then I've been stuck.

    I tried setting the 'raid' flag on the disk from GParted; still nothing. I tried the diskraid utility from the Windows recovery disk, and it said it couldn't detect any RAID. diskpart sees the partition but doesn't see the volume; when I remove the raid flag, it sees the volume as one of RAW type, and I can't access anything. I can, however, mount the drive from a terminal in Mint and access my files, but I don't have any backup media at the moment, so I can't do a factory re-install. Please, how do I go about solving this issue? Precisely, I would like to know how to boot into the drive again. Thanks!

  • How to make Exchange 2003 non-authoritative

    - by Romski
    Background: We are a small company with an internally hosted Exchange 2003 server. It receives email for 2 domains (the company was renamed a few years back). For the sake of argument, the domains are:

        oldname.com
        newname.com

    We have moved newname.com to a hosted Exchange service, and our DNS record is correctly routing emails. Our internal server still receives email for oldname.com, although we have asked our hosting company to accept emails for that domain.

    Problem: My problem is that emails generated internally from monitoring software, printers, etc. are being caught by our (defunct) internal server and delivered to the old mailboxes. I believe that what is happening is that our internal Exchange server considers itself to be the authoritative server for newname.com. I think it must be looking in Active Directory for a mailbox and delivering internally without ever going outside.

    Attempt to fix: I started to follow the article here: http://support.microsoft.com/kb/321721. I removed the SMTP recipient policy for newname.com, added a dummy address, and made it primary. I also answered yes to updating the associated emails. I then restarted the Microsoft Exchange Routing Engine and SMTP services, but emails are still being routed internally. Is there a way to force the Exchange server to route all emails for the domain newname.com to the new hosted service?

  • DNS Does Not Register at Off-site Locations

    - by Russ Warren
    First of all, let me give you the specifics of our setup:

        - Windows Small Business Server 2008 domain, with all applicable updates, on the DC
        - The DC does DHCP for the main site
        - The DC does DNS for all sites
        - 3 sites, including our headquarters, where the DC is located
        - All sites are connected through OpenVPN SSL tunnels terminated by an Untangle box at each site
        - The 2 remote sites use the Untangle box as a DHCP server for their subnet, which assigns the DC as the primary DNS server
        - A collection of Windows XP and Windows 7 workstations connected to the domain

    Here's the issue: all of the workstations at the main site register with the DNS server on the domain controller fine. As they grab an IP from the DHCP server, it updates the DNS server with the new host record. I have 2 systems (each at a different remote site) that fail to register with the DNS server. I've attempted the following troubleshooting steps:

        - Confirmed the network adapter is using the DC as a DNS server
        - Confirmed 2-way traffic is possible between the DC and the workstation
        - Verified the "Register with DNS server" setting was checked in the adapter properties
        - Attempted ipconfig /registerdns and received no errors

    For the time being, I have set up a DHCP reservation for these systems and manually created a host record. This seems to work fine, but I need a solution for any new systems that go out there.

  • Debian/OVH: How to configure multiple Failover IP on the same Xen (Debian) Virtual Machine?

    - by D.S.
    I have a problem on a Xen virtual machine (running the latest Debian) when I try to configure a second failover IP address. OVH reports that my IP is misconfigured, and they complain that they receive a massive quantity of ARP packets from these IPs, so they are going to block my IP unless I fix this issue. I suspect there's a routing issue, but I don't know (and can't find any useful info on the provider's website, and their support doesn't provide me a valid solution, just bounces me to their online -- useless -- guides). My /etc/network/interfaces looks like this:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address AAA.AAA.AAA.AAA
            netmask 255.255.255.255
            broadcast AAA.AAA.AAA.AAA
            post-up route add 000.000.000.254 dev eth0
            post-up route add default default gw 000.000.000.254 dev eth0

        # Secondary NIC
        auto eth0:0
        iface eth0:0 inet static
            address BBB.BBB.BBB.BBB
            netmask 255.255.255.255
            broadcast BBB.BBB.BBB.BBB

    And the routing table is:

        Kernel IP routing table
        Destination      Gateway          Genmask          Flags Metric Ref  Use Iface
        000.000.000.254  0.0.0.0          255.255.255.255  UH    0      0    0   eth0
        0.0.0.0          000.000.000.254  0.0.0.0          UG    0      0    0   eth0

    In these examples (true IP addresses are replaced by fake ones, guess why :)), 000.000.000.000 is my main server's IP address (dom0), 000.000.000.254 is the default gateway OVH recommends, AAA.AAA.AAA.AAA is the first failover IP, and BBB.BBB.BBB.BBB is the second one. I need both AAA.AAA.AAA.AAA and BBB.BBB.BBB.BBB to be publicly reachable from the Internet and point to my domU, and to be able to access the Internet from inside the virtual machine (domU). I am using eth0 and eth0:0 because, per OVH support, I have to assign both IPs to the same MAC address and then create a virtual eth0:0 interface for the second IP. Any suggestions? What am I doing wrong? How can I stop OVH complaining about ARP flood? Many thanks in advance, DS
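
    One detail worth fixing regardless of the ARP issue: the default-route line contains a doubled "default", which route(8) does not accept. A corrected sketch of the primary stanza (same placeholder addresses as above):

        auto eth0
        iface eth0 inet static
            address AAA.AAA.AAA.AAA
            netmask 255.255.255.255
            broadcast AAA.AAA.AAA.AAA
            post-up route add 000.000.000.254 dev eth0
            post-up route add default gw 000.000.000.254 dev eth0

    Only the primary stanza needs the routes; the eth0:0 alias shares eth0's routing table, so it should carry nothing beyond its address, netmask and broadcast.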

  • Slow parity initialization of RAID-5 array on HP Smart Array 411 controller

    - by Rob Nicholson
    On 29th October 2011, I built a RAID-5 array using 4 x 146.8GB Seagate SAS ST3146855SS drives running at 15k, connected to a PowerEdge R515 with an HP Smart Array P411 controller running Windows 2008 (so nothing particularly unusual). I know that parity initialisation of a RAID-5 array can take some time, but it's still running after 2.5 weeks, which seems a little unusual.

    I'd previously built another array on the same controller using 4 x 2TB SATA-2 drives, and that did take a while to complete, but a) I'm sure it was less than 2.5 weeks, b) that array was ~12 times bigger, and c) during initialisation, the percentage slowly increased each day. At the moment, the status display for this new second array simply says "Parity Initialization Status: In Progress", and it has said that since the start. It's this lack of change in the status that worries me the most -- it feels like it's not actually doing anything.

    Do you think something has gone wrong, or am I being impatient and, for some reason, the status not increasing is normal? I kind of expected a much smaller array on faster drives (15k SAS versus 7.5k SATA-2) to build in a few days. This is our primary SAN running StarWind, so my "have a play" options are very limited. This second array is currently in use for one small virtual disk, so I could shut the target machine down, move the virtual disk to another drive, and try rebuilding.
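
    If the GUI won't give a percentage, the HP CLI sometimes will (the slot number is a guess -- check which slot the P411 reports):

        hpacucli ctrl slot=1 show config detail

    The logical drive section of that output includes the parity initialisation status and, on firmware that exposes it, a percent-complete figure; a number that moves over a day or two would at least confirm the build is progressing.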

  • Running DNS locally for home network

    - by Roy Rico
    I have a small home network that just got larger (a new roommate, my existing roommate got a laptop on top of her computer, friends coming over with laptops, etc.). I'd like to run a local DNS server for lookups of my local network stuff (fileserver.local, windowsTV.local, machineA.local, machineB.local, appletv.local). I used to have a business line with a static IP and ran BIND (named) internally. However, now I have a normal account. My ISP's DNS servers are constantly changing (for whatever reason, my ISP doesn't like to keep the same IP range for long). I need my local DNS to be automatically updated to use my ISP's DNS for external traffic, while maintaining an internal DNS server (updating the hosts file is becoming a hassle with every new machine, on top of rebuilding existing machines with Windows 7 or Ubuntu 9.04). Additionally, my ISP's DNS servers often crash or become unresponsive. Are there any open DNS servers that are reliable (I don't want to reconfigure every day) that I could use as my primary, falling back to my ISP's if those fail?

    UPDATE: I'm also looking for each workstation to be able to connect via DHCP, but instead of getting the ISP DNS servers, getting my internal one. Thanks.
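
    dnsmasq fits this use case well: it answers local names from /etc/hosts, forwards everything else upstream, and its DHCP server hands out its own address as DNS, which covers the UPDATE at the end. A minimal sketch of /etc/dnsmasq.conf (addresses and upstream resolvers are examples):

        domain=local
        expand-hosts                  # append .local to plain names from /etc/hosts
        local=/local/                 # never forward *.local queries upstream
        server=208.67.222.222         # e.g. an open resolver as primary upstream
        server=208.67.220.220
        dhcp-range=192.168.1.50,192.168.1.150,12h

    New machines then only need a hosts entry on the one dnsmasq box instead of edits on every workstation.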

  • Ubuntu Server: Networking fails with MODPROBE option in /etc/network/interfaces ... ??

    - by neezer
    For some reason (which I haven't been able to determine yet), yesterday morning the networking service on our web server (running Ubuntu 8.04.2 LTS -- Hardy) wouldn't start, and our website went down. I noticed the following error message when trying to restart it:

        * Reconfiguring network interfaces...
        /etc/network/interfaces:6: option with empty value
        ifup: couldn't read interfaces file "/etc/network/interfaces"
        ...fail!

    Line 6 in the /etc/network/interfaces file concerned a MODPROBE command, which (I believe) loaded the ip_conntrack_ftp module so that I could use PASV on my FTP server (vsftpd). (The breaking modprobe commands are commented out below.)

        # Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
        # /usr/share/doc/ifupdown/examples for more information.

        # The loopback network interface
        auto lo
        iface lo inet loopback
        #MODPROBE=/sbin/modprobe
        #$MODPROBE ip_conntrack_ftp
        pre-up iptables-restore < /etc/iptables.up.rules

        # The primary network interface
        # Uncomment this and configure after the system has booted for the first time
        auto eth0
        iface eth0 inet static
            address xxx.xxx.xxx.xxx
            netmask 255.255.255.0
            gateway xxx.xxx.xxx.1
            dns-nameservers xxx.xxx.xxx.4 xxx.xxx.xxx.5

    I've verified that there is a file in /sbin called modprobe. Like I said earlier, this setup had been working flawlessly until yesterday morning (though my bosses say that the site actually went down the previous night at 11 PM EST). Can anyone shed some light on (A) why this broke, and (B) how I can re-enable the ip_conntrack_ftp module?
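
    For (B), a hedged fix: ifupdown does not evaluate shell-style variable assignments inside /etc/network/interfaces, which is what the "option with empty value" error on line 6 is pointing at. Moving the modprobe into a pre-up command keeps the behaviour without the variable:

        auto lo
        iface lo inet loopback
            pre-up /sbin/modprobe ip_conntrack_ftp
            pre-up iptables-restore < /etc/iptables.up.rules

    pre-up lines are run through the shell at ifup time, so both the module load and the iptables restore happen before the interface comes up.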

  • Slow INFORMATION_SCHEMA query

    - by Thomas
    We have a .NET Windows application that runs the following query on login to get some information about the database:

        SELECT t.TABLE_NAME,
               ISNULL(pk_ccu.COLUMN_NAME, '') PK,
               ISNULL(fk_ccu.COLUMN_NAME, '') FK
        FROM INFORMATION_SCHEMA.TABLES t
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk_tc
               ON pk_tc.TABLE_NAME = t.TABLE_NAME AND pk_tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE pk_ccu
               ON pk_ccu.CONSTRAINT_NAME = pk_tc.CONSTRAINT_NAME
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk_tc
               ON fk_tc.TABLE_NAME = t.TABLE_NAME AND fk_tc.CONSTRAINT_TYPE = 'FOREIGN KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE fk_ccu
               ON fk_ccu.CONSTRAINT_NAME = fk_tc.CONSTRAINT_NAME

    Usually this runs in a couple of seconds, but on one server running SQL Server 2000 it is taking over four minutes. I ran it with the execution plan enabled, and the results are huge, but this part caught my eye (it won't let me post an image): http://img35.imageshack.us/i/plank.png/

    I then updated the statistics on all of the tables that were mentioned in the execution plan:

        UPDATE STATISTICS sysobjects
        UPDATE STATISTICS syscolumns
        UPDATE STATISTICS systypes
        UPDATE STATISTICS master..spt_values
        UPDATE STATISTICS sysreferences

    But that didn't help. The Index Tuning Wizard doesn't help either, because it doesn't let me select system tables. There is nothing else running on this server, so nothing else could be slowing it down. What else can I do to diagnose or fix the problem on that server?

  • Exchange 2010 Mail Enabled Public Folder Unable to Receive External (anon) e-mail

    - by Alex
    Hello all. I am having issues with my mail-enabled public folders receiving e-mail from external senders. The folder is set up with three accepted domains (names changed for privacy reasons):

        1 - domain1.com (primary & authoritative)
        2 - domain2.com (authoritative)
        3 - domain3.com (authoritative)

    When someone attempts to send an e-mail to the folder's address at domain3.com from inside the organization, the e-mail is received and placed in the appropriate folder. However, when someone tries to send an e-mail from outside the organization (such as a Gmail account), the following error message is received:

        "Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 554 554 Recipient address rejected: User unknown (state 14)."

    When I try to send an e-mail to the same folder, using the same local part but with domain2.com instead of domain3.com, it works as intended (both internal and external). I have checked, double-checked, and triple-checked my DNS settings, comparing those for domain2 and domain3, and they appear identical. I have tried recreating the folders in question, with the same results. I have also run Get-PublicFolderClientPermission "\Web Programs\folder" with the following results for the Anonymous user:

        RunspaceId   : 5ff99653-a8c3-4619-8eeb-abc723dc908b
        Identity     : \Web Programs\folder
        User         : Anonymous
        AccessRights : {CreateItems}

    domain2.com and domain3.com are duplicates of each other, but only domain2.com works as intended. All other Exchange functions are working properly. If anyone out there has any suggestions, I would love to hear them. I've just hit a brick wall. Thanks for all your help in advance! --Alex
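
    A "User unknown" rejection happens at recipient resolution, before folder permissions are checked, so the first thing to rule out is whether the domain3.com proxy address is actually stamped on the folder object. A quick sketch in the Exchange Management Shell (folder name as in the question):

        Get-MailPublicFolder "folder" | Format-List PrimarySmtpAddress, EmailAddresses
        Get-AcceptedDomain | Format-Table Name, DomainName, DomainType

    If the EmailAddresses list shows the domain2.com address but no domain3.com one, external mail to that address will bounce exactly like this even though the permissions are fine.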

  • New router messed up server 2003 setup...

    - by Aceth
    Hey. We were sent a new 2Wire router today and configured it as best we can to match the old BT Voyager. We've also got X static IPs. We've managed to get our web server public-facing on one of the new IPs. We then use a hardware firewall, which is in a DMZ, again with a different static IP. This firewall is the gateway for our internal LAN, with a few servers, etc.

    The problem we're having is that only our PDC (primary domain controller, which also runs Exchange 2003) can't ping externally, even to an external IP. We've connected laptops to the 2Wire router, and they obtain a private IP (192.168.1.x) and work fine -- they can ping out, etc. Our other servers with internal IPs behind the firewall can ping out fine. We've connected to the firewall's logging console, and the pings from the server are allowed through, so it's fine there. The server in question:

        - Windows Server 2003 R2 Enterprise SP2 + Exchange 2003
        - Does not have the firewall turned on
        - Has a static private IP, with the gateway pointing to the right one
        - Its external static IP is routing fine inwards

    We've run out of ideas... help??

  • Error in Bind9 named.conf file. Bind won't start.

    - by tj111
    I'm trying to set up a DNS server on an Ubuntu Server machine (10.04). I configured an entry in named.conf.local to test it, but when trying to restart bind9 I get the following error:

        * Starting domain name service... bind9    [fail]

    So I checked the output of syslog, and this is what I get:

        May 20 18:11:13 empression-server1 named[4700]: starting BIND 9.7.0-P1 -u bind
        May 20 18:11:13 empression-server1 named[4700]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        May 20 18:11:13 empression-server1 named[4700]: adjusted limit on open files from 1024 to 1048576
        May 20 18:11:13 empression-server1 named[4700]: found 4 CPUs, using 4 worker threads
        May 20 18:11:13 empression-server1 named[4700]: using up to 4096 sockets
        May 20 18:11:13 empression-server1 named[4700]: loading configuration from '/etc/bind/named.conf'
        May 20 18:11:13 empression-server1 named[4700]: /etc/bind/named.conf:10: missing ';' before 'include'
        May 20 18:11:13 empression-server1 named[4700]: loading configuration: failure
        May 20 18:11:13 empression-server1 named[4700]: exiting (due to fatal error)

    So it thinks I have an error in the default named.conf file, which is pretty ridiculous. I went through it and deleted a blank line just for the hell of it, but I can't see how it figures there's an error in there. Note that before this I did have an error in named.conf.local, but it showed up properly in syslog and I fixed it, so it is reporting the correct file. Here are the contents of named.conf:

        // This is the primary configuration file for the BIND DNS server named.
        //
        // Please read /usr/share/doc/bind9/README.Debian.gz for information on the
        // structure of BIND configuration files in Debian, *BEFORE* you customize
        // this configuration file.
        //
        // If you are just adding zones, please do that in /etc/bind/named.conf.local

        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";
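
    A faster loop than restarting the service: named-checkconf parses named.conf together with everything it includes and reports the first syntax error with file and line number:

        named-checkconf /etc/bind/named.conf

    Since line 10 is the first include statement, the parser is really objecting to whatever immediately precedes it -- a statement missing its ';' or a stray non-comment character above that line (possibly an invisible one) is the usual suspect.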

  • When pointing to new DNS servers is there any chance of E-mails being lost if the old E-mail hosting service is still up?

    - by LaserBeak
    I am changing webhosts and will be using the new hosts mail servers instead of the old ones. I have created all the correctly named mailboxes on the new service but have also not yet cut ties with the old webhost. I am expecting that even if the new DNS values which point to the new hosts DNS servers and respective SOA\zone file with the new MX values have not yet propagated and an E-mail is directed at the old hosts mail servers as per the mx records in the SOA\zone records which the old hosting provider holds, the E-mail would still come through to the mailbox that's on the old host providers mail servers. So I am just trying to reaffirm if I got this right and it's essentially impossible for me to loose an E-mail since it will hit either the old hosts mail servers or the new ones ? Also is it possible to configure the same E-mail account to check and collect mail from different mail servers by entering multiple pop3 addresses ? And if I choose to keep the old web hosts mail hosting services as a backup by specifying the mx records for it with a lower priority in the SOA records hosted by the new webhost, is it possible to have any incoming E-mails sent to both servers by the mail daemon so I have two copies? Or is my only option having the primary mail server forward the E-mail somehow to the old mailserver ?

  • Will adding a SSD cache device to my ZFS storage improve performance?

    - by Sysadminicus
    The server has 4GB of RAM and my zpool is made up of 15.5k SAS drives arranged like this: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 c0t2d0 ONLINE 0 0 0 c0t3d0 ONLINE 0 0 0 c0t4d0 ONLINE 0 0 0 c0t5d0 ONLINE 0 0 0 c0t6d0 ONLINE 0 0 0 c0t7d0 ONLINE 0 0 0 c0t8d0 ONLINE 0 0 0 raidz1-1 ONLINE 0 0 0 c0t10d0 ONLINE 0 0 0 c0t11d0 ONLINE 0 0 0 c0t12d0 ONLINE 0 0 0 c0t13d0 ONLINE 0 0 0 c0t14d0 ONLINE 0 0 0 spares c0t9d0 AVAIL c0t1d0 AVAIL The primary use is as an NFS store for a couple VMWare ESXi servers. I can't do any "true" benchmarks because this is a production system (no budget for test systems), but using dd and bonnie++ I can't get more than ~40-50MB/s writes and ~70-90MB/s reads. It seems I should be able to do much better, but I'm not sure where to optimize. Based on what I've read, I think dropping in a OCZ Vertex 2 Pro SSD as my L2ARC is going to be the best bang-for-the-buck to improve througput. Is there something else I should be looking into to help performance? If not... How do I know how big a cache device I need? Am I safe with only a single SSD as my cache device?
