Search Results

Search found 26810 results on 1073 pages for 'fixed point'.

  • Removing forward lookup zone broke our site - why?

    - by user102469
    I'm fairly new to the job and trying to get to grips with the infrastructure here. We've moved a site from being locally hosted on our own network to an external host (1&1). I've transferred the DNS hosting from the previous DNS host to 1&1 to keep things simple. Once everything had gone through, visitors external to our network were being directed to the new site on 1&1, but requests from within our network were still going to our own server. I noticed in the DNS server that there was a forward lookup zone for the site still pointing to our own server. My (admittedly simplistic) understanding was that pausing that zone would cause the DNS server to get the address for the site from our external DNS servers, and our users would start landing on our new site. However, what happened instead was that they were met with "page not found" type errors. I've resolved it by modifying the forward lookup zone A record to point to the external web server, but I would like to understand why pausing the zone didn't work. Would deleting the zone work? I am reluctant to try that, as creating it again will not be as easy as simply pressing "start". Many thanks.
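
    A quick way to pin down this kind of split answer is to compare what the internal DNS server returns with what a public resolver returns. A minimal sketch; the hostname and the internal server IP below are placeholders, not values from the question:

        rem Placeholders: replace www.example.com and 192.168.0.10 with your site and your internal DNS server.
        nslookup www.example.com 192.168.0.10
        nslookup www.example.com 8.8.8.8

    If the internal server still considers itself authoritative for the paused zone and answers with a failure rather than recursing out to the external records, that would fit the "page not found" behaviour described above.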

  • Converting An Older VMWare Image to ESXi

    - by Bart Silverstrim
    I have just installed VMware ESXi and imported to it, without issue, one physical machine and one virtual machine from an older VMware Server installation on a Linux system. But I have another machine (Windows XP), running virtual on the VMware Server, that I can't get the Converter to work with. Converter runs and says everything is fine, but when I go to the ESXi vSphere manager and hit the button to power up that VM, the console stays black with a blinking cursor and the processor for that VM spikes to 100% and stays there. An event log item in vSphere warns about activation issues with Windows. Has anyone else run into this issue before? Is it something with the disk controller? I copied the folder with the VM directly to the storage drive on ESXi, hoping to create a new machine and point the datastore to that folder or at least that drive image; no luck, something about not using an IDE controller (must be something with the older format). In short, Converter is doing something that particular machine doesn't like, and I can't find a way to simply open that hard disk image or convert it unless someone else has seen this. I tried attaching a bootable CD image for disk repair to see if it could check the hard drive, but I can't seem to get the console to boot from it: either I'm too slow on the button or it just isn't able to easily boot from a virtual drive image (.iso). Any suggestions or help?
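
    If Converter keeps refusing this one guest, another route (assuming SSH/Tech Support Mode access to the ESXi host) is to import the copied hosted-format disk with vmkfstools and attach the result to a freshly created VM. The datastore and folder names below are placeholders:

        # Run on the ESXi host; -i clones/converts the source disk into a format ESXi can use.
        vmkfstools -i /vmfs/volumes/datastore1/oldvm/WindowsXP.vmdk \
                   /vmfs/volumes/datastore1/newvm/WindowsXP.vmdk -d thin

    Attaching the converted disk to the new VM as an IDE disk (matching what the XP guest was originally installed on) may avoid the controller complaint; this is a sketch, not a guaranteed fix for the activation warning.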

  • Campus VLAN Segmentation - By OS?

    - by Moduspwnens
    We've been thinking through rearranging our network and VLAN configuration. Here's the situation. We already have our servers, VoIP phones, and printers on their own VLANs, but our problem lies with end-user devices. There are just too many to lump on the same VLAN without being hammered with broadcasts! Our current segmentation strategy splits them into VLANs like this: Student iPads, Staff iPads, Student MacBooks, Staff MacBooks, Gaming devices, Staff (Other), Student (Other). (Note that our network has many more iPads and MacBooks than most.) Since the primary reason we're splitting them is just to put them in smaller groups, this has been working for us, for the most part. However, it requires our staff to maintain access control lists (MAC addresses) of all devices belonging in these groups. It also has the unfortunate side effect of illogically grouping broadcast traffic. For example, using this setup, students on opposite ends of campus using iPads will share broadcasts, but two devices belonging to the same user (in the same room) will likely be on completely separate VLANs. I feel like there must be a better way of doing this. I've done a lot of research and I'm having trouble finding instances of this kind of segmentation being recommended. The feedback on the most relevant SO question seems to point toward VLAN segmentation by building/physical location. I feel like that makes sense because logically, at least among miscellaneous end users, broadcasts will typically be intended for nearby devices. Are there other campuses/large-scale networks out there segmenting VLANs based on end-system OS? Is this a typical configuration? Would VLAN segmentation based on physical location (or some other criteria) be more effective? EDIT: I've been told that we will soon be able to dynamically determine device OS without maintaining access lists, although I'm not sure how much that affects the answers to the questions.

  • SLI Video Cards

    - by Scott
    I'm configuring a new desktop. I'm looking through all PCI Express 2.0 x16 video cards on Newegg at the moment. All cards seem to state that they have SLI support or CrossFireX support. I know that these two technologies enable you to connect more than one video card together to create a more powerful video experience. These cards that have this support seem to be more expensive than other cards, though. When I did a Newegg search for cards that did NOT have these two supports, no matches came up for PCI Express 2.0 x16 (haven't checked other interfaces yet, though). Can you run an SLI/CrossFireX card as a single card (and if you can, is it efficient or even good?), or do you need to have more than one card to run them? Also, do they make cards without these two technologies anymore, and are they even cheaper at this point? I really have no need for SLI/CrossFireX at all, so I was wondering what I should look into. Do cards without this technology not run on the PCI Express 2.0 x16 interface? Thanks ahead of time for all your help.

  • Successful login with iscsiadm on target still doesn't create block device

    - by Halfgaar
    I've set up an experiment to test iscsitarget and the initiator, which at some point worked. Later, I turned the setup back on and, much to my dismay, the initiator machine stopped creating block devices for its successful logins. As far as I know, I haven't changed anything on either machine. Some details:

        # iscsiadm -m node --login
        Logging in to [iface: default, target: iqn.2010-12.nl.ytec.arbiter:arbiter.lun1, portal: 10.0.0.1,3260]
        Logging in to [iface: default, target: iqn.2010-12.nl.ytec.arbiter:arbiter.lun2, portal: 10.0.0.1,3260]
        Login to [iface: default, target: iqn.2010-12.nl.ytec.arbiter:arbiter.lun1, portal: 10.0.0.1,3260]: successful
        Login to [iface: default, target: iqn.2010-12.nl.ytec.arbiter:arbiter.lun2, portal: 10.0.0.1,3260]: successful

    Sessions:

        # iscsiadm -m session
        tcp: [3] 10.0.0.1:3260,1 iqn.2010-12.nl.ytec.arbiter:arbiter.lun1
        tcp: [4] 10.0.0.1:3260,1 iqn.2010-12.nl.ytec.arbiter:arbiter.lun2

    Netstat:

        # netstat -n -p | grep 3260
        tcp   0   0 10.0.0.2:48719   10.0.0.1:3260   ESTABLISHED 1078/iscsid
        tcp   0   0 10.0.0.2:48718   10.0.0.1:3260   ESTABLISHED 1078/iscsid

    /var/log/syslog doesn't give errors:

        Jan 27 11:41:49 vmnode001 kernel: [  378.041749] scsi7 : iSCSI Initiator over TCP/IP
        Jan 27 11:41:49 vmnode001 kernel: [  378.044180] scsi8 : iSCSI Initiator over TCP/IP

    lsscsi doesn't show my devices:

        [0:0:1:0]  cd/dvd  TSSTcorp DVD-ROM TS-L333A  D100  /dev/sr0
        [4:0:0:0]  disk    ATA      Hitachi HUA72105  A74A  -
        [4:0:1:0]  disk    ATA      Hitachi HUA72105  A74A  -
        [4:1:0:0]  disk    Dell     VIRTUAL DISK      1028  /dev/sda

    And there are no block devices in /dev for it:

        # ls -1 /dev/sd*
        /dev/sda
        /dev/sda1
        /dev/sda2
        /dev/sda3
        /dev/sda4

    I tried loading all the SCSI kernel modules I could find, but that doesn't seem to be the problem. I really don't get this; it used to work. I found people with similar problems (here and here) but no solution. The initiator is Debian Squeeze (testing), the target is Debian Lenny (stable). iscsitarget is 0.4.16+svn162-3.1+lenny1; open-iscsi (initiator) is 2.0.871.3-2squeeze1. Target kernel: 2.6.26-2-amd64, initiator kernel: 2.6.32-5-amd64.
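
    Two cheap checks that sometimes shake a device loose in this situation: force a rescan of the established sessions and confirm the kernel actually reported an attached disk. Nothing below is specific to this setup:

        # Ask open-iscsi to rescan the logged-in sessions:
        iscsiadm -m session --rescan
        # Watch for "Attached SCSI disk" or target errors after the rescan:
        dmesg | tail -n 30
        # Confirm the iSCSI session objects exist in sysfs:
        ls /sys/class/iscsi_session/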

  • Firefox crash on first load on Ubuntu Linux on older Dell Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. No point in reinstalling Windows 2000, so I installed an Ubuntu Linux on the laptop. Everything seems normal (installed, rebooted, I can log in, run GnuChess, poke about). ... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit and then everything stops: disk, icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system when Firefox is booting is that while it has an Ethernet port (that worked fine under Windows), it isn't actually plugged into an Ethernet. As this is the first Firefox boot since the Ubuntu install, maybe Firefox mishandles Internet access? Why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in).

  • Dynamic subdomain routing

    - by Nader
    Hi everyone, I asked this question over at Stack Overflow, but got very few views: http://stackoverflow.com/questions/2284917/route-web-requests-to-different-servers-based-on-subdomain Perhaps it's more applicable to this crowd. Here it is again for convenience: I have a platform where a user can create a new website using a subdomain. There will be thousands of these, e.g. abc.mydomain.com, def.mydomain.com; hopefully, if we are successful, hundreds of thousands. I need to be able to route these domains to different IPs to point at a particular app server. I have this mapping in a database right now. What are the best practices and recommended technologies here? I see a couple of options: Have DNS set up with a wildcard CNAME entry so that all requests go to a single IP, where perhaps two machines using heartbeat (for failover) know how to look up the IP in the database and then do an HTTP redirect to the appropriate app server. This seems clunky and slow to me. Run my own DNS server that can be programmatically managed, such that when a new site is created a DNS entry is added. We also move sites around to different app servers, so I would need to be able to update DNS entries in close to real time. Thoughts, anyone? Thanks.

    Update 2: I've set up external wildcard DNS pointing at an HAProxy web server whose job is to route requests to backend servers. The mapping is stored in our internal PowerDNS server. The question now is how to get the HAProxy server (or another) to use the value of the internal DNS and not some config file or access list?

    Update: Based on some suggestions below, it seems like reverse-proxy server(s) are the way to go. As I'll be rebalancing the domain-server mapping, these need to work instantly, and the TTL on a DNS solution could be a problem. Any recommendations on software to use, considering this domain-IP data is stored in a DB and I'll need this to be performant?
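
    For the HAProxy question in Update 2, one common pattern is not to have the proxy consult DNS at all, but to regenerate its subdomain-to-backend map from the database and reload it whenever the mapping changes. A rough sketch; the database, table and column names, and the paths, are assumptions rather than anything from the question:

        #!/bin/sh
        # Hypothetical: dump (subdomain, backend) pairs from a "sites" table into a file
        # that an HAProxy config template can turn into use_backend rules, then soft-reload.
        psql -U haproxy -d platform -A -t -F' ' \
             -c "SELECT subdomain, backend_ip FROM sites;" > /etc/haproxy/site-map.txt
        # ...template step that rewrites haproxy.cfg from site-map.txt goes here...
        /etc/init.d/haproxy reload

    Because the reload is triggered by the rebalancing job itself, there is no DNS TTL in the path, which addresses the "instant" requirement in the update.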

  • How to bypass vpn talking to VMWare Guest?

    - by marc esher
    Greetings. Network/VPN n00b question here. I'm running VMware Workstation with a guest Windows 2003 Server. It has SQL Server 2000 installed. The sole purpose of this guest is to house SQL Server... it needn't have internet access or access to any other resources on the network other than the host. When I launch the Check Point VPN software, the host routes through the company network before it connects to the guest... i.e. it's no longer a direct connection. I assume this is just how things are supposed to work. However, what's happening is that the connection between my host and the SQL Server instance on the guest intermittently drops. It's not consistent, and some databases on the server will be responsive while others aren't. It appears that the databases with the most traffic on the guest (the ones I'm hitting with load tests) are the ones that become intermittently unresponsive. This problem only manifests when the VPN is on; when it's off, I can pound away on this database with no trouble. Thanks for any advice!
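
    Before changing anything, it may be worth confirming that the Check Point client really is pulling host-to-guest traffic into the tunnel. Comparing the path with the VPN up and then down makes that visible; the guest IP below is a placeholder:

        rem Placeholder guest address; run once with the VPN connected and once without.
        tracert -d 192.168.56.101
        route print

    If the trace with the VPN up leaves via the company gateway instead of the VMware virtual adapter, a host-only network for the guest (or a more specific static route) is the usual workaround to look into.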

  • simple apache2 reverse proxy setup not working

    - by Nick
    I know what a proxy is (at a very high level); it's just that I have never set one up, and it feels like I might be missing some big fat point here. My setup:

        client
        server (static IP), runs Apache on port 80
        proxy  (has two network cards: one on the clients' network, the other with a static IP on the server network), runs Apache on port 80

    I am trying to configure these three machines so that when the client requests http://proxy/machine1, it gets served the server's pages at the server root URL, i.e. http://server/. I can access client pages just fine. However, when I try accessing a page from the client machine, it simply gets redirected to the server's IP address, which it clearly can't access since they are not on the same network:

        ...
        <meta http-equiv="REFRESH" content="0;url=http://server/machine1"></meta>
        <title>Redirect</title>
        ...

    My apache2 config is:

        LoadModule proxy_module /modules/mod_proxy.so
        LoadModule proxy_http_module /modules/mod_proxy_http.so

        ProxyRequests off

        <Proxy *>
            Order Allow,Deny
            Allow from all
        </Proxy>

        ProxyPass /machine1 http://server:80

        <Location /machine1>
            ProxyPassReverse /
        </Location>

    What gives? Thanks!
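
    One detail worth checking from the client side is what the proxy is actually returning, since a header redirect and a body redirect behave very differently here. A quick probe, using the hostnames from the question:

        # Show only the status line and any Location header, without following redirects:
        curl -sI http://proxy/machine1 | egrep -i '^(HTTP|Location)'
        curl -sI http://server/       | egrep -i '^(HTTP|Location)'

    ProxyPassReverse only rewrites response headers such as Location; the meta-refresh shown above lives in the response body, which mod_proxy will not rewrite, so the backend's own URL leaks through. Also note that ProxyPassReverse normally takes a URL argument matching the ProxyPass target.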

  • Script launching 3 copies of rsync

    - by organicveggie
    I have a simple script that uses rsync to copy a Postgres database to a backup location for use with Point-In-Time Recovery. The script is run every 2 hours via a cron job for the postgres user. For some strange reason, I can see three copies of rsync running in the process list. Any ideas why this might be the case? Here's the cron entry:

        # crontab -u postgres -l
        PATH=/bin:/usr/bin:/usr/local/bin
        0 */2 * * * /var/lib/pgsql/9.0/pitr_backup.sh

    And here's the ps list, which shows two copies of rsync running and one sleeping:

        # ps ax | grep rsync
        9102 ?  R  2:06 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9103 ?  S  0:00 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9104 ?  R  2:51 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log

    And here's the uber-simple script that seems to be the cause of the problem:

        #!/bin/sh
        LOG="/var/log/pgsql-pitr-backup.log"

        base_backup_dir="/var/lib/pgsql/9.0/backups"
        wal_archive_dir="$base_backup_dir/wal_archives"
        pitr_archive_dir="$base_backup_dir/pitr_archives"

        timestamp=`date +%Y%m%d%H%M%S`
        backup_dir="$pitr_archive_dir/$timestamp"

        mkdir -p $backup_dir
        echo `date` >> $LOG

        /usr/bin/psql -U postgres -c "SELECT pg_start_backup('$backup_dir');"
        rsync -avW /var/lib/pgsql/9.0/data/ $backup_dir/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        /usr/bin/psql -U postgres -c "SELECT pg_stop_backup();"
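
    Part of this is likely normal: a single local rsync run forks into separate sender, generator and receiver processes, so one invocation can legitimately show up as three rsync entries in ps. If the worry is overlapping cron runs rather than the fork count, a lock around the script keeps a slow backup from stacking up with the next one. A minimal sketch, assuming flock(1) from util-linux is available:

        #!/bin/sh
        # Skip this run if the previous backup is still holding the lock.
        LOCK=/var/run/pitr_backup.lock
        exec 9> "$LOCK" || exit 1
        flock -n 9 || { echo "previous pitr backup still running, skipping" >&2; exit 0; }
        # ...existing pg_start_backup / rsync / pg_stop_backup steps go here...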

  • How would I setup iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation in which I have a particular user with several aliases that all point to a single user (me, for the record). I've been copying all of my mail to Gmail and reading it there, but it annoys me that I have to go back weekly, log into my mail account on iMail, and delete between six and ten thousand copies of messages I've already received, in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely). I've got the copying set up via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first-match situation?) and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK it's going to screw up all the sender information once the mail reaches my Gmail account and show it as coming from me instead of the original senders (can anyone confirm that this is accurate?). An easy answer would be to delete my user account entirely and replace it with an alias that maps to my Gmail account, but then I would lose my ability to log into the system for admin duties. So that leads me to creating a second, lesser-known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I can set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or set up an inbound rule to bounce everything, but this is starting to sound kludgy to me. Does anyone know of a more direct workaround for copying a user's incoming mail to an outside server and then deleting the local copy, without removing the account entirely? Or is this just wishful thinking? Thanks in advance. Scott

  • Very long (>300s) request processing time on Apache Server serving static content from particular IP

    - by Ron Bieber
    We are running an Apache 2.2 server for a very large web site. Over the past few months, some users have been reporting slow response times, while others (including our resources, both on the internal network and on our home networks) do not see any degradation in performance. After a ton of investigation, we finally found a "Deny from none" statement in our configuration that was causing reverse DNS lookups (which were timing out); fixing that solved the bulk of our issues, but we still have some customers that we see in the Apache logs (using %D in the log format) with request processing times of 300s for images, CSS, JavaScript and other static content. We've checked all Deny/Allow statements for a recurrence of "none", as well as all the other things we know of that would cause reverse DNS lookups (such as using "REMOTE_HOST" in rewrite rules, or using %a instead of %h in our log format configuration), and verified that HostnameLookups is set to "Off". As an aside, we've also validated that reverse DNS lookups for folks having this problem do not time out, so I'm fairly certain DNS is not an issue in this case. I've run out of ideas. Are there any Apache configuration scenarios someone can point me to that I might be missing, which would cause request times for static content to take so long for only certain users? Thank you in advance.
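
    When only some clients see the 300 s times, it can help to split the delay into DNS, connect and first-byte phases from an affected network. curl's timing variables do this without any server changes; the URL is a placeholder:

        curl -o /dev/null -s -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
             http://www.example.com/static/logo.png

    A fast connect with a first-byte time near 300 s would point back at something stalling inside the server (a per-request lookup or module), whereas a slow connect would point at the path between those particular clients and the site.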

  • Workstations cannot see new MS Server 2008 domain, but can access DHCP. (solved)

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (Server 2003) domain controller. The old_server is not connected to the network. I have DHCP working with the same scope as the old_server. In my "before-asking" search for a solution I came across the following two articles, and I recall doing things as suggested by them:

        http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/
        http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/

    The only possible issue is this: I was under the impression that the domain NetBIOS name needed to match the DC's NetBIOS name. The DC NetBIOS name is city01, while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org). But the second link led me to a post which I believe answers my question. I did as they instructed by opening Local Area Connection Properties, then selecting TCP/IPv4 and setting the sole preferred DNS server to the local host's static IP (10.10.1.1). Search for "Your problems should clear up" for the post I'm referencing:

        http://forums.techarena.in/active-directory/1032797.htm

    Have I misunderstood their instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read? I really don't like treating computers as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out, it is because I did not know it was needed. PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation, thank you for your help.
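
    To confirm that a workstation pointed at 10.10.1.1 can actually find the new domain controller, the DC locator SRV record can be queried directly from an XP client. The domain name below mirrors the (possibly mistaken) name mentioned above:

        rem Ask the new DC's DNS server for the domain controller locator record, then clear the local cache:
        nslookup -type=SRV _ldap._tcp.dc._msdcs.city.domain.org 10.10.1.1
        ipconfig /flushdns

    If that query returns nothing, the clients have no way to discover the 2008 DC regardless of which DNS server they use, which would keep them falling back to the cached entry for the old domain.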

  • Best way to attach 96 TB to a workstation

    - by user994179
    I'm running a workstation with dual Xeon 5690s (12 physical/24 logical cores), 192 GB of RAM (i.e., maxed out), Windows 7 64-bit, 5 slots for adapter cards, and 1 TB of internal storage, with 5 more internal bays available. I have an app that creates data files totaling about 88 TB. These are written once every 14 months, and the rest of the time the app only needs to read them; 95% of the reads are sequential reads of huge chunks of data. I have some control over how big the individual files are, but ideally they would be between 5 and 8 TB. The app will be reading from only one drive at a time, and the nature of the data is such that if (when) a drive dies I can restore the data to a new disk from tape. While it would be nice to be able to use the fastest drives/controllers available, at this point size matters more than speed. After doing lots of reading, I am leaning toward buying a bunch of cheap 2 TB drives and putting them into a bunch of cheap enclosures. All this stuff is going into my home office, so I need to avoid the raised-floor/refrigerated approach. My questions: Is the cheap drive/enclosure solution the best one for this situation? Given the nature of the app and the way the data is used, does RAID make sense? If so, which one? For huge sequential reads, would USB 3.0 and eSATA be a wash performance-wise? For each slot available on the workstation, can I hook up an enclosure that can hold multiple drives, or is it one controller per drive? If I can have multiple drives on one controller, am I essentially splitting the bandwidth (throughput)? For example, if I have a 12-bay enclosure, is the throughput of the controller reduced by a factor of 12? Are there any Windows 7 volume/drive/capacity limits I should be aware of? Thanks

  • Is there the equivalent of cloud computing for modems?

    - by morpheous
    I asked this question on SF, and someone recommended that I ask it here (I don't think I have enough points to move a question from SF to SO, and in any case, I don't know how to do it), so here is the question again: I am interested in the concept of PaaS (platform as a service). However, all talk about SaaS/PaaS seems to focus only on the computer itself, not its peripherals. Is it possible to 'outsource' modems as a resource, so that an app running remotely can pump data to a modem in the cloud? As a bit of background to the question, a group of us are thinking of starting a company that offers similar services to companies like Twilio etc., but I want to 'outsource' both the computing hardware (that's PaaS, the easy bit) and the modems (that's what I can't seem to find any info on). Does anyone know if modems can be bundled as part of a PaaS service? Alternatively, is there a way that an application running on one computer can communicate (i.e. pump data) with a remote modem residing on another machine? I assume I can come up with some protocol over UDP or TCP, but there is no point reinventing the wheel if such a protocol already exists (or if some open source software allows one to do this). Any suggestions on how to solve this problem?

  • SSO "Portal"

    - by Clinton Blackmore
    Pursuant to my question on alleviating the password explosion, I've contacted some of the services we pay to access their websites, to ask if we could authenticate our own users, and some of them said yes and sent me specs on how to do so. (One of the sites called such a system a "portal"; I've never heard the term used in quite that way.) It is simple enough that I am tempted to roll my own. The largest complication is that one site wants us to store a key for every user in our database (and I think the LDAP database makes sense) after their initial login. So, non-trivial, but doable. The nature of these sorts of tasks, I expect, is that if they start out small and simple, they don't end that way. There must be some software that addresses this and is readily extended, surely. In my searching, I've come across: SimpleSAMLphp, JOSSO, RubyCAS-Server, Shibboleth, Pubcookie, OpenID. [Wow, gee. I'd missed some of those in my previous searches! The Wikipedia page on Central Authentication Services is useful, and the section on alternatives to OpenID makes it look like there is a lot of choice.] Can anyone recommend any of these, or suggest ones to avoid? Internally, we are authenticating using Apple's Open Directory [ == OpenLDAP + Kerberos + Password Server (which, I believe, == SAML) ]. As far as extending/tweaking/advanced configuration of a system goes, I am able to program in Python and C++, can do some basic PHP, and may be able to remember some Java. Looks like I need to pick up Ruby at some point. Addendum: I would also like users to be able to change their passwords over the web (and for certain users to change the passwords of other users).

  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation, using this installation guide. My base cluster is installed and working properly, as is the clustered DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 attempts at installation and trying to resolve the errors (e.g. changing domain accounts for SQL, setting SPNs, etc.), I typoed the network name as VQSL and the installation worked. Similarly, on my current cluster I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install works. It will even install to VSQL now. While testing, my install routine was:

        1. Run setup.exe from patched media, selecting appropriate options
        2. After the install fails, choose "Remove node from a SQL Server failover cluster" and remove the single, failed node
        3. Attempt to diagnose the problem, inspect event logs, etc.
        4. Delete the computer account that was created for the SQL service from Active Directory
        5. Delete the MSSQL10.MSSQLSERVER folder from the shared data drive

    The error message I receive from the SQL Server installer is:

        The following error has occurred: The cluster resource 'SQL Server' could not be brought online.
        Error: The group or resource is not in the correct state to perform the requested operation.
        (Exception from HRESULT: 0x8007139F)

    Along with hundreds of the following errors in the Application event log:

        [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818;
        message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    System configuration notes: Windows Server 2008 Enterprise Edition x64; SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media; Dell PowerEdge servers; Fibre-attached storage.
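
    The repeated NT AUTHORITY\ANONYMOUS LOGON failures are the classic signature of a Kerberos/SPN problem on the SQL network name, so listing what SPNs already exist for the service account is a cheap check between attempts. The account and domain names below are placeholders:

        rem List SPNs registered for the SQL Server service account (placeholder name):
        setspn -L OURDOM\sqlsvc
        rem Look for MSSQLSvc/PROD-C1-DB.ourdom.local:1433 (and the short-name form); stale or
        rem duplicate entries left by earlier failed installs can produce exactly this login failure.

    That would also be consistent with the "tainting" behaviour: a leftover SPN or computer object tied to the first network name keeps poisoning later installs under that name, while any new name starts clean.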

  • GRUB reporting wrong partition type

    - by plok
    It all started when I had to replace one of the disks that the software RAID 1 on this machine currently uses. From that moment on I have not been able to boot the Windows XP that is installed on the fourth hard drive, /dev/sdd. I am almost positive that the problem is related not to Windows but to GRUB: if I unplug all the other hard drives so that the Windows XP disk is now /dev/sda, it boots with no problem. The problem seems to be that GRUB detects a wrong partition type, which I understand suggests that something is really messed up. This is what I get when I try to follow the steps that until now had worked like a charm:

        grub> map (hd0) (hd3)
        grub> map (hd3) (hd0)
        grub> root (hd3,0)
         Filesystem type is ext2fs, partition type 0xfd

    0xfd? That doesn't make sense. /dev/sdb and /dev/sdc are 0xfd (Linux raid), but not /dev/sdd:

        edel:~# fdisk -l
        [...]
        Disk /dev/sdd: 250.0 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x00048d89

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1   *           1       30400   244187968+   7  HPFS/NTFS

        edel:/boot/grub# cat device.map
        (hd0)   /dev/sda
        (hd1)   /dev/sdb
        (hd2)   /dev/sdc
        (hd3)   /dev/sdd

    I have been trying to work this out for hours, to no avail. Can anyone point me in the right direction?

  • Can't log in using second domain controller when first DC is unreachable

    - by rbeier
    Hi. We're a small web development company. Our domain has two DCs: a main one (BEEHIVE, 192.168.3.20) in the datacenter and a second one (SPHERE2, 10.0.66.19) in the office. The office is connected to the datacenter via a VPN. We recently had a brief network outage in the office. During this outage, we weren't able to access the domain from our office machines. I had hoped that they would fail over to the DC in the office, but that didn't happen, so I'm trying to figure out why. I'm not an expert on Active Directory, so maybe I'm missing something obvious. Both domain controllers are running a DNS server. Each office workstation is configured to use the datacenter DC as its primary DNS server and the office DC as its secondary:

        DNS Servers . . . . . . . . . . . : 192.168.3.20
                                            10.0.66.19

    Both DNS servers are working, and both domain controllers are working (at least, I can connect to them both using AD Users + Computers). Here are the SRV records that point to the domain controllers (I've changed the domain name but I've left the rest alone):

        C:\> nslookup
        Default Server:  beehive.ourcorp.com
        Address:  192.168.3.20

        > set type=srv
        > _ldap._tcp.ourcorp.com
        Server:  beehive.ourcorp.com
        Address:  192.168.3.20

        _ldap._tcp.ourcorp.com  SRV service location:
                  priority     = 0
                  weight       = 100
                  port         = 389
                  svr hostname = beehive.ourcorp.com
        _ldap._tcp.ourcorp.com  SRV service location:
                  priority     = 0
                  weight       = 100
                  port         = 389
                  svr hostname = sphere2.ourcorp.com
        beehive.ourcorp.com     internet address = 192.168.3.20
        sphere2.ourcorp.com     internet address = 10.0.66.19

    Does anyone have any ideas? Thanks, Richard
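
    One quick check is whether the office DC's own DNS can answer the DC locator query when asked directly, since the workstations will only fall back to 10.0.66.19 after the primary server times out. Both queries use the record already shown above:

        nslookup -type=SRV _ldap._tcp.ourcorp.com 192.168.3.20
        nslookup -type=SRV _ldap._tcp.ourcorp.com 10.0.66.19

    If the second query works, the remaining suspects are client-side: the XP resolver can be slow to give up on an unreachable primary and may cache negative results, so during a short outage the clients might never have gotten as far as asking SPHERE2.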

  • The Server Fault Wiki of recommended practices [migrated]

    - by Avery Payne
    So I've noticed that there are several recommendations on basic practices on Server Fault, but there doesn't seem to be a cohesive view as to how those recommendations would all fit together. So I thought I would lump these together as a kind of mental exercise to see what the "ServerFault Community IT Department" would look like if it were implemented. This would give a few things: it would make a reasonable wiki (in the true wiki spirit of many contributions), it would provide several links to well-vetted practices, and it would be kind of fun to see what the amalgamation would look like. And who knows, it may even point out some interesting issues between different forms of "best practices", although I would be stunned if there was a conflict hidden in there someplace... Add your favorites from Server Fault as answers, and I'll re-edit this section with the results. Here are a few categories to collect different ideas together:

        Hardware Configuration(s): Server room configuration; Server room temperature; Firmware Updates and Scheduling
        Storage Configuration(s): Selecting a NAS box; Linux: Dealing with /tmp; Linux: Install apps in /var or /opt?
        Network Configuration(s): Checking DNS health and compliance
        Security Practice(s): Password (General) Best Practices; Password sharing methods; Windows Update; Updating Windows Servers that are hosts for VMs
        Network Service(s)
        User Service(s): User Naming & Deletion
        Upgrade Process(es)
        Disaster Recovery: Checking Backups; Documenting an outage for a post-mortem review

    Last Edit: 2010-02-17

  • Ruckus wireless AP and Dell PowerConnect configuration problems

    - by DanielJay
    We are working on trying to get some Ruckus Access Points to work correctly on our network. Currently our network is as follows:

        VLAN 10 - Servers
        VLAN 11 - Computers/DHCP
        VLAN 12 - Voice
        VLAN 13 - Guest

    We use Dell PowerConnect 6248P switches. Port settings are as follows. The ZoneDirector 1100 is plugged into this port; it should be accessing the server VLAN and then allowing all other traffic:

        interface ethernet 1/g2
        classofservice trust ip-dscp
        description 'Ruckus ZoneDirector 1100'
        switchport mode general
        switchport general pvid 10
        switchport general allowed vlan add 10
        switchport general allowed vlan add 11-13 tagged
        exit

    The access point is plugged into this port. The port has to be on VLAN 11 in order to get DHCP:

        interface ethernet 1/g16
        classofservice trust ip-dscp
        description 'Ruckus - IT'
        switchport mode general
        switchport general pvid 11
        switchport general allowed vlan add 10-12
        switchport general allowed vlan add 13 tagged
        exit

    If we tag the traffic from the SSID as VLAN 11, data fails. If we leave the SSID tagged as 1, the data flows correctly. Are there problems with passing tagged traffic to untagged ports? We are looking to see what we can do to get the SSID tagged as 11 instead of 1. Any suggestions?

  • Error when starting .Net-Application from ThinApp-Application

    - by user50209
    One of our customers uses SAP through VMware ThinApp. In SAP there is a button that launches a .NET application from a server. When starting the .NET application directly, there is no error. If the user tries to start the application by clicking the button in the ThinApp application, it displays the following errors:

        Microsoft Visual C++ Runtime Library
        R6034
        An application has made an attempt to load the C runtime library incorrectly.
        Please contact the application's support team for more information.

    After clicking "OK" it displays:

        Microsoft Visual C++ Runtime Library
        Runtime Error!
        R6030 - CRT not initialized

    So, does the customer have to install some components into his ThinApp (if yes, which?) to get things working? Regards, inno

    ----- [EDIT] -----

    @Sean: It's installed the following way: the .exe of the .NET application is on a mapped drive on a server. All clients have the requirements installed (the .NET Framework, for example) and start the .exe from the mapped drive. The ThinApp application tries to start this application and throws the exceptions mentioned above. AFAIK there are no entry points configured for this application. What I should also mention: the .NET application crashes during execution. We have a debug mode implemented that shows what the application is doing; it shows each step and after some steps it crashes. The interesting point is: it's a .NET application, not a C++ application.

  • Active FTP client blocked on Windows XP

    - by Ian Hannah
    Summary: We have an FTP server (running in active mode). We have an FTP client which connects to the server, carries out a task and then closes the connection. The FTP client can perform this operation on multiple threads.

    Problem: We have a situation where customers are experiencing occasional failures to carry out operations on an FTP connection. The actual connection has been made to the server, but when the server attempts to return data on the data port it fails.

    Observations: We have a simple test FTP client which runs two separate threads. Each thread performs a recursive listing of files from a root directory. With the firewall running on the client machine, the hang happens within a few minutes. If the firewall is turned off on the client machine, the test application seems to run correctly. This does point to a potential firewall issue. However, with the firewall on we can list files on our company FTP server without any issues. If the simple test FTP client runs a single thread, then we do not experience any problems, whether or not the firewall is turned on. We have another simple test FTP client which was running 4 threads (with each opening a new FTP connection, doing a directory listing and closing the FTP connection as fast as possible) overnight with the firewall turned off. With the firewall turned on it fails in a short space of time. The confusing thing is that if the test FTP client and the FTP server are run on the same machine, the failure occurs even though the firewall is turned off. This means that the problem may not be firewall related. Any help with what this could be would be much appreciated. Thanks, Ian
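
    With active-mode FTP the server opens a new inbound connection to the client for every data transfer, so the XP firewall has to either allow the client application or track the FTP control channel for it. A sketch using the XP SP2 netsh firewall syntax, with a placeholder path for the client executable:

        rem Allow the FTP client application through the Windows XP firewall:
        netsh firewall add allowedprogram "C:\Program Files\MyApp\ftpclient.exe" "FTP client" ENABLE
        rem Inspect the current firewall configuration while reproducing the hang:
        netsh firewall show state

    The same-machine failure with the firewall off is the odd data point; it may be worth ruling out port exhaustion or a data-port race between threads before pinning everything on the firewall.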

  • Self-Resetting Power Strips?

    - by Justin Scott
    We are about to deploy a number of secure kiosks into an environment where they may be prone to lightning strikes and power surges on a somewhat regular basis (southern Florida in a place where the existing electrical infrastructure is, shall we say, a bit out of date). Ideally we would use battery backups on each system, but it's not in the budget. We plan to use a standard power strip with a circuit breaker built-in to protect the computers, but management has asked if there is a power strip that can reset itself after the breaker has been tripped. I've looked around and wasn't able to find such a beast, and it seems to me that it would probably be a safety issue for such a product to exist (e.g. if something plugged into the strip is drawing a lot of current and trips the breaker, you wouldn't want that resetting itself to prevent a possible fire). Nevertheless, if anyone has experience with such a product or can point me in the direction of something that would allow the breakers to be reset automatically or remotely (we don't want to have to send someone to each kiosk every time there is a power surge) I would appreciate any tips.
