Search Results

Search found 2593 results on 104 pages for 'dell optiplex'.

  • Certificates required for WHQL-certified drivers

    - by Kasius
    The 64-bit Windows 7 image that we deploy to machines at our site does not contain all of the certificates included on a default Windows image. Automatic root certificate installation is also disabled per policy from higher in the organization. We have had a lot of trouble installing many WHQL-certified drivers from reputable companies (e.g. HP, Lexmark, Dell), and I hypothesize that a required certificate is missing from one of the certificate stores on the machine. The error we typically get is: "The driver cannot be installed because it is either not digitally signed or not signed in the appropriate manner." I know that it is signed. A .CAT file is included, and it has the following tree from top to bottom:
    - Microsoft Root Authority (thumbprint a4 34 89 15 9a 52 0f 0d 93 d0 32 cc af 37 e7 fe 20 a8 b4 19)
    - Microsoft Windows Hardware Compatibility PCA (thumbprint 93 b8 d8 82 0a 32 db 20 a5 ea b6 8d 86 ad 67 8e fa 14 ea 41)
    - Microsoft Windows Hardware Compatibility Publisher (thumbprint b0 50 45 45 42 4e be 2c 16 2f 62 5b bf 5a e6 9b 96 bf 0b 0b)
    What certificates are required to install WHQL-certified drivers? Is it possibly something other than certificates? Thanks! NOTE: I have posted this question on Technet as well, but honestly, I've never had a lot of luck posting questions on the Technet forums.
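
    If the missing-certificate hypothesis is right, importing the absent links of that chain into the machine stores is scriptable; a sketch using certutil, run from an elevated prompt (the .cer file names are hypothetical; export the certificates from a machine that still has the default store):

        certutil -addstore Root "Microsoft Root Authority.cer"
        certutil -addstore CA "Microsoft Windows Hardware Compatibility PCA.cer"

    The root certificate belongs in the machine's Trusted Root Certification Authorities store and the PCA in the Intermediate Certification Authorities (CA) store; "certutil -store Root" and "certutil -store CA" will show what is already present.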

  • Is there a way to force the monitor to power off in Windows 8?

    - by Rune Jacobsen
    I have googled this a bit and looked at powrprof.dll and PsShutdown but I haven't found a way to do exactly what I want to do. You know that power save option that lets Windows turn off your monitor(s) if you haven't touched the system for x amount of time? Well, I have a PC that needs to be on most of the day (and night), and I have to watch it much of the time, so I can't have a short timeout for automatically turning off the monitor. However, once I leave it for a few hours (happens at varying times of the day), I would like to be able to issue a command that puts the computer in this mode. Not sleep mode, not hibernate mode. Monitor off, that is all. I realize of course I could just turn the physical monitor off. That is not what I want. This Dell monitor takes forever to display a picture from a cold state. If it is turned off by the computer not sending a signal - not so bad. Is there any way for me to do this? As mentioned, the OS can do it, so I would find it really useful if I could do it too. :)
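
    The OS exposes this as a window message, so it can be triggered on demand; a minimal sketch in Python via ctypes (this is the standard SC_MONITORPOWER system command, the same mechanism the power-save timeout uses):

        import ctypes

        HWND_BROADCAST = 0xFFFF
        WM_SYSCOMMAND = 0x0112
        SC_MONITORPOWER = 0xF170
        MONITOR_OFF = 2  # 2 = off, 1 = low power, -1 = on

        # Ask every top-level window to power the display down
        ctypes.windll.user32.SendMessageW(
            HWND_BROADCAST, WM_SYSCOMMAND, SC_MONITORPOWER, MONITOR_OFF)

    Saved as a small script, this can be bound to a shortcut or scheduled task; moving the mouse wakes the display again.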

  • Flickering dual screens in a VirtualBox Ubuntu 13.10 guest

    - by alexleonard
    I have Ubuntu 13.10 x64 installed as a guest in VirtualBox (under a Windows 8.1 host) and have the settings for the virtual machine set up to run with a monitor count of 2, 128 MB video memory and 3D acceleration enabled. In my guest I have the VirtualBox Guest Additions installed (which allowed me to have two 1920x1080 screens). Here's a screenshot of my VM settings. My laptop is an Asus N550JV, which has both Intel's HD Graphics 4600 GPU and Nvidia's GeForce GT 750M. By default, though, I believe the Intel GFX card is being used to render the VM. When I boot up the VM it loads perfectly on dual screens, however whenever I move the mouse from one screen to the other (I have a Dell S2340L running over an HDMI connection as a second screen) the screen flickers. I've tried a variety of settings changes in both Ubuntu and the VM settings, but cannot seem to stop this screen flicker. I also used the Nvidia control panel in Windows to force the dedicated graphics card to always be used, but found that the display driver sometimes crashed whilst working in the VM, resulting in my VM session being destroyed, so I figured it's better to stick with the Intel GFX as that appears to be more stable. I also tried without 3D acceleration but that was much worse, and if I ran the VM with a low amount of graphics memory it really struggled. Here's my dmesg output: http://pastebin.com/1LJuYWMj (not sure if this is helpful in this situation). I read some posts suggesting changes to /etc/X11/xorg.conf, but I don't appear to have an xorg.conf file. There were also a few posts (though related to Synergy) suggesting running xset -dpms, but this command doesn't appear to have had any effect for me. As an additional note, I'm finding that window drawing in the guest is a little laggy/glitchy. For example, quickly scrolling through a web page may leave parts of the viewport displaying stale content. I notice the drawing issues most in the web browser, but it also affects other software, with parts of the window not being redrawn when, say, switching between accounts in Thunderbird. Any suggestions greatly appreciated!
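
    For anyone wanting to A/B test the VM-side settings quickly, the same knobs can be flipped from the host with VBoxManage (a sketch; "Ubuntu 13.10" stands in for whatever the VM is actually named, and the VM must be powered off first):

        VBoxManage modifyvm "Ubuntu 13.10" --vram 128
        VBoxManage modifyvm "Ubuntu 13.10" --monitorcount 2
        VBoxManage modifyvm "Ubuntu 13.10" --accelerate3d off

    That makes it easy to test 3D acceleration and video memory combinations against the flicker without clicking through the GUI each time.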

  • What to look for in a switch for LAN/WAN versus an iSCSI SAN?

    - by Luke
    I'm setting up a VMware ESXi 5 environment with 3 server nodes. Dell recommended 2x Force10 S60 switches shared (iSCSI SAN, LAN/WAN). The S60 switches are extremely powerful: they have 1.25 GB of buffer cache and < 9us latency. But they are very expensive (online price ~$15k per switch, actual quote a little less). I've been told that "by the book" you should have at least 2 dedicated switches for the SAN and 2 switches for LAN/WAN (each with a redundant partner). I know some of the pros and cons of each approach. What I'm wondering is, would it be more cost effective to separate the SAN from the LAN with less expensive switches? The answer to this question highlights what I should be looking for in a switch for the SAN. What should I be looking for in a LAN/WAN switch, in comparison to the SAN? With the above linked question for the SAN: How is buffer latency measured? When you see 36 MB of buffer cache, is that shared or per port? Would 36 MB mean 768 KB per port, or 36 MB per port? With 3 to 6 servers, how much buffer cache do you really need? What else should I be looking at? Our application will be heavily using HTML5 WebSockets (a high number of persistent connections). The amount of data being sent is small, and data sent between client and server isn't broadcast (it's not a chat/IM service). We will be doing some database reporting too (CSV export, sums, some joins). We are a small business on a budget; we could probably spend no more than $20k on switches total (2 or 4).
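
    (For reference, the 768 KB figure assumes the buffer is shared evenly across a 48-port switch: 36 MB / 48 ports = 768 KB per port. Per-port arithmetic only matters if the buffer is statically partitioned; a shared buffer can give a single congested port far more than its even share.)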

  • How does one skip "Windows did not shut down successfully" in Win7-64?

    - by XenonofArcticus
    Migrating an app from an expensive and unreliable dedicated embedded x86 box running WinXP-embedded to COTS hardware (a Dell E6410 laptop) running normal Win7-64. At this time, it's not feasible to deploy using Windows 7 embedded. The problem is that the system is still sort of "embedded": the power could shut off at virtually any time without prior warning. We've stripped the OS down and removed the battery capability so that it will power down as desired. The app never writes to the disk, so it's not like we're going to corrupt anything terribly. The system is essentially idle after our app is up and running (with the exception of some computation, graphics, and TCP/IP and serial communications), so the OS enters a pretty stable state rather quickly. After a power loss, however, it rightly complains that Windows did not shut down successfully and presents the user with the Windows Error Recovery text screen. If left alone, it does eventually move on and boot just fine, but we'd like to skip that step if possible. WinXP-embedded is designed to do this automatically, so I know it's possible. I've looked at the kernel switches but didn't see anything documented for "Skip Windows Error Recovery". I've also read extensively on the startup process: http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/windows-nt-6-boot-process.html I know I can disable the auto chkdsk in the registry, but that's not the same thing either. So, how do I streamline the boot process so it doesn't hassle the user about a situation that will be the normal state of affairs?
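
    The boot status policy is configurable per boot entry, which is how embedded-style deployments usually suppress that screen; a sketch from an elevated command prompt (these are standard BCD settings, but test them on your image):

        bcdedit /set {current} bootstatuspolicy ignoreallfailures
        bcdedit /set {current} recoveryenabled No

    The first line tells the boot manager not to offer Windows Error Recovery after an unclean shutdown; the second disables the recovery sequence entirely, which may or may not be appropriate here.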

  • Best Practice - SQL 2012 & IIS in VMware

    - by Dan Ribar
    We are pretty new to VMware and looking for some thoughts on our environment. We have a VMware cluster that has, on one host: VM#1: MS Windows 2008 R2 Enterprise & SQL Server 2012; VM#2: MS Windows 2008 R2 Standard & IIS. The IIS ASP.NET app talks directly to the SQL Server. We had a similar environment on physical servers a few months ago and just recently moved to the virtualized environment. Regarding the setup, we have not tweaked any of the VM resource parameters -- all is set as standard and all is working. What is observed is that the VMs seem to spool down and we get lags in response. Of course this isn't as fast as the old physical environment, but I am wondering:
    - Is it a good idea to run the SQL Server and the IIS server on the same host? They are the only two VMs on it. The host is a new Dell R620 with 192 GB of memory.
    - Does it make sense to change any CPU or memory reservations when it doesn't seem like there is any contention?
    - Is there a way to keep the VMs spooled up to eliminate delays?
    This is a brand new squeaky clean vanilla install. What are your thoughts?

  • Windows 8 keeps signing out

    - by bill weaver
    Ran into a strange problem with Windows 8 Pro. Last night I installed Windows 8 Pro as an upgrade on a Sony Vaio laptop that had Windows 7 Pro on it. The install seemed to go okay. Once installed, live tiles seem to work and native/Metro apps will start okay, but pretty soon after going into an app or settings, the screen flashes a few times and we're back at the lock screen. Signing in appears to do a full login. I've tried this with a local account and with a live.com account. This is someone else's laptop, so we decided to let it breathe, in case the install was still settling in. Well, they say today it's doing the same thing: open the Music app, and within a minute it's back to the sign-on/lock screen. However, they can go to the actual Desktop and run Zune to play music, and it seems happy. In the past, I've installed retail Windows 8 Pro clean on a homebrew system and as an upgrade on a Dell laptop with a zillion apps and drivers, neither with any problems. I've also had the consumer preview and release candidate installed, with no problems. Any ideas on what's going on here?

  • Connecting a laptop to a TV via HDMI

    - by Madmartigan
    I just bought a new Dell XPS17 laptop (Win7) that only has HDMI output. My last 2 laptops had VGA, which I used to connect to my Sony Bravia 32" TV with no issues, but with HDMI it's been quite a headache. Drivers for the display adapters have been updated to the latest versions: Intel(R) HD Graphics Family and NVIDIA GeForce GT 550M. I went to a store and plugged in to 4 different TVs from different manufacturers. A sales rep and I spent about 30 minutes being baffled by the results (which are the same as on my current TV):
    - Extremely buggy behavior in the Nvidia and Windows display/resolution control panels
    - Cannot extend or duplicate displays, can only select one
    - Third and fourth output devices "randomly" detected by the Windows control panel
    - Could not get the screen to fit the output (edges cut off on all sides by about half an inch)
    - Resolution and colors less than perfect; artifacts around text
    - Display "randomly" cuts out
    - Defaults to TV output only when plugged in
    - Cannot change resolution on either device when connected
    - No audio from the TV
    Plugged in to 3 monitors from different manufacturers:
    - Defaults to duplicated displays when plugged in
    - Everything works perfectly
    So far, four people have gone through all the settings in the laptop with no luck. I had similar, but not exactly matching, results with a different laptop. I'm currently using the Sony Bravia at home, but in order to get it to work I have to turn on the laptop, wait until the display shows up on it, close the lid, then cycle through each output channel on the TV until I come back around to the HDMI port again, and even then I still see the symptoms described above. However: once in a while, it just works. Sometimes, seemingly randomly, the output fits the screen perfectly. Sometimes the audio comes through the speakers too, but not always. Usually my screen saver "Mystify" will come up with a message that it cannot be displayed due to a limitation of the video card, but sometimes it works fine. These 3 things seem to be independent of each other and don't always happen together. So, is there any way to get the laptop to output correctly to a TV, or is it just not meant to be?

  • Windows 7 computer name changing on its own?

    - by DC
    Very odd problem... I have a Dell Latitude D830 with XP Pro that has been running on my local domain for many years. I recently installed Windows 7 Enterprise on the D830 using a brand new HDD, so that I could still use XP if needed by just swapping out the HDDs. I added the W7 install to my domain using a completely different machine name than the one used for the XP system, and everything seemed to be functioning as it should. On boot up over the last 2 weeks or so I have occasionally (3 times now) got to the login screen and tried to log in to the domain, only to get an error saying that the computer name is not a trusted machine in the domain I'm trying to log in to. Come to find out that the machine name on the W7 system has somehow been changed to that of my old XP system. If on the W7 system I then change the name back to the correct one, disjoin the domain, reboot, and add the machine back into the domain, all is well for an unknown period of time until it happens again. This last time, I know for a fact that everything was fine the day before when I shut down the system. I came in today, powered up the system, and the machine name had been changed to that of my old XP system again. Has anybody else seen this behavior or have any ideas on what could be causing it? Thanks!

  • Network access lags for Win7 when server network utilization is high

    - by Jeff Miles
    We have a Dell PE2950 file server running Windows 2008, hosting a DFS namespace of ~1.2 TB. This server has two Broadcom 1 Gbps NICs teamed together. When there is high traffic going to the server across the network (greater than 200 Mbps), any Windows 7 client accessing a DFS share at the time experiences severe performance problems. For example:
    - Computer A has an AutoCAD drawing opened directly from the DFS share. Performance is normal, not causing any issues.
    - Computer B begins a file transfer, putting an 11 GB file onto a different DFS namespace on the same server.
    - Computer A immediately notices lag while using AutoCAD. The cursor momentarily freezes within AutoCAD every 10 seconds or so, and any browsing of the DFS share is extremely slow.
    - Computer B completes the file transfer, and performance returns to normal for Computer A.
    This only affects Windows 7 clients, on a variety of hardware (desktop + laptop). All of our Windows XP clients see no performance impact during the file transfer. Things I have tried, with no change:
    - Had Computer A work from an entirely different RAID array from the file transfer destination
    - Updated NIC drivers on clients and server
    - Enabled TCP offload and receive side scaling on the server NIC (previously disabled when the issue began)
    - Disabled antivirus during the file transfer
    I am currently having a user test applications other than AutoCAD when the file transfer occurs, and will update the question with that result. Does anyone have any recommendations for resolution or additional troubleshooting steps?
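
    Since the offload settings are already in play, it may help that the standard TCP global switches can be inspected and toggled from an elevated prompt, on both the server and a Windows 7 client (worth changing one at a time and retesting):

        netsh int tcp show global
        netsh int tcp set global rss=disabled
        netsh int tcp set global chimney=disabled
        netsh int tcp set global autotuninglevel=highlyrestricted

    Receive-window autotuning in particular exists only on the Vista/Win7 network stack and not on XP, which fits the pattern of only Windows 7 clients suffering.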

  • How to configure a Linux kernel based on the modules currently in use?

    - by Carla
    Hello, I want to build a minimal kernel with only the things needed for my machine, so I started by compiling the kernel from the ground up, using the default configuration and adding things that I know for sure I have (i.e. Ethernet card, WiFi card, ...). But there are several other things that are not so easy to know about (i.e. the watchdog timer), so I came across AutoKernConf, which supposedly detects the hardware of the machine and generates a kernel configuration file with the settings for the found devices. The problem is it contained several repeated settings and even some for hardware I don't have (I'm using a Dell laptop and one of the things it "found" was a Toshiba device). So I ended up building a kernel with the configuration that came out of the make allmodconfig command, which is a kernel with most things compiled as modules. Booting into that kernel and running lsmod, I can see all of the kernel modules in use (the ones really needed), and I would like to know if there is a tool or some way for me to parse that list and convert it to the corresponding kernel configuration file. Or how to map each one to the appropriate options in the kernel so that I can set them manually. Thank you very much for your time.
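
    For what it's worth, recent kernel trees (2.6.32 and later) ship exactly this mapping as a make target: localmodconfig reads the loaded-module list and disables every module option not currently in use. A sketch, run from the kernel source tree while booted into the allmodconfig kernel:

        lsmod > /tmp/mymodules                    # optionally capture the list first
        make LSMOD=/tmp/mymodules localmodconfig  # or just 'make localmodconfig'

    It only switches module options off, so anything built-in stays as configured; the result is a good starting point for a minimal config rather than a guaranteed-minimal one.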

  • How do I upgrade Windows Server 2008 R2 Standard (OEM Key) to Enterprise (MSDN Key) using DISM?

    - by Tom Crane
    (Originally asked as "After upgrading to 2008 R2 Enterprise and installing more RAM, Windows can only see 4.00 GB", but now I know what the question really is...) My Dell server came preinstalled with 2008 R2 Standard. I upgraded to Enterprise to take advantage of more than 32 GB of RAM. This server is purely for dev and testing, so I want to use my MSDN product key for the upgrade. I originally tried to upgrade using the MSDN Enterprise key, but it wouldn't have it:
        dism /online /Set-Edition:ServerEnterprise /ProductKey:[MSDN key]
        => Error DISM DISM Transmog Provider: PID=5728 Product key is keyed to [], but user requested transmog to [ServerEnterprise] - CTransmogManager::ValidateTransmogrify
    I tried several things, including changing the current product key to the MSDN one. Eventually I used a KMS generic key, which can be found in several TechNet forum posts:
        dism /online /Set-Edition:ServerEnterprise /ProductKey:[KMS generic key]
    ... and this appeared to work. I then changed the product key again (using the Control Panel) to the MSDN key, thinking that was the end of the matter. Only later, when I tried to start up VMs, did I realise I only had 4 GB of usable RAM. I didn't make the connection with the licensing changes at this point and went off on a wild goose chase of BIOS settings, memory configurations and the like. Only when I saw this...
    http://social.technet.microsoft.com/Forums/en/winserverTS/thread/6debc586-0977-4731-b418-ca1edb34fe8b
    ...did I make the connection and reapply the KMS generic key, which gave me all the RAM back. But now I have a system that isn't properly licensed; presumably I won't be able to activate it as it is, so I've got 2 days to enjoy it. With the MSDN key applied, only 4 GB of RAM is usable. Is there a way around this without a) rebuilding the server from scratch with the MSDN key from the start or b) buying a retail Enterprise license?
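
    For reference, DISM can report what edition it thinks is installed and which editions are reachable, which helps confirm what each transmog step actually did (standard DISM servicing options on 2008 R2):

        dism /online /Get-CurrentEdition
        dism /online /Get-TargetEditions

    Checking these before and after applying each key makes it easier to tell whether the memory limit is tracking the edition change or the key change.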

  • ATI FirePro will not detect a second DVI-D monitor

    - by John
    OK, so weird issue here. I had previously been running 6 screens off of 3 of the older ATI FirePro graphics cards, but they had a problem with the heat sink getting too hot and warping the PCB, resulting in total failure of the card. To replace my three dead cards I purchased a new-type ATI FirePro with the newer heat sink design. I'm only using one at the moment, to make sure they've fixed the problem before I waste more money on 2 more cards, but this is where things start to get weird. The FirePros only have one port on them; they connect to two monitors via a splitter cable going from the one port to two DVI connectors for the screens. When I plug two identical monitors in via their DVI inputs, no matter what I do Windows and Catalyst will only detect one screen. However, if I use the VGA input on one of the screens with a VGA-to-DVI adaptor to plug it in to the card, it works fine. This confuses me greatly. I'm currently using the ATI FirePro 2270 graphics card with identical Dell U2311H screens. I can post the rest of the system spec as well if needed, but I wouldn't have thought it would make much difference, as it had no problem handling 6 screens before the graphics cards failed. Naturally both Catalyst and the ATI drivers are the most current version. ATI tech support has been absolutely zero help; they seemed to get stumped as soon as I verified that both screens were plugged in and connected properly. Anyone have any ideas?

  • How to properly remove a disk from a PERC 6/i RAID controller?

    - by Stefano Borini
    I have a Dell T710 with a PERC 6/i RAID controller. The current setup has 2x500 GB hard drives (with the OS) and 6x1000 GB hard drives (in RAID-6, currently empty). I would like to take one 1000 GB disk physically out to keep as an immediate spare in case of a crash, and configure the remaining 5x1000 GB as a single RAID-6 VD. This is all nice and clean and works, until I realized that the display on the machine reports the lack of the 8th disk as an error. It's marked as an error, but appears to be a warning, since the machine is fully functional. My question is: what is the best way to keep one disk as a spare out of the array? Should I disassemble the disk from the cradle and insert the empty cradle into the array? Or should I just silence the error on the display in some way (how?). I know that what I am doing sounds pretty strange, but this is academia and getting a replacement disk delivered could take weeks. Better to have one ready in my drawer for any emergency.
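
    One commonly suggested alternative, rather than pulling the disk, is to leave it in the chassis and flag it as a global hot spare: the controller then stops complaining about the slot and will rebuild onto that disk automatically after a failure, which is usually faster than fetching one from a drawer. A sketch with MegaCLI, where the first command reads the real Enclosure/Slot value and the [32:7] in the second is a hypothetical placeholder:

        MegaCli -PDList -a0
        MegaCli -PDHSP -Set -PhysDrv [32:7] -a0

    The same operations are available through Dell's OpenManage tools if MegaCLI isn't installed on the box.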

  • Problem uninstalling and installing Java on a new PC running Windows 7 64-bit

    - by Brian Gerrin
    I have a new Dell Studio XPS running the Windows 7 64-bit OS. I am attending online classes which require IE 8 and Java version 6 build 20. The PC came with IE 8 32-bit and Java 6 build 21 already installed. I tried to uninstall Java using Add and Remove Programs, but after about 45 minutes of "Preparing to remove application" I got an error referring to a missing DLL file and the uninstall failed. I used a third-party program to remove Java and downloaded Java 6 build 20. My problem is that when I try to install it, I get the box telling me "Installing program ... this may take a few minutes", but after 30 to 45 minutes nothing has happened and there is no indication in the progress bar that anything is happening; then all of a sudden the progress bar is full and the program is supposedly installed. When I try to run it, however, it doesn't work. Someone help please! I can't get access to my classwork without this! Thanks

  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast. The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone is mapping to is running Server 2008 R2 and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch, and have also replaced an old dying SonicWall with a shiny new one. Everything is running on an ESX host except for the DC, where everyone is getting data from. In my client's office, we have done the following:
    - Swapped out his computer (Win7 and XP box)
    - Swapped out the desktop switch in his office
    - Removed the desktop switch in his office
    - Changed out the network cable going to the wall
    - Ran 'net config server /autodisconnect:-1' on the server
    - Disabled remote differential compression on his current Win7 box
    When we swapped out his network cable, everything seemed fine for about 4 days. Normally I get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9 GB PST living on the server that he is always connected to), or that the software he runs from his L drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night he suddenly couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'. What else should I try? Maybe there is bad wiring in the wall, the patch panel, or a bad port in the new switch I have in the server room?
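
    To pin down exactly when the drops happen (and whether they line up with anything in the event logs), a crude polling loop that timestamps every change in 'net use' output can run on the affected machine; a minimal Python sketch (assumes Python 3.7+ is installed there):

        import datetime
        import subprocess
        import time

        last = None
        while True:
            # Capture the current mapped-drive states as plain text
            out = subprocess.run(["net", "use"], capture_output=True, text=True).stdout
            if out != last:
                print(datetime.datetime.now().isoformat(timespec="seconds"))
                print(out)
                last = out
            time.sleep(10)  # poll every 10 seconds

    Redirecting its output to a file gives a timeline of 'OK' / 'Disconnected' / 'Reconnecting' transitions to correlate with server-side events.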

  • Proxmox drbd configuration split brain [on hold]

    - by AudioDan
    I am planning a Proxmox HA configuration with two Dell R710 machines (dual 6-core processors in each) with enterprise-level RAID arrays. I would be using DRBD with a quorum disk on a third machine, and would dedicate two 1 Gb NICs on each server to the DRBD communication. We would have approximately 12 to 14 virtual machines running on this pair of servers. The Proxmox manual recommends creating two DRBD resources: one for the virtual machines that normally run on server A, and one for those that normally run on server B. This is because of the Primary/Primary state in which this configuration runs; if both servers have VMs talking to the same DRBD resource and a split-brain situation occurs, there is potential for data corruption that must be resolved. While I understand it would take more effort to create new virtual machines, can anybody foresee any potential problems with running a separate DRBD resource for each VM instead? Does anyone have experience running a setup that way, and has it worked well? It seems to me that it would allow more flexibility in moving machines back and forth.
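
    For concreteness, a per-VM layout just means one stanza like this per guest in the DRBD 8.x configuration, each with its own minor device and TCP port (the hostnames, volumes and addresses below are hypothetical placeholders):

        resource vm-mail {
          on node-a {
            device    /dev/drbd10;
            disk      /dev/vg0/vm-mail;
            address   10.1.1.1:7810;
            meta-disk internal;
          }
          on node-b {
            device    /dev/drbd10;
            disk      /dev/vg0/vm-mail;
            address   10.1.1.2:7810;
            meta-disk internal;
          }
        }

    The bookkeeping cost is allocating a unique minor and port per VM; the benefit is that a split brain is then resolved per VM instead of per half-cluster.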

  • Generalized strategy for file server virtualization in XenServer

    - by Jamie
    I'm not shopping so much as looking for guidance on good idea / bad idea strategies. I'm sure I'm not in the "best practices" budget range. Currently, I have 3 Dell PowerEdges running XenServer in a pool. Each node has an Ubuntu file server, serving about 6 TB. One is the primary; the other two are rsync targets for backup. The 6 TB is stored on their respective local storage disks as an LVM of 3x2TB virtual disks. The fileserver VM disks are also stored on the node-local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows VMs, and stuff like that. Several of those VMs' disks reside on a QNAP NAS, to play with live migration. These VMs are often clients of the primary file server (all the mail, web content, and user files are stored on the file server, not on the mail, web, and Samba VMs). This all works fine, and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again. Is it wise to have heavyweight VMs (like the fileserver, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, keep the VM images on a SAN or NAS, and have 2 or more NAS units act as NFS-serving file appliances? A hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a fileserver with its entire 6+ TB disk. I recognize there are plenty of ways to skin this cat, and we've already skinned it a few ways. What makes sense?

  • Using udev to create a character device based on a driver being loaded

    - by SteveCB
    I'm in the process of setting up RAID monitoring for a number of Dell servers that use the PERC 6/i integrated card. We're using Nagios at present, and the check_megasasctl plugin seems to fit the bill. However, the plugin relies upon the existence of /dev/megaraid_sas_ioctl_node. This device node doesn't exist by default; you have to create it by hand using something like:
        mknod /dev/megaraid_sas_ioctl_node c 253 0
    Now, to make the existence of this device node persistent across reboots, I thought I could write a udev rule, but as usual, I'm missing something. I thought I could create a file such as /etc/udev/rules.d/10-local.rules that contained:
        DRIVER=="megasas" NAME="megaraid_sas_ioctl_node" MODE="0600"
    But this doesn't work - no device node after a reboot. Dmesg output indicates the megasas driver is loaded and functional:
        megasas: 00.00.04.01-RH1 Thu July 10 09:41:51 PST 2008
        megasas: 0x1000:0x0060:0x1028:0x1f0c: bus 1:slot 0:func 0
        megasas: FW now in Ready state
    Further, I don't see any means to instruct udev on which type of device node to create: character or block. I suspect I'm failing to understand exactly how udev is meant to work. I realise I could just cheat and run MegaCLI in /etc/rc.local, redirecting output to /dev/null; it creates the megaraid_sas_ioctl_node device node as part of its execution. I just thought using udev rules would be a) cleaner and b) a useful learning exercise. Perhaps I should just dump the above mknod command in /etc/rc.local... So how do I get udev to create the /dev/megaraid_sas_ioctl_node device node based on the presence of the megasas driver? Cheers Steve
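
    A sketch of the more usual pattern, matching on the kernel device name rather than the driver (this assumes the megasas driver registers a char device called megaraid_sas_ioctl, which 'udevadm monitor' or a look under /sys/class while reloading the module should confirm on your kernel; note that udev also wants key/assignment pairs comma-separated):

        KERNEL=="megaraid_sas_ioctl", NAME="megaraid_sas_ioctl_node", MODE="0600"

    udev never invents nodes out of thin air; it only names and sets permissions on devices the kernel announces, which is why a DRIVER match with no corresponding device event produces nothing. The node type (character vs. block) also comes from the kernel event, so it never needs to be specified in the rule.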

  • Upgrade SQL Server 2008 hardware

    - by John
    Forgive me if I'm not able to be totally clear here. It is not intentional; I'm a senior-level developer in a very small company having to act like a manager at the moment. Anyway, the story is that we have 2 older Dell servers with SQL Server 2008 Standard in a "cluster". I put that in quotes because I'm still not 100% clear on what that means. We have 2 brand new blade servers and want to move the existing databases to the new hardware. OK, so here is the gotcha: we need to do this with little or no downtime. I'm being told that we can evict the passive node, then pull in one of the new servers. But I'm also being told that this is a dangerous step, because something could go wrong that would cause the cluster to fail, and then we would be left with nothing, because the active server would not be able to come back up. Does anyone have any thoughts on how to handle this? I'm being told that the only way to ensure success is to have at least a day of downtime, where we bring up a new cluster on the new hardware and then migrate the databases 1 by 1.

  • Windows 7 comments field missing when browsing network

    - by Toymangenie
    I have just purchased three Windows 7 Professional Dell 64-bit PCs for testing prior to upgrading our company's 120+ PCs from Windows XP Professional. The setup is a standard domain with a Windows Server 2003 32-bit server. We name each PC XP1 to XP150 so that when users join or leave, I don't have to rename the PC. We use the Description field to allocate the user's name to each PC. We also have a share set up on each PC using the user's name. When I browse the network using Windows Explorer in XP, I get a useful display: the left pane shows the PC number and the right pane shows Name and Comments. So, for example, I would see: XP01 Fred Bloggs (each PC on a new row). The right pane is my main tool for administering the network; I can easily see the PC number and the name of the user. However, in Windows 7 this seems to have been thrown out of the window and replaced with fields that I do not need, and which in my case always display the same info: "Name", "Category", "Workgroup", "Network Location". The Name column gives the PC number (XP10, etc.) and all three other columns display identical, useless information. So I can't see who is using XP10. When I am in "help desk" mode, I would naturally ask the user's name and use my remote desktop client to view their screen. The user isn't aware of their PC name, so I am finding it impossible to match the user name with a PC number. Any ideas how to overcome this "by design" change in Windows 7?

  • Laptop Most Likely to Have Good Driver Support

    - by ShabbyDoo
    Through numerous bad experiences, I have learned that the most likely cause of laptop "failure" is the lack of updated drivers for new operating systems. As an example, I have a perfectly good Thinkpad T42 at home which runs Windows 7 just fine for my purposes except that no compatible ATI video drivers are available, and the generic drivers have flicker effects. I recently saw an ASUS laptop which looked quite nice except that I would be beholden to them to release ATI video driver updates customized for it. And, I can't trust them to do that for more than six months. What laptops (manufacturer/line) should I consider so that I could expect at least a couple years of frequent updates? I plan on running Windows 7 and installing whatever successor comes out. I like Intel components (especially WiFi) because I can install their drivers directly from them, and they have a long history of providing updates for years after shipping a particular component. More generally, components from companies which are likely to update drivers frequently are good as long as I can install the component manufacturer-provided drivers without laptop-specific customization (like the ATI drivers). Also, if a component can be replaced easily, I am less concerned. For example, Dell stopped pumping out updated drivers for one of its mini-PCI WiFi cards. The solution was to buy an Intel replacement on eBay for $12! That's fine. I can deal with that. So, what laptops should I consider so that I'm not likely to be stuck between a rock and a hard place?

  • RAID 6 that can read at least 1000 Mbit/s?

    - by Diblo Dk
    I purchased a Dell PERC 6/i, which I expected to be able to read at 1000 Mbps. There is not much to be done now, but there are some things I'd like to understand for another time. I have configured it with four 2 TByte drives in RAID 6. It has 256 MByte of RAM and a transfer rate of 300 Mbps. The benchmark test showed: Min read rate: 136.3 Mbps. Max read rate: 329.6 Mbps. Avg read rate: 242.2 Mbps. What could I have done to get at least 1000 Mbps? Is it normal for internal and external RAID controllers to have a lower transfer rate, e.g. 300 Mbps? (I did not notice at the time that it was not 3 Gbps.) How would a RAID 10 have performed compared to RAID 6 or 5? Would it have been better to use software RAID (Linux) with the internal 3 Gbps SATA controller? UPDATE: The drives are SATA III 6 Gbps. http://www.seagate.com/files/staticfiles/docs/pdf/datasheet/disc/desktop-hdd-data-sheet-ds1770-1-1212us.pdf (2TB)
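
    (A units sanity check, since Mbit/s and MByte/s are easy to mix up here: 1000 Mbit/s is only 125 MByte/s. If the benchmark figures above are really MByte/s, as controller tools usually report, then even the minimum of 136.3 already beats the 1 Gbit/s target. If they are genuinely Mbit/s, the array is averaging about 242 / 8 ≈ 30 MByte/s, which would point at a configuration problem rather than a hardware limit.)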

  • After upgrading to 2008 R2 Enterprise and installing more RAM, Windows can only see 4.00 GB

    - by Tom Crane
    (I have also posted this on TechNet, but I'm running out of ideas.) I've upgraded from Windows Server 2008 R2 Standard to Enterprise in order to make use of more RAM. The server previously had 32 GB of RAM. The upgrade from Standard to Enterprise, using DISM, seemed to go OK, so I powered down and installed the RAM. This is a Dell PowerEdge T710; I was taking it from 32 GB to 72 GB. The BIOS recognised the RAM, although I needed to change from "Advanced ECC" to "Optimizer" mode for it to use all of it. After rebooting, Windows can see the RAM, but the System panel displays: Installed memory (RAM): 72.0 GB (4.00 GB usable). In Resource Monitor, the remainder of the RAM shows as reserved for hardware. I've tried various RAM configurations, including reverting to the same chips and the same configuration as before the upgrade, but always just 4.00 GB shows up as usable. Following some threads on these forums, I've gone into msconfig and set the maximum memory "by hand", but that doesn't fix the problem. The BIOS doesn't seem to have anything that looks like memory remapping, which is another suggestion that has come up. How do I make this RAM available to Windows? It was available before the upgrade, because I could use the full 32 GB the server had to start with. A screenshot (this is after reverting to the original RAM configuration): http://screencast.com/t/5FuzevdNb I don't know if it's related, but my Remote Desktop configuration has also disappeared: screencast.com/t/mYedomeQWS (the bottom half of this dialog should allow me to configure Remote Desktop; it was working before the upgrade but now it isn't).
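
    One thing worth ruling out, since msconfig was used along the way: its "Maximum memory" checkbox writes a truncatememory element into the boot entry, and that cap survives until it is explicitly deleted. From an elevated prompt (standard bcdedit usage):

        REM look for truncatememory or removememory entries
        bcdedit /enum {current}
        bcdedit /deletevalue {current} truncatememory

    If neither value is present, the cap is coming from somewhere else, i.e. licensing or a hardware reservation rather than the BCD store.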

  • How do I find the cause for a huge difference in performance between two identical Ubuntu servers?

    - by the.duckman
    I am running two Dell R410 servers in the same rack of a data center. Both have the same hardware configuration, run Ubuntu 10.04, have the same packages installed and run the same Java web servers. No other load. One of them is 20-30% faster than the other, very consistently. I used dstat to figure out whether there are more context switches, IO, swapping or anything, but I see no reason for the difference. With the same workload (no swapping, virtually no IO), the CPU usage and load are higher on one server. So the difference appears to be mainly CPU bound, but while a simple CPU benchmark using sysbench (with all other load turned off) did yield a difference, it was only 6%. So maybe it is not only CPU but also memory performance. I tried to figure out whether the BIOS settings differ in some parameter, did a dump using dmidecode, but that yielded no difference. I compared /proc/cpuinfo: no difference. I compared the output of cpufreq-info: no difference. I am lost. What can I do to figure out what is going on?
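
    Since cpufreq-info matched, one more runtime comparison that sometimes catches what the static dumps miss is the live frequency-scaling state on both boxes (standard sysfs paths; run the same commands on each server under the same load):

        cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
        grep MHz /proc/cpuinfo

    A governor behaving differently on one box, or turbo kicking in differently, shows up here as different effective clock speeds even though the configured settings are identical.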
