Search Results

Search found 18191 results on 728 pages for 'single board'.


  • Redundant Microsoft server solution for small company

    - by MadBoy
    I'm planning to replace a single Microsoft SBS 2003 server running SharePoint, Exchange, and a SQL database with something that provides some redundancy and isn't a single point of failure. I was thinking of buying two identical physical servers and running two virtualized servers on Hyper-V or VMware on each. I would then put SharePoint, Exchange, and SQL on the first physical server (split across its two VMs). I would like the second physical server to be an exact duplicate of the first, so that when the first goes down (for a reboot or a hardware failure) the second takes over and users don't notice any change (all their email and SharePoint content stays available). My questions are: Will I have to pay for licenses for both servers even though only one instance of SharePoint, Exchange, and SQL will be in use at any one time? What are the proposed solutions for doing this? Is there any additional hardware I would need, or any complicated software configuration to expect, so that when one physical server goes down the second takes over? What problems should I expect? This solution is for 60 people; later on it may or may not expand.

    Read the article

  • SSL with nginx on subdomain not working

    - by peppergrower
    I'm using nginx to serve three sites: example1.com (which redirects to www.example1.com), example2.com (which redirects to www.example2.com), and a subdomain of example2.com, call it sub.example2.com. This all works fine without SSL. I recently got SSL certs (from StartSSL), one for www.example1.com, one for www.example2.com, and one for sub.example2.com. I got them set up and everything seems to work (I'm using SNI to make all this work on a single IP address), except for sub.example2.com. I can still access it fine over non-SSL, but on SSL I just get a timeout. If I go directly to my server's IP address, I get served the SSL certificate for sub.example2.com, so I know nginx is loading the certificate properly... but somehow it doesn't seem to be listening for sub.example2.com on port 443, even though I told it to. I'm running nginx 1.4.2 on Debian 6 (squeeze); here's my config for sub.example2.com (the other domains have similar configs):

      server {
          server_name sub.example2.com;
          listen 80;
          listen 443 ssl;
          ssl_certificate /etc/nginx/ssl/sub.example2.com/server-unified.crt;
          ssl_certificate_key /etc/nginx/ssl/sub.example2.com/server.key;
          root /srv/www/sub.example2.com;
      }

    Does anything look amiss? What am I missing? I don't know if it matters, but StartSSL lists the base domain as a subject alternative name (SAN); not sure if that would somehow pose problems if both subdomains list the same SAN.
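    A few quick checks can narrow down whether this is nginx or something in front of it; a timeout, rather than a certificate error, often points at a firewall or at nothing actually listening on 443. This is only a diagnostic sketch using the placeholder hostnames from the question:

      # Is nginx listening on 443 at all?
      sudo netstat -tlnp | grep ':443'

      # Which certificate does nginx hand out when the SNI name is sent explicitly?
      openssl s_client -connect sub.example2.com:443 -servername sub.example2.com </dev/null 2>/dev/null | openssl x509 -noout -subject

      # Does the running configuration parse cleanly?
      sudo nginx -t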

    Read the article

  • Zero sized tar.gz file found inside a tar.gz file

    - by PavanM
    My current directory contains a single file, like this:

      $ ls -l
      -rw-r--r-- 1 root staff 8 May 28 09:10 pavan

    Now I tar and gzip this file like so (I am aware I am creating the zipped file in the same directory as the original file):

      $ tar -cvf - * 2>/dev/null | gzip -vf9 > pavan.tar.gz 2>/dev/null

    When I run the above tar/gzip commands around 20 times, a few times I observe that the final pavan.tar.gz archive contains a zero-sized pavan.tar.gz entry inside it. I am not sure where this zero-sized file is coming from. Note: I am NOT running tar/gzip on an already existing tar.gz file, and I always make sure the directory has only one file before running the commands. On googling, as described here, I suspected that the tar.gz being created was also becoming part of the file being archived. But in my case gzip is the one creating the final file, and by the time gzip runs, tar should be done tarring. This is happening on AIX, but I've used the Linux tag too to draw more attention, as I guess the problem is platform independent.
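    One plausible explanation is a race in the pipeline itself: the shell creates the empty pavan.tar.gz for gzip's redirection at roughly the same moment the glob for tar is expanded, so on some runs the zero-byte output file already exists and gets swept into the archive. A hedged way to rule that out is to write the archive outside the directory being tarred, or to exclude it where the tar implementation supports that:

      # Write the archive somewhere outside the directory being archived
      tar -cvf - . 2>/dev/null | gzip -9 > /tmp/pavan.tar.gz

      # GNU tar only (AIX's native tar may lack --exclude): keep the output in place but skip it
      tar --exclude='pavan.tar.gz' -cvf - . 2>/dev/null | gzip -9 > pavan.tar.gz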

    Read the article

  • How do I troubleshoot a slow hard drive?

    - by Bruce Connor
    My computer is suffering from slowdowns, and I'm not surprised (it's around 6 years old). Here's what I've verified: they are not very frequent (only a couple of times a day); when they happen, a single application will hang for 10-60 seconds, while the rest don't hang but also get slow; even while it is happening, the CPU usage stays low; it happens to applications such as a text editor, Firefox, and Skype; and it never happens to some applications (such as games) which I use for hours under heavy CPU load. Also of note: the graphics card and PSU are new (around a year old); though I have a decent amount of software installed right now, this was happening even right after I reinstalled Windows; and this HDD has been through many partitioning schemes and a few heavy operations (such as moving around 200GB of data). Because of the above, I am already 70% sure the problem is with the hard drive. Before I replace it, however, I want to rule out other less likely possibilities (such as RAM, software, or the PSU). I don't have the money to replace the entire box right now, but I can easily replace one of the components. I've read several questions (such as this one) which give general guidance on troubleshooting an unknown issue; that is not what I'm looking for here. My main question is: what tests or benchmarks can I run to verify I have a problematic hard drive? I don't need to solve this problem; I am content with just making sure it's the hard drive. I could borrow a newer hard drive from a friend and see if it gets better. A positive result would rule out all other components, but it wouldn't rule out a software issue (since the new hard drive won't have any of the software I use daily). Running on Windows/Linux.
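    On the Linux side, a minimal first pass is to read the drive's SMART data with smartmontools and run a long self-test; growing reallocated or pending sector counts are a strong hint the disk itself is the problem. The device name /dev/sda is an assumption:

      # Health summary, attribute table and error log
      sudo smartctl -a /dev/sda

      # Start a long (surface) self-test, then read the result once it finishes
      sudo smartctl -t long /dev/sda
      sudo smartctl -l selftest /dev/sda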

    Read the article

  • How to handle server failure in an n-tier architecture?

    - by andy
    Imagine I have an n-tier architecture in an auto-scaled cloud environment with, say: a load balancer in a failover pair, a reverse proxy tier, a web app tier, and a db tier. Each tier needs to connect to the instances in the tier below. What are the standard ways of connecting tiers to make them resilient to failure of nodes in each tier? i.e., how does each tier get the IP addresses of each node in the tier below? For example, if all reverse proxies should route traffic to all web app nodes, how could they be set up so that they don't send traffic to dead web app nodes, and so that when new web app nodes are brought online they can send traffic to them? I could run an agent that would update all the configs on all the nodes, but it seems inefficient. I could put an LB pair between each tier, so the tier above only needs to connect to the load balancers, but how do I handle the problem of the LBs dying? This just seems to shunt the problem of tier A needing to know the IPs of all nodes in tier B to all nodes in tier A needing to know the IPs of all LBs between tiers A and B. For some applications, retry logic can be implemented if they contact a node in the tier below that doesn't respond, but is there any way that some middleware could direct traffic to only live nodes in the following tier? If I were hosting on AWS I could use an ELB between tiers, but I want to know how I could achieve the same functionality myself. I've read (briefly) about heartbeat and keepalived - are these relevant here? What are the virtual IPs they talk about, and how are they managed? Are there still single points of failure when using them?
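    On the keepalived point: it runs VRRP between two load balancer nodes so that a single "virtual" IP floats to whichever node is alive, which is how the tier above avoids caring about individual LB failures. A minimal sketch of the idea, where the interface name, router ID and address are illustrative and not taken from the question:

      # /etc/keepalived/keepalived.conf on the primary LB; the backup node
      # uses "state BACKUP" and a lower priority. If the primary dies,
      # VRRP moves 10.0.0.100 to the survivor.
      sudo tee /etc/keepalived/keepalived.conf > /dev/null <<'EOF'
      vrrp_instance VI_1 {
          state MASTER
          interface eth0
          virtual_router_id 51
          priority 100
          virtual_ipaddress {
              10.0.0.100/24
          }
      }
      EOF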

    Read the article

  • On a failing hard drive, I am able to view data but unable to copy it - why?

    - by Tom
    I have a 2.5" external hard drive that is failing. It's not making the 'clicking' noise that most failing hard drives make, and I am able to view the data, but I am unable to actually retrieve it. I attempted to use SpinRite to access the data on the drive, but it didn't like the external drive. When I view the drive's property page, the drive shows that its used space is at 100% and that it has 0 bytes available; however, the progress indicator under the drive icon in Windows Explorer shows that it's roughly 50% full (which is correct). When I attempt to run Windows' "Error Checking" tool with "scan for and attempt recovery of bad sectors" selected, the tool begins to run and then immediately closes with no error message. I am able to browse the contents of the drive using Windows Explorer. When I begin to copy any given single file, the copy process begins, an indicator starts, and then the copy fails with no real error message. The Disk Management page in Computer Management under Control Panel also shows this drive as being 'Healthy.' I dropped the drive off at a data recovery store and they said that "the data seems to be intact, but an internal failure is preventing any information from being retrieved." They offered to provide me references to a data recovery specialist. I've also attempted to run CHKDSK on the drive (with and without arguments), but it returns the following error: "The type of the filesystem is RAW. CHKDSK is not available for RAW drives." Before going the route of more expensive data recovery, I'm wondering if these symptoms sound familiar to anyone? Other questions: I'm willing to continue trying tools such as TestDisk and/or PhotoRec (as the majority of the data I'd like to salvage is photos), but how long should I expect either tool to run given approximately 400GB of data? I'm also comfortable using Linux, so I welcome any suggestions for utilities, tools, and strategies with which you've had success.
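    Since Linux is on the table, a hedged first step is to image the failing drive with GNU ddrescue and then point TestDisk/PhotoRec at the image rather than the dying hardware; the source device and output paths below are assumptions. Over USB on a struggling drive, imaging 400GB can take anywhere from several hours to a few days, and the recovery tools themselves are comparatively quick once they have a healthy image to work on.

      # First pass: grab the easy areas, skipping around read errors, with a mapfile for resuming
      sudo ddrescue -n /dev/sdb /mnt/spare/failing.img /mnt/spare/failing.map

      # Second pass: go back and retry the bad areas a few times
      sudo ddrescue -r3 /dev/sdb /mnt/spare/failing.img /mnt/spare/failing.map

      # Run the recovery tools against the image, not the disk
      photorec /mnt/spare/failing.img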

    Read the article

  • What to do before connecting Ubuntu Server to the internet for the first time?

    - by CodeMonkey
    I just finished installing Ubuntu Server 12.10 on an Asus Eee PC 1000H (to be used as a home server/sandbox) from USB. I installed this software during installation: (1) OpenSSH server, (2) LAMP server, (3) Samba file server, (4) Virtual Machine host. I won't use 2, 3 or 4 for a while, though. Can/should I turn these off somehow? I have turned home directory encryption on, security updates are installed automatically, and I have chosen a strong password for the single user. I have never plugged in the internet cable so far. Before doing so I'd like to ask: what can/should I do or install to increase security before connecting to the internet? Firewall? Fail2ban? Users/passwords? Encryption? Enabling/disabling functionality? etc. I'm sorry if you get this question a lot. I've searched around quite a while, but it still feels like I might be overlooking something important.
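    As a rough pre-network checklist, assuming SSH is the only service you want reachable at first (the exact service names depend on what the installer actually set up):

      # Stop and disable the roles you won't use yet
      sudo service apache2 stop && sudo update-rc.d apache2 disable
      sudo service mysql stop   && sudo update-rc.d mysql disable
      sudo service smbd stop    && sudo update-rc.d smbd disable

      # Default-deny firewall with SSH allowed
      sudo ufw default deny incoming
      sudo ufw default allow outgoing
      sudo ufw allow ssh
      sudo ufw enable

      # Basic SSH brute-force protection
      sudo apt-get install fail2ban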

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem for most shops is backups: their size, and the window in which you have to back up the data. What we are working with: a VMware vSphere 4.1 cluster; a PS4000XV EqualLogic storage array (with a 1.6TB volume dedicated to backup-to-disk); a physical backup server with a single LTO4 drive; BackupExec 2010 R3 with the Exchange, SQL, Active Directory, and VMware agents; and dual gigabit MPIO connections between all devices (storage array, backup server, VM hosts). What we would like to accomplish: an efficient backup-to-disk-to-tape solution where all of our VMs are backed up to the storage array first, and then, once completely backed up to the array, are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape. Where we are currently: of the several ways I have set up the jobs in BackupExec 2010 R3, the backup jobs all queue up at the same time; as soon as a job finishes backing up to disk it then starts the same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of a restoration I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • Accidentally deleting all OSX users using dscl

    - by gutch
    OK, so I just did something really stupid and deleted all the user accounts on an OS X 10.6.6 machine by running this:

      sudo dscl . -delete /users

    What I actually wanted to do was delete a single, troublesome account using a command like this:

      sudo dscl . -delete /users/localadmin

    ...but I absent-mindedly pressed return too early and deleted the lot. I've tried using -list and can confirm that I have indeed wiped all the accounts. The machine is currently running fine, but I'm sure that once I log out / reboot it will be completely broken. I don't mind that I've deleted the normal user accounts (there was only one I wanted anyway). But it's surely going to be a big problem that system accounts like _installer and _jabber and _lda and _windowserver etc. are gone. So my question is, how can I restore the standard set of system accounts? Do I have to reinstall OS X from scratch? Or can I either undelete those system accounts, or run some command to recreate them?
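    On 10.6 the local accounts are just property lists under /var/db/dslocal/nodes/Default/, so one hedged recovery path, before resorting to a reinstall, is to copy the missing records from another Mac on the same OS version (or from a Time Machine backup of this one) and restart Directory Services. A rough sketch, not a tested procedure:

      # On a donor 10.6.6 machine (or from a backup), capture the local users and groups
      sudo tar -czf dslocal-users.tar.gz -C /var/db/dslocal/nodes/Default users groups

      # On the damaged machine, restore them and make Directory Services re-read everything
      sudo tar -xzf dslocal-users.tar.gz -C /var/db/dslocal/nodes/Default
      sudo killall DirectoryService
      sudo dscacheutil -flushcache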

    Read the article

  • Mirror a RAID0 volume

    - by Ghostrider
    I have two SSDs running in RAID0. The capacity and speed are just great. I use Windows Home Server to do incremental daily backups; this works fine, and I've successfully restored from these backups. However, when one of the disks physically died, I was stuck without a working system until the replacement arrived so that I could restore the array from backup. WHS restoration takes about 5 hours, which basically means I'm losing an entire day for the process. Is it possible to set up a kind of recovery volume for the RAID array? Use a single mechanical HDD that would be updated with an exact clone of the RAID array on a daily basis? This way, if the array goes offline for some reason, I can just boot from the mechanical HDD, lose some performance, but still be able to work. The machine in question runs Windows 7. Creating RAID 0+1 is not an option because of the high price of the SSDs and the fact that it still doesn't protect against failure of the RAID controller. Is there any way this can be set up?

    Read the article

  • How to enable the 2 concurrent (+1 console) sessions on Windows Server 2012

    - by Dai
    I have a Windows Server 2012 VM running on Windows Azure. I want to enable the ability to have 2 simultaneous administrative sessions over Remote Desktop. This is permitted under the EULA for Windows Server 2012, and it is NOT the same thing as the fully blown Terminal Services / RDS feature. In Windows Server 2000 and 2003, multiple concurrent sessions (up to a limit of 2, plus the root /console session) were enabled by default, such that logging in via RDP without logging out first would create a new session rather than reconnecting to the old session. Server 2008 and later use single sessions by default, as this simplifies administration (most people want to reconnect to their old sessions). In Windows Server 2008 R2, you can add the MMC snap-in for Remote Desktop Host Configuration, which allows you to re-enable concurrent sessions. However, in Server 2012, after adding the Remote Administration snap-ins from Server Manager, it seems the Remote Desktop Host Configuration snap-in has been removed. How can I re-enable multiple concurrent sessions for Remote Desktop for Administration in Windows Server 2012?

    Read the article

  • Workstations cannot see new MS Server 2008 domain, but can access DHCP.

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (Server 2003) domain controller. The old_server is not connected to the network. I have DHCP working with the same scope as the old_server. In my "before-asking" search for a solution I came across the following two articles, and I recall doing things as they suggested: http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/ and http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/ The only possible issue is this: I was under the impression that the domain NetBIOS name needed to match the DC's NetBIOS name. The DC's NetBIOS name is city01, while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org). But the second link led me to a post which I believe answers my question. I did as it instructed by opening Local Area Connection Properties, selecting TCP/IPv4, and setting the sole preferred DNS server to the local host's static IP (10.10.1.1). Search for "Your problems should clear up" in the post I'm referencing: http://forums.techarena.in/active-directory/1032797.htm Have I misunderstood the instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read? I really don't like treating computers as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out, it is because I did not know it was needed. PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation; thank you for your help.
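    The "clients only see the old domain" symptom usually means the workstations cannot find the new DC's SRV records in DNS. Once a workstation points at 10.10.1.1 for DNS, that is easy to check from its command line (the domain name below is the placeholder from the question):

      nslookup -type=SRV _ldap._tcp.dc._msdcs.domain.org 10.10.1.1
      nslookup -type=SRV _kerberos._tcp.domain.org 10.10.1.1

    If those lookups do not return the new DC, the clients will keep falling back to their cached entry for the old controller no matter what else is configured.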

    Read the article

  • GlusterFS on VMWare ESXi 5

    - by Dharmavir
    I want to build a network file system on top of my VMware ESXi-based virtual nodes, which are running Ubuntu 12.04 LTS. I am evaluating options and found that GlusterFS (http://www.gluster.org/) could be a good choice. Purpose: I have about 2 dozen VM nodes with different configurations, running on 2 physical nodes, each with the following configuration: 16-core Intel Xeon, 1 TB of disk, 48 GB of RAM. As I said, each physical server has about 1TB of disk (which I can increase if I want more), so for now I have 2TB of disk space available; this space is distributed among the couple of dozen VM nodes I have created. Some of them, being application and management servers, have plenty of free disk space, which I want to use for some heavy storage that I could not fit on any single VM node. If my storage is distributed between dozens of VM nodes and 2 or more physical nodes, I also get some sort of backup. I do not mind if data gets stored redundantly, but as far as I know an individual VM node will not be able to store all of the data, because the complete data size (for example 100GB) would exceed the VM disk size of 70GB, and the VM also has system and program files on it. I need some suggestions: is GlusterFS the solution I am looking for, or should I go with something like Hadoop? I am not too sure. But yes, I would like to utilize the free space on each VM node, and if in doing so data gets stored redundantly I am okay with that, because it gives me data security.
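    For reference, a two-node replicated Gluster volume takes only a handful of commands once glusterfs-server is installed on both Ubuntu nodes; the hostnames and brick paths below are illustrative. Note that "replica 2" gives the redundancy described above but halves the usable capacity.

      # On node1: join the peers into a trusted pool
      sudo gluster peer probe node2

      # Create and start a 2-way replicated volume, one brick per node
      sudo gluster volume create gv0 replica 2 node1:/data/brick1 node2:/data/brick1
      sudo gluster volume start gv0

      # Mount it on any client that has the GlusterFS client packages
      sudo mount -t glusterfs node1:/gv0 /mnt/shared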

    Read the article

  • What's the difference between Host and HostName in SSH Config?

    - by Bill Jobs
    The man page says this:

      Host
          Restricts the following declarations (up to the next Host keyword) to be only for those hosts that match one of the patterns given after the keyword. If more than one pattern is provided, they should be separated by whitespace. A single `*' as a pattern can be used to provide global defaults for all hosts. The host is the hostname argument given on the command line (i.e. the name is not converted to a canonicalized host name before matching). A pattern entry may be negated by prefixing it with an exclamation mark (`!'). If a negated entry is matched, then the Host entry is ignored, regardless of whether any other patterns on the line match. Negated matches are therefore useful to provide exceptions for wildcard matches. See PATTERNS for more information on patterns.

      HostName
          Specifies the real host name to log into. This can be used to specify nicknames or abbreviations for hosts. If the hostname contains the character sequence `%h', then this will be replaced with the host name specified on the command line (this is useful for manipulating unqualified names). The default is the name given on the command line. Numeric IP addresses are also permitted (both on the command line and in HostName specifications).

    For example, when I want to create an SSH config for GitHub, what should Host and HostName be, respectively?
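    For the GitHub case: Host is the nickname you type on the command line, HostName is where ssh actually connects. A minimal sketch, where the alias and key path are arbitrary choices rather than requirements:

      cat >> ~/.ssh/config <<'EOF'
      Host github
          HostName github.com
          User git
          IdentityFile ~/.ssh/id_rsa
      EOF

    After that, "ssh github" or "git clone github:someuser/somerepo.git" expands the alias to github.com with the given user and key.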

    Read the article

  • Terminating multi-mode fiber

    - by murisonc
    I'm looking at the feasibility of terminating multi-mode fiber connections ourselves. We would be using LC connectors. I've done some research and found two different methods: one requires polishing the ends and using epoxy, while the other doesn't. I like the idea of not having to polish the ends, but there doesn't seem to be much information on quality or ease of use. I've found two vendors (3M and Corning) that offer kits for terminating fiber without polishing or using epoxy. Does anyone with experience of both methods have some advice to offer? Copper is easy, but fiber seems to be a whole different animal. EDIT: After looking into the fusion splicing suggested in the answer, I've determined it's not for us. It's my understanding that it is primarily used for outside plant and is better suited to single-mode fiber. It's a good answer but doesn't address the question directly. Some more information about our situation: we will only be terminating multi-mode fiber inside a building, and only doing between 4 and 20 pairs a year. Hiring an outside person won't work due to our location. There are currently a couple of people on-site who can terminate fiber (working for another company and charging large fees), but they can only do ST and SC connectors, and we only use LC. So, once again, does anyone have experience with terminating using both epoxy-type connectors and the other type (similar to Corning Unicam)?

    Read the article

  • NATing IPv4 while routing IPv6

    - by Hugo
    I have the following setup: client(s) <---> (eth0) router (eth1) <---> wan. I have a static IPv4 address and a /48 IPv6 address block. I need to connect all the clients to (wan). Each client will have its own public IPv6 address; meanwhile, I need to NAT those same clients over IPv4 to (wan). Everything IPv4-related, including the NAT, is working fine. IPv6 communication between (eth0) and the clients works fine, as does IPv6 communication from (eth1) to (wan). To provide IPv6 to all my clients, I've thought of two choices: (1) have the router act as a gateway, with a different IP on each interface - this sounds like I would need to tell my ISP to route the entire block through that single IP, so it's not really an option; or (2) transparently pass IPv6 packets between eth0 and eth1, so all clients can communicate with the upstream gateway (I would actually have a switch here if it weren't for the need to remain IPv4 compatible). So, since I've opted for the second choice, I'm in doubt: how can I pass all IPv6 traffic from eth0 to eth1 transparently? What I need is a layer 3 bridge, but Linux's bridge-utils creates a layer 2 bridge (which would bridge IPv4 as well, and I can't have that). This is a DD-WRT device, but it's pretty much an embedded Linux, so most suggestions that would work on Linux are welcome. Thanks.
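    If the ISP actually routes the /48 to your WAN address (worth confirming with them), the simpler alternative to bridging is plain IPv6 forwarding: put one /64 from the block on the LAN side, advertise it with radvd, and leave NAT as an IPv4-only iptables concern. A rough sketch, run as root on the router, with documentation-prefix addresses standing in for the real block:

      # Allow the router to forward IPv6 between eth0 and eth1
      sysctl -w net.ipv6.conf.all.forwarding=1

      # Assign one /64 out of the /48 to the LAN interface
      ip -6 addr add 2001:db8:1:1::1/64 dev eth0

      # radvd.conf fragment so clients autoconfigure from that prefix
      #   interface eth0 {
      #       AdvSendAdvert on;
      #       prefix 2001:db8:1:1::/64 { };
      #   };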

    Read the article

  • Domino to Exchange 2007 (or 2010) Design Concerns?

    - by NickToyota
    Today we got the executive green light to proceed with changing from a Domino platform to Exchange. The business prefers Exchange as a messaging platform (even though IMO IBM Domino is fine - if it ain't broke, don't fix it - but it was not my call). I have been put in charge of making the Domino to Exchange transition go as smoothly as possible, and I have also been told to put together costs for this project. I have some questions and concerns regarding network design, licensing, and costs. The current setup is as follows: 1 HQ office (100 users), 1 secondary office (50 users), and 5 branch offices (under 10 users each); 5 different email domains; Windows Server 2003 functional level with a few 2008 R2 servers; Lotus Domino Notes servers (one in each office); an IronMail appliance; a public Domino web mail server; mostly G5+ ProLiant servers; a Domino BlackBerry Enterprise license and server; and no VoIP phones. What are the basic hardware requirements for Exchange 2007 or 2010? Can I simply purchase a single physical server? Will each office require an Exchange server, or possibly additional servers (roles)? How is email routed to the smaller branch offices? Standard or Enterprise licenses? The business has been running Domino (messaging and application services) for over 10 years and also wants Exchange to support email services, BlackBerry, Outlook Web Access, and possibly iPhone devices. Thank you, Serverfault universe.

    Read the article

  • Microsoft Security Essentials & MsMpEng.exe hogging resources

    - by Mike
    I've been using MSE for a couple of months now and never had a single problem. All of a sudden the process "MsMpEng.exe" will randomly go crazy and hog all my system resources, so I can't do anything unless I kill it in the task manager. (I've quit the program for now and my computer is running smoothly.) When I restart the program, reboot, whatever, it goes off and hogs all the resources again after a couple of minutes. If I kill the process, it goes away and then comes back a couple of minutes later and does the same thing. I've scanned with MSE, another antivirus, and a malware scanner, with no problems found. Any ideas? Should I uninstall it and find something else? The thing is, I've liked it so far. I'm running Win7 64-bit. Also, I'm not running any other conflicting security programs; this is the only one on my PC right now, and Windows Defender is also off.

    Read the article

  • How do I increase maximum attachment size in Exchange 2007 SP1?

    - by AspNyc
    I've been looking all over for a relatively simple answer to a fairly straightforward question: "how do I increase the maximum size of attachments that can be sent and/or received in Exchange 2007?". But I have yet to find a solution that works. We have a pretty straightforward setup: Exchange 2007 SP1 running on a single server, with the OWA role delegated to a second server. We did a clean install of Exchange 2007 a year or two ago (we did not upgrade from a previous version); I forget if we installed RTM and then patched it to SP1, or if we installed with SP1 already baked in. I just thought I'd mention those items in case they influence the answer. So far, I've run the following PowerShell commands on the main Exchange server and verified that they've taken effect:

      Set-TransportConfig -MaxReceiveSize 40MB
      Set-ReceiveConnector "RcvConnector" -MaxMessageSize 40MB
      Set-Mailbox "MailboxName" -MaxReceiveSize 40MB

    As of right now, though, the specified mailbox is still rejecting messages over 10MB. You get brownie points if you can also tell me how to set the default mailbox attachment size limits, so that new accounts don't have the default MaxReceiveSize value of "unlimited" they currently do. Any advice or suggestions would be greatly appreciated. Tx in advance!

    Read the article

  • Mac OS X Client With Static DHCP Assignment Requests Wrong IP via Option 50

    - by Starchy
    I have a number of Mac (and a few Linux) laptops getting DHCP from a Force10 layer 3 switch, the only DHCP server on the subnet. There's a global dynamic pool, and for each full-time employee's laptop I have a single-IP static pool set by MAC address. One and only one of the clients, running OS X 10.7.5, consistently fails to get a static assignment. The MAC address in the static pool definition has been carefully re-checked. Running tcpdump on a mirrored port when the laptop connects, I see that it is specifically requesting 10.100.0.252 (a dynamic address):

      11:32:10.108280 IP (tos 0x0, ttl 255, id 28293, offset 0, flags [none], proto UDP (17), length 328)
          0.0.0.0.bootpc > broadcasthost.bootps: [udp sum ok] BOOTP/DHCP, Request from 3c:07:54:xx:xx:xx (oui Unknown), length 300, xid 0x1399da89, Flags [none] (0x0000)
          Client-Ethernet-Address 3c:07:54:xx:xx:xx (oui Unknown)
          Vendor-rfc1048 Extensions
            Magic Cookie 0x63825363
            DHCP-Message Option 53, length 1: Request
            Parameter-Request Option 55, length 9:
              Subnet-Mask, Default-Gateway, Domain-Name-Server, Domain-Name
              Option 119, LDAP, Option 252, Netbios-Name-Server, Netbios-Node
            MSZ Option 57, length 2: 1500
            Client-ID Option 61, length 7: ether 3c:07:54:xx:xx:xx
            Requested-IP Option 50, length 4: 10.100.0.252
            Lease-Time Option 51, length 4: 7776000
            Hostname Option 12, length 10: "host-name"
            END Option 255, length 0
            PAD Option 0, length 0, occurs 8

    I haven't been able to find any extra system prefs or unusual software on the laptop. Disabling the interface and rebooting, or temporarily setting the IP manually, both fail to make any difference. Any suggestions appreciated.
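    If the pool definition on the switch really is correct, the other half to rule out is the client's cached lease: OS X remembers its last address and asks for it again via option 50, which matches the capture above. A hedged way to force a completely fresh DISCOVER on 10.7, where the interface name and lease path are assumptions based on stock OS X behaviour:

      # Remove the cached lease files and re-run DHCP on the interface
      sudo rm /var/db/dhcpclient/leases/*
      sudo ipconfig set en0 NONE
      sudo ipconfig set en0 DHCP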

    Read the article

  • Does a Windows 7 DVD only have one language?

    - by user326639
    I'm a Dutch developer living in Spain. I recently built a new computer from parts and installed Windows 7 Professional 64-bit (OEM) on it. On the website of the online shop there was a note saying "language: Spanish". Because my English is quite a bit better than my Spanish, but mainly because it is much easier to find information on the web in English, I want my OS to be in English. I asked the online shop if they also sold the UK version of Windows 7, but they assured me that "all Windows 7 versions are multi-language". When installing XP a few years ago, I remember being offered the choice between English and Spanish while the installation process was still in the DOS-like (non-graphical) screen. While installing Windows 7, I did not see any non-graphical screen, and the first time I was asked about the language was in a drop-down list that contained only Spanish. I know about the language pack feature of Windows 7, but it is not available on Professional. Even if I had Ultimate, I don't know if it would be possible to install Windows in Spanish and then add English as a second language from a language pack; I get the impression that English has to be the base language. Furthermore, I am a bit sceptical until I see it in action. What happens with shortcuts (i.e. Select All: Ctrl-A in English / Ctrl-E in Spanish), and what about log messages in Event Viewer, etc.? So can anybody tell me how languages work in Windows 7? Have I been misinformed by the computer shop? Could it be that OEM versions of Windows are single-language and a full installation is not?

    Read the article

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

      D0: 500Gb (System, Boot)
      D1: 1Tb
      D2: 500Gb
      D3: 250Gb

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows Software RAID to mirror the OS. Partitions will be created as follows:

      D0 P1: 100Mb: System-Reserved Boot
      D0 P2: 50Gb: Virtual Machine VMDKs for OS
      D0 P3: 350Gb: Data
      D1 P1: 100Mb: System-Reserved Boot
      D1 P2: 50Gb: Virtual Machine VMDKs for OS
      D1 P3: 800Gb: Data
      D2 P1: 450Gb: Data
      D3 P1: 200Gb: Data

    This will result in: a mirrored boot partition, a mirrored operating system, mirrored virtual machine OS disks, and four partitions for data. In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

      SYSTEM  100Mb  the small boot partition created by the Windows 7 installer (RAID-1)
      HOST    50Gb   the Windows 7 partition (RAID-1)
      GUESTS  50Gb   virtual machine guest VMDKs (RAID-1)
      VG1     900Gb  volume group consisting of a RAID-5 and two RAID-1s
      VG2     300Gb  volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for data that is not critical and is easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

    Read the article

  • SharePoint solution package not deploying to all front-ends

    - by Alex
    I have a WSP that contains a web part. It's being built using WSPBuilder. Most of the time, the WSP deploys perfectly. However, in two of our test environments (and sadly, in production, too) the WSP doesn't deploy properly to all the web front ends. The assemblies make it into the GAC, and the .webpart files get provisioned. The problem is that a tool part that the web part relies on for configuration simply fails to appear. I've determined that every time this has happened, it has been isolated to a single web front end. I've been able to resolve the issue by doing an stsadm -o deploysolution to re-deploy the solution, and in one instance it was resolved by the end user deactivating/reactivating the feature. Unfortunately, though, this has made it impossible to determine if the control isn't being deployed properly, or if it's some other issue. Any thoughts on this? Could it be a problem with the WSP, or is it likely to be environmental?
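    When a package lands on only some of the web front ends, a common workaround is to retract and redeploy it and let the timer job run everywhere; it is also worth confirming that the Windows SharePoint Services Timer service is running on the front end that missed the deployment, since that service is what actually executes the job. A sketch with a placeholder solution name:

      stsadm -o retractsolution -name MyWebParts.wsp -immediate
      stsadm -o execadmsvcjobs
      stsadm -o deploysolution -name MyWebParts.wsp -immediate -allowgacdeployment
      stsadm -o execadmsvcjobs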

    Read the article

  • Why are folders disappearing in Windows XP?

    - by XenoFoxx
    I am researching a problem for a friend, and unfortunately do not have direct access to his computer. I've tried to gather as much information as possible and have researched it on various websites, but I've not found anyone having the same problem my friend is having. So here goes: he has a media server in his home running Microsoft Windows XP. It has 3 drives: 1 for the OS and 2 for mass storage. Not long ago he went to access one of the mass storage drives and it was empty, except for a single folder. His first assumption was that his roommate had deleted everything on the drive (excluding the remaining folder). He then checked the properties of the drive, and it still said that the hard drive was nearly full. I told him to check the Recycle Bin, thinking that whoever deleted the files hadn't cleared them from it and that they were still taking up space on the drive. My friend said the Recycle Bin was empty. So we have a drive that Windows Explorer says is empty (again, except for the remaining folder), but the properties of the drive say it's mostly full. Now it gets weirder: my friend tried to create a new folder on this drive and it auto-named itself "New Folder (1)", which means Windows recognizes there is already a "New Folder" in that directory. He tried to rename it to a name that he KNEW was there previously, and Windows wouldn't allow it because it was a duplicate folder name. So now it seems the folders are there but not displaying in Windows Explorer. Neither of us has any idea why this is occurring, why the folders vanished, why the one remaining folder didn't vanish, or how to make them visible again. Has anyone else ever experienced this? I can get more details if needed.

    Read the article

  • How can I get my routers to forward ports correctly?

    - by Giffyguy
    My network currently looks like this (simplified): Router #2 is connected to the LAN interface of Router #1. This should be familiar to anyone who has seen a standard static-IP setup with an additional firewall for a residence or other small building. Router #1 is actually my cable gateway, but since it is a fully functional router/firewall, I am going to refer to it as a router. Now, I need to open various ports in both firewalls for incoming communication to my server - port 80 is a good example. So I've opened up port 80 in Router #2, and so far all incoming traffic at the public IP X.X.X.129 is being routed correctly. The problem is that I also need my server to respond to incoming traffic at the public IP X.X.X.130 on the WAN interface of Router #1. Naturally, I can't just tell Router #1 to forward port 80 to another public IP; port forwarding is only supported when the traffic is being directed to the LAN subnet. I am willing to restructure my network topology if required, with the following conditions:

      - Router #1 cannot have its WAN IP reassigned - X.X.X.130 is mandatory.
      - Router #1 cannot be moved or disconnected from the cloud.
      - The server cannot be given a second IP address.
      - I would prefer the server to have a private IP address - e.g. 10.0.0.10
      - I'd like to keep Router #2, but it can have a private IP - e.g. 10.0.1.10

    Following these rules, I need to get my server to receive incoming traffic on port 80 from both public IP addresses. Does anyone on SU know if this is possible? So far my only theories have been to set up a static route on either router, or to somehow combine my two subnets into a single subnet.

    Read the article
