Search Results

Search found 9715 results on 389 pages for 'servers'.


  • Network Security Device/Software

    - by Campo
    We currently run Symantec Antivirus Corporate 10.2. The software is really easy to manage on a network, and the virus detection isn't bad, but the malware detection is crap. We were recently infected with an email bot that got us put on some block lists. That has been resolved, but I cannot have it happen again. I would like to find a program as easy to manage as Symantec that I can install on all the users' workstations as well as the servers. We run a Windows 2003 domain and have a couple of 2008 test servers in the environment. Most of the workstations are XP, though I am using Windows 7, and Symantec is not compatible with that OS... so we need a solution that covers all of those operating systems. If it could be installed on Macs too, that would be a bonus, though not at all necessary. This software must detect viruses AND malware. I am looking for something that combines the features of anti-malware programs like Malwarebytes or Spybot with an antivirus program like Symantec or AVG. Alternatively, if there is a piece of hardware that acts as a firewall, router, and packet inspector for viruses/spam, that would be the most ideal solution. I could then supplement it with software to pick up whatever the hardware misses. Thank you for your suggestions.

    Read the article

  • Users are getting a temporary profile

    - by Serhiy
    A bit about the current setup: the AD servers are Windows 2008 R2 (all of them), and there are a couple of locations configured as Sites. Each location has DFS on its AD server. Roaming profiles are not used or configured. Users have their home folder configured as a mapped S: drive pointing at a DFS shared folder. For example, in the profile tab a user has: Home Folder - Connect - S: to \\domain.com\dc\users\%username%. We have also redirected the Desktop, Documents and Downloads folders to \\domain.com\dc\users. Everything was fine. Suddenly (today), users in most locations lost their local profiles (on both XP and W7 desktops) and got temporary profiles. It also looks like the local profile was created today (judging from the folder properties). I checked the events on a couple of machines and there are no errors related to profiles or the logon process. I do not see issues in the event logs on the servers either. Basically, I have run out of ideas as to what is wrong and why the machines lost their local profiles. PS: Laptop users do not have their folders redirected, but lost their profiles as well.
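
    One check worth running on an affected desktop (my addition, not part of the original post): when Windows cannot load a local profile, it renames the user's SID key under the ProfileList registry key with a .bak suffix and logs the user on with a temporary profile instead. A minimal Python sketch that lists those keys (assumes Python 3 on the affected Windows machine, run as administrator):

        # List profile entries under ProfileList and flag ".bak" keys,
        # which usually mean Windows fell back to a temporary profile.
        import winreg

        PROFILE_LIST = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PROFILE_LIST) as key:
            i = 0
            while True:
                try:
                    sid = winreg.EnumKey(key, i)
                except OSError:  # no more subkeys
                    break
                with winreg.OpenKey(key, sid) as sub:
                    path, _ = winreg.QueryValueEx(sub, "ProfileImagePath")
                flag = "  <-- possible broken profile" if sid.endswith(".bak") else ""
                print(sid + "  " + str(path) + flag)
                i += 1

    If the affected machines show .bak keys, the question becomes what invalidated the profiles rather than whether they are gone.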

    Read the article

  • Replacing the HD in a MacOS 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have two Mac OS X servers available for file sharing (quad Xeons, 2GB RAM, both on 10.6.8). No. 1 is an Open Directory Master with 50+ user accounts; No. 2 has only 2 local accounts (/local/Default) and looks to the OD Master for all user accounts (/LDAPv3/10.x.x.20/). Both servers have three internal HDs: the boot volume with only the Server OS and minimal apps, a 'DataShare' HD (500GB) and a backup drive (500GB). After upgrading the DataShare HD in Server No. 2 from the small internal HD (500GB) to a larger-capacity (2TB) drive, users are unable to connect to shares on Server No. 2; they get the error "There are no shares available or you are not allowed to access them on the server". The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (it keeps all ownership data, UIDs, permissions, and last edit dates and times). Everything booted up OK, with no indication of any issues, and the paths to the sharepoints look good. Notes from troubleshooting: Server 1 is operating perfectly, and all users can access its shares and authenticate. I've checked that the SACL (Server Access Control List) settings are OK. On Server 2, in the Server Admin app, I can see all the shares listed OK; the paths seem valid, and I can disable/re-enable the shares without errors. On Server 2, Workgroup Manager lists all the accounts from the OD Master in the LDAP directory view, and all seems fine from there. Basically, everything looks normal, but no file shares on Server 2 can be accessed by regular users.

    Read the article

  • GPO - Setting not applied, although policy is applied

    - by Kenny Bones
    This is rather strange. In our domain we have several terminal servers, and this morning a user reported that no drives are mapped when he logs on to the terminal server. So I ran Group Policy Results and compared two users. Both users have the exact same policies applied, but for this particular user the Scripts section under User Configuration - Policies - Windows Settings is simply not there. For the other user, for whom this is working fine, the Scripts section says the winning GPO is Terminal2008, which is the GPO that contains the script, and the Terminal2008 GPO is applied to both users. Also, loopback processing is set to Replace. What could be the cause of this? I've never seen this particular issue before. Both users are in the same OU, they log on to the same terminal server, and the same policies are applied to both. They do not, however, have exactly the same group memberships, but should that matter? Nothing says the script should run only if the user is a member of a certain group, and I'm not sure that could even be done through that specific setting. All I know is that the very same policies are applied to both users, in the same OU, on the same computer; shouldn't that mean the same settings get applied? Edit: I just ran Group Policy Results on one of the other terminal servers, which is also in the same OU, and the Scripts section is there! This means this particular user doesn't get this setting only when he's logged on to this particular server. What could be the cause of this?
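
    One way to narrow this down (my suggestion, not from the original post): capture the resultant set of policy for both users on the affected server with gpresult and diff the output, since whatever differs between the working and broken user should show up there. A rough Python wrapper, assuming it runs on the terminal server with rights to query both users (the user names are placeholders):

        # Capture RSoP summaries for two users on this machine and diff them.
        # Uses gpresult.exe, which is built into Windows; both users must
        # have logged on to this machine at least once.
        import difflib
        import subprocess

        def rsop(user):
            out = subprocess.run(
                ["gpresult", "/r", "/scope", "user", "/user", user],
                capture_output=True, text=True, check=True,
            )
            return out.stdout.splitlines()

        working = rsop("DOMAIN\\workingUser")   # placeholder names
        broken = rsop("DOMAIN\\brokenUser")
        for line in difflib.unified_diff(working, broken, "working", "broken", lineterm=""):
            print(line)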

    Read the article

  • Measuring performance indicators on a cluster

    - by Aditya Singh
    My architecture is based on Amazon. An ELB load balancer balances POST requests among m1.large instances. Every instance has an nginx server on port 80 which distributes the requests to 4 python-tornado servers on the backend, and these handle the request. The tornado servers take about 5-10ms to respond to one request, but that is only the internal compute time of each request. I want to put this under test: I want to measure the response time from the ELB to the upstream and back, see how it varies as the QPS throughput is increased, and plot a graph of time vs. QPS vs. latency along with other factors like CPU and memory. Is there software to do that, or should I log everything somewhere with latency checks and then analyze the whole log to get the numbers out? I would also need to write a self-monitor which keeps checking the whole response time. Is it possible to do that with a script from within the server, and if so, will it be accurate?
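
    Established load generators (ab, httperf, JMeter) will produce latency-vs-load numbers, but the log-and-analyze approach is easy to script too. A minimal Python sketch of the idea, assuming a POST endpoint behind the ELB at a placeholder URL; the payload, QPS and duration are made-up test values:

        # Fire POSTs at a fixed offered rate and record wall-clock latency
        # per request, so latency can later be plotted against QPS.
        import threading
        import time
        import urllib.request

        URL = "http://my-elb.example.com/endpoint"   # placeholder
        QPS = 50          # offered load for this run
        DURATION = 30     # seconds

        latencies = []
        lock = threading.Lock()

        def one_request():
            start = time.time()
            try:
                # data= makes this a POST, matching the ELB traffic
                urllib.request.urlopen(URL, data=b"payload", timeout=10).read()
            except Exception:
                pass  # a real harness would count errors separately
            with lock:
                latencies.append(time.time() - start)

        threads = []
        for _ in range(QPS * DURATION):
            t = threading.Thread(target=one_request)
            t.start()
            threads.append(t)
            time.sleep(1.0 / QPS)   # pace requests to the target rate
        for t in threads:
            t.join()

        latencies.sort()
        p50 = latencies[len(latencies) // 2]
        p95 = latencies[int(len(latencies) * 0.95)]
        print("n=%d p50=%.1fms p95=%.1fms" % (len(latencies), p50 * 1000, p95 * 1000))

    Running this from a separate instance in the same region measures ELB + nginx + tornado together; subtracting the known 5-10ms compute time gives a rough idea of the overhead added by the layers in front.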

    Read the article

  • SBS2011 Standard DNS suddenly not resolving some domains

    - by Matt
    Suddenly, today, I am unable to resolve common domains like serverfault.com and facebook.com, while other domains like google.com and cnn.com work fine. This is on a client machine (Win7 Pro) connected to an SBS2011 Standard domain, and the only DNS server is the SBS2011 server. The behaviour is the same on every client PC I have tried: the same domains work and the same domains fail. Using nslookup, I get 'no such domain' errors for facebook.com, and the correct DNS entries for the domains that do work. When I add Google's public DNS to my client PC as a backup (primary = local SBS server, secondary = 8.8.8.8), everything works fine for that PC, but querying from the SBS server directly or from the other client PCs is still broken (so I don't believe it's a firewall issue). My main question is: how can I see which servers the SBS2011 server queries when it doesn't know about a domain? There is nothing in our firewall logs saying it blocked any DNS-based packets, but I also wanted to query, by IP/FQDN, the servers the SBS server would likely contact to find out about facebook.com, for example. Update 23/05/2012: DNS is working again this morning for the affected websites. Both the DC on its own and all client PCs can once again access the websites that were not loading last night, as well as those that were working. I haven't changed anything overnight, so it appears to have been some kind of temporary glitch, but I can't understand what would have caused it on the network.
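
    To see whom the SBS box asks, check whether forwarders are configured in the DNS console: if none are set, it resolves iteratively from the root hints instead. It can also help to compare answers for the affected names through the SBS server and through a public resolver side by side; a small Python sketch using the dnspython package (pip install dnspython; the SBS address 192.168.0.2 is a placeholder):

        # Compare A-record answers for the affected names as returned by
        # the SBS server and by a public resolver.
        import dns.resolver

        NAMES = ["facebook.com", "serverfault.com", "google.com", "cnn.com"]
        SERVERS = {"SBS": "192.168.0.2", "GooglePublic": "8.8.8.8"}

        for label, ip in SERVERS.items():
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [ip]
            for name in NAMES:
                try:
                    answer = resolver.resolve(name, "A", lifetime=5)
                    print("%s: %s -> %s" % (label, name, answer[0]))
                except Exception as exc:
                    print("%s: %s -> FAILED (%s)" % (label, name, exc.__class__.__name__))

    If the SBS server returns NXDOMAIN while the public resolver answers, the fault is in the server's resolution path (forwarders, root hints, or its cache) rather than on the clients.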

    Read the article

  • EC2 kernel decision and issues with creating a new machine with my AMI

    - by roacha
    I could really use some advice. I started a new instance on EC2 using Amazon's AMI, and during the deployment process I selected a Kernel ID of "Use default". I then configured my server the way I wanted, took a snapshot of it, and created my own AMI for building new servers. When I try to create a new server with this AMI, the server fails to start and I get the error: EXT3-fs: sda1: couldn't mount because of unsupported optional features (240). This appears to happen because I am selecting a kernel ID of "Use default" again when building my second server. I have read that for this to work I need to choose the same kernel ID that was used by my original server, but I have deleted the original server and don't know what it was using. What is the best process to follow to avoid these issues? Should I choose "Use default" for my original server, and if so, how do I find out which kernel it selected? Should I then just document this and always specify it during deployment of subsequent servers using my custom AMI? Or should I choose a specific kernel ID during the initial build and always use that one going forward, hoping Amazon never retires it? Thanks for any advice!
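
    For reference (my addition): the kernel ID an existing instance is using can be read back through the EC2 API, so it can be recorded before the instance is deleted and then supplied explicitly when registering the custom AMI. A sketch using boto3, with a placeholder instance ID and region:

        # Read the kernel ID of an existing instance so it can be reused
        # when registering/launching a custom AMI. Assumes boto3 with
        # credentials already configured.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")
        resp = ec2.describe_instance_attribute(
            InstanceId="i-0123456789abcdef0",   # placeholder
            Attribute="kernel",
        )
        print(resp["KernelId"].get("Value"))    # an aki-... ID, if one is set

    The same value is also visible in the instance details in the AWS console, so the low-tech fix is simply to note the aki ID down as part of the build documentation.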

    Read the article

  • Force delivery retry without restarting the SMTP Service on Windows Server 2008 R2

    - by Mathias R. Jessen
    I have a Windows Server 2008 R2 box hosting 3 virtual SMTP servers: vSMTP01, vSMTP02 and vSMTP03. The first two are configured to deliver all messages to dedicated smarthosts, while the last is set to deliver messages on its own. All other delivery settings are at their defaults.

                                                     ----(vSMTP01)-----> {SMARTHST01}
                                                    /
        ----Inbound mail--->---SMTPSRV01---[----(vSMTP02)-----> {SMARTHST02}
                                                    \
                                                     ----(vSMTP03)-----> { Internet }

    Now I want to take SMARTHST01 out for maintenance, but I don't want to reject submissions to vSMTP01 while doing so, so I just let it continue running. When SMARTHST01 is no longer responding, vSMTP01 queues the messages and waits for the first retry interval to pass (15 minutes). So far so good. Let's say SMARTHST01 comes back online after 20 minutes. The first interval has passed, so I'll have to wait another 25 minutes for the second retry interval to pass. If I stop and start the SMTP service (Services.msc - Simple Mail Transfer Protocol service - Stop), the server retries all deliveries, but that causes a service interruption for ALL virtual SMTP servers on the machine, which is highly undesirable. How can I manually force vSMTP01 to retry delivery of all queued messages without interrupting the service of vSMTP02 and vSMTP03?
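
    One avenue worth testing (my suggestion; I have not verified it on this exact setup): each virtual SMTP server is its own IIS metabase instance (SMTPSVC/1, /2, /3) and can be stopped and started individually, which should make only that instance rescan and retry its queue while the other two keep running. A Python sketch using the pywin32 ADSI bindings, assuming pywin32 is installed, the script runs elevated, and vSMTP01 is instance 1:

        # Restart only the first virtual SMTP server so its queue is
        # re-processed, leaving SMTPSVC/2 and SMTPSVC/3 untouched.
        import time
        import win32com.client

        vserver = win32com.client.GetObject("IIS://localhost/SMTPSVC/1")
        vserver.Stop()
        time.sleep(5)   # give the instance a moment to shut down cleanly
        vserver.Start()
        print("vSMTP01 restarted; queued messages should now be retried")

    The same stop/start can be done by hand in the IIS 6.0 Manager console by right-clicking just that virtual server, which is a quick way to confirm whether a per-instance restart triggers the retry before scripting it.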

    Read the article

  • Securing NTP: which method to use?

    - by Harry
    Can someone good at NTP configuration please share which method is the best/easiest to implement for a secure, tamper-proof version of NTP? Here are some difficulties... I don't have the luxury of my own stratum 0 time source, so I must rely on external time servers. Should I read up on the Autokey method, or should I try to go the MD5 route? Based on what I know about symmetric cryptography, the MD5 method relies on a pre-agreed set of symmetric keys between the client and the server, and so is vulnerable to a man-in-the-middle attack if those keys cannot be exchanged securely. Autokey, on the other hand, does not appear to work behind a NAT or a masquerading host. Is this still true, by the way? (The reference I found for this is dated 2004, so I'm not sure of the state of the art today; its section 4.1 asks "Are public AutoKey-talking time servers available?".) I browsed through the NTP book by David Mills. The book looks excellent in a way (coming from the NTP creator, after all), but the information in it is also overwhelming. I just need to configure a secure version of NTP first, and maybe worry about its architectural and engineering underpinnings later. Can someone please wade me through these drowning NTP waters? You don't necessarily need to give a working config, just info on which NTP mode/config to try, and maybe also a public time server that supports that mode/config. Many thanks, /HS

    Read the article

  • Can't access random web pages on my MacBook Pro 2012?

    - by Faruk Sahin
    Sometimes I can't access random web pages. The page simply doesn't load, but if I wait for around a minute doing nothing, it will load. It happens very randomly and intermittently. Sometimes it starts when I try to access youtube.com or cnn.com; once it starts, it happens every minute or every 5 minutes for random web pages. If I am downloading something, though, the download continues without any interruption, and I am also able to ping the addresses I can't browse. Then, if I wait for around a minute, everything starts to work fine on the browser side as well. I have tried a lot of different browsers. I have tried changing my DNS servers to Google's public DNS servers. Using a cable instead of the wireless connection doesn't help either. No one else on the network has this problem but me. What could the problem be?
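
    Since ping works while browsing stalls, it helps to time each layer of a page fetch separately the moment a stall happens; that shows whether the delay is in name lookup, TCP connection setup, or the HTTP response. A small Python sketch (the host is just an example):

        # Time DNS lookup, TCP connect and HTTP fetch separately to see
        # which layer is stalling.
        import socket
        import time
        import urllib.request

        HOST = "www.cnn.com"   # one of the affected sites

        t0 = time.time()
        ip = socket.gethostbyname(HOST)
        t1 = time.time()
        conn = socket.create_connection((ip, 80), timeout=30)
        conn.close()
        t2 = time.time()
        urllib.request.urlopen("http://" + HOST + "/", timeout=60).read(1024)
        t3 = time.time()

        print("DNS lookup : %7.1f ms" % ((t1 - t0) * 1000))
        print("TCP connect: %7.1f ms" % ((t2 - t1) * 1000))
        print("HTTP fetch : %7.1f ms" % ((t3 - t2) * 1000))

    A one-minute stall in the DNS line despite the Google DNS change would point at the resolver path on the Mac itself; a stall in the connect or fetch lines points further down the stack.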

    Read the article

  • Apache Server with memcache, varnish and php slow request times

    - by coolestdude1
    My issue is that these servers are taking rather a long time per request, about 2 seconds on average, just to serve files. When we had just one server doing everything, it was noticeably faster, even with the same web apps (Drupal 6 and Drupal 7). I want to get this number down to a reasonable level, so I need some help getting to the bottom of why the request times are so slow. The slowness can cause the web app to hang on POST or PUT and generally leads to a bad user experience on my sites. PS: I am more of a server newbie, so this has confounded me for quite some time. The domains: collabornation.net and nptrainingworks.com (they run off the same two web servers using vhost configs). The gear: two Rackspace 4GB servers running CentOS 6.2 (Final), with a mounted (GlusterFS) file system used to keep files the same on both machines, behind a Rackspace load balancer running round robin. MySQL is accessed through php-pdo and php-mysql and runs on another instance, which also runs memcache and hosts phpMyAdmin. Apache version 2.2.15-15.el6.centos.1 (httpd.x86_64), Varnish version 3.0.2-1.el5 (varnish.x86_64), PHP version 5.3.14-1.el6.remi (php.x86_64). Configs linked below: Apache conf, vhost conf, Varnish backends, Varnish defaults, Varnish ACL, PHP INI. Again, any help is much appreciated!
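
    A first step is to find out whether the 2 seconds is spent in Varnish, Apache/PHP, or the Gluster-backed file system. Timing repeated fetches while watching Varnish's response headers separates cache hits from misses; on a hit, Varnish 3 normally returns an Age header greater than 0 and two transaction IDs in X-Varnish. A small Python sketch (the URL is one of the mentioned sites):

        # Time repeated fetches and print the Varnish headers so cache
        # hits and misses can be compared.
        import time
        import urllib.request

        URL = "http://collabornation.net/"

        for i in range(5):
            start = time.time()
            with urllib.request.urlopen(URL, timeout=30) as resp:
                resp.read()
                elapsed = (time.time() - start) * 1000
                age = resp.headers.get("Age", "0")
                xvarnish = resp.headers.get("X-Varnish", "-")
                print("request %d: %7.1f ms  Age=%s  X-Varnish=%s" % (i, elapsed, age, xvarnish))

    If hits are fast and misses are slow, the backend (Apache/PHP/Gluster) is the place to dig; if even hits take seconds, look at Varnish and the load balancer. Gluster is a common culprit for slow PHP sites, because every include() stats files over the network.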

    Read the article

  • Memory overcommitment on VMware ESXi 5.0

    - by Tibor
    I would like to better understand the memory overcommitment capabilities of VMware ESXi. I've read this paper from VMware, so I am familiar with the general concepts, such as hypervisor swapping, memory ballooning and page sharing, and it seems that a combination of these techniques allows for quite a large degree of overcommitment. However, I am not sure. I am deploying a virtual test lab comprising 4 identical sets of virtual servers and workstations and a couple of virtual router instances. Overall, I expect to be running around 20 virtual machines, with Windows XP, Windows 7 and Ubuntu for the workstation hosts and CentOS and Windows 2008 Server instances for the servers. The problem, however, is that the host machine only has 12GB of RAM and I don't have the option of adding more. I would like to know the best way to configure the hosts to achieve reasonable performance within these constraints. I have these two options: allocate as little RAM as possible to each virtual machine, or allocate an extraordinary amount (such as 4GB per instance) and let the balloon driver do the rest. Or something else? Which would work better? The machines will mostly be idle, so I don't have any major performance expectations, but they should run reasonably smoothly nevertheless.

    Read the article

  • When to upgrade a server to include more cores, versus more processors, versus an additional server?

    - by gkdsp
    The server hosting market is separated into single-, dual-, quad-, etc., processor machines, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return the result to a client, and many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores, and then how to scale the application as the business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache and the other 3 used to process clients' requests for math crunching... Question 1: does that mean that, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 on each of the 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or can only one client's request be processed at any one time, with that request divided across up to 3 cores depending on the kind of process doing the math crunching and whether or not it can take advantage of multithreading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here. Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors of 4 cores each, or (C) should I keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice, for example, limited by RAM (when you need more RAM than the box can handle, it's time to add another server), or something else? Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?

    Read the article

  • Setting up a global MySQL Cluster in the cloud

    - by GregB
    I'm giving the question an overhaul to identify more specifically where I need help. I use two tools to manage a bunch of cloud servers: Puppet and Rundeck. Both of these can be configured to use a MySQL backend. I'd like to set up an instance of each application in both the U.S. and the U.K., treating the U.K. servers as hot standbys in case of failure in the U.S., and I want to use MySQL Cluster so that the data is automatically replicated from the U.S. to the U.K. Because these are hot standbys, high performance is not a goal; redundancy and data integrity are most important. My question revolves around the setup of the MySQL cluster. I want to run three servers, each one running a data node, a SQL node and a management node. Is this a valid configuration for MySQL Cluster? If so, could someone point me in the right direction for creating such a setup? I've downloaded the official tarball and the official Debian package, and the documentation for them contradicts many of the online tutorials. I'm installing on Ubuntu 10.04.
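
    For what it's worth, a hedged sketch of the management node's config.ini for an NDB cluster (all IPs are placeholders). One caveat on the proposed layout: with the usual NoOfReplicas=2, the number of data nodes must be a multiple of the replica count, so a common small topology is two data/SQL hosts plus a third host carrying the management node. Also note that NDB assumes a low-latency interconnect between data nodes; a transatlantic U.S.-U.K. cluster is exactly the kind of link the NDB documentation warns about, and replication between two local clusters may fit the hot-standby goal better.

        # /var/lib/mysql-cluster/config.ini on the management host (sketch)
        [ndbd default]
        NoOfReplicas=2          # requires an even number of [ndbd] nodes
        DataMemory=512M
        IndexMemory=64M

        [ndb_mgmd]
        HostName=10.0.0.1       # management node

        [ndbd]
        HostName=10.0.0.2       # data node 1
        [ndbd]
        HostName=10.0.0.3       # data node 2

        [mysqld]
        HostName=10.0.0.2       # SQL node 1
        [mysqld]
        HostName=10.0.0.3       # SQL node 2

    Each SQL node's my.cnf then needs ndbcluster enabled and ndb-connectstring pointed at the management host.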

    Read the article

  • Looking for the easiest, simplest solution to run a customised DNS server for my local network on Windows 7

    - by Jamie G
    I need to point some hostnames, such as http://testing.server/, at a fixed IP address on my local network. I can do this easily on one computer using the hosts file, but I need it to work for all machines on my network. I think the best way to do this will be to set up my own DNS server and add the custom DNS entries there. However, I'm looking for the simplest way possible to do this - I really don't want to spend hours setting up Unix servers and running tricky terminal-based scripts just for this! My server is a standard Windows 7 machine. My dream would be a nice, simple Windows program with a GUI where I could input my ISP's DNS server, and it would use those records unless I had specifically set up my own DNS entry for a domain to use instead. If it had a web-based admin system that was accessible from another computer on the network, that would be even better. Does anyone know of anything that can do this? Many thanks indeed.
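
    In case it is useful as a stopgap, the behaviour described (fixed answers for a few names, everything else passed to the ISP's DNS) is small enough to sketch in Python with the dnslib package (pip install dnslib). Everything below is an assumption to adapt: the upstream address, the override table, and running it elevated so it can bind UDP port 53 on the Windows 7 box; the other machines then use this box as their DNS server.

        # A tiny selective DNS forwarder: names in OVERRIDES get a fixed A
        # record, all other queries are relayed to the upstream resolver.
        import socket
        from dnslib import DNSRecord, RR, QTYPE, A

        UPSTREAM = "8.8.8.8"                             # placeholder: ISP DNS
        OVERRIDES = {"testing.server.": "192.168.1.50"}  # name -> fixed IP

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 53))

        while True:
            data, client = sock.recvfrom(512)
            request = DNSRecord.parse(data)
            qname = str(request.q.qname)
            if qname in OVERRIDES:
                reply = request.reply()
                reply.add_answer(RR(qname, QTYPE.A, rdata=A(OVERRIDES[qname]), ttl=300))
                sock.sendto(reply.pack(), client)
            else:
                # relay the query unchanged and return the upstream's answer
                sock.sendto(request.send(UPSTREAM, 53, timeout=5), client)

    No GUI, but the override table is one dictionary, which may be simple enough for a small network.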

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old; we have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS; we use XFS with LVM over that to create 100TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small computer cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with glusterfs running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3Par storage solutions. Any suggestions or experiences are appreciated.

    Read the article

  • Cannot connect to MySQL on RDS (Amazon Web Services) from my laptop

    - by Bruno Reis
    I'm having some trouble connecting to a MySQL 5.1 server on an RDS instance on AWS from my laptop. The detailed description of the problem is here: https://forums.aws.amazon.com/thread.jspa?messageID=323397 In short: I have 2 MySQL servers, both with the same DB configuration and firewall (security group) configuration. One of them works fine: I can connect to it from my EC2 instances (i.e., from inside the AWS cloud) and from my laptop. The other one doesn't: I can connect from my EC2 instances, but not from my laptop. The symptom: a connection attempt from my laptop just hangs and then times out, as if there were a firewall silently dropping my SYN packets. I must say that everything had been working fine for a very long time; this problem began suddenly, 3 days ago, without any modifications to DB parameters or the security groups. My current analysis of the situation: The firewall (i.e., security group) cannot be the problem: both MySQL servers share the same firewall configuration, and I can connect to one of them but not the other. Later on, I even added a rule to allow inbound connections from 0.0.0.0/0 (i.e., I turned off the firewall), and nothing. I also created a new, fresh security group and changed this instance's SG to the new one (to which I first added my IP address, and then 0.0.0.0/0), but still nothing. The credentials cannot be the problem: I use the same ones from my laptop and from my EC2 instances, and the user (what Amazon calls the master user) has a host of '%' in the database. MySQL is not blocking my IP due to, say, too many failed connection attempts: I've run FLUSH HOSTS on the database, and I also tried to connect from many different source IP addresses, even from all around the world through a VPN proxy service. What could I be missing? I'm asking here because it's been about 36 hours since I posted on the AWS forums with no answer at all; someone here might have a solution! Any input is really appreciated - I'm out of ideas. Thanks!
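
    One more data point that is cheap to collect (my addition): a plain TCP connect distinguishes a silent drop from an active refusal, independently of MySQL credentials. MySQL also sends its handshake banner first, so if the connect succeeds at all, bytes should arrive immediately. A sketch with a placeholder endpoint:

        # Distinguish dropped SYNs (timeout) from active refusal when
        # connecting to the RDS endpoint on 3306.
        import socket
        import time

        HOST = "mydb.abc123.us-east-1.rds.amazonaws.com"   # placeholder
        PORT = 3306

        start = time.time()
        try:
            conn = socket.create_connection((HOST, PORT), timeout=15)
            print("TCP connect OK in %.2fs" % (time.time() - start))
            print("server banner:", conn.recv(128)[:40])  # MySQL greets first
            conn.close()
        except socket.timeout:
            print("timed out: SYNs likely dropped (routing/filtering on the path)")
        except ConnectionRefusedError:
            print("refused: host reachable, but the port is closed or blocked")

    Run it from the laptop and from an EC2 instance and compare; if the laptop times out against this instance but connects to the healthy one, the difference is in the network path to that one endpoint, which points at something on Amazon's side and is worth raising as a support case.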

    Read the article

  • Connecting to Server 2008 shares fails

    - by Chris J
    I'm having problems getting a reliable share working on an x64 Server 2008 R1 SP1 server. All works well after a reboot, but after some time (within a day) the shares become unavailable to XP and Server 2003 machines. Interestingly, they remain available to other Server 2008 servers. On trying to access \\server\share, Server 2003 returns immediately with the message "The specified network name is no longer available"; XP takes a minute or two to time out before giving the same message. There doesn't seem to be anything in the event logs indicating a problem. Doing some googling over the last day or two, I've seen the following blamed: bad network drivers (I've updated to the latest drivers with no result); Symantec antivirus (we're not using it - there is currently no AV on the server); and receive-window auto-tuning (I've disabled it with netsh int tcp set global autotuninglevel=disabled and netsh int tcp set global rss=disabled). None of these has had an effect. Windows Firewall is currently disabled. As other Server 2008 boxes (both x86 and x64) can connect, I can only assume there's some new security configuration that's not quite right, or an AD issue I need to trace, but I don't know where to start. Even if no one knows how to resolve this, a pointer to what I should look for with Wireshark would be a help.

    Read the article

  • Automatically update SVN repository on another server

    - by Mikey C
    We have 2 Ubuntu web servers, one of which is our staging server (Staging) and the other our live server (Live). Staging has our Subversion repository as well as the latest version of our sites on it. Because the SVN server is running on Staging, I've added post-commit hook scripts so that the staging server automatically has the latest code. Easy. However, I'd like one of the repositories to stay updated on Live as well. This is a repository of images, PDFs and suchlike. When a team member commits to it, I'd like it to automatically update on the live servers so it can be used in mailings, content-managed pages, etc. I'd add something to the post-commit hook to SSH across and update, but for security we can only SSH from one server to another as the user 'commandLine', whereas the post-commit hook runs as 'www-data'. I'd rather not run a cron job on Live to update every 5 minutes, but I can't see another way of doing it without altering all our user permissions. Any ideas?
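
    One pattern that keeps the 'commandLine'-only SSH rule intact (a sketch under that assumption, with placeholder host name and paths): let the hook escalate to commandLine for one exact command via sudo. Subversion invokes post-commit as "post-commit REPOS REV", so the script below would live at hooks/post-commit in the asset repository, marked executable, with a sudoers line such as: www-data ALL=(commandLine) NOPASSWD: /usr/bin/ssh live svn update /var/www/assets

        #!/usr/bin/env python
        # post-commit hook (sketch): run "svn update" on Live as the one
        # user allowed to SSH between servers. REPOS and REV come from
        # Subversion.
        import subprocess
        import sys

        repos, rev = sys.argv[1], sys.argv[2]   # unused here, but handy for logging

        subprocess.check_call(
            ["sudo", "-u", "commandLine",
             "ssh", "live", "svn", "update", "/var/www/assets"],
        )

    Because the sudoers entry names the full command and arguments, www-data gains exactly one capability, which is a much smaller change than altering user permissions generally. commandLine would also need a passphrase-less (or agent-held) key trusted by Live.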

    Read the article

  • Allowing SharePoint to relay email through Exchange

    - by dunxd
    I have written a SharePoint 2007 web part that sends a field from a form to a specified email address. I have the form working as I require, but at present it can only send to internal email addresses. SharePoint's email functions use SMTP to send to our Exchange 2003 server, but because the Exchange server is configured to prevent relaying, it won't deliver the mail if the To: address is not at a local domain. I don't want to open up our Exchange server as a completely open relay; what I want is to allow my SharePoint servers to send mail to addresses outside our domain. The following seem possible: allow all mail sent from one of the SharePoint servers to be relayed; allow all mail from a web application pool account to be relayed (though I am not sure the application pool authenticates to the SMTP server); or a combination of the two. Can anyone advise on the best way of doing this? Is setting up a dedicated SMTP server on the Exchange server (not a separate physical server) the right way to go about it? EDIT: Note this is for Exchange 2003. There is a post on setting this up in Exchange 2007, which appears to have recognised how frequently people need to do what I need, but it doesn't give much detail on 2003. Can anyone expand?

    Read the article

  • Disaster Recovery Standby Server

    - by user64300
    Hi, I work for a small business with 25 users and 2 servers. One server is the DC, running Windows Server 2003/Exchange 2003. We want a reliable disaster recovery strategy for this server without having to spend a lot of money. We take regular backups, but I have been advised that only an identical server will allow them to be restored easily. I'm trying to come up with a solution that means we don't have to buy two servers at twice the cost every time we upgrade. I'm toying with the idea of upgrading our DC more frequently (say, every 3 years) and then using the old server as the recovery server (temporarily, until we can source a replacement), but I won't know whether the backups will restore on the old server until I try it! We're planning to upgrade to Server 2008 R2 in the near future, so I'm hoping the backup tools will give me some success in restoring to different hardware (or perhaps I can use Hyper-V if not). So what I am wondering is whether it is a good idea to use old hardware as a disaster recovery strategy (provided we test it regularly, obviously!).

    Read the article

  • When to raise domain functional level?

    - by Joel Coel
    We very recently completed a project to retire two old domain controllers running Server 2003 R2. They are now replaced with shiny new 2008 R2 boxes. However, the functional level of the domain has not yet been updated for the 2008 R2 servers, just in the long-shot case that we need a rollback to the old controllers. I expect to have the all-clear to update the domain by next weekend. I also want to note that our desktop clients are still 95% Windows XP, although we're about to start a project to update our 200 or so clients to Windows 7 before the end of the calendar year. Is there any advantage to holding the domain at the 2003 functional level while we are still supporting more Windows XP than Windows 7, especially given that some of the management stations are still XP? Update: I forgot to mention earlier that we still have a pair of Windows 2000 servers (not domain controllers) that support some legacy software. I'm working to replace them, but in the meantime I need to be sure that Windows 2000 member servers can still participate in a domain at the 2008 R2 functional level.

    Read the article

  • Windows Modules Installer delaying login, Server 2008 R2

    - by Kyle
    We updated our servers this weekend (Windows updates); everything went fine except that one of our terminal servers now hangs at login with the message "Waiting for Windows Modules Installer". It eventually times out and leaves an event log message saying the service stopped unexpectedly. I have disabled the service, and users can now log in within a reasonable time frame; however, we will need to re-enable the service in order to install further updates. I'm not sure where to start with this one - I'm an entry-level admin and my colleagues are on vacation today; thank God this isn't a serious problem. Further details: it affects all users; the only third-party software on the server is our ERP software and ScrewDrivers from Tricerat; the only event log message is that the service stopped unexpectedly; the Server Manager screen does not display any information about roles, it just says "error"; and the Remote Desktop roles all seem to be functioning properly - RemoteApp works, as does standard RDP. Let me know if there are any further details I can provide; I will be checking this frequently throughout the day.

    Read the article

  • Creating a Logical network diagram

    - by user273284
    I'm a student, and I have been assigned the task of creating a logical network diagram for the following scenario. There are 2 buildings; the first is the head office and the second is the branch. The data centre is in the head office and contains a domain controller, a mail server, a file server and a web server; it provides wired and wireless access to the staff. The branch building is new and does not yet have a network. The two buildings must be connected using a VPN connection. The branch building will not have any servers, just the network devices that provide connectivity; the users in the branch building will connect to the head office over the VPN. I created a diagram based on this scenario, but my teacher rejected it, saying that it does not follow the Cisco hierarchical model and that the servers were not placed correctly in the diagram. I just wanted some help with this so that I can draw my network diagram correctly. If anyone could upload a picture of how the logical diagram should look for this scenario, that would be helpful; any other resources would also be great.

    Read the article

  • Windows 7 PC freezes frequently with hard disk light constantly on

    - by Senthil
    I recently replaced a defective hard disk with a new one (the old disk failed with "A disk read error occurred. Press Ctrl + Alt + Del to restart"). I have been using the new hard disk with a Windows 7 installation for about 4 days. Now it has started freezing frequently - sometimes every 2 minutes, and sometimes every 10 seconds. There is a lot of software installed; I am a developer, and my PC is full of IDEs, database servers, web servers, developer tools, testing tools, all the browsers, etc. My Windows installation is up to date as of now - I have installed ALL updates, including optional drivers - and all my installed software is up to date too. I scanned the computer using Microsoft Security Essentials and found nothing malicious. I ran chkdsk /r and found no problems. I ran a memory diagnostic and found no problems. When I boot into safe mode it doesn't freeze, and I am able to use it normally for longer periods of time. What other steps can I take to locate the problem?

    Read the article
