Search Results

Search found 22716 results on 909 pages for 'network architecture'.


  • SBS 2008 SharePoint database: what is a good memory limit?

    - by ldelgado
    I manage a small network running on Small Business Server 2008. Lately, the SharePoint embedded database has been getting out of control with its memory usage: I've got a total of 16 GB of RAM on this server, and the SharePoint database sometimes uses almost 8 GB of it. This never happened before; it started after I installed Backup Exec 2010, and it happens after a backup is performed, so I suspect a memory leak is involved. I am working on that issue, but this question isn't about that. I would like to limit the amount of memory the embedded database uses, and I know how to do it. My question is: what would be the ideal amount of memory to allocate to SharePoint? There are only 4 users on my network. One of them uses two computers, but not at the same time. They use SharePoint for a company calendar, and sometimes they share files that way as well. Let me know if you need to know anything else. Thanks.
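
    For anyone else doing this: on SBS 2008 the SharePoint embedded database is the Windows Internal Database instance, reachable over its named pipe, and the cap is set with sp_configure. A minimal sketch; the 2048 MB figure is an assumed example, not a recommendation:

        sqlcmd -S \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query -E
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        GO
        EXEC sp_configure 'max server memory', 2048;  -- cap in MB (assumed value)
        RECONFIGURE;
        GO

    With four light users, a cap somewhere in the 1-2 GB range seems plausible, but that is a guess to be validated against actual usage rather than a measured recommendation.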


  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using SQL Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type BACKUPIO. Of course it seems like this is an I/O subsystem limitation, but I'm skeptical: perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7 MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit Ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not, but it doesn't say (or I don't understand) exactly what resource is missing: http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all: http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 ("Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance.") Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
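
    For anyone measuring this: cumulative wait times per type are exposed through the sys.dm_os_wait_stats DMV, so a quick sketch to compare BACKUPIO against the other backup-related waits would be:

        SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER', 'BACKUPTHREAD', 'ASYNC_IO_COMPLETION')
        ORDER BY wait_time_ms DESC;

    If BACKUPBUFFER dominates alongside BACKUPIO, the backup is waiting on buffers to fill or drain rather than on the disks themselves, which fits the "not specifically a wait on I/O" description in the docs.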


  • How to set up bindings for a development IIS 7.5 server with lots of sites

    - by Antonio Bakula
    I am a programmer in a small ASP.NET shop with very little experience in server administration, and I have to set up IIS 7.5 to host a lot of sites on a newly installed Windows Server 2008 R2 machine. These sites are test "clones" of the sites on the "real" web server, and they should be accessible only on the local network (domain). Developers add new sites for our new customers, project managers use this server to check progress and test new sites and features, and QA people need access to test before we copy a site to the "real" web server. Developers have access to the IIS console; in fact, they can RDP to the test server with their developer domain credentials and permissions, and they are also local admins on that machine (tester). On our previous server I used different port numbers for each site. That worked, but I don't like that solution; I would prefer to use subdomains. But here are the problems:

    Manually adding DNS records is not an option, because we do not want developers to have to administer the domain DNS server, and currently this has to be done with domain administrator credentials. Is there some way to add DNS records automatically?

    I tried adding a wildcard DNS record for subdomains of the test server (*.tester), and that seemed to work for a while, but the change caused some bad problems on our domain network, and the admin forbade me to mess with DNS. He said I have to add a DNS record for every subdomain manually and cannot use wildcards, and there is nothing I can do about it, mainly for "political" reasons :( Obviously our admin is fairly uncooperative (outsourced from a different organization), and I can't do anything about that either.

    Can I add another DNS server on that machine? What must be set up on the client machines to "tell" them to use both the domain DNS server and the tester DNS server?

    So please, I need some advice. What should I do? Are different port numbers the only option left? Thanks!
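
    On the "add records automatically" question: if the DNS admin ever grants a service account rights on the zone, records can be scripted with dnscmd rather than clicked in manually. A sketch, where the server name dc01 and zone example.local are hypothetical placeholders:

        dnscmd dc01 /RecordAdd example.local newsite.tester A 192.168.10.25

    A site-creation script could run this as its last step, so developers never need DNS console access themselves.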


  • How intrusive is using a VPN?

    - by Slade
    My company lets us work from home sometimes using a VPN (during weather emergencies and such). When logging in, a big window comes up saying the network is private and for employees only, and that there's no right to privacy while using the VPN. It makes sense that they don't want people poking around their network, but I wonder if the company can use the connection to look around my computer while I'm connected. I'm not entirely computer-illiterate, but I'm not a networks person at all, so the technical documents I've found don't help me. Is that possible, and if so, to what degree?

    UPDATE: Thanks, Mark. The funneling thing is what I was really asking about. Mostly I was worried that I would already have some IM conversation open, or would log into eBay forgetting that the VPN was open, and that my company's IT people would see it or log my eBay password. Thanks again.

    ANOTHER UPDATE: What if my son wants to play online poker or Warcraft etc. while I have the VPN on for work? Can my company think I'm the one playing if I am not typing often?
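
    One thing you can check yourself is whether the VPN is a "full tunnel" (everything, including IM and eBay, rides through the company network) or a "split tunnel" (only corporate destinations do). On Windows, the default route while connected tells the story; a small sketch:

        route print 0.0.0.0

    If the 0.0.0.0 entry lists the VPN adapter as its interface, all traffic is funneled through work while connected; if it still points at your home router, only corporate subnets are.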


  • Nginx and Wordpress side-by-side with static directory alias?

    - by user117161
    I'm an Nginx novice, but I have it set up with WordPress Multisite (subdirectories) and php-fpm, and it's working great as is. This lets me set up WordPress sites off the web root:

        domain.com/site1 - a WordPress network single site, which renders as expected
        domain.com/site2 - ditto, etc.

    Concurrently, I can easily create static files in the web root that don't conflict or interact with WordPress, and they are also rendered normally:

        domain.com/hello.html - rendered normally
        domain.com/hello.php - rendered normally, including PHP processing
        domain.com/static/hello.php - rendered normally (as long as "static" isn't a WP single-site name)

    What I'd like to do, and this is where I'm out of my depth with nginx.conf, is create a root directory domain.com/static, put static sites in there (domain.com/static/site3, domain.com/static/site4), and have Nginx check each incoming request against that folder before handing off to WordPress. So when a request comes in for domain.com/site3, Nginx first checks whether domain.com/static/site3 exists; if so, it serves that content while maintaining the root URI (domain.com/site3 is served with the content of domain.com/static/site3). If not, it lets WordPress check whether /site3 is a WordPress single network site, as it does now, and the process continues normally.

    In nginx.conf, in the server section, I start with this try_files rule:

        location / { try_files $uri $uri/ /index.php?q=$uri&$args; }

    I then include a bunch of WordPress-specific rules, as identified at http://codex.wordpress.org/Nginx under the subdirectory section. I can see that rewrite rules might take care of it easily, but in my experimentation I've only achieved a bunch of looping (/static/static/static, etc.) or managed to bypass WordPress when the looping stopped. Sorry if this is a very long-winded way of asking a simple question, but I'm definitely learning some of this stuff for the first time. Thanks!
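
    One way to express the "check /static first, then fall back to WordPress" logic without rewrite rules (and hence without rewrite loops) is to put the /static prefix at the front of the try_files list. A minimal sketch, assuming the static sites live in a directory literally named static under the web root:

        location / {
            # look under /static first, then for a real file/dir,
            # then hand off to WordPress's front controller
            try_files /static$uri /static$uri/index.html $uri $uri/ /index.php?q=$uri&$args;
        }

    Because try_files only changes the internal path, the browser keeps seeing domain.com/site3 while the bytes come from /static/site3.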


  • Routing WIFI and LAN for specific traffic

    - by jakebird451
    I have two network devices on my MacBook Pro:

        WIFI (en1): used for general traffic; gets an IP of 192.168.19.* via DHCP
        LAN (en0): used for specific traffic; has the static IP 192.168.2.10. It does not connect to a router, only to a switch for a direct connection.

    I have 4 IP addresses I need to access on the LAN:

        192.168.2.1
        192.168.2.21
        192.168.2.20
        192.168.2.30

    The rest of the traffic needs to go over WIFI. I have tried setting up a routing table for the specific IP addresses, but I only managed to mess up my network. I do not venture out into the world of networking too often, but this was the latest command I tried:

        sudo route add -host 192.168.2.30 -interface en0

    This command killed my ability to use ping; it told me that ping could not allocate memory (is that even possible?). It also killed my wifi access. Logging out and back in fixed the issue. I don't need the solution to be permanent, so I am fine with temporary routing.
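
    For reference, host routes for all four addresses would look like the sketch below (the same route syntax the question uses, repeated per host; these do not persist across reboots):

        sudo route -n add -host 192.168.2.1  -interface en0
        sudo route -n add -host 192.168.2.20 -interface en0
        sudo route -n add -host 192.168.2.21 -interface en0
        sudo route -n add -host 192.168.2.30 -interface en0

    Since en0 has no router behind it, it is also worth confirming in System Preferences that the LAN interface has no default gateway configured; a stray default route on en0 is a common reason wifi traffic dies after a change like this.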


  • Onboard Ethernet suddenly stopped connecting to router

    - by AfterschoolHobbist
    Hey guys, yesterday I suddenly had a problem with my Ethernet connection to my router. My computer is a Pentium 4 running Windows XP SP3, and it was working fine earlier in the day; the problem started last night. The computer is unable to ping the router or any website, and gets no internet connection. Since it was connected through a hub, I connected it directly to the router, and the same problem occurred: unable to ping (or connect to) the router, and still no internet access. I then connected it directly to the modem, and the problem persisted. For each connection I tried both this computer and my laptop: the laptop was able to connect, while the desktop was not. I wondered if it was an OS issue and ran a live Ubuntu CD to see if Ubuntu could connect to the internet, but the issue persisted there too. I then set my router's DHCP lease time to 1 hour and waited. After 1 hour the lease for my computer was removed; I hoped this would fix things, but it didn't, and something strange is going on. The desktop is still unable to ping the router or connect to the internet, but the router and desktop can still contact each other well enough to hand out a lease: the router records a lease for the desktop, and when I run ipconfig, the desktop shows it has been given a local IP address. I have concluded that this is a hardware issue and the only fix is to buy a new network adapter, but I am wondering if anyone has a solution, can explain why this happened, why my MAC address is 01-23-45-67-89-ab, and whether there is any way to fix it without buying a new network card? Thanks in advance.
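
    That MAC address is a strong clue: 01-23-45-67-89-ab looks like a placeholder the driver falls back to when the NIC's EEPROM no longer returns its burned-in address, and the leading 01 has the multicast bit set, so switches and routers will not treat it as a valid unicast source. That fits "DHCP half-works, everything else fails". If the card is otherwise healthy, overriding the address in software may be worth a try; a sketch from the Ubuntu live CD side, using a locally administered address made up for this example:

        sudo ip link set dev eth0 down
        # 02:... sets the locally-administered bit; the rest is arbitrary
        sudo ip link set dev eth0 address 02:11:22:33:44:55
        sudo ip link set dev eth0 up

    Many Windows NIC drivers expose the same override as a "Network Address" property on the adapter's Advanced tab, which would make the fix survive in XP without new hardware.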


  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Windows Server 2008 R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying over some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, which refuses to continue when the checksums don't match). I've tried copying directly from the BD-ROM (attaching the guest's OS drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.
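
    One diagnostic that can narrow this down is comparing a good copy and a bad copy byte-by-byte and looking at where the differences fall; a sketch using the Ubuntu guest (cmp and md5sum are standard tools; the file names are placeholders):

        md5sum original.iso copied.iso          # confirm they differ
        cmp -l original.iso copied.iso | head   # list the first differing byte offsets

    Differences clustered at regular power-of-two offsets tend to implicate a block layer (RAID stripe size, caching), while scattered single-bit flips point more toward bad RAM; either pattern is more informative than "checksum mismatch" alone.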


  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access. Its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet; it is a virtual server blade setup. It isn't the network connections slowing things down, because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on an NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything, so it is reading every single disk block needed by mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options:

        rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52

    How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
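
    NFS close-to-open consistency is one plausible culprit here: every open() revalidates the file against the server and can invalidate cached pages. Relaxing it with the nocto mount option and a longer attribute cache is worth an experiment; a sketch of a wildcard autofs map entry with those options added (the server and export path are placeholders, and nocto trades away coherency if other clients write the same files):

        # /etc/auto.home style wildcard entry (one line); layout assumed
        *  -rw,hard,proto=tcp,rsize=131072,wsize=131072,nocto,actimeo=600  nfsserver:/export/home/&

    If that helps, it confirms the re-reads are revalidation traffic rather than a raw bandwidth problem.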


  • One workstation gets slow access to the server, but others are fast

    - by Mike Hanson
    I've just set up a machine with Windows Server 2008. It hosts various services: IIS, POP3, SMTP, music for Squeezeboxes, VNC. All was working well for the first week or so. One day I needed to create a mapped drive on the server so it could access files on my workstation. Windows indicated that Network Discovery was needed, so I turned it on with the "Home/Office" option (rather than "Public"). This may be coincidence, but since that time I've been having trouble accessing various services from my main workstation (running Windows 7/64):

        POP3 continued working correctly, but SMTP was delayed or failed entirely (telnet took 20 seconds to connect, and Outlook would never send messages).
        VNC failed entirely. I reinstalled it on the server, and now it works but feels sluggish.
        The music web server was extremely delayed and usually failed. I tried reinstalling, and now it takes about 30 seconds to show the page name on the browser tab, and another 30 seconds to display any page contents.

    Other machines on the local network seem fine, as do machines connected via the Internet. I don't believe I changed anything on my own machine that would cause this. I considered the possibility that my anti-virus was involved, so I uninstalled AVG (commercial version), but that didn't help. I installed Norton 360 after that; it didn't complain of viruses on my machine, and the delays remained. Because only my machine is affected, I'm tempted to blame it, except that reinstalling software on the server improved the situation, so there is almost certainly something going on with the server too. The firewall has all the necessary ports open, and it works fine for the other workstations (including external machines connected via the Internet), which indicates that it should be OK. Any ideas?


  • SQL Server 2008 service account error

    - by TheDude
    I installed SQL Server Enterprise but can't get it to work. It is a standalone install on a laptop, for development purposes; no network is involved and there are no other users. The OS is Windows 7. I keep receiving event ID 7000, which means that access is denied for the user (the user was Network Service). After reading up on it, I got the idea that a user account should be created with minimal privileges, so off I went and added a user, SQLservices. In SQL Server Configuration Manager I right-clicked SQL Server (MSSQLSERVER), and in the properties I added my new user. Well, here's mister event ID 7000 again. I don't get what I am doing wrong. Also, this new user ends up on my startup screen. I don't think I want that... I mean, it would be weird to have x number of users crowding up my startup screen just because I created them for my Windows services. The error I get when I add the user in SQL Server Configuration Manager is as follows:

        Permission Denied. [0x80070005]

    Help!
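
    On the side issue of the service account appearing at logon: a commonly cited workaround is hiding the account from the Welcome screen with a registry entry. A sketch, run from an elevated prompt (the value name matches the account name; treat this as an assumption to verify on your build):

        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v SQLservices /t REG_DWORD /d 0 /f

    The account still exists and can still run services; it just stops being offered as an interactive login.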


  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID 0 or RAID 10) to improve persistence. However, EBS I/O is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has additional overhead and complexity (two servers). What I was imagining is some form of replicated file system, such that I could have:

        a filesystem on top of a RAID 0 of ephemeral volumes, to maximise performance
        all changes from the above immediately replicated to another RAID 1 volume backed by multiple EBS volumes, to ensure no data loss

    The advantages of this would be:

        best possible I/O performance for the DB server, with no network delay in I/O
        decreased I/O on EBS volumes (as all read I/O will be done on the ephemeral volumes), so decreased cost
        good data security, as it's backed onto redundant EBS volumes

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, that will do this? The distributed file systems (e.g. GlusterFS, DRBD, etc.) seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough that this whole idea is redundant)? Is there some flaw in the plan?
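
    One single-host way to get "read from the fast disk, mirror to the safe disk" semantics is an md RAID 1 with the EBS leg marked write-mostly (so reads avoid it) and write-behind (so writes to it may lag). A sketch, with device names assumed purely for illustration:

        # /dev/xvdb = ephemeral RAID 0 device, /dev/xvdf = EBS-backed device (assumed names)
        mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
              /dev/xvdb --write-mostly --write-behind=256 /dev/xvdf
        mkfs.ext4 /dev/md0

    Write-behind means acknowledged writes may not yet be on EBS if the instance dies, so it trades a small durability window for speed; drop --write-behind and the mirror becomes synchronous, but then every write waits on EBS.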


  • Elevating UAC via .bat file?

    - by jslaker
    Pretty straightforward one that I'm having trouble finding an answer to. Serverfault previously helped me find a way to automate Windows updates without using WSUS. It's working fantastically, but to run it over the network you have to first mount a shared drive. That's pretty simple on XP, since you just mount the drive and run the updater. On Vista and Windows 7, though, this all has to be done with elevated privileges to work correctly: the UAC account can't see network drives mounted by the regular user, so to get everything working I have to mount the share via net use from an elevated shell. I'd like to automate mounting this share and launching the updater via a simple .bat file. I could probably just instruct everybody to right-click and "Run as Administrator" on the .bat file, but I'd like to keep things as simple as possible and have the .bat automatically prompt the user to elevate their privileges. Since these computers don't belong to us, I can't count on anything like PowerShell being installed, which rules out any solution along those lines; I pretty much have to rely on things that would be included in an RTM Vista install. I'm hoping I'm mostly missing something obvious here. :)
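
    A common trick that stays within an RTM Vista install is having the batch file relaunch itself through WScript's ShellExecute with the "runas" verb, which triggers the UAC prompt. A sketch; the share path and updater name are placeholders:

        @echo off
        rem Already elevated? "net session" fails with a non-zero exit code if not.
        net session >nul 2>&1
        if %errorlevel% neq 0 (
            rem Relaunch this script elevated via a throwaway VBScript.
            echo Set UAC = CreateObject^("Shell.Application"^) > "%temp%\elevate.vbs"
            echo UAC.ShellExecute "%~s0", "", "", "runas", 1 >> "%temp%\elevate.vbs"
            cscript //nologo "%temp%\elevate.vbs"
            del "%temp%\elevate.vbs"
            exit /b
        )
        net use Z: \\server\updates
        Z:\run-updates.bat

    Everything after the "if" block runs elevated, so the net use mapping and the updater see the same (admin) drive view.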


  • No outbound internet connection after restarting CentOS 6.3

    - by wnstnsmth
    After restarting a headless CentOS 6.3 machine, it lost outbound internet connectivity. I can still connect to the server via SSH (ssh root@**.126.18.56), but stuff such as ping google.com gives "google.com: unknown host", and yum list some_package gives a lot of network errors. This is what ifconfig gives:

        eth0      Link encap:Ethernet  HWaddr 00:25:90:78:2D:5D
                  inet addr:**.126.18.56  Bcast:**.126.18.255  Mask:255.255.255.0
                  inet6 addr: fe80::225:90ff:fe78:2d5d/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:75594 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:787 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:7074741 (6.7 MiB)  TX bytes:144391 (141.0 KiB)
                  Interrupt:20 Memory:f7a00000-f7a20000

        eth1      Link encap:Ethernet  HWaddr 00:25:90:78:2D:5C
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
                  Interrupt:16 Memory:f7900000-f7920000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:6 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:504 (504.0 b)  TX bytes:504 (504.0 b)

    I have absolutely no clue how to debug this, and I find it very strange since I can still connect via SSH.

    EDIT: Weirdly, /etc/resolv.conf does not contain any entries, or none that I can make sense of:

        # Generated by NetworkManager
        search sui-inter.net
        # No nameservers found; try putting DNS servers into your
        # ifcfg files in /etc/sysconfig/network-scripts like so:
        #
        # DNS1=xxx.xxx.xxx.xxx
        # DNS2=xxx.xxx.xxx.xxx
        # DOMAIN=lab.foo.com bar.foo.com

    So is it possible that rebooting the server erased that file? It worked before, at least! And how do I solve this? By the way, pinging an IP address works.
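
    Since pinging an IP address works, this looks like pure name resolution rather than lost connectivity, and the comment block in resolv.conf points at the fix: the interface config never specifies DNS servers, so nothing regenerates the nameserver lines after a reboot. A sketch of the change (the nameserver addresses here are examples; use your provider's):

        # /etc/sysconfig/network-scripts/ifcfg-eth0 -- add DNS entries
        DNS1=8.8.8.8
        DNS2=8.8.4.4

        # then restart networking so resolv.conf is regenerated
        service network restart

    Alternatively, adding "nameserver" lines directly to /etc/resolv.conf works immediately, but NetworkManager may overwrite them on the next restart, which is presumably what happened here.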


  • Samba access works with IP address only

    - by Sebastian Rittau
    I added a Debian Etch host (hostname: webserver, IP address: 192.168.101.2) running Samba to a Windows network with a Windows 2003 PDC (IP address 192.168.101.3). The Samba server exports a public guest share called "Intranet". The server shows up fine in the network, but trying to click on it produces an error dialog stating I don't have the necessary permissions, and entering \\webserver manually or using \\webserver\intranet states that the path does not exist. Interestingly, accessing the share by IP address (\\192.168.101.2 or \\192.168.101.2\intranet) works fine. DNS is configured correctly, and "smbclient //webserver/intranet" on another Linux client works fine. One complicating issue is that the webserver is only a VMware virtual machine running on the PDC server. Here is our smb.conf:

        [global]
        workgroup = Foobar
        server string = Webserver
        wins support = yes             ; commenting out these
        wins server = 192.168.101.3    ; two lines has no effect
        dns proxy = no
        guest account = nobody
        [... snipped some unrelated bits, like logging ...]
        security = share
        [... snipped some password-related things ...]
        domain master = no

        [intranet]
        comment = Intranet
        path = /srv/webserver/contents
        browseable = yes
        guest ok = yes
        guest only = yes
        read only = yes
        create mask = 0775
        directory mask = 0775
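
    Because the share works by IP but not by name, the failing step is NetBIOS name resolution rather than Samba itself; nmblookup can test broadcast and WINS resolution separately. A sketch:

        nmblookup webserver                      # broadcast resolution
        nmblookup -U 192.168.101.3 -R webserver  # ask the WINS server explicitly

    Worth noting as well: the Samba documentation warns against setting both "wins support = yes" and "wins server = ..." at the same time; a host should act as either a WINS server or a WINS client, not both, so trying the config with only one of those two lines enabled (rather than commenting out both) may behave differently.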


  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly: I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here?

    EDIT: I conducted some experiments.

    Write performance on the laptop: the laptop has an xfs filesystem with full disk encryption. It uses aes-cbc-essiv:sha256 cipher mode with a 256-bit key length. Disk write performance is 58.8 MB/s.

        iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
        1073741824 Bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s

    Read performance on the workstation: the files I copied are on a software RAID-5 over 5 HDDs. On top of the RAID is LVM. The volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU with the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (cache was cold).

        iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
        10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s

    Network performance: I ran iperf between the two clients. Network performance is 939 Mbit/s.

        iblue@raven $ iperf -c 94.135.XXX
        ------------------------------------------------------------
        Client connecting to 94.135.XXX, TCP port 5001
        TCP window size: 23.2 KByte (default)
        ------------------------------------------------------------
        [  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  1.09 GBytes  939 Mbits/sec
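
    One experiment that often identifies the remaining bottleneck: rsync normally tunnels through ssh, so the ssh cipher plus rsync's own checksumming cost CPU on both ends, on top of dm-crypt. Comparing an ssh transfer against a plain rsync-daemon transfer isolates the encryption cost; a sketch (host and module names are placeholders, and the daemon path assumes one is configured on the workstation):

        # over ssh (the default transport)
        rsync -av --progress bigfile.img user@workstation:/tmp/

        # over the unencrypted rsync daemon
        rsync -av --progress bigfile.img rsync://workstation/tmp-module/

    If the daemon path is much faster, the laptop's CPU (which, unlike the FX-8150, may lack AES-NI) is spending its time in ssh encryption plus disk encryption, and ~22 MB/s is a plausible ceiling for that stack.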


  • What are these weird IP address connections in Resource Monitor?

    - by bill
    I decided to check out Resource Monitor (on the Performance tab in Task Manager, Windows 7), and I noticed in the Network section that the 'System' image name kept making a bunch (~5 at a time) of connections to random IP addresses, showing anywhere from 1-500 bytes/sec sent. They would stay connected for 1-2 minutes. All web browsers were closed. So, the first thing I did was run a trace from network-tools.com on some of these IP addresses: 8 of the 10 were outside the US and did not resolve to any host name. Of the 10 IP addresses I traced, 2 were in the US, 4 showed origins in China, and one each in Algeria, Russia, Pakistan, and Korea. (!) The next thing I did was turn off my wireless card, watch the connections disappear, then turn the card back on; within 30 seconds more random connections were created by System, with different IP addresses from the first time. Next I opened Task Manager, chose Show Processes From All Users, and killed just about everything that wasn't (what appeared to be) a Windows process. I turned on wi-fi, and again within 30 seconds random IP addresses connected for ~1 min at a time, new ones coming and going. I occasionally use BitTorrent on this machine, but there was definitely no process that seemed related to it running after I went through Task Manager, and the client wasn't open to begin with. So, any ideas on what these connections might be for? I have been using Ad-Aware Free and AVG Free on this computer for a while now, always up to date.
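
    One way to tie each connection to the code behind it is netstat's executable listing, which needs an elevated command prompt; a sketch:

        rem -a all connections, -b owning executable, -n numeric, -o owning PID
        netstat -abno | more

    Connections attributed to "System" itself (PID 4) rather than a user process are sometimes leftover BitTorrent DHT peers still trying to reach a port the client once announced, which would fit the symptoms here, though that is only a guess until the netstat output confirms it.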


  • Throughput, and why do ISPs sell more bandwidth than can be used?

    - by jonescb
    I hope the question makes sense as worded. :) I've been wondering: maximum theoretical TCP throughput is measured as RWIN/RTT (window size / round trip time); see Source 1 and Source 2. So if a major city only 100 miles away gives me a ping of 50 ms, and I have the default 64 KB TCP window size, then my maximum throughput will be only about 10 Mbit/s. Everything further away would give me a higher ping and therefore lower throughput. Is there any reason to buy something like FiOS with a 50 Mbit/s or greater connection? Will you ever be able to reach that kind of speed? I know you can increase the TCP window size to increase throughput, but it has to be done at both ends, which is a deal breaker because you can't control the server. I'm assuming other network protocols like UDP aren't as affected by latency as TCP is, but how much of overall network traffic is non-TCP versus TCP? Am I just misguided about how throughput works? If the above is correct, why should a consumer like me buy far more bandwidth than can realistically be used? Maybe the only reason is downloading multiple things at once, or one thing from multiple servers/peers?
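
    For concreteness, the window/RTT ceiling works out as follows with the numbers in the question:

        throughput <= RWIN / RTT
                   =  65,536 bytes / 0.050 s
                   =  1,310,720 bytes/s  ~  1.25 MiB/s  ~  10.5 Mbit/s

    The main caveat is that the 64 KB default is largely historical: stacks with window scaling and receive-window autotuning (Windows Vista and later, modern Linux) grow the effective window far past 64 KB without manual tuning at either end, which, along with parallel connections, is largely why a 50 Mbit/s line is still usable in practice.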


  • How best to handle end-user notification in the event of system failure, including email?

    - by BrianLy
    I've been asked to research ways of handling end-user notifications when systems such as email are experiencing problems. Perhaps an example will make this a little clearer. We have a number of sites in different countries. Recently email was impacted at one of the sites, but it could just as well have been a complete network outage. Information was provided by phone to local IT managers at the site, but onward communication was slower than some would have liked. It seems like almost everyone at the site has a personal mobile phone that could receive text messages, and perhaps access a remote website with postings on the situation. However, managing and supporting a system to text people on these relatively infrequent occasions would be very costly to do internally. What are other people doing to handle situations like this? Some things I've thought of include:

        A database of phone numbers to text. Seems costly and not very easy to maintain for an already stretched IT group. Is there an external service that would let you do this?
        Sending a voicemail message to all phones on site.
        Maintaining an external website. This would not work in all situations (network failure), and there is a limit on the amount of info that can be posted externally; a site outage could be sensitive information in some situations. How could the site be password protected? Maybe OpenID/Facebook Connect would work.
        Using a site like Yammer.com, which is publicly accessible but only by people with a company email address. Anyone using this for IT outage notifications?

    To me it looks like there is no clear answer, and there are solutions for some subsets of users; to be comprehensive, a number of solutions would need to be combined. Any additional thoughts or recommendations? What worked or didn't work for your organization?


  • Routing between different subnets with just one layer 3 switch

    - by GustavoFSx
    Our current network looks like this:

        Location 1: 2 layer 2 switches | subnet 192.168.1.0/24 | firewall for our VPN
        Location 2: 1 layer 2 switch   | subnet 192.168.3.0/24 | firewall for our VPN
        Location 3: 1 layer 2 switch   | subnet 192.168.5.0/24 | firewall for our VPN

    We just got a direct fiber connection between locations 1 and 2, and we also got a new HP V1910-24G layer 3 switch. I tried to follow the instructions on this site, but I can't get it to work. I think our network should look like this:

        Location 1: HP switch, fiber to L2 | subnet 192.168.1.0/24 | firewall for our VPN
        Location 2: 1 layer 2 switch       | subnet 192.168.3.0/24 | fiber to L1
        Location 3: 1 layer 2 switch       | subnet 192.168.5.0/24 | firewall for our VPN

    So, how can I get routing working at location 2? Its old gateway was a firewall device at 192.168.3.1. I'm thinking of creating a VLAN interface at 192.168.3.1 on the switch for location 2, but how do I handle that on the HP switch that has the direct fiber connection to location 2's switch? Please help; I'm not very good with networking.
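
    Conceptually, the V1910 needs one VLAN per subnet, an IP interface in each VLAN, and the fiber port placed in location 2's VLAN. The V1910 is managed through its web UI, but in Comware-style syntax the equivalent would look roughly like this sketch (VLAN numbers, the fiber port, and the 192.168.1.254 address are assumptions):

        vlan 3                                   # VLAN for location 2's subnet
        interface Vlan-interface1
         ip address 192.168.1.254 255.255.255.0  # gateway for 192.168.1.0/24
        interface Vlan-interface3
         ip address 192.168.3.1 255.255.255.0    # takes over the old firewall's gateway address
        interface GigabitEthernet1/0/24          # fiber uplink toward location 2
         port access vlan 3

    Hosts at location 2 then keep 192.168.3.1 as their gateway, and the VPN firewall at location 1 needs a route back to 192.168.3.0/24 via the switch so VPN traffic can return.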


  • Cannot connect remotely to MySQL Server on Ubuntu 10.10

    - by BobFranz
    OK, I have searched Google for two days trying to get this to work. Here are the steps I have taken so far:

        Clean install of Ubuntu 10.10
        Install MySQL 5.1 as well as the admin tools
        Comment out the bind address in the config file
        Create a new database
        Create a new user as username@% to allow remote connections
        Grant this user all access to the new database EXCEPT the grant option

    Login on the server works with this new user and database on localhost, and also against the server's internal network IP. Login from a remote computer works with this new user and database via the internal network IP. Login does not work when using this username and database against the external IP address, from either the server or the remote computer. I have port forwarding enabled for this port, and it is viewable from outside, as confirmed by canyouseeme.org. I ran nmap against the internal IP and got the following:

        nmap -PN -p 3306 192.168.1.73

        Starting Nmap 5.21 ( http://nmap.org ) at 2011-02-19 13:41 PST
        Nmap scan report for computername-System-Name (192.168.1.73)
        Host is up (0.00064s latency).
        PORT     STATE SERVICE
        3306/tcp open  mysql
        Nmap done: 1 IP address (1 host up) scanned in 0.23 seconds

    I ran nmap against the external IP and got the following (I have hidden the IP for obvious reasons):

        nmap -PN -p 3306 xxx.xxx.xx.xxx

        Starting Nmap 5.21 ( http://nmap.org ) at 2011-02-19 13:42 PST
        Nmap scan report for HOSTNAME (xxx.xxx.xx.xxx)
        Host is up (0.00056s latency).
        PORT     STATE  SERVICE
        3306/tcp closed mysql
        Nmap done: 1 IP address (1 host up) scanned in 0.21 seconds

    I am completely stuck here and need some help. I have tried everything under the moon and still cannot connect via the external IP address. Any help is greatly appreciated, and if I need to do anything to help find the problem, let me know and I will post the results here.
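
    Worth noting: the sub-millisecond latency (0.00056s) on the "external" scan suggests the probe never actually left the LAN, which points at the router's NAT loopback (hairpinning) rather than MySQL; many consumer routers will not forward a connection from inside the LAN back to their own public address. Testing from a host genuinely outside the network separates the two cases; a sketch (the external host and IP are placeholders):

        # from a machine outside your LAN, e.g. a remote shell
        mysql -h xxx.xxx.xx.xxx -u username -p

        # or just probe the port
        nmap -PN -p 3306 xxx.xxx.xx.xxx

    If the outside test works, the server and forwarding are fine and only hairpin access is broken, which is a router limitation rather than a MySQL misconfiguration.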


  • Wifi antenna extension with F-connector/RG-6(RG-59) cable?

    - by rjz2000
    In an older house, the wire mesh in the walls surrounding the furnace behaves like a Faraday cage and blocks wifi signals. It is also difficult to lay new cable; however, there is television cable to multiple locations, left over from a roof-installed television antenna. It would be relatively trivial to install the wifi router at the central distribution point and then have the antenna broadcasting/receiving the signal plugged in at each of the old television outlets. I assume it would not be too difficult to find an adapter for SMA <- F-type connectors. The cable is actually RG-59 rather than RG-6, but I assume it still has relatively good RF isolation along its length, which is no more than a couple hundred feet in any direction. Does anyone know of a problem with this idea? Will a router get confused if there is too little interference between the two antennas? Is that length of cable (~100 ft) too long for the signal a router broadcasts? I have seen that it is also possible to use old ~$30 FiOS cable modems available on eBay to extend a network over television cable. However, that seems like a less elegant solution, and it might interfere with the UPnP and DLNA services I'd like to have work on a single network. Thanks if anyone has answers or suggestions before I try this project!
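
    A rough link-budget estimate is the main thing to check before buying adapters: coax loss rises steeply with frequency, and RG-59 was never specified for 2.4 GHz. The figures below are ballpark assumptions, not datasheet values:

        RG-59 attenuation at 2.4 GHz:   ~20 dB per 100 ft  (assumed; datasheets stop near 1 GHz)
        100 ft run:                     ~20 dB loss
        50-ohm gear on 75-ohm cable:    ~0.2 dB mismatch loss per transition, plus reflections
        Typical router Tx power:        ~17 dBm (50 mW)

    Losing ~20 dB is a factor of 100 in power, so on these assumptions roughly 0.5 mW would reach the far antenna. That may still beat a dead spot behind the Faraday cage, but full-rate coverage is unlikely, and the wifi gear being 50-ohm while RG-59 is 75-ohm adds reflections on top of the raw loss.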


  • Why does the Mac OS X firewall dialog repeatedly pop up and disappear by itself (without letting me click anything)?

    - by Chris W. Rea
    From time to time, I'll be on my MacBook using a program that accesses the network (whether Firefox, or Sony's Reader Library; really, it seems like it could happen with any program that accesses the network), and for no reason I can discern so far (it happens intermittently), the OS X firewall dialog pops up to ask whether the application should be allowed to accept incoming network connections. Except it doesn't actually let me click anything before it disappears! That is: the dialog pops up, then goes away by itself a second later, then pops up again, then goes away by itself a second later, etc. It happens a few times before stopping. It wouldn't be so maddening to be interrupted if I could just be allowed to click "Allow" and make the darn thing go away for good. In Security preferences I have the firewall turned "On", and I would like to keep it that way. Has anybody seen this problem, found the source, and figured out a solution or workaround? Thank you.
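
    One workaround reported for this symptom is adding the offending application to the firewall's allow list from the command line, since the prompt tends to recur when the app's code signature can't be verified at launch. A sketch using the built-in socketfilterfw tool; the application path is an example, and the flag names vary across OS X versions, so treat these as assumptions to check against your release:

        # newer OS X releases:
        sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /Applications/Firefox.app
        # older releases used -t ("trust") instead:
        sudo /usr/libexec/ApplicationFirewall/socketfilterfw -t /Applications/Firefox.app

    Once the app is explicitly trusted, the firewall should stop re-asking on every launch.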


  • Can only SSH when not using wifi

    - by AChrapko
    So I have 3 machines: a Windows 7 desktop that is always wired to my router, an OS X laptop, and a Raspberry Pi running Debian Linux. My router is a Linksys E1000 wireless-N. My goal is to be able to SSH into the Pi from any machine while it is connected via wifi. My problem is that when trying to SSH from either the Win7 or OS X machine to the Pi, it either times out or gives an error: "ssh: connect to host 192.168.1.### port 22: No route to host". The only times I have managed to connect to the Pi from any machine were when it was connected to the router via an Ethernet cable. Currently, with the Win7 desktop wired and the MacBook and Pi on wireless, tests give the following:

        win7 ping macbook: Destination host unreachable.
        macbook ping win7: Request timeout.
        win7 ping pi: Destination host unreachable.
        macbook ping pi: Request timeout.
        blah blah blah

    Plugging the MacBook into the router with an Ethernet cable, all communication between Win7 and the MacBook works: pings, SSH, FTP, SMB, etc. With no changes to the Pi, still no connections are possible to or from either of the other 2 machines.

    Note: all machines are able to connect to the internet, and to SSH to the same machine on a completely different network, whether wired or over wifi.

    Plugging the Pi in with Ethernet (and the MacBook still wired), I can SSH to the Pi from both Win7 and the MacBook, and I can SSH from the Pi to the MacBook. All machines are still able to connect to the off-network machine. Also, a little side note: I was playing Warcraft 3 with my roommates the other day, and the only time they were able to see my LAN game was when they were plugged into the router with an Ethernet cable. Once or twice one of the laptops was able to connect over wifi, but not without another computer connecting first via Ethernet. So basically, does anyone have any info as to why my router seems to completely ignore local wireless traffic?


  • Website always having DNS problems

    - by Root
    I moved my website from shared hosting to a VPS. When it was on shared hosting, all I did was update my name servers, whereas now I have my own VPS server, and I used one of my domains, sjdpublishing.com, as the primary domain for the VPS. I created the nameservers ns1.sjdpublishing.com and ns2.sjdpublishing.com, and my actual website, creativeproperty.com.au, points to ns1.sjdpublishing.com and ns2.sjdpublishing.com. I am having repeated problems with creativeproperty.com.au: a few weeks back I had a problem that was resolved by flushing DNS, and later I had a similar problem that was not resolved by flushing DNS. I posted a question here, and someone advised me to go to Network Settings in Mac OS X and remove an IP, as nslookup creativeproperty.com.au in my Mac's terminal pointed to my router's IP; that fixed the problem. Now many of my clients are complaining that they are having the same trouble accessing my website. I don't know whether to flush DNS, change network settings, or whether this is some other issue. Can anyone please check whether creativeproperty.com.au and sjdpublishing.com have correct records, and also tell me the best solution for this issue?
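
    Anyone checking this can verify the delegation chain end-to-end with dig; a sketch of the queries involved:

        dig NS creativeproperty.com.au +short                # which nameservers the world sees
        dig NS sjdpublishing.com +short
        dig A ns1.sjdpublishing.com +short                   # do the nameservers themselves resolve?
        dig A creativeproperty.com.au @ns1.sjdpublishing.com # does the VPS answer authoritatively?
        dig +trace creativeproperty.com.au                   # full resolution path from the root

    If the registrar's glue records for ns1/ns2 don't match the VPS's actual IP, resolution will fail intermittently for everyone, which would match the symptoms described.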

