Search Results

Search found 12775 results on 511 pages for 'remote validation'.

Page 442/511

  • IPSec tunnelling with ISA Server 2000...

    - by Izhido
    Believe it or not, our corporate network still uses ISA Server 2000 (on a Windows Server 2003 machine) to enable and control Internet access to and from it. I was recently asked to configure that ISA Server to create a site-to-site VPN for a new branch in an office about 25 km away. The idea is to enable not only computers, but also Palm devices (WiFi-enabled, of course), to see other computers on both sites. I was also told that a simple VPN-enabled wireless AP/router (in this case, a Cisco WRV210 unit) should be enough to establish communications with the main office. To be fair, the router looks easy to configure; it was confusing at first, but further understanding of how site-to-site VPNs work cleared all doubts about it.

    Now I need to modify our ISA Server to recognize the newly installed and configured "remote" VPN site. Thing is, either my Googling skills are pathetically horrible, or there doesn't seem to be much (or any) information about how to configure ISA Server 2000 for this purpose. Lots of stuff on 2004, of course; I also think I saw something for 2006. But nothing I could find about 2000. Reading about 2004, it seems that the only way I can do site-to-site with a Cisco router (read: a non-ISA-Server machine) is through something they call an "IPSec tunnel". Fair enough. However, I can't figure out for the life of me how I could even start to find, let alone configure, such a thing. Do you happen to know how to do IPSec tunnelling on an ISA Server 2000, so I can connect to a Cisco WRV210 VPN-enabled router and build a site-to-site VPN for both networks? Or is this not possible at all? (Meaning I should change something in this configuration to make it work...)
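
    For reference, a site-to-site IPSec tunnel generally only comes up when both endpoints agree on the same phase 1 (IKE) and phase 2 (IPSec) parameters. A minimal parameter checklist, with placeholder values that would have to match whatever the WRV210 is configured with:

        Phase 1 (IKE):   authentication = pre-shared key, encryption = 3DES or AES,
                         hash = SHA-1, DH group 2, matching SA lifetimes
        Phase 2 (IPSec): ESP with matching encryption/hash
        Endpoints:       public IP of the ISA Server <-> public IP of the WRV210
        Traffic:         local subnet <-> remote branch subnet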

    Read the article

  • Host name change breaking http? Fedora

    - by Dave
    OK, so I have been messing around on my development server. It has been a while since I have had my head in Linux, and I suspect I have broken something. I have SSH running and that is working fine. I also have HTTP, and I had FTP running as well. Earlier today I decided I wanted to rename the machine, so I updated the /etc/hosts file and /etc/sysconfig/network. I also changed the server name in httpd.conf. I rebooted the machine and reconnected over SSH fine.

    Later I was messing around with the FTP service (trying to tighten up the user security), and when I tried to connect remotely to FTP, no joy: it said it cannot connect. I thought that was weird, but I had planned to remove FTP anyway since we will be using GitHub, so I removed it and moved on. Then I tried to connect to the website, but that failed too; even connecting to the IP address fails. I used lynx to connect to localhost and there was my site, so something is going on at the server level. I thought maybe something was up with iptables, but I have not changed it; I tried adding a rule for HTTP but still no joy.

    This is the system:

        Fedora release 17 (Beefy Miracle)
        Linux version 3.3.4-5.fc17.x86_64 ([email protected]) (gcc version 4.7.0 20120504 (Red Hat 4.7.0-4) (GCC)) #1 SMP Mon May 7 17:29:34 UTC 2012

    This is my iptables -L:

        Chain INPUT (policy ACCEPT)
        target     prot opt source     destination
        ACCEPT     all  --  anywhere   anywhere    state RELATED,ESTABLISHED
        ACCEPT     icmp --  anywhere   anywhere
        ACCEPT     all  --  anywhere   anywhere
        ACCEPT     tcp  --  anywhere   anywhere    state NEW tcp dpt:ssh
        REJECT     all  --  anywhere   anywhere    reject-with icmp-host-prohibited

        Chain FORWARD (policy ACCEPT)
        target     prot opt source     destination
        REJECT     all  --  anywhere   anywhere    reject-with icmp-host-prohibited

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source     destination

    Like I say, I can use SSH with no issue, but HTTP, although running, is a no-go from a remote computer. Any ideas?
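
    Note that the pasted INPUT chain only accepts NEW connections on ssh; everything else falls through to the final REJECT rule, which matches the symptoms (local lynx works via the blanket ACCEPT on the loopback interface, while remote HTTP is rejected). A minimal fix sketch, assuming the legacy iptables service is in use:

        # allow new inbound HTTP, inserted above the REJECT rule (position 5 in the list above)
        iptables -I INPUT 5 -p tcp --dport 80 -m state --state NEW -j ACCEPT
        # persist across reboots
        service iptables save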

    Read the article

  • Windows 7 SSH file server

    - by Siriss
    Hello all. I have looked at the other posts but have not quite found an answer. I have a question about Windows file sharing over SSH. I have copSSH installed and it is working for Remote Desktop connections. I have port 22 forwarded on my router, etc. I connect from a Mac or PuTTY with a command along these lines:

        ssh -l copsshusername -L 3391:localhost:3389 [external ip]

    That works fine. I would like to configure Windows 7 to give the SSH account that I use to log in access to certain shared folders. I have documents and videos and things that I would like to be able to download externally. I have done this before on Linux, and a long time ago on XP, but I cannot figure out what I am missing on Windows 7. There is a designated SSH user that copSSH uses to run the service and that I log in as. I have googled and googled and have not found a solution that does everything I need, which is why I am turning here for ideas. I hope I am explaining this correctly. Thank you very much for your help!
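
    One approach that may work with copSSH's SFTP access is to link the folders into the SSH user's home directory, so they show up over SFTP without touching Windows shares at all. A hypothetical sketch (user name and paths are placeholders, and it assumes the copSSH build's SFTP server follows NTFS symlinks), run from an elevated command prompt:

        mklink /D C:\Users\copsshusername\Documents-share C:\Users\me\Documents
        mklink /D C:\Users\copsshusername\Videos-share D:\Videos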

    Read the article

  • How do I clear out the ssh-agent entries (on Mac OS X)?

    - by cwd
    I'm running Mac OS X, and it appears that after SSHing to several machines using identity files, my ssh-agent builds up a lot of identities/keys and then sometimes offers too many to a remote machine, causing it to kick me off before connecting:

        Received disconnect from 10.12.10.16: 2: Too many authentication failures for cwd

    It's pretty obvious what's happening, and this page talks about it in more detail: SSH servers only allow you to attempt to authenticate a certain number of times. Each failed password attempt, each failed pubkey/identity that is offered, etc., takes up one of these attempts. If you have a lot of SSH keys in your agent, you may find that an SSH server kicks you out before allowing you to attempt password authentication at all. If this is the case, there are a few different workarounds.

    Rebooting clears the agent and then everything works OK again. I can also add this line to my .ssh/config file to force it to use password authentication:

        PreferredAuthentications keyboard-interactive,password

    Anyhow, I saw the note on the page I referenced talking about deleting keys from the agent, but I'm not sure if that applies on a Mac, since they appear to be cleared after a reboot anyhow. Is there a simple way to clear out all keys in the ssh-agent (the same thing that happens at reboot)?
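
    For what it's worth, the stock OpenSSH agent supports this directly; a quick sketch:

        ssh-add -l                    # list the identities currently loaded
        ssh-add -D                    # delete all identities from the agent
        ssh-add -d ~/.ssh/some_key    # or remove just one (path is a placeholder)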

    Read the article

  • Unable to log on using terminal server connection

    - by satch
    I have several W2K3 SP2 servers with admin TS enabled. I discovered this morning that I was unable to log on to some of them. I have a couple of Citrix servers in different farms, a SAP (IA64) app server and a CVS server. All of them show the same symptoms: remote connections are refused. I have been able to log on locally, the Terminal Server service is up, and there are no users (so connections are not depleted). There are no errors in the logs on most servers. One of the Citrix ones reported the following errors:

        Event ID: 50
        Source: TermDD
        Type: Error
        Description: The RDP protocol component X.224 detected an error in the protocol stream and has disconnected the client.

    and

        Event ID: 1006
        Source: TermService
        Type: Error
        Description: The terminal server received large number of incomplete connections. The system may be under attack.

    Anyway, I suppose these errors appear because the server isn't working and Citrix users try to log on massively. (I nmap'ed the server and the port seems up.) I've solved this problem by rebooting before, but with so many servers affected that seems like a crappy workaround. Any idea how to troubleshoot it properly? Thanks in advance.
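
    Before rebooting, it may be worth checking for stuck sessions from another machine; a sketch using the built-in W2K3 session tools (server name and session ID are placeholders):

        rem list sessions and their states on the affected server
        query session /server:SERVER01
        rem tear down a hung session by its ID
        reset session 2 /server:SERVER01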

    Read the article

  • Transparent proxying leaves sockets with SYN_RCVD in MacOS X 10.6 Snow Leopard (and maybe FreeBSD)

    - by apenwarr
    I'm trying to create a transparent proxy on my MacOS machine in order to port the sshuttle ssh-based transproxy VPN from Linux. I think I almost have it working, but sadly, almost is not 100%.

    The short version is this. In one window, start something that listens on port 12300:

        $ while :; do nc -l 12300; done

    Now enable proxying:

        # sysctl -w net.inet.ip.forwarding=1
        # sysctl -w net.inet.ip.fw.enable=1
        # ipfw add 1000 fwd 127.0.0.1,12300 log tcp from any to any

    And now test it out:

        $ telnet localhost 9999    # any port number will do
        # this works; type stuff and you'll see it in the nc window

        $ telnet google.com 80     # any host/port will do
        # this *doesn't* work!

    After the latter experiment, I see lines like this in netstat:

        $ netstat -tn | grep ^tcp4
        tcp4  0  0  66.249.91.104.80     192.168.1.130.61072  SYN_RCVD
        tcp4  0  0  192.168.1.130.61072  66.249.91.104.80     SYN_SENT

    The second socket belongs to my telnet program; the first is more suspicious. SYN_RCVD implies that my SYN packet was correctly captured by the firewall and taken in by the kernel, but apparently the SYN-ACK was never sent back to telnet, because it's still in SYN_SENT. On the other hand, if I kill the nc server, I get this:

        $ telnet google.com 80
        Trying 66.249.81.104...
        telnet: connect to address 66.249.81.104: Connection refused
        telnet: Unable to connect to remote host

    ...which is as expected: my proxy server isn't running, so ipfw redirects my connection to port 12300, which has nobody listening on it, i.e. connection refused. My uname says this:

        $ uname -a
        Darwin mean.local 10.2.0 Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; root:xnu-1486.2.11~1/RELEASE_I386 i386

    Does anybody see different results? (I'm especially interested in Snow Leopard vs. Leopard results, as there seem to be some internet rumours that transproxy is broken in the Snow Leopard version.) Any advice for how to fix this?
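
    One cheap diagnostic while it is failing: check whether the fwd rule is actually matching by watching its packet counters as telnet retries. A sketch:

        # ipfw show 1000    # counters should climb while the connection attempt is live
        # ipfw list         # confirm no earlier rule is stealing the traffic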

    Read the article

  • Enabled Network Discovery on Server, and now VNC and Squeezebox clients don't work

    - by Mike Hanson
    I've recently set up a Windows Server 2008 box. It's running an email server, Squeezebox server, MS SQL Server, etc. I'm doing remote maintenance with UltraVNC. I had everything working fine. Then the server needed to access a network share on another machine, and I was prompted to turn on network discovery, which I did. I chose the Home rather than the Public option. Since doing that, some things have stopped working, while others are still fine. Shared folders and the email services (ports 25 and 110) are still accessible. VNC (port 5900) and Squeezeboxes (port 9000) no longer work.

    Here's what I've tried so far:

    - Checked the network discovery settings to see if anything looked strange.
    - Checked the firewall settings; those ports appear to be open.
    - Also in the firewall settings, the entries for Private domain Network Discovery were all on, but the Domain/Public ones were off. I tried turning those on.
    - In the services, turned on Function Discovery Resource Publication and SSDP Discovery.

    Any other suggestions?
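
    If switching the network profile moved the firewall onto a different profile, the per-profile rules may now be filtering exactly those ports. A hedged sketch for re-opening them on Server 2008 (rule names are placeholders):

        netsh advfirewall firewall add rule name="UltraVNC" dir=in action=allow protocol=TCP localport=5900
        netsh advfirewall firewall add rule name="Squeezebox" dir=in action=allow protocol=TCP localport=9000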

    Read the article

  • Effective Permissions displays incorrect information

    - by Konrads
    I have a security mystery :) The Effective Permissions tab shows that a few sampled users (IT ops) have any and all rights (all boxes are ticked). The permissions show that the local Administrators group has full access, as do some business user groups, but the sampled users are not members of those groups. The local Administrators group contains some AD IT-ops-related groups, of which the sampled users, again, appear not to be members. The sampled users are not members of Domain Administrators either. I've tried tracing backwards (from permissions to user) and forwards (user to permission) and could not find anything. At this point, there are three options:

    1. I've missed something and they are members of some groups.
    2. There's another way of getting full permissions.
    3. Effective Permissions is horribly wrong.

    Is there a way to retrieve the decision logic of Effective Permissions? Any hints, tips, ideas?

    UPDATE: The winning answer is number 3 - Effective Permissions is horribly wrong. Comparing the output when run on the server logged on as admin against the output when run as a regular user from a remote computer gives different results: all boxes ticked (FULL access) versus, on the server, none. Actually testing the access, of course, denies access.
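
    One way to rule out option 1 (nested membership) is to enumerate the actual logon token on the server rather than walking groups by hand; a sketch, assuming whoami is available on this Windows version (user name is a placeholder):

        rem full token, including all nested group memberships, run as the sampled user
        whoami /groups
        rem direct domain group memberships for comparison
        net user SAMPLEDUSER /domain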

    Read the article

  • Hyper-V VSS writer not making current copies

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script - the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System Drives needs to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:

        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent

        # make sure the path already exists
        set verbose on

        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive

        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup

        # Backup is exposed as drive X: make sure your drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next script is just a robocopy, and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct.

    During the time when the Hyper-V writer becomes active, two things happen on the guest machines:

    1. The Debian/Linux machine goes to a saved state and restores when done, all fine.
    2. The Windows guests show "creating VSS snapshot sets" or something similar.

    Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I get seem to be old copies; they are missing files, users and updates that are on the actual machines. I also tried putting the machines into a saved state manually, which didn't change the outcome. I hope someone here has an idea of how to solve this.
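
    A quick sanity check before digging deeper: confirm the Hyper-V writer is actually healthy on the host right after a backup attempt. A sketch:

        rem look for "Microsoft Hyper-V VSS Writer" with State: Stable and Last error: No error
        vssadmin list writers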

    Read the article

  • Failure to connect to admin share pops up dialog

    - by Jan
    I'm having an issue with a curious error message when accessing the administrative share on a remote machine. Specifically, the client is logged in as the domain administrator on machine A and runs some code that tries to access the admin share on B (a domain member). The access is done in .NET, along these lines (though I am not sure if the method of access makes a difference):

        string path = @"\\B\admin$";
        if (Directory.Exists(path))
        {
            try
            {
                path += @"\temp\";
                if (!Directory.Exists(path))
                {
                    Directory.CreateDirectory(path);
                }
                path += "myfile_remote";
                File.Copy("myfile", path);
            }
            catch (IOException)
            {
                // fall back to the alternative transfer mechanism
            }
        }

    Now, on some machines this fails. That is not a big problem, as we have a fallback; I'd like to know why, but it is not the real issue. The problem is that running this piece of code causes a dialog box to pop up for the logged-in user on B, saying "network error trying to access \\B\admin$\temp\myfile_remote. Contact the network administrator and ask for the correct permissions". Unfortunately, it is a foreign-language Windows, so I'll spare you all a screenshot. It is skinned like a standard Windows dialog box. Why exactly is that dialog box popping up for the user, and is there anything I can do about it?

    Edit to add: B is a Windows 7 Enterprise installation. The client is not aware of any GPO policies being installed. There is AV from Trend Micro installed.
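
    The message is consistent with a system "hard error" popup raised in the interactive session on B. If that is what it is, Windows has a documented registry value that suppresses hard-error dialogs system-wide; a hedged sketch (the value is real, but whether it governs this particular dialog is an assumption):

        rem 0 = show all errors (default), 1 = suppress if no user is logged on, 2 = never show, log to event log
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Windows" /v ErrorMode /t REG_DWORD /d 2 /f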

    Read the article

  • Comprehensive solution for managing patches, event viewing, change management, inventory, etc

    - by Holocryptic
    I'm looking for a solution that incorporates most or all of the following: patch management, server event viewing/tracking, AD change management, ticketing with an internal/external KB, remote access (the ability to shadow user sessions or create new ones), imaging, and inventory. Our environment contains Windows servers and ESXi hosts (we're not completely virtual, but we're moving in that direction), plus various Cisco and Linksys switches and firewalls.

    This is a tall order, and I don't know if it can be done on a reasonable budget. I've looked and found some questions on SF that deal with parts of this:

        http://serverfault.com/questions/72015/active-directory-management-tools-for-medium-sized-forest-less-than-1000-users
        http://serverfault.com/questions/4021/are-there-any-tools-to-do-change-management-with-active-directory-group-policy
        http://serverfault.com/questions/21752/what-is-a-good-patch-update-management-server

    What I'm ideally looking for is a reasonably cheap solution that integrates these features into a central interface. We're a non-profit, so money is a limiting factor (the cheaper the better, but we have a max of $15k). What we are trying to avoid is having to deal with multiple vendors, while maintaining scalability (we're creating more sites that we'll have to manage). Is this possible, or will we have to cobble something together to make it work for us?

    Read the article

  • Terminal Server in Windows Server 2003

    - by Hemal
    I am confused about what I am doing here. At present I have a Windows Server 2003 server with SP2. I have assigned the RAS/VPN server role to it (through the Manage Your Server wizard), and in my router I set up the IP address of my RAS/VPN server as the PPTP server. Staff leave their workstations on all the time and access them from home through RDP. They first connect through the VPN, and in the Remote Desktop client they simply type their respective IP or computer name to access the office network from home. Everything works fine so far, except:

    - Staff have to leave computers on in the office at all times.
    - Speed is very slow, depending on how many staff members are on the VPN.

    I was told to install and configure Terminal Services to improve this situation. I have already added the TS role on the server, but I don't know how clients can access the TS server from home or another remote location. I would really appreciate any good links or guidance from the experts in this group. Thank you in advance for any replies!
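
    For context: once the Terminal Server role is installed (and TS CALs are licensed), remote clients connect to the server itself instead of to individual workstations, so the workstations can be switched off. A sketch, with a placeholder server name:

        rem from a home machine, after the VPN connection is up:
        mstsc /v:tsserver.example.local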

    Read the article

  • Quickly set up a Windows Server and automatically install and configure software

    - by Chris
    Yesterday I spent far too much time downloading and installing software on Windows Server 2008. I only had to set up a simple server for SQL Server 2008 Express using Microsoft's Web Platform Installer, then configure it to enable remote connections. Everything had to be attended, wasting my time. On a Linux system this would be trivial to automate, but this is Windows. I do this very rarely, but in the future I would like it to take as little time as possible.

    I could make a disc image with everything installed and configured, but is there a better way? I know nothing of advanced deployment techniques on Windows. Ideally I would like to be able to remotely re-install the OS, or have an unattended install (which I know is possible). Any tips to make the software I need easier to install and configure with minimal interaction would be helpful. I don't expect everything I asked for to be possible and easy to do. Basically, if any part of it can be done quicker, or at least without user input, that's what I'm looking for.
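
    As one data point, the SQL Server 2008 Express installer itself supports unattended runs, which can cover both the install and the remote-connections setting in a single step. A hedged sketch (the exact required parameter set varies by version and edition; these values are assumptions to adapt):

        setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE /INSTANCENAME=SQLEXPRESS ^
          /SQLSYSADMINACCOUNTS="BUILTIN\Administrators" /TCPENABLED=1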

    Read the article

  • What could cause a 101 error in WAMP under Windows 7?

    - by Brayn
    Hey, I've been using WAMP for local development for quite a while now, but lately I've been getting an Error 101 message when I browse localhost sites. It's possible this appeared after the last WAMP update, but I'm not 100% sure. If I try again and again, after several page refreshes it works, but it's really annoying! The exact error message is:

        Error 101 (net::ERR_CONNECTION_RESET): Unknown error.

    This is my configuration:

        OS: Windows 7
        Apache: 2.2.11
        PHP: 5.2.9-2
        WAMP: 2.0

    Also, the local scripts connect to a remote MySQL server; they don't use the local MySQL (I don't know if it matters, just thought I'd let you know). I've been looking into the Apache logs and found the following. It seems that the Apache child process keeps restarting and I can't figure out why:

        [Wed Oct 14 13:52:30 2009] [notice] Parent: child process exited with status 255 -- Restarting.
        [Wed Oct 14 13:52:30 2009] [notice] Apache/2.2.11 (Win32) PHP/5.2.9-2 configured -- resuming normal operations
        [Wed Oct 14 13:52:30 2009] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Oct 14 13:52:30 2009] [notice] Parent: Created child process 6784
        [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Child process is running
        [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Acquired the start mutex.
        [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Starting 64 worker threads.
        [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Starting thread to listen on port 80.
        [Wed Oct 14 13:52:32 2009] [notice] Parent: child process exited with status 255 -- Restarting.
        [Wed Oct 14 13:52:33 2009] [notice] Apache/2.2.11 (Win32) PHP/5.2.9-2 configured -- resuming normal operations
        [Wed Oct 14 13:52:33 2009] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Oct 14 13:52:33 2009] [notice] Parent: Created child process 3572
        [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Child process is running
        [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Acquired the start mutex.
        [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Starting 64 worker threads.
        [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Starting thread to listen on port 80.

    I've also checked Windows Firewall and disabled all other protection on this computer, with no improvement. Thanks!
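
    A child process exiting with status 255 on request cycles often points at a crashing Apache module (commonly a PHP extension) rather than at networking. One hedged way to bisect is to comment out non-essential extensions in php.ini and re-enable them in halves until the resets return (extension names below are placeholders):

        ; in php.ini, temporarily disable suspects, then restart Apache:
        ;extension=php_xdebug.dll
        ;extension=php_curl.dll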

    Read the article

  • Anonymous FTP upload on CentOS 5.2

    - by Craig
    I need to allow users to upload files to an FTP server anonymously. They should not be able to see any other files, or download files. It is a CentOS 5.2 server, and I have a separate partition for the upload area (mounted at /ftp). I have tried to set up vsftpd, following all the instructions and advice I could find, but when a user logs in and tries to transfer a file, it throws a "553 Could not create file." error. If I do a 'pwd' it shows the directory as "/" rather than the anon_root of "/ftp/anonymous". Any attempt to change the remote directory ends with "550 Failed to change directory.". I have a subdirectory "/ftp/anonymous/incoming" that is writable for the uploads. SELinux is in permissive mode. I am running vsftpd version 2.0.5, release 16.el5. Here is the vsftpd.conf file:

        anonymous_enable=YES
        local_enable=YES
        write_enable=YES
        local_umask=002
        anon_umask=007
        file_open_mode=0666
        anon_upload_enable=YES
        anon_mkdir_write_enable=NO
        dirmessage_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=inftpadm
        xferlog_std_format=YES
        nopriv_user=nobody
        listen=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        ftp_username=inftpadm
        anon_root=/ftp/anonymous
        anon_other_write_enable=NO
        anon_mkdir_write_enable=NO
        anon_world_readable_only=NO
        dirlist_enable=YES

    Can anyone help?
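
    For what it's worth: 'pwd' showing "/" is expected, since anonymous logins are chrooted to anon_root. With ftp_username=inftpadm, the anonymous session runs as that local user, so the upload directory must be writable by it on disk. A hedged permissions sketch:

        # writable and traversable by the FTP user, but not listable
        chown inftpadm /ftp/anonymous/incoming
        chmod 330 /ftp/anonymous/incoming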

    Read the article

  • Does MySQL have some kind of DoS protection or per-user query limit?

    - by Ghostrider
    I'm a bit at a loss. I'm running a MySQL database that's roughly 1 GB of data and indices combined, on a dedicated Linux server. The DB version is '5.0.89-community'. Configuration is controlled via cPanel; PHP actually runs elsewhere on shared hosting. IP addresses are static and don't change, and access from the remote IP address is properly configured. The website gets around 10K hits per day, with each hit generating a database query. Some of these queries are expensive (~1 sec execution time).

    All is fine and well until, at some point, the DB server starts refusing connections from the client, claiming that the specific user can't access the server from that IP. Resetting the server always fixes the problem for a day or two, and then the same thing happens. There are some other DBs on that server, some of which are occasionally hit pretty hard, though not constantly. One of the apps maintains several persistent connections, since it does a couple of updates per minute; I don't think that's related, though.

    What's driving me mad is that I can't figure out why the server would start refusing connections. There is nothing in the logs. This is a hosted dedicated server, so the hosting company created the OS image, and I didn't write or go over every line of configuration. I'd do it, but I'm at a loss as to where to start looking. Any advice is appreciated.
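
    The symptoms (one specific host being refused, and a server reset clearing it) match MySQL's max_connect_errors host blocking: after enough interrupted connections from a single host, that host is blocked until the host cache is flushed. A sketch, assuming this is the cause:

        mysql> FLUSH HOSTS;    -- clears the blocked-host cache, same effect as the restart

        # and raise the threshold in my.cnf, under [mysqld]:
        max_connect_errors = 10000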

    Read the article

  • "one-off" use of http_proxy in a Chef remote_file resource

    - by user169200
    I have a use case where most of my remote_file resources and yum resources download files directly from an internal server. However, one or two files fetched with remote_file live outside our firewall and must go through an HTTP proxy. If I set the http_proxy setting in /etc/chef/client.rb, it adversely affects the recipe's ability to download yum and other files from internal resources. Is there a way to have a remote_file resource download a remote URL through a proxy without setting the http_proxy value in /etc/chef/client.rb?

    In my sample code below, I'm downloading a Redmine bundle from rubyforge.org, which requires my servers to go through a corporate proxy. I came up with a ruby_block before and after the remote_file resource that sets the http_proxy and "unsets" it. I'm looking for a cleaner way to do this.

        ruby_block "setenv-http_proxy" do
          block do
            Chef::Config.http_proxy = node['redmine']['http_proxy']
            ENV['http_proxy'] = node['redmine']['http_proxy']
            ENV['HTTP_PROXY'] = node['redmine']['http_proxy']
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
          notifies :create_if_missing, "remote_file[redmine-bundle.zip]", :immediately
        end

        remote_file "redmine-bundle.zip" do
          path "#{Dir.tmpdir}/redmine-#{attrs['version']}-bundle.zip"
          source attrs['download_url']
          mode "0644"
          action :create_if_missing
          notifies :decompress, "zipp[redmine-bundle.zip]", :immediately
          notifies :create, "ruby_block[unsetenv-http_proxy]", :immediately
        end

        ruby_block "unsetenv-http_proxy" do
          block do
            Chef::Config.http_proxy = nil
            ENV['http_proxy'] = nil
            ENV['HTTP_PROXY'] = nil
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
        end

    Read the article

  • CUPS printer on virtual machine can be accessed via CUPS admin, but not by XP?

    - by SJaguar13
    I have a Zebra label printer connected to a Linux Mint virtual machine. It was set up with CUPS, and a Windows XP computer could then print to it via http://192.168.1.76:632/printers/labelprinter. That was all fine and dandy.

    I then hooked up a Fargo Pro L PVC card printer to a Windows XP virtual machine. I had to disconnect the label printer, as the server that hosts both virtual machines has only one parallel port. Now I have plugged the Zebra in again, and the Windows XP computer can no longer print to it. If I go to the CUPS admin panel from the Windows XP computer, I can see the printer, everything looks fine, and I can send it a test page, which prints. But if I try to print from Windows, I get an error that the printer is not found / cannot connect to the server.

    The only other thing that changed was the firewall on the router, to allow remote desktop to another computer from outside the network, but all the firewall changes were for external use; nothing affected the IP addressing of the internal network. The Linux Mint VM also had a PDF printer shared with CUPS. That printer is also down. I tried setting up a new CUPS installation on another VM, and when I go to share it with XP, I get the same error.

    I don't know what else to try. It has access, it can get to the admin page from that computer, and the printer seems to be up and ready, but when Windows tries to connect, the printer isn't found, even though 4 days ago everything was fine. Any ideas?
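
    Since the admin page answers but printing doesn't, it may be worth confirming that cupsd is still listening on the LAN interface and allowing remote access to the printer paths. A hedged cupsd.conf sketch (defaults differ between CUPS versions, and 631 is the CUPS default port; this setup appears to use 632):

        # /etc/cups/cupsd.conf
        Listen 631
        <Location />
          Order allow,deny
          Allow @LOCAL
        </Location>
        <Location /printers>
          Order allow,deny
          Allow @LOCAL
        </Location>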

    Read the article

  • 2 subnets off of 1 PC with 2 NICs

    - by Jeff
    I have a general setup I'd like to do with some IP cameras. This seems like it will work, but I think I may be missing something. Our system consists of a video recorder PC connected to a switch, which is connected to a number of IP cameras. I'd like to connect this system into an existing network, but I want it on a different subnet. The main reason is that the cameras use a lot of bandwidth, which I don't want slowing down the existing network.

    My idea was to install 2 NICs in the video recorder PC. One NIC connects to the existing network on 192.168.1.x, for example, and the other NIC connects to the switch with the cameras. That NIC would be on 192.168.100.x. Then we could remote into the video recorder PC with a GoToMyPC-type tool for administration via the existing network. I've included a diagram of how I see this working, but I'm a little fuzzy on the setup of the NICs (if this can work at all). My problem may be trying to deal with 2 subnets without a router, but a router really doesn't seem necessary in this situation. BTW, gliffy is cool.
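
    This kind of dual-homed setup generally works without a router as long as only one NIC has a default gateway: the camera-side NIC gets a static address and no gateway, so camera traffic never leaves its own subnet. A hedged sketch using Windows netsh (interface names and addresses are placeholders):

        netsh interface ip set address "LAN" static 192.168.1.50 255.255.255.0 192.168.1.1 1
        netsh interface ip set address "Cameras" static 192.168.100.1 255.255.255.0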

    Read the article

  • phpMyAdmin “Forbidden: You don't have permission to access /phpmyadmin on this server.”

    - by Caterpillar
    I need to modify the file /etc/httpd/conf.d/phpMyAdmin.conf in order to allow remote users (not only localhost) to log in:

        # phpMyAdmin - Web based MySQL browser written in php
        #
        # Allows only localhost by default
        #
        # But allowing phpMyAdmin to anyone other than localhost should be considered
        # dangerous unless properly secured by SSL

        Alias /phpMyAdmin /usr/share/phpMyAdmin
        Alias /phpmyadmin /usr/share/phpMyAdmin

        <Directory "/usr/share/phpMyAdmin/">
           Options Indexes FollowSymLinks MultiViews
           AllowOverride all
           Order Allow,Deny
           Allow from all
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/>
           <IfModule mod_authz_core.c>
             # Apache 2.4
             <RequireAny>
               Require ip 127.0.0.1
               Require ip ::1
             </RequireAny>
           </IfModule>
           <IfModule !mod_authz_core.c>
             # Apache 2.2
             Order Deny,Allow
             Allow from All
             Allow from 127.0.0.1
             Allow from ::1
           </IfModule>
        </Directory>

        # These directories do not require access over HTTP - taken from the original
        # phpMyAdmin upstream tarball
        #
        <Directory /usr/share/phpMyAdmin/libraries/>
           Order Deny,Allow
           Deny from All
           Allow from None
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/lib/>
           Order Deny,Allow
           Deny from All
           Allow from None
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/frames/>
           Order Deny,Allow
           Deny from All
           Allow from None
        </Directory>

        # This configuration prevents mod_security at phpMyAdmin directories from
        # filtering SQL etc. This may break your mod_security implementation.
        #
        #<IfModule mod_security.c>
        #   <Directory /usr/share/phpMyAdmin/>
        #      SecRuleInheritance Off
        #   </Directory>
        #</IfModule>

    When I go to the phpMyAdmin web page, I am not prompted for a user and password; instead I get the error message "Forbidden: You don't have permission to access /phpmyadmin on this server." My system is Fedora 20.
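
    Worth noting: Fedora 20 ships Apache 2.4, where the 2.2-style Order/Allow directives in the main <Directory> block only take effect if mod_access_compat is loaded; otherwise access falls back to denied, which matches the 403. A hedged sketch of the 2.4-native equivalent for the main directory:

        <Directory "/usr/share/phpMyAdmin/">
           AllowOverride all
           # Apache 2.4 syntax; replaces "Order Allow,Deny" / "Allow from all"
           Require all granted
        </Directory>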

    Read the article

  • Client unable to access OWA website after temporarily changing SSL certificate on the server

    - by Lorenz Meyer
    I have the following issue: one client computer (Windows XP) cannot access the OWA website. All other client computers can (except one other in the same remote office).

    How this happened: I temporarily changed the SSL certificate on the Exchange Server yesterday. After a few minutes I reverted back, and now the same certificate that had been installed for years is back again. During those few minutes, they were in OWA on this computer and got a certificate error.

    What exactly happens: Internet Explorer displays the error "Internet Explorer cannot display the webpage", Firefox displays "The connection was reset" and Chrome shows "This webpage is not available. The connection to ... was interrupted."

    What I have already tried:

    - Restarted the client computer
    - Restarted the Exchange server
    - Deleted the Internet Explorer browsing history
    - In IE, Internet Options, Content tab, under Certificates, cleared the SSL cache
    - Restored Internet Explorer to its default parameters
    - Looked into certmgr.msc, but did not find a certificate related to the issue

    What else could I do to narrow down the origin of this problem (or better: resolve it)? Can you give any advice?
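
    One cheap check from the affected XP machine to separate a transport problem from an SSL/certificate problem (hostname is a placeholder):

        ping mail.example.com
        rem if this connects, TCP to port 443 is fine and the fault is above the transport layer
        telnet mail.example.com 443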

    Read the article

  • Syntax error at '{'; expected '}' when using Nagios in Puppet

    - by jiangchengwu
    This is a big problem for me, because I'm not familiar with Puppet.

    ERROR on the puppetmaster:

        debug: importing '/etc/puppet/manifests/nodes/group-1.pp'
        err: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    ERROR on the puppet client:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    In group-1.pp:

        node 'group1' {
          include ntp
          class { 'nagios::host':   # this is line 6
            nodename => $clientcert,
            appname  => 'test',
          }
        }

    The nagios::host class in module/nagios/host.pp is here:

        class nagios::host($nodename, $hostgroup) {
          file { '/usr/lib/nagios/plugins':
            mode    => "755",
            require => Package["nagios-plugins"],
          }
          ...
          @@nagios_service { "${nodename}_check_ssh":
            ensure                 => present,
            use                    => 'generic-service',
            host_name              => "${nodename}",
            notification_interval  => 60,
            flap_detection_enabled => 0,
            service_description    => "SSH",
            check_command          => "check_ssh",
            target                 => "/etc/nagios3/services.d/${nodename}.cfg",
          }
        }

    The file module/nagios/init.pp is blank. How can I fix this?
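
    One hedged possibility: the parameterized class declaration syntax (class { 'nagios::host': ... }) only exists in Puppet 2.6 and later, and a pre-2.6 parser fails on exactly this kind of line with a syntax error at '{'. Worth checking before anything else:

        puppet --version    # run on both the master and the client
        # if it reports < 2.6.0, either upgrade or fall back to include-based declarations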

    Read the article

  • Server not accepting uploads

    - by Tatu Ulmanen
    I'm having a strange problem with my VPS: I can download files from it, I can use PuTTY to connect to it, and everything behaves normally. But sometimes, when I try to upload a file to the server or save a file via SFTP, the connection inexplicably fails.

    I am using jEdit to edit files remotely via SFTP. When it works, it works fine. When it doesn't, I get an error message:

        Cannot save: java.io.IOException: inputstream is closed
        Cannot save: java.io.IOException: 4:

    I can see that a temporary save file (#file.php#save#) is created on the server with a filesize of 0. So the connection works, but when it comes to sending the actual data, something fails. The same thing happens with WinSCP, but the error is different:

        Copying file fatally failed. Copying files to remote side failed.

    And I can always browse the server with PuTTY without a problem. I see nothing abnormal in any log files. auth.log shows this when I try to save:

        sshd[32638]: Accepted password for - from - port 62272 ssh2
        sshd[32638]: pam_unix(sshd:session): session opened for user - by (uid=0)
        sshd[32640]: subsystem request for sftp
        sshd[32638]: pam_unix(sshd:session): session closed for user -

    When I wait for a while (say, an hour), everything works fine again. It can't be a temporary ban, as I am still allowed to connect to the server, right? I know this may not be enough info to solve the problem, but I am grateful for any clues or bits of information that might help me. What are the possible causes for this kind of behaviour, what log files can I check for clues, etc.? I'm running out of ideas!
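
    While it is in the failing state, a verbose command-line SFTP session from the same client machine may show exactly where the transfer stalls. A sketch (host and paths are placeholders):

        sftp -v user@myvps.example.com
        sftp> put testfile /tmp/testfile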

    Read the article

  • Trouble printing to local printer when connected to VPN with split-tunneling enabled

    - by Marve
    I'm a volunteer network admin for a multi-tenant non-profit office space. One of our new tenants uses a VPN to connect to remote resources, using RRAS on Small Business Server 2008. They also have a local network printer for the workstations in our office. When connected to the VPN, they cannot print to the local printer. I informed their network admin that they need to enable split tunneling to fix this. Their network admin enabled split tunneling, but apparently printing still didn't work. He told me that I need to open port 1723 on our office firewall to make it work.

    I'm just a novice administrator and not familiar with RRAS, but this doesn't sound right to me, and I haven't been able to find anything on the web to validate it. Additionally, my understanding of split tunneling is that it is handled entirely by the VPN client and should work irrespective of firewall settings. Is my understanding of the situation incorrect? What steps should I take to resolve this problem?
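
    One way to verify whether split tunneling is actually in effect on a workstation, rather than debating firewall ports, is to compare routes and local reachability with the VPN up and down (printer address is a placeholder):

        rem with the VPN connected, the local subnet should still route via the LAN adapter
        route print
        rem if the printer answers with the VPN down but not up,
        rem the client is still forcing all traffic through the tunnel
        ping 192.168.0.20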

    Read the article

  • IIS replication - Is it possible

    - by Ian
    Hi all. I have a requirement from a client for a centralised system that all his satellite branches can work on. Currently this is an ASP.NET Web Forms app running under IIS 7 on Windows Server 2008 R2, using a SQL backend. The client has now requested that each branch have a local server, so that if the internet connection goes down, the branch's productivity does not suffer. His other request is that everything can be updated via the central hub, with some mechanism for the updates to filter down to the individual sites.

    What are my options here? I see the following as possibilities:

    1. Multiple redundant internet connections controlled by load balancers
    2. SQL replication for the DB (what is better: snapshot, merge or transactional?)
    3. Rolling my own IIS sync service that periodically checks for a new version of the web app and downloads it (I hope there are better options than this; a sketch follows below)
    4. Something way better that I don't yet know about (I hope this is the one I need)

    One of my client's concerns is that the branches are often in very remote areas, where everything from technicians to internet connectivity is hard to find and very scarce. Any ideas, suggestions, tips etc. are welcome. Thanks all.
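
    For option 3, a scheduled robocopy mirror of the hub's content directory is about the simplest home-grown sync; a hedged sketch (paths are placeholders, and this covers files only, not the database):

        robocopy \\hub\wwwroot C:\inetpub\wwwroot /MIR /Z /R:3 /W:10 /LOG:C:\logs\appsync.log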

    Read the article
