Search Results

Search found 5318 results on 213 pages for 'managed'.


  • Linux group permissions getting overwritten by owner

    - by Andy
    I am not a user of Linux, but I am encountering some permissions problems with it that I hope someone can shed some light on. A bit of background: a colleague of mine has a Linux box (running Debian, I believe) with an SVN repository on it. The owner of the repository directory and files is my colleague, and we are both members of a group called 'users'. He manages several projects, both Linux and Windows apps, while I have one Windows app. For the Windows apps, we both use TortoiseSVN via an SSH link to commit/update.

    Performing the command 'ls -l' shows the repository files and folders on the Linux box to have the following permissions: -rwxrwx--- john users. However, when my colleague commits to the repository, the permissions change to: -rwxrwx--- john john. This means I get 'Permission denied' when trying to access the repository myself, as the group permissions appear to have been overwritten with owner-only permissions. To fix this, a 'chown -R' command is applied to the files/folders to set the ownership back to owner/group, but each time he writes to the repository the issue repeats.

    I'm not sure if this is solely an SVN problem or a more fundamental owner/group issue. Does anyone have a clue how to stop this happening, or where to go and look? I'm trying to help out my colleague, who is having some trouble resolving this issue. Apologies for the vague info; I hope I have conveyed the problem clearly enough. Like I say, I am not a Linux user, and I have only put down what I have managed to pick up from looking over his shoulder. Thanks for any pointers I can pass on!
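
    A common pattern behind this symptom is that files created on commit inherit the committer's primary group instead of 'users'. As a hedged sketch only (the repository path and group name below are assumptions, not taken from the post): setting the setgid bit on the repository directories makes new files inherit the directory's group, and a group-friendly umask keeps them group-writable.

        # sketch: force group inheritance on an SVN repository (path /srv/svn/repo is hypothetical)
        sudo chgrp -R users /srv/svn/repo                        # make 'users' the group everywhere
        sudo chmod -R g+rw /srv/svn/repo                         # group read/write on existing files
        sudo find /srv/svn/repo -type d -exec chmod g+s {} \;    # setgid: new files inherit the directory's group
        # the committing account also needs a group-writable umask, e.g. 'umask 002' in its shell profile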

    Read the article

  • WTH? Upgrading Ubuntu 9.10 to 10.04 Problem: No Internet Connection?

    - by damx
    Earlier today I tried upgrading my Ubuntu 9.10 Karmic Koala from the update manager. (FYI: all 9.10 updates were applied prior to this.) Everything was going well downloading until I got an error dialog informing me that some software packages weren't downloaded because of an Internet connection problem. I would say it was about halfway through. Anyway, I was told that the software packages it did download would be kept, and I figured it's not a big deal, just run it again. But first I ran Firefox to verify my connection, as I haven't had any connection problems. My internet connection was/is fine, as evidenced by this posting. With that cleared up, I ran the update manager again, clicked on "Upgrade" and this time I received "Could not download the release notes. Please check your internet connection". Huh? This is my first time dealing with Ubuntu and my first upgrade, so I am hoping someone can help. Not sure what the problem can be; I can surf the web with no problems. Please help. PS: There was no installing at any point, just downloading. PPS: The software it managed to download the first time around is now visible in the update manager, but I don't think I should install it, as I see in the compiz description it's for v1:0.8-4-0ubuntu2. I figure it's designed for 10.04 and might ruin things further if I install it.

    Read the article

  • Using ZFS or XFS on a Xen guest running Linux

    - by zoot
    Background: I'm investigating the viability of using a filesystem other than ext3/4, with the ability to run snapshots for backup and rollback purposes. The servers under consideration are mailbox server nodes running on Linode's Xen-based VPS platform. I'm particularly drawn to the various published benefits which ZFS offers in terms of data integrity, and to this year's stable release of native ZFS support in Linux - http://zfsonlinux.org. ZFS appears to be the more thorough option in terms of benefits and simplicity (instead of LVM+XFS). Please note that I have little experience with ZFS (which I use on a local FreeNAS installation) and none with XFS, hence the post. To date, my servers are using ext3 filesystems, not managed under LVM.

    Question in detail: I have two questions. (1) Which of the two filesystems would be the better choice across the following three aspects, running on a Xen Linux guest: snapshots, data integrity, and performance? (2) If ZFS is a viable option, is it practical to use RAID-Z across Xen disk images to further enhance the solution for data integrity?

    Note: I'm reluctant to consider btrfs, given the many warnings I've read about using it on production systems.
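
    For reference, the snapshot/rollback workflow the question is weighing looks roughly like this - a minimal sketch only, with a hypothetical pool name, dataset name and device path, none of which come from the post:

        # sketch: ZFS snapshots on a Xen guest (device and dataset names are assumptions)
        sudo zpool create tank /dev/xvdc                    # build a pool on a spare Xen block device
        sudo zfs create tank/mailboxes                      # one dataset per mail store
        sudo zfs snapshot tank/mailboxes@nightly-20120601   # cheap, near-instant point-in-time copy
        sudo zfs list -t snapshot                           # review existing snapshots
        sudo zfs rollback tank/mailboxes@nightly-20120601   # revert the dataset to that point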

    Read the article

  • Disk IO slow on ESXi, even slower on a VM (freeNAS + iSCSI)

    - by varesa
    I have a server with ESXi 5 and iSCSI-attached network storage (4x1TB RAID-Z on FreeNAS 8.0.4). The two machines are connected to each other with gigabit ethernet. The RAID-Z volume is divided into three parts: two zvols, shared with iSCSI, and one directly on top of ZFS, shared with NFS and similar.

    I ssh'd into the FreeNAS box and did some testing on the disks. I used dd to test the third part of the disks (straight on top of ZFS). I copied a 4GB (2x the amount of RAM) block from /dev/zero to the disk, and the speed was 80MB/s. One of the iSCSI-shared zvols is a datastore for the ESXi host. I did a similar test with time dd there. Since the dd there did not report the speed, I divided the amount of data transferred by the time shown by time. The result was around 30-40 MB/s. That's about half of the speed from the FreeNAS host!

    Then I tested the IO on a VM running on the same ESXi host. The VM was a light CentOS 6.0 machine, which was not really doing anything else at that time. There were no other VMs running on the server at the time, and the other two "parts" of the disk array were not used. A similar dd test gave me a result of about 15-20 MB/s. That is again about half of the result at the lower level!

    Of course there is some overhead in raid-z - zfs - zvolume - iSCSI - VMFS - VM, but I don't expect it to be that big. I believe there must be something wrong in my system. I have heard about bad performance of FreeNAS's iSCSI; is that it? I have not managed to get any other "big" SAN OS to run on the box (NexentaSTOR, Openfiler). Can you see any obvious problems with my setup?
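
    For reproducibility, the test described above looks roughly like the following - a sketch with hypothetical paths, and the block size spelled out in bytes because FreeBSD's dd and ESXi's busybox dd differ in which size suffixes they accept:

        # on the FreeNAS box, writing straight onto the ZFS-backed path (path is an assumption)
        dd if=/dev/zero of=/mnt/tank/nfs/testfile bs=1048576 count=4096
        # on the ESXi host, against the iSCSI datastore; this dd prints no rate, so divide 4096 MB by the elapsed time
        time dd if=/dev/zero of=/vmfs/volumes/datastore1/testfile bs=1048576 count=4096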

    Read the article

  • Nginx proxy with Redmine SVN authentication

    - by Omegaice
    I am attempting to set up a system where I have an nginx server running as a reverse proxy for multiple websites that I want to run. To separate the websites I have created a Linux container which contains each site, allowing me to reduce conflicts in database usage etc. I am currently trying to get my main site working: I have nginx with Passenger set up and connecting to Redmine, and I have an Apache install specifically set up for serving the SVN over HTTP, attempting to use the Redmine authentication with it.

    I have set everything up as described in the Redmine howtos, but when I check a project out from the SVN it always works, even if the project is private, and whenever I try to commit to the repositories it fails saying "Could not open the requested SVN filesystem". The Apache error log entry related to that event is "(20014)Internal error: Can't open file '/srv/rcs/svn/error/format': No such file or directory". If I take out the Redmine authentication I can check out and check in repositories fine, but there is no authentication. Does anyone have any ideas?

    Edit: I tried to solve this problem another way by attempting to have the authentication work via LDAP. I managed to get it so that my user could log into the Redmine website, but as soon as I tried to check anything out it said that access was forbidden to the repository.
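
    For context, the Apache side of the setup the Redmine howtos describe usually looks roughly like the following - a hedged sketch only: the paths, database name and credentials are placeholders, and the handler names come from the Redmine.pm shipped in Redmine's extra/svn directory, not from the post itself.

        # sketch: mod_dav_svn + Redmine.pm location block (adjust paths and credentials to the real setup)
        sudo tee /etc/apache2/conf.d/redmine-svn.conf <<'EOF'
        PerlLoadModule Apache::Authn::Redmine
        <Location /svn>
          DAV svn
          SVNParentPath "/srv/rcs/svn"
          AuthType Basic
          AuthName "Redmine SVN"
          Require valid-user
          PerlAccessHandler Apache::Authn::Redmine::access_handler
          PerlAuthenHandler Apache::Authn::Redmine::authen_handler
          RedmineDSN "DBI:mysql:database=redmine;host=localhost"
          RedmineDbUser "redmine"
          RedmineDbPass "secret"
        </Location>
        EOF
        # the quoted error looks for a repository named 'error' under SVNParentPath, so it is also worth
        # checking that the checkout/commit URL matches an existing repository directory under /srv/rcs/svn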

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK with a service restart; however, there is a warning about having LiveUpdate Administrator installed. Firstly, I have no idea if I have this installed or how to check, and secondly, can I just ditch these old files and restart?

    Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia

    For those trying to assist me, thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands (smc -stop) from the Run prompt, as my admin creds don't get me in. Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470MB. I suppose my last question for those more experienced than myself is: can I simply remove, say, the two oldest virus definition folders without completely foobaring the Endpoint Protection client and the server?

    Regards, Gus

    Read the article

  • How-To Configure Weblogic, Agile PLM and an F5 LTM

    - by Brian Dunbar
    Agile, Weblogic, and an F5 walk into a bar... I've got Agile PLM v9.3 running on WebLogic with two managed servers, and an F5 BigIP LTM. We're upgrading from Agile v9.2.1.4 running on OAS. The problem is that while the Windows client works fine, the Java client does not. My setup is identical to the one outlined in F5's doc: http://www.f5.com/pdf/deployment-guides/bea-bigip45-dg.pdf. When I launch the Java client it returns this error: "Server is not valid or is unavailable." Oracle claims Agile PLM is set up correctly but won't comment on the specifics of the load balancer. F5 reports the configuration is correct but can't comment on the specifics of the application. I am merely the guy in a vortex of finger-pointing who wants my application to work. It's that or give up on WLS and move back to OAS, which has its own problems, but at least we know how it works. Any ideas?

    Read the article

  • Name resolution not working with ipv6 on centos

    - by jolivier
    I just installed CentOS 6.3 on a server to be installed in a data center, but cannot get name resolution / curl to work. I know this is because it is trying to use IPv6, since ping google.com works, curl -4 google.com works, but not curl google.com. I removed the IPv6 address from the interface and it does not change anything. This is very problematic since most system tools like yum currently fail at name resolution. Browsers like Firefox work, because they might be using another tool for name resolution than the one used by curl.

    I managed to fix this on workstations by completely disabling IPv6, following tutorials like this one / hardcoding name resolution in /etc/hosts. But since I am here configuring a server which will later be installed in a remote data center, I would rather not mess things up; I want to understand what is going on and fix it properly. Besides, I will face the same issue with more servers to come, so I would really appreciate your help in understanding this problem and how to solve it. I would be happy to provide more information if needed.

    The current network is a small enterprise network, with a DNS server (let's call it A) configured once, a long time ago. dig google.com and dig -4 google.com are both refused by the A DNS. But this is also true for my workstation, on which curl is working (and yes, they both use the same A DNS server). Indeed, this faulty server and my workstation both have multiple nameservers in /etc/resolv.conf, and the second one works fine for both of them, so if I remove A from my resolv.conf everything works fine!

    Regards, Olivier
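
    A few commands that help narrow this kind of problem down - a sketch only, with a placeholder IP standing in for the "A" nameserver:

        # compare what each resolver path returns (192.168.0.1 is a stand-in for the "A" server)
        dig A google.com @192.168.0.1       # IPv4 records, asked of "A" directly
        dig AAAA google.com @192.168.0.1    # IPv6 records, asked of "A" directly
        getent hosts google.com             # what the glibc resolver (used by curl and yum) actually gets
        cat /etc/resolv.conf                # which nameservers are consulted, and in what order
        # one commonly suggested workaround when a DNS server mishandles glibc's parallel A/AAAA
        # lookups (an assumption about "A", not a confirmed diagnosis):
        echo "options single-request" | sudo tee -a /etc/resolv.conf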

    Read the article

  • W2K INACCESSIBLE_BOOT_DEVICE, with System Commander

    - by Gary Kephart
    I have a system that originally had Win NT. I added System Commander (SC7) and then added W2K. The relevant partitions are:

    0 - Primary - MultiFAT (has Win NT, mapped to C:)
    1 - Extended - with many logical partitions:
    1.1 - NTFS, which has W2K and is mapped to D:
    1.2 - other logical partitions which are irrelevant to this

    D: was getting full. It needed room for virus definitions and Windows upgrades. In the past, I had simply used SC7 to resize D: without problems, so I did it again this time. However, upon finishing, I got the message "Unable to create partition". It also marked the partition as unformatted. I checked that the files on the disk were still there using SC7's Partition Explorer, and they were. I continued, and the system managed to boot up fine anyway. Then I rebooted the system again. This time, I got a message saying "INACCESSIBLE_BOOT_DEVICE". I went back into SC7 and Partition Commander, and it was still saying that the partition was unformatted, but Partition Explorer still showed the files on the system. I finally decided to resize the partition again, figuring that this would force a rewrite of the partition information. That seemed to work, until I had to reboot again. Now I can't see the files using Partition Explorer, and the Resize button is disabled. What now?

    Read the article

  • Protocol to mount fat32 network filesystem on Linux with ability to lock files (not advisory locks)

    - by nagul
    I have a FAT32 filesystem sitting on a NAS storage device (NSLU2) that I need to mount on my Ubuntu system. I've tried Samba and NFS mounts, but neither seems to support proper locking. More specifically, I am unable to save files to the mounted drive through GnuCash, KeePassX etc., which makes the share fairly useless. Is there a protocol that allows me to achieve this? Note that the NAS storage device is running a Linux OS, so I can run pretty much any protocol that has a Linux implementation. The only option I'm not looking for is to reformat the partition to ext3, which I'm not able to do due to other constraints. Alternatively, has anyone managed proper locking of a FAT32 system over the network using Samba? Or is advisory locking the best you get with a network-mounted FAT32 file system? I've thought of trying sshfs, but I've not found any indication that this will solve my problem.

    Edit: Okay, maybe I can reformat the drive, but to any file system except ext3. The "unslung" NSLU2 doesn't like more than one ext3 drive, and I already have one attached. So any solution that involves reformatting the drive to NTFS, HFS etc. is fine, as long as I can mount it on Linux and lock files.
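
    One avenue sometimes suggested for the Samba route is to mount with CIFS and keep byte-range locks local to the client - a sketch under assumptions only: the share name, mount point and user below are hypothetical, and whether this satisfies GnuCash/KeePassX would need testing.

        # sketch: CIFS mount with local byte-range locking (nobrl) instead of passing lock requests to the server
        sudo mount -t cifs //nslu2/share /mnt/nas \
            -o user=myuser,nobrl,file_mode=0664,dir_mode=0775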

    Read the article

  • Webserver optimization

    - by f-aminov
    Hi guys! I have a website hosted on a VPS (512MB minimum guaranteed memory, 510MHz processor, Debian 5.0 Lenny, Apache 2.2.9 with nginx 0.7.65 as a frontend to serve static content, MySQL 5.1.44, PHP 5.3.2 with APC caching). I'm a web developer, so I'm not very good at optimizing servers, but I've managed to install and set up all the necessary components (LAMP, nginx, etc.). After that I decided to stress-test my website (which uses Drupal 6.16 with caching and all possible optimization enabled) using a utility called "Webserver Stress Tool 7", and it seems to me that the results aren't any good - here is a graph (sorry, as a new user I'm not allowed to post images). As you can see, the response time increases very quickly with the number of simultaneous users: with 10 simultaneous users the time is about 1000ms, with 100 simultaneous users it's about 15000ms (15s!). The question is: do you think this is normal behaviour for such a server, or is something wrong with the settings and optimization? If you think something is wrong, what in particular could be wrong? Any other suggestions on how to speed this up a little?
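
    For a second opinion on those numbers, it can help to load-test from the command line as well - a sketch with a placeholder URL, using ApacheBench (shipped with Apache's utilities) rather than the Windows tool named above:

        # sketch: compare response times at two concurrency levels (URL is hypothetical)
        ab -n 500 -c 10  http://www.example.com/
        ab -n 500 -c 100 http://www.example.com/
        # "Time per request" and "Requests per second" in the output correspond to the graph described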

    Read the article

  • Windows Server 2008 R2 RAS VPN: access server on internal interface ip

    - by Mathias
    Short question: I'm usually a Linux admin but need to set up a Win2k8 R2 server for a student project. The server is running as a VM on a root server and has a public internet IP assigned. Additionally, I need a VPN server to access some services running on the server. I managed to set up a working VPN gateway via the Routing and RAS service which assigns clients an IP in the private subnet 192.168.88.0/24, with the interface "Internal" listening on 192.168.88.1. Additionally, I set up the external interface as a NAT interface. So I can connect to the VPN server, get an IP assigned, the server additionally does NAT, and I can access the internet over the VPN connection. The only thing I additionally need is to be able to access the server itself over that internal IP (e.g. client 192.168.88.2, server 192.168.88.1), as I want to access some services which I don't want to expose to the internet and would rather restrict to connected VPN clients. Does anybody have a hint about which configuration I'm missing here to be able to access the server over the VPN connection?

    EDIT: VPN clients get assigned the IP from the private subnet with subnet mask 255.255.255.255; I guess that might be the reason I can't access the server on the private IP address although it's in the same network range. Any ideas how to change this? I defined a static address pool in the Routing and RAS service, but I can't change the netmask there.

    EDIT2: I can't access the server from the client, but I can fully access the client from the server (ping, HTTP). I guess it has to do with firewall configuration.

    Thanks in advance, Mathias

    Read the article

  • SSL setup: UCC or wildcard certificates?

    - by quanza
    I've scoured the web for a clear and concise answer to my SSL question, but to no avail. So here goes: I have a web service requiring SSL support for authentication pages. The root-level domain does not have the "www" - i.e., secure://domain.com - but localized pages use "language-code.domain.com", i.e. secure://ja.domain.com. So I need at least a wildcard SSL certificate that supports secure://*.domain.com. However, we also have a public sandbox environment at sandbox.domain.com, which we also need to support under localized domains - so secure://ja.sandbox.domain.com needs to work as well. The previous admin managed to purchase a wildcard SSL certificate for *.domain.com, but with a Subject Alternative Name for "domain.com". So I'm thinking of trying to get a wildcard certificate with SANs defined as "domain.com" and "*.*.domain.com". But now I'm getting confused, because there seem to be separate SAN certificates, also called UCC certificates. Can someone clarify whether it's possible to get a wildcard certificate with additional SAN fields, and ultimately what the best way is to support secure://domain.com, secure://*.domain.com and secure://*.*.domain.com with the fewest (and cheapest!) number of SSL certificates? Thanks!
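
    When it comes time to request such a certificate, the SAN list is spelled out in the CSR - a minimal sketch, assuming the CA accepts a wildcard plus extra DNS SANs, and using *.sandbox.domain.com rather than a double wildcard (browsers generally only match a single wildcard label; the exact SAN set here is an assumption, not from the post):

        # sketch: CSR with one wildcard CN and explicit SAN entries (names are placeholders)
        cat > san.cnf <<'EOF'
        [req]
        distinguished_name = dn
        req_extensions = v3_req
        [dn]
        [v3_req]
        subjectAltName = DNS:domain.com, DNS:*.domain.com, DNS:*.sandbox.domain.com
        EOF
        openssl req -new -newkey rsa:2048 -nodes -keyout domain.key -out domain.csr \
            -subj "/CN=*.domain.com" -config san.cnf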

    Read the article

  • adding or routing additional domain email addresses

    - by Mustafa Ismail Mustafa
    We have Exchange 2007 and we bought a new domain name, and we're keeping the old one so that we can wean everyone off of the old addresses. Now I'm wondering how to go about this. I need to add the new domain as accepted and authoritative on the Exchange server. Emails to the new domain need to get routed to the inbox, and ditto the old addresses; however, I want to be able to change the Reply-To in the header to the new email address automatically. I also want to set the new email addresses as the defaults. Ideally, I'd like to be able to add a message at the bottom of every externally outgoing email saying that the new address is [email protected], but this is a nice-to-have, certainly not a must-have. I've added the new domain as authoritative and managed to change the primary SMTP email addresses to the new one, but sent emails are not being routed to them, and neither are the old email addresses! Now how the heck would I go about fixing all of that? I'm completely stumped! TIA

    Read the article

  • Installing Silverstripe on 000webhost.com (free web host)

    - by benwad
    Hi, I'm trying to learn how to work with SilverStripe, so I extracted the tar file to my free hosting account. I then went to install.php and edited the permissions to meet the requirements set out in install.php, but I still get two warnings from the 'webserver configuration' section: "I can't tell what webserver you are running. Without Apache I can't tell if mod_rewrite is enabled." and "I can't tell whether mod_rewrite is running. You may need to configure a rewriting rule yourself." I looked in phpinfo() and mod_rewrite appears to be installed. I contacted the web host and they said it was to do with virtual directory paths, and that I should add 'RewriteBase /' to the top of my .htaccess file in the public_html directory. However, I did this and still had the same problem. The install.php script says that I can install even with these warnings, but when I press 'install' it just refreshes the install.php page. It doesn't even overwrite the .htaccess file. 000webhost.com says they have successfully installed SilverStripe on their user accounts without much configuration, but I can't seem to find out how.

    EDIT: I managed to get to the next page, but now there is another warning which is stopping the install: "Friendly URLs are not working. This is most likely because mod_rewrite isn't configured correctly on your site. Please check the following things in your Apache configuration; you may need to get your web host or server administrator to do this for you: mod_rewrite is enabled; AllowOverride All is set for your directory." I also get this error message from the server: Warning: unlink(mysite/_config.php) [function.unlink]: Permission denied in /home/a2716553/public_html/install.php on line 701
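
    One way to test the two conditions in that warning independently of SilverStripe is a throwaway rewrite rule in a scratch directory - a sketch only; the directory, file and URL below are hypothetical and can be deleted after the test:

        # sketch: verify mod_rewrite and AllowOverride are honoured for files under public_html
        mkdir -p ~/public_html/rwtest
        cat > ~/public_html/rwtest/.htaccess <<'EOF'
        RewriteEngine On
        RewriteRule ^ping$ pong.txt [L]
        EOF
        echo "rewrite works" > ~/public_html/rwtest/pong.txt
        # requesting http://yoursite.example/rwtest/ping should return "rewrite works";
        # a 404 means mod_rewrite or AllowOverride is not in effect for this directory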

    Read the article

  • Mac OSX: which folders should ClamXav Sentry watch?

    - by trolle3000
    I'm using ClamXav on my Mac. I've read this, and I am aware of the whole macs-need-no-AV-but-they-do-anyway discussion. I guess that's why I would feel like a real ass if I somehow managed to compromise my system! So ClamXav has been downloaded and ClamXav Sentry set up to start on log-in, but it doesn't really do anything before you tell it to. Specifically, you have to tell it which folders to watch for viruses, so I'm wondering: where are good places to look? Currently it's set up to watch the following places:

    In the home folder: ~/Downloads, ~/Library/Caches, ~/Library/Contextual Menu Items, ~/Library/Cookies, ~/Library/Internet Plug-Ins, ~/Library/LaunchAgents

    In my system folder: /Library/Application Support, /Library/Caches, /Library/Contextual Menu Items, /Library/Cookies, /Library/Internet Plug-Ins, /Library/LaunchAgents, /Library/LaunchDaemons, /Library/StartupItems

    Basically, this is 100% conjecture. All (most) of the folders have something to do with the internet and things that start up automatically, so I'm guessing that's where viruses go. But still, the question: which folders should ClamXav Sentry watch, if any? FYI, I'm not using any mail apps, but please include that in your answer for anyone who might be interested. Cheers!

    Read the article

  • Assets not served - Apache Reverse proxy - Diaspora

    - by Matt
    I have succeeded in installing Diaspora* on my subdomain diaspora.mattaydin.com. I have a VPS running CentOS 5.7 with Plesk installed. By means of a vhost.conf and vhost_ssl.conf file I (with the help of another gentleman) have managed to reverse proxy the app.

    vhost.conf:

        ServerName diaspora.mattaydin.com
        ServerAlias *.diaspora.mattaydin.com
        <Directory /home/diaspora/diaspora/public>
            Options -Includes -ExecCGI
        </Directory>
        DocumentRoot /home/diaspora/diaspora/public
        RedirectPermanent / https://diaspora.mattaydin.com

    vhost_ssl.conf:

        ServerName diaspora.mattaydin.com
        DocumentRoot /home/diaspora/diaspora/public
        RewriteEngine On
        RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
        RewriteRule ^/(.*)$ balancer://upstream%{REQUEST_URI} [P,QSA,L]
        <Proxy balancer://upstream>
            BalancerMember http://127.0.0.1:3000/
        </Proxy>
        ProxyRequests Off
        ProxyVia On
        ProxyPreserveHost On
        RequestHeader set X_FORWARDED_PROTO https
        <Proxy *>
            Order allow,deny
            Allow from all
        </Proxy>
        <Directory /home/diaspora/diaspora/public>
            Options -Includes -ExecCGI
            Allow from all
            AllowOverride all
            Options +Indexes
        </Directory>
        DocumentRoot /home/diaspora/diaspora/public

    Basically it's working. However, the only thing that is not working is the assets: they do not get loaded from the server, as seen on diaspora.mattaydin.com. The error messages I get in access_ssl.log are a lot of:

        [11/Dec/2012:19:04:05 +0100] "GET /robots.txt HTTP/1.1" 404 2811 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/536.26.17 (KHTML, like Gecko) Version/6.0.2 Safari/536.26.17"

    The error message I get from Diaspora's log file is:

        Started GET "//assets/branding/logo_large.png" for 77.250.99.193 at 2012-12-11 20:13:11 +0100
        ActionController::RoutingError (No route matches [GET] "/assets/branding/logo_large.png"):
          lib/rack/chrome_frame.rb:39:in `call'
          lib/unicorn_killer.rb:35:in `call'

    Hope you guys can help me out. If you need anything else please let me know. Thanks in advance, Matt
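
    Since the SSL vhost already lets Apache serve files that exist on disk (the RewriteCond ... !-f line), one thing commonly checked in this situation is whether the Rails asset pipeline has been precompiled, so the files actually exist under public/assets - a hedged sketch, assuming this Diaspora release uses the standard Rails asset pipeline and runs in production mode:

        # sketch: precompile static assets so Apache can serve them directly (path from the post, task name assumed)
        cd /home/diaspora/diaspora
        RAILS_ENV=production bundle exec rake assets:precompile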

    Read the article

  • What are best practices on virtual lab/test bed architecture?

    - by WooYek
    I am currently preparing a new small virtual environment for development and testing with Windows Server + SQL Server + AD + SharePoint + Exchange + IIS (ASP.NET) + BizTalk + ?, for a small dev team (up to 5 people). What are the pros and cons of different approaches, e.g. splitting the roles across different machines versus packing everything onto one machine per person? In your experience, what are the best practices I should follow in terms of architecture and the placement of the various systems/servers? What should be shared and what should be split per person? I would like to achieve some flexibility for the dev and testing process (so team members would not be stepping on each other's toes) and limit the administrative effort needed to propagate settings, integrate work items and revert changes when something breaks. It's not supposed to be an everyday development working environment, more a tier 2 developer testing environment, and not yet an integration or QA testing environment with a formal change process. IMO the two borderline solutions are: creating one all-inclusive machine for each dev team member, giving them the freedom to manage it; or creating a shared environment managed by one person, with a somewhat formalized change request process. What golden mean would you recommend, and why?

    Read the article

  • How to set up a serial connection to a Windows 7 computer

    - by oli_arborum
    I need to set up a "dial in" connection to a Windows 7 (Ultimate) computer via a serial null-modem cable, to be able to connect from a Windows XP client to that computer and exchange data over IP. Question 1: How do I do that? I did not find the information via Google or in the MSDN. Seems like no one has ever tried this before... ;-)

    I already managed to install a legacy modem device called "Communications cable between two computers" and found the menu entry "New Incoming Connection..." in Network and Internet > Network Connections. When I finish this wizard I get the message that the "Routing and Remote Access service" cannot be started. In the event viewer I see the following error messages: "The currently configured authentication provider failed to load and initialize successfully. The requested name is valid, but no data of the requested type was found." (Source: RemoteAccess, EventID: 20152) and "The Routing and Remote Access service terminated with service-specific error: The requested name is valid, but no data of the requested type was found." (Source: Service Control Manager, EventID: 7024). The Windows 7 installation is "naked", i.e. no additional software or services are installed.

    Question 2: Am I on the right path to set up the connection? Question 3: How can I get the Routing and Remote Access service running?

    Read the article

  • remove tasksel lamp-server

    - by RickyA
    I was tricked into running "sudo tasksel install lamp-server" on the wrong server (Ubuntu 10.04). Now I am stuck with a system where Apache won't start because of an "Address already in use: make_sock: could not bind to address 0.0.0.0:80" error. I now want to remove this task, but documentation on that crappy tasksel says you can't use it to uninstall stuff (!!!???). My question is: where can I see what packages it installed, and how can I get rid of a selection of them (apt-get?)? I want to keep Apache, but MySQL, PHP and the other stuff can go...

    [edit] I managed to get rid of most of the LAMP stack (/var/log/dpkg.log is useful for recently installed packages). However, it did something in a configuration somewhere, and now two Apache instances start at boot time. Killing the first one and starting a new one gets rid of the "could not bind to address..." error. Does anyone know where the startup of the first one is configured?
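
    To answer the "where can I see what it installed" part in command form - a sketch; the package list below reflects the usual Ubuntu 10.04 LAMP task members and is an assumption, so cross-check it against dpkg.log before purging:

        # sketch: list what the task pulls in, then purge the unwanted parts while keeping apache2
        tasksel --task-packages lamp-server
        grep " install " /var/log/dpkg.log            # confirm what was actually installed that day
        sudo apt-get purge mysql-server mysql-server-5.1 php5-mysql libapache2-mod-php5
        sudo apt-get autoremove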

    Read the article

  • Failed to su after making a chroot jail

    - by arepo21
    On a 64-bit CentOS host I am using the script make_chroot_jail.sh to put a user in a jail, not permitting it to see anything except its home at /home/jail/home/user1. I did it by typing this:

        sudo ./make_chroot_jail.sh user1

    Afterwards, when trying to connect to user1, at first I was getting an error like:

        /bin/su: user guest does not exist

    I fixed this by copying some missing libraries:

        sudo cp /lib64/libnss_compat.so.2 /lib64/libnss_files.so.2 /lib64/libnss_dns.so.2 /lib64/libxcrypt.so.2 /home/jail/lib64/
        sudo cp -r /lib64/security/ /home/jail/lib64/

    But now, when trying to connect to user1 by typing su user1 and then typing its password, I am getting this error:

        could not open session

    So the question is: how do I connect to user1 in this situation?

    P.S. Here are the permissions of some files, which might be helpful in order to provide a solution:

        -rwsr-xr-x 1 root root /home/jail/bin/su
        drwxr-xr-x 4 root root /home/jail/etc
        -rw-r--r-- 1 root root /home/jail/etc/pam.d/su
        -rw-r--r-- 1 root root /home/jail/etc/passwd
        -rw------- 1 root root /home/jail/etc/shadow

    UPDATE 1: After some modifications I managed to connect to user1, but the session closes immediately! I guess this is a PAM issue; however, I can't find a way to fix it. Here is the log entry for the close action from /var/log/secure:

        Oct 6 15:19:42 localhost su: pam_unix(su:session): session closed for user user1

    What makes the session exit immediately after launching?
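
    A sketch of the kind of check that usually narrows this down - the exact library and PAM file set below is an assumption, not taken from the post, so treat it as a starting point rather than a fix:

        # sketch: find what the jailed su and its PAM stack still need, then mirror it into the jail
        ldd /bin/su                                        # every listed library must also exist under /home/jail
        sudo cp /etc/pam.d/su /etc/pam.d/system-auth /home/jail/etc/pam.d/
        sudo cp /lib64/libpam.so.0 /lib64/libpam_misc.so.0 /lib64/libaudit.so.1 /home/jail/lib64/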

    Read the article

  • What command should be used to connect via SSH through squid proxy?

    - by Raul Cardoso
    I have set up a Squid HTTP proxy (on CentOS) and intended to use it also for SSH connections. I managed to configure PuTTY (on a Windows client) to use this proxy when connecting by SSH, and confirmed on the target host that the SSH connection was coming from the proxy server's IP instead of the Windows client's IP. I used targethost:22 for SSH and proxyserv:3128 for the proxy (along with proxy credentials).

    I'm now having problems connecting to the target host using Ubuntu and the same proxy server. I have tried the following:

        me@mycomp:~$ connect-proxy -H test@proxyserv:3128 targethost 22
        Enter proxy authentication password for test@proxyserv:
        SSH-2.0-OpenSSH_6.2p2 Ubuntu-6

    It hangs at the last line, expecting some input. All attempts resulted in a "Protocol mismatch." error. PuTTY successfully connects to the HTTP proxy, sends the credentials, and shows me the SSH login right away. How do I do (with commands in Ubuntu) the same thing PuTTY does? Is there any other way than the connect-proxy command to do this?

    Edit: Also tried the following, with the same result ("Protocol mismatch"):

        me@mycomp:~$ connect-proxy -H test@proxyserv:3128 targethost 22 ssh -l myshel_login

    Thanks in advance.

    Edit: Solution details (thanks to NickW for pointing the right way): installed corkscrew and added to ssh_config:

        Host targethost
            ProxyCommand corkscrew proxyserv 3128 %h %p /etc/ssh/proxypass

    created the proxypass file (login:password), restarted ssh and used a simple ssh command:

        ssh mylogin@targethost

    (the ssh password was asked for as usual)
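
    For reference, the same corkscrew setup can be expressed as a one-off command instead of an ssh_config entry - a sketch using the host and port names from the post; the proxypass file format (login:password on one line) is as described above:

        # sketch: tunnel ssh through the Squid proxy with corkscrew, without editing ssh_config
        ssh -o ProxyCommand="corkscrew proxyserv 3128 %h %p /etc/ssh/proxypass" mylogin@targethost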

    Read the article

  • Use Mac OS X Server As Development Environment

    - by macinjosh
    I've installed Mac OS X Server 10.6.3 on my laptop to use as my normal OS. I do a lot of web development and thought it would be handy to run OS X Server so I could more easily manage my local development environment (Apache virtual hosts, hostnames for each local site, etc). I'm really enjoying the new setup except for one problem: DNS. My ideal situation would be to add a site (some-site.local) in the Web service and then go to the DNS service and add a primary record for the new site. I actually got this working at one point, but after a reboot it stopped working! The records look the same as they did before the reboot, but the site doesn't come up in Safari.

    Here is a list of my needs:

    - Need to be able to add new domains at a whim
    - Domains always map to a site on the same box's Web service
    - Local and external IPs often change
    - It would be nice if it worked on any network (i.e. WiFi at the airport or coffee shop)
    - Sites only need to be accessible locally
    - Configuration should stay put even after rebooting

    I've done some googling and used this as a bit of a guide. In the past I've used MAMP and then just a local Apache/PHP/MySQL install with a manually managed hosts file. I'd rather not go back.

    Read the article
