Search Results

Search found 19625 results on 785 pages for 'local groups'.

  • Configs for several sites in Apache with SSL

    - by elCapitano
    I need to secure two different sites in Apache. One of them should only be a proxy for a different server which is running on port 8069. The first one (served natively by Apache) runs with SSL:

      <VirtualHost *:443>
          ServerName 192.168.1.20
          SSLEngine on
          SSLCertificateFile /etc/ssl/erp/oeserver.crt
          SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
          DocumentRoot /var/www/cloud
          ServerPath /cloud/
          #CustomLog /var/www/logs/ssl-access_log combined
          #ErrorLog /var/www/logs/ssl-error_log
      </VirtualHost>

    The other one is not running and not even registered. When I try to access it, I get an exception (ssl_error_rx_record_too_long):

      <VirtualHost *:443>
          ServerName 192.168.1.20
          ServerPath /erp/
          SSLEngine on
          SSLCertificateFile /etc/ssl/erp/oeserver.crt
          SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
          ProxyRequests Off
          ProxyPreserveHost On
          <Proxy *>
              Order deny,allow
              Allow from all
          </Proxy>
          ProxyVia On
          ProxyPass / http://127.0.0.1:8069/
          ProxyPassReverse / http://127.0.0.1:8069
          RewriteEngine on
          RewriteRule ^/(.*) http://127.0.0.1:8069/$1 [P]
          RequestHeader set "X-Forwarded-Proto" "https"
          SetEnv proxy-nokeepalive 1
      </VirtualHost>

    The configuration I want is the following:

      192.168.1.20        ->> unsecured local path to website
      192.168.1.20/cloud/ ->> secured local document path for cloud
      192.168.1.20/erp/   ->> secured proxy on port 80 for http://192.168.1.20:8069

    How is this possible? Is it even possible? Perhaps cloud.192.168.1.20 and erp.192.168.1.20 would be better? Thank you
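
    Two *:443 vhosts on the same address cannot be told apart by ServerPath for SSL, so one way to get the layout above is to serve /cloud/ and /erp/ from a single SSL vhost. A minimal sketch, assuming the unsecured site lives in /var/www and the application on port 8069 tolerates being served from the /erp/ sub-path (both are assumptions, not taken from the question):

      <VirtualHost *:80>
          ServerName 192.168.1.20
          DocumentRoot /var/www
      </VirtualHost>

      <VirtualHost *:443>
          ServerName 192.168.1.20
          SSLEngine on
          SSLCertificateFile /etc/ssl/erp/oeserver.crt
          SSLCertificateKeyFile /etc/ssl/erp/oeserver.key

          # /cloud/ served locally from disk
          DocumentRoot /var/www
          Alias /cloud/ /var/www/cloud/

          # /erp/ proxied to the backend on 8069
          ProxyRequests Off
          ProxyPreserveHost On
          ProxyPass /erp/ http://127.0.0.1:8069/
          ProxyPassReverse /erp/ http://127.0.0.1:8069/
          RequestHeader set X-Forwarded-Proto "https"
      </VirtualHost>

    If the application behind 8069 generates absolute links, separate name-based vhosts (cloud.example and erp.example with SNI or a wildcard/SAN certificate) are usually the less fragile option.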

  • Task Scheduler : Logon as Batch Job Rights

    - by Brohan
    I'm trying to set up a scheduled task which will run under the network Administrator account on a specified computer, whether the account is logged in or not. According to the Task Scheduler, I need the 'Log on as a batch job' right. In the Local Security Policy window, the option to add the Administrator account to this right is greyed out; currently only LOCAL SERVICE may log on as a batch job, and attempting to add Administrator hasn't worked. How do I set this permission so that my tasks run whether I'm logged in or not?
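
    On a domain-joined machine (and especially on a domain controller) the local setting is usually greyed out because a Group Policy object controls it, so the right has to be granted in the effective GPO (for example the Default Domain Controllers Policy) rather than in Local Security Policy. A hedged sketch using ntrights.exe from the Windows Server 2003 Resource Kit Tools; the domain and account names are examples:

      :: grant the "Log on as a batch job" right to the admin account
      ntrights +r SeBatchLogonRight -u "MYDOMAIN\Administrator"

      :: verify which policy actually wins afterwards
      gpresult /scope computer /v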

  • FreeBSD rc.d script doesn't work when starting up

    - by kastermester
    I am trying to write an rc.d script to start fastcgi-mono-server4 on FreeBSD when the computer starts up, in order to run it with nginx. The script works when I execute it while logged in on the server, but during boot I get the following message:

      eval: -applications=192.168.50.133:/:/usr/local/www/nginx: not found

    The script looks as follows:

      #!/bin/sh
      # PROVIDE: monofcgid
      # REQUIRE: LOGIN nginx
      # KEYWORD: shutdown
      . /etc/rc.subr

      name="monofcgid"
      rcvar="monofcgid_enable"
      stop_cmd="${name}_stop"
      start_cmd="${name}_start"
      start_precmd="${name}_prestart"
      start_postcmd="${name}_poststart"
      stop_postcmd="${name}_poststop"
      command=$(which fastcgi-mono-server4)
      apps="192.168.50.133:/:/usr/local/www/nginx"
      pidfile="/var/run/${name}.pid"

      monofcgid_prestart() {
          if [ -f $pidfile ]; then
              echo "monofcgid is already running."
              exit 0
          fi
      }

      monofcgid_start() {
          echo "Starting monofcgid."
          ${command} -applications=${apps} -socket=tcp:127.0.0.1:9000 &
      }

      monofcgid_poststart() {
          MONOSERVER_PID=$(ps ax | grep mono/4.0/fastcgi-m | grep -v grep | awk '{print $1}')
          if [ -f $pidfile ]; then
              rm $pidfile
          fi
          if [ -n $MONOSERVER_PID ]; then
              echo $MONOSERVER_PID > $pidfile
          fi
      }

      monofcgid_stop() {
          if [ -f $pidfile ]; then
              echo "Stopping monofcgid."
              kill $(cat $pidfile)
              echo "Stopped monofcgid."
          else
              echo "monofcgid is not running."
              exit 0
          fi
      }

      monofcgid_poststop() {
          rm $pidfile
      }

      load_rc_config $name
      run_rc_command "$1"

    In case it is not already clear, I am fairly new to both FreeBSD and sh scripts, so I'm prepared for some obvious little detail I overlooked. I would very much like to know exactly why this is failing and how to solve it, but if anyone has a better way of accomplishing this I am open to ideas.
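
    The error message suggests that ${command} expanded to nothing at boot time: rc scripts run with a minimal PATH, so $(which fastcgi-mono-server4) can come back empty and the shell then tries to execute "-applications=..." itself. A hedged sketch of the fix, assuming the binary lives in /usr/local/bin (check with `which fastcgi-mono-server4` in an interactive shell):

      # hard-code the full path instead of relying on which(1) and the boot-time PATH
      command="/usr/local/bin/fastcgi-mono-server4"

      # or keep the lookup but fall back if it comes back empty
      command=$(which fastcgi-mono-server4 2>/dev/null)
      [ -x "${command}" ] || command="/usr/local/bin/fastcgi-mono-server4"

    Remember that monofcgid_enable="YES" still has to be set in /etc/rc.conf for run_rc_command to start the service at boot.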

  • MongoDB on FreeBSD

    - by Hartator
    We are currently using MongoDB 2.0.0 on Mac OS, but our servers are running FreeBSD. The most recent port of MongoDB is version 1.8.3. I have tried to compile 2.0.0 by hand, but ran into errors I didn't manage to fix. I also came across a few old resources on the Internet saying that MongoDB does not run well on FreeBSD, mainly for performance reasons (memory-mapped files). Is that true? Does it mean we have to switch our servers to another OS? Thanks for your opinions! Sources: http://groups.google.com/group/mongodb-user/browse_thread/thread/8131b7e5a5c710d9 http://ivoras.net/blog/tree/2009-11-05.a-short-time-with-mongodb.html

  • VMware Fusion cannot connect to the NAT connection on my Mac

    - by FFish
    I have been using VMware Fusion on my Mac to check out my websites on localhost. Now I can't connect any more using the NAT connection. There seems to be a problem with my IP address or MAC address? I have no idea what causes this; it was working fine before. In the XP (SP2) VM, in the taskbar I see the Local Area Connection with the yellow warning icon. The bubble says: "This connection has limited or no connectivity. You might not be able to access the Internet or some network resources. For more information, click this message." Doing that opens the Local Area Connection Status panel. In the Support tab, when I click the Repair button I get the following message: "Windows could not finish repairing the problem because the following action cannot be completed: Renewing IP address." I tried disabling my firewall and also XAMPP, which I use as the server on OS X. VMware Fusion version: 3.1. VM: XP SP2. Mac OS X 10.6.3. Any help would be greatly appreciated.

  • Searching for a solution to integrate community mailing lists into a website on shared hosting

    - by Thomas Traub
    The community (300 members), cocktailnetwork, has a website, cocktailnetwork.eu, and about ten mailing lists. We want to manage the mailing lists from inside the website (lists and subscribers) and link the lists' information with the member profiles on the site. We are on shared hosting. The community members use the lists to send mail to all other members or to groups of members. They can subscribe to and unsubscribe from a list. The administrators can in addition create, delete, and modify lists. Right now I use ezmlm with QmailAdmin, and the lists are completely separate from the website. I could link the data via remote administration commands, but that's not very satisfactory, does not allow the creation of new lists, and is a deprecated feature of our hosting package; sooner or later we'll need to switch anyway. Do you know of an elegant solution for us? Any web service with a good, stable API? Thanks.

  • OpenVPN Server - CPU is pegged out

    - by ericl42
    Hello, I am configuring OpenVPN to act as a SSL tunnel for a remote location. I have OpenVPN1 at our current location acting as a server then OpenVPN2 at the other location that is acting as a client but is also acting as a DHCP server to machines behind it so they are basically connected to the local LAN. Everything is set up fine and I can talk from location A to location B with no problems like everyone is local. I am however having some performance issues. OpenVPN1 CPU is pegged to 100% the entire time I am copying or doing any type of activity through the tunnel. I expect some CPU usage going up but nothing like this. It's really killing my performance. OpenVPN1 is running in ESX right now with 2 gig RAM and 4 procs with unlimited bursting capacity. I am using AES-192 encryption with a 1024 key. Any idea how I can get my CPU down on OpenVPN1 and my download/upload speeds higher between the tunnel? Thanks. edit: Turning down the logging helped boost the throughput a little bit, but I am still fairly shy of where I believe I should be. Also I am still maxed out on the CPU. Does anyone have any ideas? I am really stuck on this. Thanks.
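
    Software crypto on a single thread is usually what pins the CPU here; OpenVPN 2.x encrypts on one core no matter how many vCPUs the VM has. A hedged sketch of server-side directives that often help (the values are examples, and the cipher must be set identically on both ends):

      # lighter but still reasonable cipher; must match the client config
      cipher AES-128-CBC

      # keep per-packet overhead down and avoid fragmentation through the tunnel
      mssfix 1400

      # larger socket buffers for higher-latency links
      sndbuf 393216
      rcvbuf 393216

      # keep logging quiet once things work
      verb 1

    If the ESX host's CPU supports AES-NI, making sure the guest's OpenSSL build actually uses it typically buys more than any single directive.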

  • SharePoint (WSS 3.0) on SBS 2008 broken

    - by tcv
    I recently ran the SharePoint Products and Technologies Wizard. I had hoped this would bring up SharePoint and let me access it so I could begin to learn, but it's not working. Here is some data that I hope is relevant. I am doing all my testing on the SBS 2008 server itself. I changed the host header in IIS to reflect an external FQDN I plan to deploy. The SBS server is remote and there are no domain-connected workstations. If I browse "localhost" over SSL, I can get to the site, albeit with a self-signed certificate warning. If I attempt to connect via SSL using either the internal FQDN (.local), the external FQDN (.net), or any other permutation thereof, I am prompted for credentials three times but am not allowed access. My account is a domain admin. The site is inaccessible using port 80 whether I use localhost, the internal FQDN (.local), or the external FQDN (.net). Right now I suspect my problem is within IIS, but I don't know. My plan is to publish the SharePoint site to the web so my partner and I can check documents in and out. Can someone help me get started in the right direction?
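
    Being prompted three times and then denied when browsing the server by its own FQDN, while localhost works, is the classic symptom of the Windows loopback check rather than a SharePoint fault. A hedged sketch of the usual test-lab workaround on the server itself (the longer-term fix is to list the host names under the MSV1_0\BackConnectionHostNames value instead of disabling the check, and to make sure the external FQDN exists in SharePoint's Alternate Access Mappings):

      reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f
      iisreset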

  • Bypassing Squid on FreeBSD with PF

    - by epema
    I have PF + Squid 3.1 on FreeBSD 9.0, and I want some hosts (aka goodguys) to bypass the proxy, so that torrents are not logged. Also, I am not sure about transparent mode: it means that I don't have to configure proxy settings on the client side, right? I have tried doing a redirect with

      no rdr on $int_if inet proto {tcp,udp} from 192.168.1.233/32 to any

    However, no luck :( Here is a quick look at my conf files.

    Squid, /usr/local/etc/squid/squid.conf:

      http_port 192.168.1.1:8080 transparent

    RC, /etc/rc.conf:

      gateway_enable="YES"
      pf_enable="YES"
      pf_rules="/usr/local/etc/pf.conf"
      pflog_enable="YES"
      squid_enable="YES"

    I have squid31 installed from ports with SQUID_PF ("Enable transparent proxying with PF") on.

    PF, /usr/local/etc/pf.conf:

      int_if="re0"
      ext_if="bge0"
      localnet="{ 192.168.1.0/24 }"
      table <goodguys> const { "192.168.1.219", "192.168.1.233" }
      set block-policy drop
      set skip on lo0
      scrub in all fragment reassemble
      scrub out all random-id max-mss 1440
      block in on $ext_if
      pass out on $ext_if keep state
      block in on $int_if
      pass in on $int_if inet proto tcp from $int_if:network to $int_if port 8080 keep state
      pass in on $int_if inet proto udp from $int_if:network to $int_if port 21 keep state
      pass in on $int_if inet proto udp from $int_if:network to $int_if port 22 keep state
      pass in on $int_if inet proto udp from $int_if:network to $int_if port 53 keep state
      pass in on $int_if inet proto tcp from $int_if:network to any port { smtp, pop3 } keep state
      pass in on $int_if inet proto icmp from $int_if:network to $int_if keep state
      pass out on $int_if keep state

    What lines should I add to the conf files? I am assuming that the problem is in the firewall (pf).
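
    As posted, pf.conf never redirects port 80 into Squid at all, so there is nothing for the goodguys to bypass; both the interception and the exemption are rdr rules, which pf evaluates before the filter rules. A hedged sketch of the two lines that would normally go near the top of pf.conf, assuming the macros and table above and that Squid listens on 192.168.1.1:8080 as shown:

      # hosts in <goodguys> go straight out, everyone else is intercepted
      no rdr on $int_if inet proto tcp from <goodguys> to any port 80
      rdr pass on $int_if inet proto tcp from $localnet to any port 80 -> 192.168.1.1 port 8080

    With interception in place, clients need no proxy settings for plain HTTP; traffic on other ports (torrents included) is untouched by the rdr either way.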

  • Exchange 2013 attachments too big?

    - by KPS
    I am having the toughest time sending large attachments. Everywhere I have checked, my file size limit for send/receive is 100 MB, yet users are unable to receive files even at 14 MB. I'm using a spam filter (AppRiver) and have worked with their support for a very long time; we see the following errors in the logs:

      13:32:40.260 4 SMTP-000036([myserverIP]) rsp: 354 Start mail input; end with <CRLF>.<CRLF>
      13:33:41.038 3 SMTP-000033([myserverIP]) write failed. Error Code=connection reset by peer
      13:33:41.038 3 SMTP-000033([myserverIP]) [659500] failed to send. Error Code=connection reset by peer
      13:33:41.038 4 SMTP([myserverIP]) [659500] batch reenqueued into tail

    Windows Firewall is disabled on the Exchange server, and all other emails of smaller size come through just fine. Here is a printout of the size limits:

      ConnectorType    ConnectorName                     MaxReceiveMessageSize        MaxSendMessageSize
      -------------    -------------                     ---------------------        ------------------
      Send             InternetSendConnector             -                            35 MB (36,700,160 bytes)
      Send             Appriver-Smarthost                -                            35 MB (36,700,160 bytes)
      Receive          Default EXCHSRVR                  100 MB (104,857,600 bytes)   -
      Receive          Client Proxy EXCHSRVR             100 MB (104,857,600 bytes)   -
      Receive          Default Frontend EXCHSRVR         100 MB (104,857,600 bytes)   -
      Receive          Outbound Proxy Frontend EXCHSRVR  100 MB (104,857,600 bytes)   -
      Receive          Client Frontend EXCHSRVR          100 MB (104,857,600 bytes)   -
      Receive          ExchangeRelay                     100 MB (104,857,600 bytes)   -
      TransportConfig  -                                 100 MB (104,857,600 bytes)   10 MB (10,485,760 bytes)
      ADSiteLink       DEFAULTIPSITELINK                 Unlimited                    Unlimited

    There is no anti-virus on the server that could be interfering either; I am out of ideas at this point :(

    EDIT 1: After running the BPA, it gives an error: "Exchange Organization: Check whether the incoming message (CN=MyDomain,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=WG,DC=local) size isn't set. The maximum incoming message size isn't set in organization 'CN=MyDomain,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=WG,DC=local'. This can cause reliability problems." Here are the sizes as of now:

      [PS] C:\Temp>Get-TransportConfig | ft MaxSendSize, MaxReceiveSize

      MaxSendSize   MaxReceiveSize
      -----------   --------------
      Unlimited     Unlimited

      [PS] C:\Temp>Get-ReceiveConnector | ft name, MaxMessageSize

      Name                              MaxMessageSize
      ----                              --------------
      Default EXCHSRVR                  100 MB (104,857,600 bytes)
      Client Proxy EXCHSRVR             100 MB (104,857,600 bytes)
      Default Frontend EXCHSRVR         100 MB (104,857,600 bytes)
      Outbound Proxy Frontend EXCHSRVR  100 MB (104,857,600 bytes)
      Client Frontend EXCHSRVR          100 MB (104,857,600 bytes)
      ExchangeRelay                     100 MB (104,857,600 bytes)

    Again, smaller emails come through just fine. It seems like there is a 10 MB receive limit somewhere that I cannot find.
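
    Going by the first table, the obvious candidates are the two Send connectors capped at 35 MB and the organization-wide MaxSendSize that sat at 10 MB before it was set to Unlimited; the effective limit is the smallest of the organization, connector, and mailbox values. A hedged sketch of Exchange Management Shell commands to align everything at 100 MB and to check for per-mailbox overrides:

      Set-TransportConfig -MaxSendSize 100MB -MaxReceiveSize 100MB
      Get-SendConnector | Set-SendConnector -MaxMessageSize 100MB
      Get-ReceiveConnector | Set-ReceiveConnector -MaxMessageSize 100MB

      # mailbox-level limits override the organization defaults when set
      Get-Mailbox -ResultSize Unlimited | ft Name, MaxSendSize, MaxReceiveSize

    Keep in mind the smarthost (AppRiver) enforces its own limit as well, so "connection reset by peer" during DATA can just as easily be the filter dropping the session.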

  • MySQL Not Turning On

    - by Shalin Shah
    I have an Amazon EC2 instance running the Amazon Linux AMI (a micro instance). I wanted to install Django onto my server, so I entered these commands:

      wget http://www.mlsite.net/blog/wp-content/uploads/2008/11/go
      wget http://www.mlsite.net/blog/wp-content/uploads/2008/11/django.conf
      chmod 744 go
      ./go

    After I was done, I ran sudo service httpd restart and sudo service mysqld restart. This is what came up for mysqld:

      Stopping mysqld:                                 [  OK  ]
      MySQL Daemon failed to start.
      Starting mysqld:                                 [FAILED]

    So I deleted the Django files (/usr/local/python2.6.8/site-packages/django_registration.egg) and tried to find the error. I found out that in my /etc/my.cnf the socket was set to socket=/var/lock/subsys/mysql.sock, but there was no mysql.sock in /var/lock/subsys/. I tried creating one using vim, but it still didn't work. Then I checked the error log and it said:

      Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    So I am pretty much lost right now. I know it has something to do with mysql.sock. If you know why this happened, could you please let me know? I have a WordPress site on this server, so I kind of need MySQL to work. Thanks!
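
    The socket file is created by mysqld itself when it starts, so making one by hand doesn't help; the usual fix is to point both the server and the clients at the same, writable socket path and restart. A hedged sketch for /etc/my.cnf, assuming the stock data directory on the Amazon Linux AMI:

      [mysqld]
      datadir=/var/lib/mysql
      socket=/var/lib/mysql/mysql.sock

      [client]
      socket=/var/lib/mysql/mysql.sock

    Then:

      sudo service mysqld restart
      sudo tail -n 50 /var/log/mysqld.log   # if it still fails, the real error is here

    If mysqld still refuses to start, the log usually shows an InnoDB or permissions error left behind by the install script rather than a socket problem; on a micro instance, running out of memory during startup is also common.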

  • Network Load Balancing and AnyCast Routing

    - by user126917
    Hi All can anyone advise on problems with the following? I am planning on installing the following setup on my estate: I have 2 sites that both have a large amount of users. Goals are to keep things simple for the users and to have automatic failover above the database level. Our Database will exist at the primary site and be async mirrored to the secondary site with manual failover procedures.The database generate sequential ID's so distributing it is not an option. I plan to site IIS boxes at both sites with all of the business logic on them and heavy operations. The connections to SQL will be lightweight and DB reads will be cached on IIS. On this layer I plan to use Windows network load balancing and have the same IP or IPs across all IIS boxes at both sites. This way there will be automatic failover and no single point of failure. Also users can have one web address regardless of which site they are in automatically be network load balanced to their local IIS. This is great but obviously our two sites are on different subnets and as this will be one IP address with most of our traffic we can't go broadcasting everything across the link between the sites. To solve this problem we plan to use AnyCast routing over our network layer to route the traffic to the most local box that is listening which will be defined by the network load balancing. Has anyone used this setup before? Can anyone think of any issues with this? Also some specifics I can't find anywhere at the moment. If my Windows box is assigned an IP and listening on that IP but network load balancing is not accepting specific traffic then will AnyCast route away from that? Also can I AnyCast on a socket level?

  • Weird Windows 2003 MSDTC and SQL 2005 issue

    - by seagull surfer
    Scenario: Windows 2003 SP2 x64 Enterprise Edition, SQL Server 2005 SP2 CU9 x64 Enterprise Edition. After restarting the resource groups on a two-node active-active cluster, three SQL 2005 instances start up fine. The fourth one starts up but begins throwing the following error: "Enlist operation failed: 0x8004d00e (XACT_E_NOTRANSACTION). SQL Server could not register with Microsoft Distributed Transaction Coordinator (MS DTC) as a resource manager for this transaction. The transaction may have been stopped by the client or the resource manager." MSDTC is fine, since the other three instances function normally. The only way to "fix" it is to take the fourth instance offline and bring it online again. Is there any way to fix this enlistment without restarting?

  • How can I enable anonymous access to a Samba share under ADS security mode?

    - by hemp
    I'm trying to enable anonymous access to a single service in my Samba config. Authorized user access is working perfectly, but when I attempt a no-password connection, I get this message:

      Anonymous login successful
      Domain=[...] OS=[Unix] Server=[Samba 3.3.8-0.51.el5]
      tree connect failed: NT_STATUS_LOGON_FAILURE

    The message log shows this error:

      ... smbd[21262]: [2010/05/24 21:26:39, 0] smbd/service.c:make_connection_snum(1004)
      ... smbd[21262]:   Can't become connected user!

    The smb.conf is configured thusly:

      [global]
      security = ads
      obey pam restrictions = Yes
      winbind enum users = Yes
      winbind enum groups = Yes
      winbind use default domain = true
      valid users = "@domain admins", "@domain users"
      guest account = nobody
      map to guest = Bad User

      [evilshare]
      path = /evil/share
      guest ok = yes
      read only = No
      browseable = No

    Given that I have 'map to guest = Bad User' and 'guest ok' specified, I don't understand why it is trying to "become connected user". Should it not be trying to "become guest user"?
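
    The guest settings on the share most likely never get a chance to apply because the global valid users line also restricts the guest account: 'nobody' is not in "domain admins" or "domain users", so the tree connect is refused before guest mapping matters. A hedged sketch that scopes valid users to the authenticated shares instead (share names other than [evilshare] are examples):

      [global]
      security = ads
      map to guest = Bad User
      guest account = nobody
      # note: no global "valid users" line

      [evilshare]
      path = /evil/share
      guest ok = yes
      guest only = yes
      read only = no
      browseable = no

      [secureshare]
      path = /secure/share
      valid users = "@domain admins", "@domain users"

    The filesystem permissions on /evil/share still have to allow the 'nobody' account, or the connect will fail in the same way.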

  • Server Manager from Windows 2008 to Hyper-V 2008 R2?

    - by Roger Lipscombe
    My workstation is running Windows Server 2008. I do not have local admin privileges. I have a Hyper-V Server 2008 R2 (i.e. Core+Hyper-V) box. On that box, I do have local admin privileges. I can Remote Desktop to the box; Hyper-V Manager works fine (outside of Server Manager). It's just that there are some things that are easier to do in Server Manager (partition disks, etc.) than at the command line. I'd like to use Server Manager on my workstation to manage the Hyper-V box. However: When I run Server Manager on my workstation, it prompts for elevation, and won't then let me connect to another server. If I attempt to run MMC and then add "Server Manager" as a Snap-in, it doesn't prompt me for the server name. Then it complains that I'm not an Administrator. It doesn't provide for connecting to another server. The Remote Server Administration Tools (RSAT) are for Windows Vista and Windows 7 RC. These don't install on Windows 2008.

  • Unable to ping gateway via bridge nic

    - by Ara
    I'm trying to install KVM on Ubuntu 12.04 server. We have multiple NICs on this server, of which we primarily use eth0. The server network runs fine with eth0 (I'm able to ping the gateway, the DNS server, and servers on the Internet).

    /etc/network/interfaces:

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet static
          address 192.168.22.194
          netmask 255.255.255.0
          network 192.168.22.0
          broadcast 192.168.22.255
          gateway 192.168.22.1
          dns-nameservers 10.71.130.58 10.71.130.60
          dns-search test.local

    I installed bridge-utils and configured br0 as below.

    /etc/network/interfaces:

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet manual

      auto br0
      iface br0 inet static
          address 192.168.22.194
          netmask 255.255.255.0
          network 192.168.22.0
          broadcast 192.168.22.255
          gateway 192.168.22.1
          dns-nameservers 10.71.130.58 10.71.130.60
          dns-search test.local
          bridge_ports eth0
          bridge_fd 9
          bridge_hello 2
          bridge_maxage 12
          bridge_stp off

    After this change I'm able to ping servers in the same IP range (192.168.22.2-254) except for 192.168.22.1, which is the gateway. I'm also not able to ping any other servers, and this machine cannot be pinged from the network. The output of route -n:

      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      0.0.0.0         192.168.22.1    0.0.0.0         UG    100    0        0 br0
      192.168.22.0    0.0.0.0         255.255.255.0   U     0      0        0 br0
      192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

    I've been struggling with this issue for the past five days; it would help a lot if anyone could point me in the right direction. Thanks in advance.
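
    A few diagnostics usually narrow this down: when the bridge comes up, traffic leaves with the bridge's MAC address and the gateway (or an upstream switch with port security) can hold on to the old ARP entry for a while, and it is also worth confirming that eth0 is actually enslaved with no stray address left on it. A sketch of the checks, using commands from bridge-utils and iproute2:

      brctl show                   # br0 should list eth0 under "interfaces"
      ip addr show eth0            # eth0 should carry no IP address of its own
      ip addr show br0             # 192.168.22.194 should live here
      ip neigh show 192.168.22.1   # does the gateway's MAC ever get resolved?
      ping -c 3 192.168.22.1

    If the neighbour entry stays INCOMPLETE, clearing the ARP cache on the gateway (or waiting for it to age out) and simplifying the stanza (bridge_stp off, bridge_maxwait 0, default timers) are the usual next steps.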

  • Apache HTTPd FollowSymLinks path permission

    - by apast
    Hi, I'm configuring my development environment with a basic Apache httpd configuration. To avoid a common problem, I want to map my test URL to my development folder. I'm using Ubuntu. My development path is located under the following example path:

      /home/myusername/myworkspace/hptargetpath/src/pages

    with the following symbolic link mapping:

      # ls -l /opt/share/www/mydevelopmentrootpath
      lrwxrwxrwx 1 root root 77 2011-02-13 18:53 /opt/share/www/mydevelopmentrootpath -> /home/myusername/myworkspace/hptargetpath/src/pages

    With this folder mapping, I configured Apache httpd as follows:

      <VirtualHost *:*>
          ServerName local.server.com
          ServerAdmin [email protected]
          DirectoryIndex index.html
          DocumentRoot /opt/share/www/mydevelopmentrootpath
          <Directory /opt/share/www/mydevelopmentrootpath/ >
              Options +Indexes
              Options +FollowSymLinks
              AllowOverride None
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>

    But I'm receiving a 403 Forbidden error when I try to access index.html at http://local.server.com/index.html:

      403 Forbidden
      You don't have permission to access /index.html on this server.

    The httpd debug log shows the following message:

      [Sun Feb 13 19:34:47 2011] [error] [client 127.0.1.1] Symbolic link not allowed or link target not accessible: /opt/share/www/mydevelopmentrootpath

    I think this problem is caused by a path permission: not a direct permission on the directory, but on some intermediate directory in the path. There's a directive in the httpd core Options, SymLinksIfOwnerMatch ("The server will only follow symbolic links for which the target file or directory is owned by the same user id as the link."), but I tested it without effect. Can somebody help me? I think it's a trivial configuration for a development environment. Best regards, And Past
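
    "link target not accessible" usually means the Apache user (www-data on Ubuntu) cannot traverse one of the parent directories; /home/myusername is typically mode 700 or 750. A sketch of how to confirm and fix it, assuming the paths above:

      # show the permissions of every component on the way to the target
      namei -m /home/myusername/myworkspace/hptargetpath/src/pages

      # grant traverse (execute) permission on each directory that lacks it
      chmod o+x /home/myusername /home/myusername/myworkspace
      chmod o+x /home/myusername/myworkspace/hptargetpath /home/myusername/myworkspace/hptargetpath/src

    An alternative that avoids opening the home directory at all is to point DocumentRoot straight at the workspace and skip the symlink, or to run the development vhost under your own user (for example with mpm-itk); both are judgment calls rather than requirements.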

  • How to disable a mod_security2 rule (false positive) for one domain on CentOS 5

    - by nicholas.alipaz
    Hi, I have mod_security enabled on a CentOS 5 server and one of the rules is keeping a user from posting some text on a form. The text is legitimate, but it contains the word 'create' and an HTML <table> tag later on, so it is causing a false positive. The error I am receiving is below:

      [Sun Apr 25 20:36:53 2010] [error] [client 76.171.171.xxx] ModSecurity: Access denied with code 500 (phase 2).
      Pattern match "((alter|create|drop)[[:space:]]+(column|database|procedure|table)|delete[[:space:]]+from|update.+set.+=)" at ARGS:body.
      [file "/usr/local/apache/conf/modsec2.user.conf"] [line "352"] [id "300015"] [rev "1"]
      [msg "Generic SQL injection protection"] [severity "CRITICAL"] [hostname "www.mysite.com"]
      [uri "/node/181/edit"] [unique_id "@TaVDEWnlusAABQv9@oAAAAD"]

    and here is /usr/local/apache/conf/modsec2.user.conf (line 352):

      #Generic SQL sigs
      SecRule ARGS "((alter|create|drop)[[:space:]]+(column|database|procedure|table)|delete[[:space:]]+from|update.+set.+=)" "id:1,rev:1,severity:2,msg:'Generic SQL injection protection'"

    The questions I have are: What should I do to "whitelist" this rule or let the request through? What file do I create, and where? How should I alter this rule? Can I make the exception apply to only the one domain, since it is the only one having the issue on this dedicated server, or is there a better way, perhaps excluding table tags? Thanks guys
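
    ModSecurity 2 can switch individual rules off per virtual host or per location with SecRuleRemoveById, so the rule stays active for every other site on the box. A hedged sketch using the rule id 300015 reported in the error above; the directive has to come after the config file that defines the rule has been included:

      # inside the <VirtualHost> block for www.mysite.com
      <IfModule mod_security2.c>
          SecRuleRemoveById 300015
      </IfModule>

      # or narrower still: only where the false positive actually happens
      <LocationMatch "^/node/[0-9]+/edit$">
          SecRuleRemoveById 300015
      </LocationMatch>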

  • LDAP for privilege control?

    - by neoice
    I've been wondering for a while whether LDAP can be used to control user privileges. For example, if I have UNIX and web logins, is there an easy way to grant a user access to just the web, or just UNIX (or even both)? My current attempt at solving this problem was to create 'login' and 'nologin' groups, but this doesn't seem fine-grained enough to match the ideas I have in my head. I'm also still in the situation where all UNIX users are web users, which isn't a problem so much as an indicator of the limitations. Does anyone have any input on this? Has this problem already been solved?
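
    One common pattern is one LDAP group per service, with each service checking only its own group, rather than a single login/nologin pair. A hedged sketch for the UNIX side with pam_ldap in /etc/ldap.conf (the DNs are examples, and the group is assumed to hold member DNs, e.g. a groupOfUniqueNames):

      # only members of this group (by DN) may authenticate through pam_ldap
      pam_groupdn cn=unixlogin,ou=groups,dc=example,dc=org
      pam_member_attribute uniqueMember

    The web side can enforce the mirror rule with its own group filter (for example Require ldap-group in Apache's mod_authnz_ldap), so "web-only" and "unix-only" become a matter of group membership instead of separate account trees.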

  • VirtualBox instances on a dedicated server with custom dnsmasq

    - by ovanes
    I have a dedicated server where I plan to run VirtualBox virtual machines. Since the VMs are managed with Vagrant/Chef I may end up with many different ones. I thought it would be a good idea to deploy dnsmasq on the server to dynamically assign IP addresses to the VMs; since each Vagrant/Chef recipe is configured to set the VM's host name, I can find/reference the appropriate VM by host name. Finally, the entire infrastructure is not directly accessible via the Internet, so the dedicated server is the OpenVPN host. The infrastructure may be seen as:

      +-------------------------------------+
      | Dedicated Server                    |
      |                                     |
      |  +-------------+   +------------+   |      +------------------+
      |  |   DNSMasq   |   |  OpenVPN   |<========>|      Client      |
      |  +-------------+   +------------+   |      +------------------+
      |         ^                ^          |
      |         |                |          |
      |         |   +-------+    |          |
      |         +---|  VM1  |----+          |
      |         |   +-------+    |          |
      |         |      ...       |          |
      |         |   +-------+    |          |
      |         +---|  VM2  |----+          |
      |             +-------+               |
      +-------------------------------------+

    Now, some questions I am struggling with:

    1. Are there any other suggestions for accessing a private infrastructure like this? I don't want to reinvent the wheel.
    2. On the dedicated server I don't see the vboxnet0 interface, although VirtualBox is installed (without the GUI). Accessing the virtual machines via SSH works fine. Did I miss something? (See the sketch after this list.)
    3. DNSMasq must serve the local VMs only, otherwise there is a chance that it starts serving other servers on the network, which I don't want. Because I don't see vboxnet0 I tend to use the no-dhcp-interface=eth0 config option. Any thoughts on that, despite the fact that a second network card (which is not the case here) might then be served DHCP requests?
    4. How should I configure the VM's network interface so that I can access it via OpenVPN and resolve its host name using DNSMasq? I think it should be the host-only network card. Should I do bridging in the OpenVPN config, or is routing sufficient?
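
    For questions 2 and 3: the host-only interface only exists once it has been created, and dnsmasq can then be pinned to it so nothing leaks onto eth0. A hedged sketch with example addresses in 192.168.56.0/24 and an example domain name:

      # create and address the host-only interface on the headless server (once)
      VBoxManage hostonlyif create
      VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0

      # /etc/dnsmasq.conf - answer DNS/DHCP only on the host-only network
      interface=vboxnet0
      bind-interfaces
      no-dhcp-interface=eth0
      domain=vm.lan
      expand-hosts
      dhcp-range=192.168.56.10,192.168.56.200,12h

    With the VMs attached to vboxnet0 (a host-only adapter in the Vagrantfile), routed OpenVPN is enough: push a route for 192.168.56.0/24 and the host-only DNS server to the clients; bridging isn't required for this layout.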

  • Restoring permissions on Windows 2008

    - by Andrey
    I have played with folder permissions due to SVN not being able to write to a folder and now I got into a state where I go to any folder of C: drive in Windows Explorer and when I right-click it takes 30 seconds to show the context menu and it just hangs the window after that. It definitely has something to do with permissions as it was all working fine until I started tweaking permissions about an hour ago. My login belongs to two groups Users and Administrators. I changed ownership of C drive to Administrators group and I think it screwed everything, but I can't change it back because I don't even remember what it was :) Oh, and only Administrators group has access to drive C now. Any way to reset permissions to some previous state or some workable state?

  • Why is my connection slow?

    - by Jay R.
    I have a Dell Precision T5400 with a Broadcom 1Gb onboard NIC. For some strange reason, when I access machines on our local network, the best I can get is around 125KB/s download speed. My laptop that has a 10/100Mb NIC onboard usually gets around 300KB/s or better from the same network resource. Both machines are plugged into the same 1Gb switch which connects to our local network wall jack at 100Mb half duplex. There is also a printer plugged into the same switch at 100Mb full. The resource I'm using for the test is a 30MB zip file copied from a jetty webserver that is running as part of a cruisecontrol installation. The cruisecontrol installation is running WindowsXP with full real-time antivirus and Altiris patch management and inventory running. That stuff on its own is eating some of the download speed. I've seen the laptop reach into the multiple MB/s download speed before, but the desktop never seems to get past 125KB/s to 130KB/s. In WindowsXP, before I upgraded the driver in the desktop, it was that slow. In Fedora, it is still slow even though it appears to be using the same driver version as the upgraded Windows driver. The upgraded Windows driver is faster, but still not nearly as fast as the laptop. What gives? Any insight to improve the situation would be appreciated. Could it be that the BroadCom board just isn't that good, or the driver in linux is just not as good as the Windows one?
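
    Since the uplink to the wall jack already negotiates oddly (100 Mb half duplex), the first things to rule out on the desktop are a bad speed/duplex negotiation and the Broadcom offload features, both common causes of this exact "slow but not broken" behaviour. A sketch of checks on the Fedora side (the Windows equivalents live in the NIC driver's Speed & Duplex and offload properties); the offload flags can be flipped back if they make no difference:

      # what did the desktop NIC negotiate with the local gigabit switch?
      ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'

      # error/collision counters climbing during a copy point at a bad link
      ip -s link show eth0

      # Broadcom offload quirks are a frequent culprit; try disabling offloads
      ethtool -K eth0 tso off gso off gro off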

  • Cannot connect to remote mail server for sending emails in ASP.NET

    - by Dave
    I want to migrate a web application from Windows Server 2003 to Windows Server 2008 R2. Everything works except sending emails from the application. If I configure the application to use the SMTP server on "localhost" it works, but when changing it to the "real" host name (e.g. mail.example.org) no mail is sent. The error message says that the remote server requires a secure connection or SMTP authentication. But since it works when using "localhost" instead of the host name, I doubt that is the problem. It is also unlikely to be a problem with the mail server, as I tried it with another one as well. So it seems like the firewall is blocking the outgoing connection to the mail server. I tried to open port 25, but it still did not work; maybe I just did it the wrong way. Update, to clarify my setup: I have a Windows Server 2008 R2 machine with hMailServer installed (set up for some of the hosted domains). For the website I'm talking about I need to use an external mail server (a totally different hosting provider). Apparently I was a bit off track: it works when connecting to the local mail server with either the host name "localhost" or "mail.somedomain.com" (where somedomain.com is set up in my mail server), but when using the host name of the external mail server ("mail.externaldomain.com") it seems to connect to the local server again, although that domain is not set up in the mail server. Thanks to Evan Anderson for the tip to use telnet; why didn't I think of it myself? :-) Note: the website www.externaldomain.com is hosted on my server, but the DNS entries are maintained by the other hosting provider; "externaldomain.com" is the only entry that points to my server, and all other records (MX, subdomains) point to the other server. So I think the question now is: how do I get my server to connect to the external mail server? Do I have to configure this in my mail server, or is it a Windows Server thing?
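
    A quick way to confirm where "mail.externaldomain.com" really ends up from this machine is to resolve it and talk SMTP to it by hand; if the banner that comes back is hMailServer's, the name is resolving to the local box (hosts file, a local DNS zone, or the provider's record) rather than to the external server. A sketch of the checks from the 2008 R2 box; 203.0.113.25 stands in for the external server's real address:

      nslookup mail.externaldomain.com
      telnet mail.externaldomain.com 25
      telnet 203.0.113.25 25

    Once the name resolves to the right place, if the external relay requires authentication the credentials can live in web.config instead of code; a sketch with placeholder values:

      <system.net>
        <mailSettings>
          <smtp deliveryMethod="Network" from="noreply@externaldomain.com">
            <network host="mail.externaldomain.com" port="25"
                     userName="smtp-user" password="secret" defaultCredentials="false" />
          </smtp>
        </mailSettings>
      </system.net>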

  • Help setting up a secondary authoritative DNS server

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus.

    Authoritative servers:

      DNS1 - Windows 2003
      DNS2 - old Red Hat ----- replacing with a newer version
      DNS3 - Windows 2008 (I installed)

    Caching and recursive resolvers:

      Server1 - Windows 2003
      Server2 - CentOS 5.2 (I installed)
      Server3 - CentOS 5.3 (I installed)

    I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching servers and Windows authoritative servers, but not a Linux secondary authoritative server. I have a Perl script from the original server that pulls data from our DNS1 server. We use djbdns and tinydns on our Linux servers. Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to do caching, but the only instructions I can find are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track with the instructions I was using, but when I query my new DNS server I get "No response from server". I have temporarily disabled iptables to eliminate it as an issue.

      ps -aux | grep dns
      avahi     3493  0.0  0.2   2600  1272 ?     Ss   Apr24   0:05 avahi-daemon: running [newdns2.local]
      root      5254  0.0  0.1   3920   680 pts/0 R+   09:56   0:00 grep dns
      root      6451  0.0  0.0   1528   308 ?     S    Apr29   0:00 supervise tinydns
      dnslog    6454  0.0  0.0   1540   308 ?     S    Apr29   0:00 multilog t ./main
      tinydns   9269  0.0  0.0   1652   308 ?     S    Apr29   0:00 /usr/local/bin/tinydns
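
    tinydns only answers on the single address in its env/IP file and only serves what has been compiled into data.cdb, so "No response from server" usually means it is bound to the wrong IP (127.0.0.1 or an old address) rather than anything DNS-specific. A hedged sketch of the usual checks under a standard daemontools/djbdns layout, with 10.0.0.53 standing in for the server's real campus address:

      # which address is tinydns answering on?
      cat /service/tinydns/env/IP

      # point it at the machine's real address and restart the service
      echo 10.0.0.53 > /service/tinydns/env/IP
      svc -t /service/tinydns

      # rebuild data.cdb after the Perl script refreshes the data file
      cd /service/tinydns/root && make

      # query it directly
      dig @10.0.0.53 example.edu soa

    Keeping DNS2 purely authoritative matches the djbdns model anyway: tinydns never recurses, so no extra "disable caching" step is needed.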

  • VPN messes up DNS resolution

    - by user124114
    After connecting with the Kerio VPN client (OS X Leopard) to a server, the internet (~web browsing) stopped working for the client. After poking around, the issue seems to be a bad DNS server (i.e., entering IPs directly works). After disconnecting from the VPN, the invalid DNS server disappears from scutil --dns and all's well again. Now, I don't understand why OS X on the client even changes the DNS settings -- internet should be routed through a different interface, through the default gateway, not through the VPN. Questions:

    1. By what mechanism does connecting the VPN client change the "default" DNS server?
    2. How can I stop the VPN client from changing routing/DNS rules?
    3. Where is this stuff stored/modified?

    Before VPN:

      $ scutil --dns
      DNS configuration
      resolver #1
        nameserver[0] : 10.66.77.1    # <---- default gateway = home router; all good
        order         : 200000
      resolver #2
        domain  : local
        options : mdns
        timeout : 2
        order   : 300000
      ...

    VPN connected:

      $ scutil --dns
      DNS configuration
      resolver #1
        nameserver[0] : 192.168.1.1   # <--- rubbish
        nameserver[1] : 192.168.2.1
        order         : 200000
      resolver #2
        domain  : local
        options : mdns
        timeout : 2
        order   : 300000
      ...

    The VPN doesn't appear among $ networksetup -listallnetworkservices.
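
    The resolvers a VPN pushes end up as keys in the System Configuration dynamic store, which is exactly what scutil --dns reads; that is also where they can be inspected or, temporarily, overridden. A hedged sketch; the service name "Ethernet" and the DNS address are examples for this setup:

      # see which store entries the VPN created or overrode
      scutil
      > list State:/Network/.*DNS
      > show State:/Network/Global/DNS
      > quit

      # push the primary service's resolver back (holds until the VPN rewrites it)
      sudo networksetup -setdnsservers "Ethernet" 10.66.77.1

    Whether the Kerio client can be told not to install its DNS at all depends on the client's own settings; the store entries only show what it did.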
