Search Results

Search found 18341 results on 734 pages for 'neural network'.


  • How to implement a virtual server running Ubuntu inside a Windows fileserver?

    - by user541445
    I work in a company that has some limitations regarding their budget. They need a client/server application. I can code the application; I've made mini tests on rudimentary applications that work. The thing is that they only have fileservers, and the application they need must handle concurrency, so the database must be on their local network (a fileserver is the only choice). So far, I have explored almost every option available, starting with desktop databases:

    - Access (we have a license, but concurrency is just not effective; besides, it's Windows software, yuck).
    - SQLite (nice, but since they manage a lot of information, I performed some concurrency tests doing INSERTs and SELECTs at the same time. It fails; somehow it just stops inserting).
    - OpenOffice Base (I dismembered a Base install only to find that it was a file-mode HSQLDB). I've tried it; not concurrent at all.
    - Etc. (name any open source desktop database manager, and yes, I've tried that one).

    Then server databases. Call me stubborn, but the documentation of some server databases says they will work in a file mode, so I've tried a lot of products: Postgres, MySQL, Firebird, H2, Derby, Oracle Express, IBM DB2 Express, etc.

    So, I really need a hand here. I've been doing this install/delete/depression routine with a lot of databases for 3 weeks, until I hit on a crazy idea and came here to express it. My question comes down to this: will installing light virtualization software like Virtual PC, and then installing a server OS like Ubuntu Server inside it, work? Will it run 24/7, or only while the Virtual PC software is open? Will this work on a fileserver? Any suggestion, answer, critique of the place I work, or crazy new concurrent database that will work on a fileserver will be most welcome.


  • IPC between multiple processes on multiple servers

    - by z8000
    Let's say you have 2 servers, each with 8 CPU cores. Each server runs 8 network services, and each service hosts an arbitrary number of long-lived TCP/IP client connections. Clients send messages to the services. The services do something based on the messages and potentially notify N >= 1 of the clients of state changes. Sure, it sounds like a botnet, but it isn't. Consider how IRC works with c2s and s2s connections and s2s message relaying.

    The servers are in the same data center and can communicate over a private VLAN at 1 GigE. Messages are < 1 KB in size.

    How would you coordinate which services on which host should receive and relay messages to connected clients for state change messages? There's an infinite number of ways to solve this problem efficiently:

    - AMQP (RabbitMQ, ZeroMQ, etc.)
    - Spread Toolkit
    - N^2 connections between all services (bad)
    - Heck, even run IRC!
    - ...

    I'm looking for a solution that:

    - perhaps exploits the fact that there's only a small closed cluster
    - is easy to admin
    - scales well
    - is "dumb" (no weird edge cases)

    What are your experiences? What do you recommend? Thanks!


  • Set up multiple websites on a local web server

    - by mickburkejnr
    I have spent the last few days setting up a CentOS 6 server on my local network so that I can host multiple projects that I'm currently working on. Everything has been set up so that I access the server by typing 192.168.1.10 and the Apache test page comes up. What I'm aiming to do is to access different projects by typing in 192.168.1.10/project, and then view each project as if it were on its own standalone server. I have thought about just sticking these sites inside folders on the server and accessing them that way, but a lot of my projects use CakePHP so this isn't feasible. So what I need to do is create VirtualHosts in Apache to allow me to do this, but without using a domain name. I want to stick to using the IP address of the machine (which is static). Any ideas?

    EDIT: I've followed Peter's suggestion, but now I have a new problem. In the httpd.conf file I have entered the following information:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /www/html/project1
            ServerName local.project1.com
            ErrorLog logs/local.project1.com-error_log
            CustomLog logs/local.project1.com-access_log common
        </VirtualHost>

    And now Apache is saying:

        Starting httpd: Warning: DocumentRoot [/www/html/project1] does not exist

    When it clearly does exist. I've disabled SELinux and I can confirm it isn't turned on. I've also checked the ownership of the folder, and it's owned by root. I can also save files to these folders using a guest FTP account (which isn't associated with root), so the folders are being listed and can be written to. But when I try the folder in a web browser it doesn't seem to work either. I've also rebooted the server and the problem persists. What should I change in order to resolve this?
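
    Worth double-checking before anything else (an assumption, since the two paths differ only by a prefix): on CentOS, Apache's content normally lives under /var/www/html, not /www/html. If the FTP account is chrooted, it may be writing into /var/www/html/project1 while the vhost points at a /www/html/project1 that genuinely doesn't exist from Apache's point of view. A quick sketch to confirm which path is real:

        # does each candidate path actually exist?
        ls -ld /www/html/project1 /var/www/html/project1

        # if the /var prefix is the real one, the vhost line becomes:
        #   DocumentRoot /var/www/html/project1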


  • master-slave datastore replication, automatic failover, and wackamole

    - by z8000
    I have 2 dedicated servers provisioned for my next project's datastores. The datastores are configured for master-slave replication. There's no inherent automatic failover, but of course I want this. That is, I'd love for access to the master datastore to always just work, without having to configure a client library to detect when a master is down and fail over to the slave.

    I've seen Wackamole, which is based on the Spread Toolkit. You provide Wackamole with a set of IPs and a bunch of nodes, and regardless of the up/down state of any of the nodes, those IPs will stay available/up. Wackamole detects when a node goes down and ARPs the IP(s) that were up on the now-down node. It's pretty neat, actually.

    So my thought was to use Wackamole to keep 2 virtual private IPs available/up. Clients would then always use the same private IP to access the master datastore, and the same but distinct IP for the slave datastore, even if those IPs were hosted on the same node. My datastore servers are accessed over a private network; I am unsure if this messes with Wackamole, though.

    Is this lunacy? How do you generally handle automatic failover of private services like a datastore? FWIW, it shouldn't matter, but the datastore is Redis. I don't want to hear "use MySQL", please :) Thanks.
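
    For what it's worth, a common alternative to Wackamole for exactly this floating-IP pattern is keepalived, which speaks VRRP between the two nodes and moves a virtual IP on failure; it works fine on private networks. A minimal sketch (interface name and addresses are assumptions):

        # /etc/keepalived/keepalived.conf on the primary node
        vrrp_instance redis_master {
            state MASTER          # use BACKUP with a lower priority on the other node
            interface eth1        # the private-network interface
            virtual_router_id 51
            priority 101
            advert_int 1
            virtual_ipaddress {
                10.0.0.10/24      # clients always connect to this IP
            }
        }

    The other node runs the same block with state BACKUP and priority 100; when the master drops, the VIP moves over with a gratuitous ARP, which is essentially what Wackamole does via Spread.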


  • Wireless does not work on Ubuntu 9.04

    - by Yongwei Xing
    Hi all. I installed Ubuntu 9.04 on my old Lenovo Y520 laptop and the wireless does not work; I cannot enable it. My wireless card is an Intel PRO/Wireless 2100. The wired card is working well. Has anyone run into this before? The ifconfig output is:

        eth0      Link encap:Ethernet  HWaddr 00:0a:e4:5f:6c:30
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:973 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1025 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:574701 (574.7 KB)  TX bytes:169249 (169.2 KB)
                  Interrupt:10

        eth1      Link encap:Ethernet  HWaddr 00:0c:f1:58:79:b5
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:10 Base address:0x8000 Memory:d0202000-d0202fff

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:480 (480.0 B)  TX bytes:480 (480.0 B)

    The output of iwconfig is:

        eth1      unassociated  ESSID:off/any  Nickname:"ipw2100"
                  Mode:Managed  Channel=0  Access Point: Not-Associated
                  Bit Rate:0 kb/s  Tx-Power:off
                  Retry short limit:7  RTS thr:off  Fragment thr:off
                  Power Management:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    I have another question. On 9.04 there was a network connection icon on the panel at the top, but after I upgraded to 9.10 that icon disappeared. How can I get it back? Best regards,
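
    The "Tx-Power:off" line in that iwconfig output usually means the radio itself is disabled - either a hardware kill switch, or missing firmware, which the ipw2100 needs before it will associate at all. A sketch of the checks worth running (the rfkill utility may need to be installed first):

        # kernel messages from the driver - look for firmware load errors
        dmesg | grep -i ipw2100
        # is a hardware/software kill switch blocking the radio?
        rfkill list
        sudo rfkill unblock all
        # reload the driver after flipping the laptop's wireless switch
        sudo modprobe -r ipw2100 && sudo modprobe ipw2100

    If dmesg complains about firmware, installing the ipw2100 firmware package and reloading the module is the usual fix.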


  • OpenBSD init script for SSH VPN tunnel

    - by manthis
    I have a server hosting SSH tunnels and OpenBSD 4.5 clients connecting to it. Things work just fine, but I need to automate the connection from the client to the server, so that if the client is accidentally rebooted, the connection is re-established unattended. It should be as straightforward as including the ssh connection in an init script, but I have miserably failed to do so via /etc/rc.local, which is the file where I usually do this sort of thing. Right now I am using autossh to restart the connection if necessary, and the script that I put in /etc/rc.local follows:

        #!/bin/sh
        #
        # Example script to start up tunnel with autossh.
        #
        # This script will tunnel 2200 from the remote host
        # to 22 on the local host. On remote host do:
        #   ssh -p 2200 localhost
        #
        # $Id: autossh.host,v 1.6 2004/01/24 05:53:09 harding Exp $

        ID=root
        HOST=example.com

        #AUTOSSH_POLL=600
        #AUTOSSH_PORT=20000
        #AUTOSSH_GATETIME=30
        #AUTOSSH_LOGFILE=$HOST.log
        #AUTOSSH_DEBUG=yes
        #AUTOSSH_PATH=/usr/local/bin/ssh
        export AUTOSSH_POLL AUTOSSH_LOGFILE AUTOSSH_DEBUG AUTOSSH_PATH AUTOSSH_GATETIME AUTOSSH_PORT

        autossh -2 -f -M 20000 ${ID}@${HOST}

    The script detaches just fine when run manually, so I call it from /etc/rc.local as:

        echo -n 'starting local daemons:'
        if [ -x /usr/local/sbin/autossh.sh ]; then
            echo -n ' ssh tunnel'
            /usr/local/sbin/autossh.sh
        fi
        echo '.'

    I have also tried calling it from /etc/hostname.tun0, in case /etc/rc.local is not being called at the right time, when network connections are ready:

        inet 10.254.254.2 255.255.255.252 10.254.254.1
        !/usr/local/sbin/autossh.sh

    Your input is highly appreciated.
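
    If the timing theory is right, one low-tech way to rule it out (a sketch; hostname and path taken from the script above) is to have rc.local background a loop that waits for the tunnel host to become reachable before starting autossh:

        (
            # wait until the tunnel endpoint answers, then start the tunnel;
            # backgrounded so rc.local itself never blocks the boot
            while ! ping -c 1 example.com >/dev/null 2>&1; do
                sleep 5
            done
            /usr/local/sbin/autossh.sh
        ) &

    It is also worth checking /var/log/messages after a reboot for autossh or ssh errors - host key prompts and an unreadable key file for root are the other usual culprits when a tunnel starts by hand but not at boot.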


  • Requests per second slower when using nginx for load balancing

    - by Ed Eliot
    I've set up nginx as a load balancer that reverse proxies requests to 2 Apache servers. I've benchmarked the setup with ab and am getting approx 35 requests per second, with requests distributed between the 2 backend servers (not using ip_hash). What is confusing me is that if I query either of the backend servers directly via ab, I get around 50 requests per second. I've experimented with a number of different values in ab, the most common being 1000 requests with 100 concurrent connections. Any idea why traffic distributed across 2 servers would result in fewer requests per second than hitting either directly?

    Additional info: I've experimented with worker_processes values of between 1 and 8, worker_connections between 1024 and 8092, and have also tried keepalive 0 and 65. My main conf currently looks like this:

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;
        worker_rlimit_nofile 8192;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    I've got one virtual host (in sites-available) that proxies everything under / to the 2 backends across a local network.
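
    One thing worth trying (an assumption without seeing the vhost, and upstream keepalive needs nginx 1.1.4 or later): without it, every proxied request pays a fresh TCP handshake to the backend, which at 100 concurrent connections can plausibly account for the gap. A sketch of the vhost side, with assumed backend addresses:

        upstream backends {
            server 192.168.0.11:80;
            server 192.168.0.12:80;
            keepalive 64;                        # pool of idle upstream connections
        }

        server {
            listen 80;
            location / {
                proxy_pass http://backends;
                proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
                proxy_set_header Connection "";  # don't pass "Connection: close" upstream
            }
        }

    It's also worth re-running ab with -k against both paths so client-side keepalive is equal in the two tests.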


  • IIS web server creates a new file: ownership?

    - by Beaud.
    IIS is configured for Integrated Windows Authentication. web.config is configured as follows:

        <authentication mode="Windows" />
        <identity impersonate="true" />

    We are load balancing between \\webserver1 and \\webserver2 on Windows Server 2003. \\webserverX creates an XML file on \\share1 and access is denied. We got past the access denial by allowing Everyone to access the share... We would like the impersonated user to be the owner of the created file. Instead, \\webserver1's computer account is the owner. How can we make sure that the impersonated user has ownership of the file at creation time?

    PROGRESSION: I decided to create the file locally in \\webserver1's root directory. The file's owner is NETWORK SERVICE even with impersonate="true". I'm unable to change ownership of the file in C# code. Why, when creating a file, won't IIS use the impersonated user's write permissions? If it actually does, what am I doing wrong?


  • PTR and A record must match?

    - by somecallmemike
    RFC 1912 Section 2.1 states the following:

        Make sure your PTR and A records match. For every IP address, there
        should be a matching PTR record in the in-addr.arpa domain. If a host
        is multi-homed, (more than one IP address) make sure that all IP
        addresses have a corresponding PTR record (not just the first one).
        Failure to have matching PTR and A records can cause loss of Internet
        services similar to not being registered in the DNS at all. Also, PTR
        records must point back to a valid A record, not a alias defined by a
        CNAME. It is highly recommended that you use some software which
        automates this checking, or generate your DNS data from a database
        which automatically creates consistent data.

    This does not make any sense to me. Should an ISP keep matching A records for every PTR record? It seems to me that it's only important if the IP address that the PTR record describes is hosting a service that is sensitive to mismatched DNS (such as email hosting). In that case the forward zone would be configured under a domain name (examples follow the format 'zone - record'):

        domain.tld          ->  mail IN A 1.2.3.4

    And the PTR record would be configured to match:

        3.2.1.in-addr.arpa  ->  4 IN PTR mail.domain.tld.

    Would there be any reason for the ISP to host a forward lookup for an IP address on their network like this?

        ispdomain.tld       ->  broadband-ip-1 IN A 1.2.3.4
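
    A quick way to check the round trip for any address (a sketch using dig; substitute the real names and address):

        # forward: name -> address
        dig +short A mail.domain.tld
        # reverse: address -> name
        dig +short -x 1.2.3.4

    For the RFC's purposes the two answers should be mutually consistent: the PTR target should be a name whose A record is the original address.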


  • Why is the wrong name server information at crsnic.net & gtld-servers.net?

    - by danorton
    Did I screw this up? I don't even know how this might have happened, so I'd like to learn. I'm trying out HostGator's reseller service and I bought a domain name through it, but I didn't want the default name servers, so I changed them during registration. After registration, the domain name record is correct everywhere except at whois-servers.net and whois.crsnic.net, and it looks like the DNS network is using that same information.

        $ whois -h whois.enom.com. example.com
        ...
        Name Servers:
        dns1.name-services.com
        dns2.name-services.com
        dns3.name-services.com
        dns4.name-services.com
        dns5.name-services.com
        ...

        $ whois -h whois.crsnic.net. example.com
        Domain Name: EXAMPLE.COM
        Registrar: ENOM, INC.
        Whois Server: whois.enom.com
        Referral URL: http://www.enom.com
        Name Server: NS1.HOSTGATOR.COM
        Name Server: NS2.HOSTGATOR.COM
        Status: clientTransferProhibited
        Updated Date: 01-jun-2010
        Creation Date: 31-may-2010
        Expiration Date: 31-may-2011
        >>> Last update of whois database: Tue, 01 Jun 2010 19:20:47 UTC <<<
        ...

        $ dig +norecurse @b.gtld-servers.net. example.com. NS
        ...
        ;; AUTHORITY SECTION:
        example.com.        172763  IN  NS  ns2.hostgator.com.
        example.com.        172763  IN  NS  ns1.hostgator.com.
        ...

    My next step is to let HostGator have a look, but first I want to better understand how this happened. Thanks.


  • More than 10k connections on a Linux VPS

    - by Sash_007
    My question: what is causing this, and how do I check? We use a URL masking script on the website... is it causing this? Please help. The message from our host follows:

    We noticed that you are abusing our network: you have made more than 10k connections on our node. Due to this, our node became unstable and all of our customers faced downtime because of your VPS. Please find the log details below for your reference.

        593 src=199.231.227.56 dst=58.2.236.196
        465 src=199.231.227.56 dst=192.223.243.6
        396 src=199.231.227.56 dst=58.2.238.191
        217 src=199.231.227.56 dst=58.2.236.197
        161 src=199.231.227.56 dst=20.139.83.50
        145 src=199.231.227.56 dst=192.223.163.6
        136 src=199.231.227.56 dst=125.21.230.68
        134 src=199.231.227.56 dst=125.21.230.132
        131 src=199.231.227.56 dst=20.139.67.50
        117 src=199.231.227.56 dst=110.234.29.210
        112 src=199.231.227.56 dst=65.52.0.51
        104 src=199.231.227.56 dst=202.46.23.55
        100 src=199.231.227.56 dst=202.3.120.4
         94 src=199.231.227.56 dst=117.198.39.22
         69 src=203.197.253.62 dst=199.231.227.56
         62 src=14.194.248.225 dst=199.231.227.56
         53 src=199.231.227.56 dst=192.223.136.5
         52 src=49.248.11.195 dst=199.231.227.56
         51 src=199.231.227.56 dst=117.198.38.15
         50 src=199.231.227.56 dst=192.71.175.2
         47 src=199.231.227.56 dst=61.16.189.76
         45 src=199.231.227.56 dst=122.177.222.17
         43 src=199.231.227.56 dst=115.242.89.40
         42 src=199.231.227.56 dst=103.22.237.215
         41 src=125.16.9.2 dst=199.231.227.56
         39 src=199.231.227.56 dst=117.198.35.90
         38 src=199.231.227.56 dst=203.91.201.54
         38 src=199.231.227.56 dst=14.139.241.89
         38 src=199.231.227.56 dst=111.93.85.82
         37 src=199.231.227.56 dst=65.52.0.56

    Note: the first column indicates the total number of connections to a particular IP. In total, you have made more than 10k connections.
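
    To see whether the masking script is really the source, a sketch of how to reproduce the host's per-IP counts from inside the VPS (standard net-tools commands):

        # established connections per remote address, busiest first
        netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head -20
        # which local processes own the connections
        netstat -ntp 2>/dev/null | grep ESTABLISHED | awk '{print $7}' | sort | uniq -c | sort -rn

    If the top process is the web server running the masking script, the counts will point straight at it.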


  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine.

    However, as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again.

    What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it would still be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling.

    Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file - far from ideal for an environment with a high amount of churn - and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:

    - uses a database (MySQL, SQLite, etc.) for configuration and data storage
    - exposes an API for adding/removing hosts and services

    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
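
    That last idea is workable with very little machinery. A sketch (paths are Debian/Ubuntu-style; the sa1 location varies by distro, and the upload tool is an assumption - substitute whatever your stack already uses):

        # /etc/cron.d/sysstat - sample all counters every minute
        * * * * * root /usr/lib/sysstat/sa1 1 1

        # at the end of the job wrapper, just before termination:
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        s3cmd put /var/log/sysstat/sa* s3://my-perf-bucket/$INSTANCE_ID/

    Keyed by instance ID in the bucket, that gives full-resolution data indefinitely, looked up the same way you already track jobs.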


  • Joining new DC to AD - DNS name does not exist

    - by Andrew Connell
    I had a DC fail on me recently, and I'm trying to add a new one to my domain, although I sense I might have other issues in my domain. I'm a dev at heart and know just enough about AD to be dangerous, so I'm looking for some assistance.

    My working DC is RIVERCITY-DC12. I'm trying to promote RIVERCITY-DC14 as a DC in the RIVERCITY domain, but when I run DCPROMO, at the NETWORK CREDENTIALS step where I point to the name of the domain (rivercity.local), I get "An AD DC for the domain rivercity.local cannot be contacted", and in the details I see "The error was DNS name does not exist".

    Looking at RIVERCITY-DC12, I can see DNS is working: I've been able to query it from other machines in my domain, and no errors are reported in the DNS category within the Event Viewer. When I checked the FSMO roles, RIVERCITY-DC12 holds all listed roles. Not sure what I should do next or how to troubleshoot/investigate after searching around for a solution... ideas?

    Environment:

    - Domain: rivercity (rivercity.local)
    - Forest functional level: Windows 2000 (I'm more than happy to raise this)
    - Windows Server 2008
    - All servers are Windows Server 2008 R2 SP1 (fully patched)
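
    That dcpromo error almost always means the domain's DNS SRV records can't be found from the new box. A quick sketch of the checks to run on RIVERCITY-DC14 (built-in tools, domain name as above):

        nslookup -type=SRV _ldap._tcp.dc._msdcs.rivercity.local
        nltest /dsgetdc:rivercity.local

    If the SRV lookup fails, set RIVERCITY-DC14's network adapter to use RIVERCITY-DC12's IP as its only DNS server (not a router or ISP resolver) and run dcpromo again.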


  • Is TortoiseSVN really this buggy?

    - by John Isaacks
    I have been using TortoiseSVN for a couple weeks now, and I get errors very often. Almost everything I do creates an error, whether the repository is on the internet, local on my machine, or on a machine on the network. So I started to keep track. Some examples are below:

        12/31/2010  Can't move 'C:\Users\jisaacks\Desktop\my branch test\.svn\tmp\entries' to
                    'C:\Users\jisaacks\Desktop\my branch test\.svn\entries': The file or directory
                    is corrupted and unreadable.
        01/04/2011  Commit failed (details follow): Server sent unexpected return value
                    (405 Method Not Allowed) in response to MKCOL request for
                    '/svn/kranichs-svn/!svn/wrk/b316f15e-0869-4644-9c53-87aa0103506b/branches'
        01/06/2011  Can't move 'C:\Users\jisaacks\Desktop\DVD Catalog\vendors\.svn\tmp\entries' to
                    'C:\Users\jisaacks\Desktop\DVD Catalog\vendors\.svn\entries': The file or
                    directory is corrupted and unreadable.
        01/06/2011  Can't move 'C:\Users\jisaacks\Desktop\DVD Catalog\cake\tests\test_app\views\layouts\.svn\tmp\entries'
                    to 'C:\Users\jisaacks\Desktop\DVD Catalog\cake\tests\test_app\views\layouts\.svn\entries':
                    The file or directory is corrupted and unreadable.
        01/06/2011  Commit failed (details follow): attempt to write a readonly database
                    attempt to write a readonly database

    That last one, about the read-only database, happens every time I commit. Say I am working on the head revision (7) in a working copy. I make a change and commit it. It gives me this error. But if I look at the log, it tells me that there is now a revision 8 (the commit I just made), yet I am still on revision 7. So I need to run update to be on the current revision that I just committed. I hope I explained that clearly.

    Anyway, with all these errors I wonder: is TSVN just this unstable? Does everyone have these issues, or is it just me? If it's just me, what could I be doing wrong?
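
    For what it's worth on the commit/update part: that behavior (minus the error) is normal Subversion, not a TortoiseSVN bug. A commit only bumps the items you committed, so the rest of the working copy stays at its old revision until you update. A sketch with the command-line client:

        svn commit -m "change"   # creates r8 on the server
        svnversion .             # may print "7:8" - a mixed-revision working copy
        svn update               # brings the whole working copy to r8

    The "corrupted and unreadable" and "readonly database" errors are a different matter; they usually point at something on the machine locking files under .svn (antivirus or indexing software is the classic cause) rather than at the client itself - worth excluding the working copies from real-time scanning before blaming TSVN.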


  • Netgear router keeps disconnecting iPhone

    - by DisgruntledGoat
    My old router (Voyager 2091) packed up, so I just got a new one - a Netgear N150, model DGN1000. My laptop connects OK wirelessly, but my iPhone 4S is constantly getting "disconnected": it has a perfect wifi signal and is seemingly connected to the router, but no pages load (it says "server cannot be found"). If I disconnect manually ("forget this network") and then reconnect, it works fine again for a random amount of time (usually 10-30 minutes), then I get the same problem again.

    I've done some searching and this appears to be a known problem - there are dozens of forum posts out there lamenting similar connection problems. The only advice I have seen is to set a specific channel under Wireless Settings in the router control panel, although every forum post recommends a different channel! 1, 3, 5, 6, 11... I have tried them all for hours at a time and get the exact same problem. The firmware is up to date. Is there an actual solution for this, or do I need to get a different router just to be able to use my iPhone?


  • How to access a port via OpenVPN only

    - by Andy M
    I've set up an OpenVPN server alongside an Apache website that can only be accessed on port 8100 on the same machine. My /etc/openvpn/server.conf file looks like this:

        port 1194
        proto tcp
        dev tun

        ca   ./easy-rsa2/keys/ca.crt
        cert ./easy-rsa2/keys/server.crt
        key  ./easy-rsa2/keys/server.key  # This file should be kept secret
        dh   ./easy-rsa2/keys/dh1024.pem  # Diffie-Hellman parameter

        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt

        # make sure clients can still connect to the internet
        push "redirect-gateway def1 bypass-dhcp"

        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 3

    Now I'm trying to let only clients connected to the VPN network access the website on Apache via port 8100, so I defined a few iptables rules:

        #!/bin/sh
        # My system IP/set ip address of server
        SERVER_IP="192.168.0.2"

        # Flushing all rules
        iptables -F
        iptables -X

        # Setting default filter policy
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP

        # Allow incoming access to port 8100 from OpenVPN 10.8.0.1
        iptables -A INPUT  -i tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

        # outgoing http
        iptables -A OUTPUT -o tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT  -i tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

    Now when I connect to the server from my client computer and try to access the website on 192.168.0.2:8100, my browser can't open it. Will I have to forward traffic from tun0 to eth0? Or is there anything else I'm missing?
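
    One mismatch stands out in the rules as posted: the comments talk about port 8100, but every rule matches port 80, so with a default DROP policy the 8100 traffic is silently dropped. A sketch of the same pair of rules for the actual port:

        iptables -A INPUT  -i tun0 -p tcp --dport 8100 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o tun0 -p tcp --sport 8100 -m state --state ESTABLISHED -j ACCEPT

    No tun0-to-eth0 forwarding should be needed as long as Apache runs on the server itself; having clients browse to the VPN-side address (10.8.0.1:8100, given the server directive above) keeps the path unambiguous.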


  • DNS to \\Server\ wrong - \\Server.company.local\ works fine

    - by JimmyClif
    I had a little network glitch, and since then one of my servers resolves incorrectly on some workstations when typing in \\server\.

    Example: on workstationA I go to Explorer and type \\server\ and it brings me to our copier at 192.168.2.101. \\server.company.local\ gets me to the right place at 192.168.2.252. ping server pings 192.168.2.252 - the same correct result as ping server.company.local. nslookup also shows the correct result for both, and reverse lookup by IP is correct as well.

    I flush the DNS on the workstation and the error still occurs; after a reboot, same result. At that point I give up and start remapping the shares to \\server.company.local\share just to get the user back working... The DNS server has correct entries for that server, and I can access the server via \\server\ on the DNS server itself, so all looks fine there.

    Eventually the workstation figures it out by itself and \\server\ works again, but my life wouldn't be as stressful if I had a clue what happened or how to fix it myself. Thanks for your time looking and answering.
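
    Since the flat name (\\server) and the FQDN (\\server.company.local) resolve differently while nslookup is clean, the stale answer is likely coming from NetBIOS/WINS rather than DNS. A sketch of the client-side checks (standard Windows tools):

        ipconfig /displaydns     (cached DNS answers)
        nbtstat -c               (cached NetBIOS names - look for SERVER here)
        nbtstat -R               (purge and reload the NetBIOS name cache)
        nbtstat -RR              (release and re-register this host's names)

    If the copier's IP shows up in the NetBIOS cache against SERVER, check WINS (if you run it) or a lingering static entry in lmhosts.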


  • VirtualBox port forwarding with iptables

    - by jverdeyen
    I'm using a virtual machine (VirtualBox) as a mailserver. The host is Ubuntu 12.04 and the guest is Ubuntu 10.04. At first I forwarded port 25 to 2550 on the host and added a port-forward rule in VirtualBox from 2550 to 25 on the guest. This works for all ports needed for the mailserver. The guest has a host-only connection and a NAT one (with the port forwarding). My mailserver was receiving and sending mail properly, but all connections appear to come from the VirtualBox internal IP, so every host connection is allowed, and that's not what I want.

    So... I'm trying to skip the VirtualBox forwarding part and just forward port 25 to the guest's host-only IP. I used these rules:

        iptables -F
        iptables -P INPUT ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -t nat -P PREROUTING ACCEPT
        iptables -t nat -P POSTROUTING ACCEPT

        iptables -A INPUT --protocol tcp --dport 25 -j ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -s 192.168.99.0/24 -i vboxnet0 -j ACCEPT

        echo 1 > /proc/sys/net/ipv4/ip_forward

        iptables -t nat -A PREROUTING -p tcp -i eth0 -d xxx.host.ip.xxx --dport 25 -j DNAT --to 192.168.99.105:25
        iptables -A FORWARD -s 192.168.99.0/24 -i vboxnet0 -p tcp --dport 25 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 192.168.99.0 -o eth0 -j MASQUERADE
        iptables -L -n

    But after these changes I still can't connect with a simple telnet (which was possible with my first solution). The guest machine doesn't have any firewall. I only have one network interface on the host (eth0) and a host-only interface (vboxnet0). Any suggestions? Or should I go back to my old solution (which I don't really like)?

    Edit: bridge mode isn't an option; I have only one IP available for the moment. Thanks!
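
    Two things stand out in the rules as posted (both assumptions until tested): the MASQUERADE line is missing its /24 mask, and, more importantly, it only rewrites traffic leaving via eth0, while the DNAT'ed packets leave via vboxnet0. The guest therefore sees the original public source address and replies through its NAT adapter's default route, which breaks the handshake. A sketch of the usual fix is to source-NAT the forwarded traffic toward the guest so replies come back through the host:

        # make forwarded mail traffic appear to come from the host-only gateway
        iptables -t nat -A POSTROUTING -o vboxnet0 -p tcp -d 192.168.99.105 --dport 25 \
                 -j SNAT --to-source 192.168.99.1

    (192.168.99.1 assumes the default vboxnet0 host address.) With that in place, the guest needs no default route on the host-only network at all, though Postfix will then log the host-only IP as the client - a trade-off against the original problem.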


  • httpd.conf, EasyApache, WHM, CentOS

    - by jessie
    First let me explain how I got into this situation. I run a streaming video site. Videos are about 100-250 MB in size, and at any given time there are 500 people on the site, so I guess that would make them static content. Recently my site started getting really slow, and the only way to fix it temporarily was to restart Apache. There was no change in traffic that could have caused this, and my site is not being attacked.

    My hosting company recommended implementing mpm_mod and suPHP. They did that by using EasyApache in WHM. Afterwards everything was working fine, but a little slow. I researched around, and to my understanding mpm will do that but be more stable. I was told that installing FastCGI would speed things up just enough. Well, that made everything worse: the site is slow and times out.

    I used WHM to remove FastCGI, but it's still the same; it seems like nothing I do now changes anything. I even rolled back the httpd.conf file, but that didn't work either. I'm not sure how to fix this, and my hosting network guy won't be able to touch the problem until Tuesday. I have root access.
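
    Given 500 concurrent viewers of large files, one plausible culprit (an assumption without seeing the config) is prefork + suPHP running out of worker slots, since every slow video download pins a whole Apache process. A sketch of the knobs to check in httpd.conf:

        <IfModule prefork.c>
            ServerLimit          512
            MaxClients           512   # must cover your peak concurrent downloads
            MaxRequestsPerChild 4000
        </IfModule>
        KeepAlive Off                  # don't let idle viewers hold processes

    Longer term, large static videos are usually better served off Apache entirely (e.g. nginx or lighttpd in front), which sidesteps the process-per-connection cost altogether.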


  • WiFi & GbE Slow while Both Active.

    - by Mark Tomlin
    I'm having a problem with my WiFi network connection when I use my wired GbE connection concurrently on my laptop. I'm using the WiFi for internet access and general web surfing, and the GbE connection to connect to my PlayStation so I can stream media. The WiFi connection goes through a Linksys 610N connected to my cable modem, whereas the GbE connection is a direct Cat-5 cable from my Ethernet port to the PS3's Ethernet port (no router in between).

    As soon as I connect the cable from the PS3 to my laptop, the WiFi connection slows to a halt, then allows connections to the web as normal but at much slower speeds, while things like BitTorrent stop completely. It seems to me that Windows can't handle both connections at once: it has both active, but it can only accept and send packets on one device at a time. I can reach websites over WiFi, but once I use the GbE connection to share media between my laptop and my PS3, the WiFi connection dies out and I no longer have access to the internet.

    I set up the connection on the PS3 and the laptop following the instructions posted here: http://forums.finalgear.com/problems/s14e01-ps3-size-problem-40642/#post1188132

    The following is the result of my ipconfig:

        Windows IP Configuration

            Host Name . . . . . . . . . . . . : dygear
            Primary Dns Suffix  . . . . . . . :
            Node Type . . . . . . . . . . . . : Hybrid
            IP Routing Enabled. . . . . . . . : No
            WINS Proxy Enabled. . . . . . . . : No

        Ethernet adapter WiFi:

            Connection-specific DNS Suffix  . :
            Description . . . . . . . . . . . : Intel(R) PRO/Wireless 3945ABG Network Connection
            Physical Address. . . . . . . . . : 00-19-**-**-**-**
            Dhcp Enabled. . . . . . . . . . . : Yes
            Autoconfiguration Enabled . . . . : Yes
            IP Address. . . . . . . . . . . . : 192.168.1.111
            Subnet Mask . . . . . . . . . . . : 255.255.255.0
            Default Gateway . . . . . . . . . : 192.168.1.1
            DHCP Server . . . . . . . . . . . : 192.168.1.1
            DNS Servers . . . . . . . . . . . : 192.168.1.1
                                                167.206.254.2
                                                167.206.254.1
            Lease Obtained. . . . . . . . . . : Wednesday, May 19, 2010 08:55:30
            Lease Expires . . . . . . . . . . : Thursday, May 20, 2010 08:55:30

        Ethernet adapter LAN:

            Connection-specific DNS Suffix  . :
            Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet
            Physical Address. . . . . . . . . : 00-16-**-**-**-**
            Dhcp Enabled. . . . . . . . . . . : No
            IP Address. . . . . . . . . . . . : 192.168.1.50
            Subnet Mask . . . . . . . . . . . : 255.255.255.0
            Default Gateway . . . . . . . . . :

    Any ideas?
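
    From that ipconfig output, both adapters sit in 192.168.1.0/24 (the WiFi at .111, the LAN at .50), so Windows has two routes to the same subnet and keeps choosing one interface for traffic the other should carry. The usual fix is to move the direct PS3 link onto its own private subnet; a sketch (XP-era netsh syntax, addresses arbitrary, "LAN" being the wired adapter's name):

        :: on the laptop, in an elevated prompt
        netsh interface ip set address name="LAN" static 10.0.0.1 255.255.255.0

    Then give the PS3 10.0.0.2 / 255.255.255.0 with no gateway, and stream to that address. With the subnets separated, the WiFi route to 192.168.1.0/24 is unambiguous again.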


  • There are currently no logon servers available

    - by Ian Robinson
    I am running a Windows 7 laptop that is joined to my company's domain. When I installed Windows 7, I created an account for myself, joined the domain, and it had been working quite well, even though I'm physically remote most of the time and not actually on the network.

    However, today I created a new local user account (non-admin) for my little brother. While he was using it, he decided he wanted to install a program; because his account is not an admin, he was prompted to enter administrator credentials to allow the program to make changes to his computer. I entered my credentials, and this is the first time I ran into the error message:

        There are currently no logon servers available to service the logon request.

    I tried logging off and logging back in, rebooting, etc., and no matter what, every time I try to authenticate as my "normal" domain account I get that message. I can no longer access my computer as an administrator, and I no longer know how to log in to my machine using any account aside from my little brother's non-admin one. I don't have any other local accounts created, and the default local admin account was never enabled.

    I'd appreciate any ideas on how I can recover access to my account. Let me know if I can provide any more information. FYI, this is a similar question, but I'm not sure any of the answers help in my case: http://serverfault.com/questions/71632/there-are-currently-no-logon-servers-available-to-service-the-logon-request
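
    Background that may help frame this: a domain account on an off-network laptop normally logs on with cached credentials, and the error above appears when no cached entry can be used and no DC is reachable. While connected to the company network (VPN or on-site), a couple of standard checks with built-in tools (MYDOMAIN is a placeholder for the real domain name):

        :: can a domain controller be located at all?
        nltest /dsgetdc:MYDOMAIN
        :: is this machine's secure channel to the domain intact?
        nltest /sc_verify:MYDOMAIN

    If the secure channel is broken, rejoining the domain while on the network (or resetting the computer account in AD) normally restores domain logons; cached logons also only work for accounts that previously signed in to that machine while a DC was reachable.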


  • Port 5357 TCP on Windows 7 professional 64 bit?

    - by Registered
    Is there a reason this port is open? A quick Nmap scan and a Nessus scan both reveal it's open - why? Are there any ramifications if I close this port via the firewall rule set? Does anyone here know more about this port beyond what Google turns up? WTF?

    1) http://www.symantec.com/connect/blogs/who-left-tunnel-door-open-windows-firewall-vista-0 - I know the talk is about Vista, but I am pretty sure it's the same port on 7, also.

    2) Port 5357 common errors: the port is vulnerable to info leak problems, allowing it to be accessed remotely by malicious authors (Web Services for Devices).

    I am blocking this; if I have issues I will just re-enable it. Damn Windows. The rule in question: "Inbound rule for Network Discovery to allow WSDAPI Events via Function Discovery. [TCP 5357]" You just got blocked - until I break something, we'll see. After closing port 5357, a re-run of Nmap shows 0 open ports and Win7 still works for now; one more scan with Nessus just to make sure all is well.
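
    For reference, a sketch of blocking it with an explicit rule instead of disabling the built-in one (netsh on Windows 7, elevated prompt):

        netsh advfirewall firewall add rule name="Block WSD 5357" dir=in action=block protocol=TCP localport=5357

    Port 5357 is WSDAPI (Web Services on Devices), used by Network Discovery; blocking it mainly costs automatic discovery of printers and other WSD devices on the LAN.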


  • Netgear FVS336G: appropriate solution for today's small businesses?

    - by bwerks
    Hey all, I've been looking into routers to facilitate a VPN solution for a small business. While the Netgear FVS336G looks good on paper, it appears to have some fairly crippling setbacks that drag down what appears to be some great hardware. First off, the unit has been around for a couple of years now, perhaps from before 64-bit operating systems were as common as they are now, and complaints are everywhere claiming that SSL or IPsec (or both) VPN connections will not work with 64-bit operating systems. However, most of these claims mention only Vista, which makes me think these problems could have been solved since then. Unfortunately, Netgear's support forums seem to be incredibly private, policed by some troll named jmizuguchi who just closes down public posts in order to marshal them into private ones. Danger, Will Robinson. Apparently the firmware upgrade process is a nightmare too, but that's beside the point.

    My question is this: has anyone configured a Netgear FVS336G to operate in a Server 2008 (or R2) / Windows 7 64-bit network? If so, is it possible to use the Microsoft VPN client, or are third-party clients still required? If this thing has just failed the test of time, is there a feature-comparable unit that I've missed, at anywhere near the same price range? Thanks!


  • How to create domain or router-level workgroup (dd-wrt micro)

    - by Anthony
    In Windows, is Active Directory required for using "Domain" instead of "Workgroup"? Do I need to register a domain with a DNS provider like GoDaddy? What I really want to do is set up my home LAN so that everyone connecting to the main router - which is everyone, about 30 people - can see each other. I've tried having everyone use the same workgroup name: still hit or miss. I tried setting the domain name and host name on the router itself: still nothing. I've tried joining the domain name I set instead of the workgroup, and I get an AD error.

    Ideally, everyone who is connected to the main router should simply see each other and any shared folders. I've had this problem when I was not the network admin on other large LANs, and I've never been able to figure out why sometimes people disappear or never see each other. I'd really prefer using the native sharing functionality in the OS to setting up an internal FTP or Samba server, etc. Any sure-fire ways to fix this? (Maybe an open source clone of AD?) Thanks!


  • SSH session becomes unresponsive when logged into Ubuntu Server virtual machine using VirtualBox

    - by nickbart
    Hi everyone, I'm really at my wits' end here, so I'm hoping someone can help me. I have a virtual machine running Ubuntu Server 9.10. It's just a small development environment, so I can keep my code separate from the test and production environments. I am running it through VirtualBox 3.1.6 on a laptop running Ubuntu Desktop 9.10. It is set up with a bridged network connection, bridged to my laptop's wireless adapter - we have no wired connections in this office.

    I boot up the VM and everything is fine. I can SSH into it using gnome-terminal, and for a while everything is kosher. Then, seemingly at random, the SSH terminal session will hang - no error message, nothing; it just becomes unresponsive. If I go to the VirtualBox console, I find the VM itself is perfectly fine: it can ping, and I can SSH out from it. If I restart the networking on the VM, the SSH session in my gnome-terminal will usually become responsive again.

    Here's an interesting point: the SSH session will sometimes die right in the middle of me typing something (which points to it not being an idle-session issue), and if I restart the networking on the VM and return to my gnome-terminal SSH session, it comes back to life and what I typed when the session hung magically types itself into the buffer. So my input is getting stored somewhere and just can't make its way to the VM until the networking on the VM is restarted.

    I've tried different versions of VirtualBox and used both vmdk and vdi images, and nothing seems to work. I can't tell if the problem is with my laptop, VirtualBox, or the Ubuntu Server image. Is there any way to debug this issue? Has anyone out there seen anything similar? Your help is much appreciated. Nick
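
    Not a root-cause fix, but context worth knowing: bridging to a wireless adapter is known to be fragile, because many wifi drivers mishandle frames carrying the guest's extra MAC address - which matches traffic silently stalling until the guest's stack re-announces itself. While investigating, SSH keepalives at least make the session detect and ride out the stalls. A sketch for the laptop side (host alias and address are assumptions):

        # ~/.ssh/config
        Host devvm
            HostName 192.168.1.50        # the VM's bridged address
            ServerAliveInterval 15       # probe the link every 15 seconds
            ServerAliveCountMax 4        # give up after ~1 minute of silence

    Switching the VM to NAT with a port-forward for SSH (or adding a host-only adapter for dev traffic) avoids the wireless-bridge problem entirely.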

