Search Results

Search found 18665 results on 747 pages for 'inside red gate'.

  • 553-Message filtered - HELO Name issue?

    - by g18c
    I am having major issues sending from my SBS 2011 machine to Messagelabs:

        server-13.tower-134.messagelabs.com
        553-Message filtered. Refer to the Troubleshooting page at
        553-http://www.symanteccloud.com/troubleshooting for more
        553 information. (#5.7.1) ##

    I have changed the IP and hostnames in the details below. I am not on any
    IP or domain blacklists. I have set up SPF (which includes Mailchimp
    servers):

        v=spf1 mx a ip4:95.74.157.22/32 a:remote.mydomain.com include:servers.mcsv.net ~all

    I am sure I have set up my HELO names correctly under the Exchange
    Management Console. Sending a test email from the SBS server and looking
    at the header shows the following:

        X-Orig-To: [email protected]
        X-Originating-Ip: [95.74.157.22]
        Received: from [95.74.157.22] ([95.74.157.22:52194] helo=remote.mydomain.com)
            by smtp50.gate.ord1a.rsapps.net (envelope-from <[email protected]>)
            (ecelerity 2.2.3.49 r(42060/42061)) with ESMTP
            id 11/90-10010-E529C835; Mon, 02 Jun 2014 11:04:09 -0400
        Received: from MYSBSSVR.mydomain.local ([fe80::3159:95a6:23f:1bef]) by
            MYSBSSVR.mydomain.local ([fe80::3159:95a6:23f:1bef%10]) with mapi
            id 14.01.0438.000; Mon, 2 Jun 2014 19:03:56 +0400

    Is the main HELO name there OK, and do I need to worry about the second
    Received block where MYSBSSVR.mydomain.local is mentioned? I have asked
    the ISP to set the reverse DNS for my IP to remote.mydomain.com, but they
    have instead put remote.MYDOMAIN.com - would this cause HELO lookups to
    classify this as not matching? Anything else I can do to find out why I am
    being filtered?
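
    On the rDNS question: DNS names are case-insensitive, so remote.MYDOMAIN.com
    and remote.mydomain.com are the same record and will not cause a mismatch.
    A quick sanity check of the HELO/PTR/SPF alignment from any Linux box (the
    IP and names below are the placeholders from the question; substitute the
    real ones):

        dig +short -x 95.74.157.22        # PTR - should print remote.mydomain.com.
        dig +short remote.mydomain.com    # A   - should print 95.74.157.22
        dig +short TXT mydomain.com       # should include the published SPF record

    If all three line up, the filtering is more likely content- or
    reputation-based than a HELO problem.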

  • LogMeIn style remote access to NAS drive

    - by Mere Development
    I've been asked to set up some remote access to a NAS drive. The NAS drive
    will sit on a VLAN inside a network that uses a Cisco 891 IS router as
    gateway. The charity has no SSL-VPN licenses for the Cisco. At present
    there are no open ports or services on the Cisco itself, and ideally we
    would like to keep it that way for a while - hence the request for a
    LogMeIn-style service that's initiated from inside. We need multiple user
    access, about 10 max. Using LogMeIn on a machine connected to the NAS
    would only provide screen sharing, I believe, and no concurrent
    connections (could be wrong?).

    The end users need to be able to read and write files on the NAS from
    Macs and PCs around the globe. Read-only access from mobile devices would
    be a bonus but not absolutely necessary. This is for a charity,
    non-commercial, but they are willing to spend if necessary. Cisco config
    knowledge is at a minimum, so if I can avoid upsetting that delicate
    device I'll be happy :) Anyone have any clever ideas? I can provide more
    information on request. Thanks, Ben
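
    One pattern that fits the "initiated from inside, no inbound ports" rule
    is a persistent reverse SSH tunnel from a small box on the NAS VLAN out to
    a cheap rented relay host. A minimal sketch with autossh (the relay host,
    ports and NAS hostname below are all hypothetical):

        # Run on a machine inside the NAS VLAN; keeps the tunnel up across drops.
        # Remote users then reach the NAS SMB share via relay.example.com:2222.
        autossh -M 0 -f -N -R 2222:nas.local:445 tunnel@relay.example.com

    An outbound-initiated OpenVPN client on the same inside box achieves the
    same effect with proper multi-user authentication, at the cost of running
    your own VPN endpoint on the relay.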

  • dpkg broken while upgrading Debian Etch to Lenny

    - by artvolk
    Good day! While trying to recover a box to Lenny it seems I've broken
    things. The upgrade replaced libc and glib, and after that dpkg seems to
    be broken. I can run apt-get, but it gets a segmentation fault from dpkg:

        # apt-get -f install
        Reading package lists... Done
        Building dependency tree... Done
        0 upgraded, 0 newly installed, 0 to remove and 316 not upgraded.
        9 not fully installed or removed.
        Need to get 0B of archives.
        After unpacking 0B of additional disk space will be used.
        /bin/sh: line 1:  4606 Segmentation fault  /usr/sbin/dpkg-preconfigure --apt
        E: Sub-process /usr/bin/dpkg received a segmentation fault.

    I can log in via SSH, but even ls is not working:

        # ls
        Segmentation fault

    Is there anything I can do remotely via SSH?

        # ldd /bin/ls
            linux-gate.so.1 =>  (0xffffe000)
            librt.so.1 => /lib/tls/i686/cmov/librt.so.1 (0xb7fc8000)
            libacl.so.1 => /lib/libacl.so.1 (0xb7fc2000)
            libselinux.so.1 => /lib/libselinux.so.1 (0xb7fac000)
            libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7e51000)
            libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7e3f000)
            /lib/ld-linux.so.2 (0xb7fd8000)
            libattr.so.1 => /lib/libattr.so.1 (0xb7e3b000)
            libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7e37000)
            libsepol.so.1 => /lib/libsepol.so.1 (0xb7df6000)

    It seems I've temporarily fixed it with:

        # touch /etc/ld.so.nohwcap

    From here:
    http://saintaardvarkthecarpeted.com/blog/archive/2005/08/_etc_ld_so_nohwcap.html
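
    For context: /etc/ld.so.nohwcap tells the dynamic linker to ignore the
    hardware-capability-optimised library copies (the /lib/tls and
    /lib/i686/cmov paths visible in the ldd output), which is why creating it
    stops the segfaults when those copies are left mismatched mid-upgrade.
    Once basic binaries run again, a reasonable next step over the same SSH
    session (standard Debian recovery commands, not specific to this box) is:

        apt-get install --reinstall libc6   # put a consistent libc back
        dpkg --configure -a                 # finish the 9 half-installed packages
        apt-get -f install                  # let apt resolve the remainder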

  • compile ntp without ssl

    - by Zulakis
    I need to deploy ntp to a very space-critical PXE imaging system. (Yes,
    each KB matters.) The footprint needs to be as small as possible, so I
    want to compile ntp without linking OpenSSL. According to the manual this
    should be possible:

        If available, the OpenSSL library from http://www.openssl.org is used
        to support public key cryptography. The library must be built and
        installed prior to building NTP. The procedures for doing that are
        included in the OpenSSL documentation. The library is found during the
        normal NTP configure phase and the interface routines compiled
        automatically. Only the libcrypto.a library file and openssl header
        files are needed. If the library is not available or disabled, this
        step is not required.

    I already tried out:

        ./configure --without-openssl

    However, this didn't help. This is my ldd output:

        ldd ntpd/ntpd
            linux-gate.so.1 =>  (0xb7706000)
            libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb76d5000)
            libcrypto.so.0.9.8 => /usr/lib/i686/cmov/libcrypto.so.0.9.8 (0xb7582000)
            librt.so.1 => /lib/i686/cmov/librt.so.1 (0xb7578000)
            libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb741d000)
            /lib/ld-linux.so.2 (0xb7707000)
            libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7419000)
            libz.so.1 => /usr/lib/libz.so.1 (0xb7404000)
            libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xb73eb000)

    The system I am compiling on is 32-bit Debian Lenny using OpenSSL
    0.9.8g-15+lenny16. What is the correct configure option to compile ntp
    without OpenSSL?
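
    Two things worth checking before hunting further for the flag: configure
    caches its OpenSSL detection, so changing a flag in a dirty tree can
    silently do nothing, and the exact spelling of the disable option varies
    between ntp releases (on some versions it is --without-crypto rather than
    --without-openssl - treat that name as an assumption and confirm it
    against your tree). A sketch:

        make distclean                                 # drop cached configure results
        ./configure --help | grep -i -e ssl -e crypto  # list the real option names
        ./configure --without-crypto                   # assumed flag; use the one listed
        make
        ldd ntpd/ntpd | grep libcrypto                 # no output = OpenSSL not linked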

  • Windows 7 search does not return results from indexed folders

    - by Dilbert
    I am experiencing this issue over and over again and I just cannot seem
    to find the answer. It doesn't make sense, but search simply does not
    return results from folders that certainly have these files inside. It's
    weird that this technology has existed for more than 5 years now (it
    could be added to Windows XP as an add-on), and they still haven't got it
    right.

    My folder contains 10 image files with .png extensions. Two scenarios:

        Scenario 1: I exclude the folder using Indexing Options. Search works.
        Scenario 2: I turn on indexing for this folder. Search does not work.

    Of course, Agent Ransack returns results every time. When I check
    Advanced Options for the Indexing Options inside the control panel, .png
    files are checked in the File Types tab, using the "File Properties
    filter". What's the deal with this?

    [Edit] To clarify, this doesn't happen with all folders, but it does with
    more than one. For the "problematic" folders, even *.* doesn't return a
    single result. I found some advice to clear the archive and read-only
    attributes for all files (doesn't make sense, but hey), but it didn't
    work. Indexing status in the control panel is: Indexing complete. 100,000
    items indexed. The folder is included in the list. The file types list
    contains the .png extension (although it doesn't work with any filter,
    not even *.*).

  • Router not connecting to the internet

    - by Peter
    I had a weird problem this morning with my router - it's an Alice Gate
    VoIP 2 Plus Wi-Fi - not connecting to the internet after more or less 1
    hour online (I'm always connected with the ethernet cable). The LED
    status lights for Power, Wi-Fi, ADSL, Internet and Service should all be
    on in order to be connected and navigate online. The problem was that the
    ADSL and Internet LEDs were off, and I did not know why because it had
    never happened before. I looked at the stats in the settings and the
    numbers for bytes/packets, both sent and received, were there and
    increasing, but I couldn't connect to the internet.

    I called tech support; they checked and told me to keep the router on for
    48 hours because they were checking it. I reset it twice, before and
    after I called tech support, and it still did not work. So after about 2
    hours of waiting I tried connecting using Wi-Fi and the LEDs 'magically'
    turned on, first ADSL and then Internet (the Internet LED always turns on
    last). At this point I'm curious what could have caused this, and I'm
    wondering whether the tech support guys did something. What could have
    been the problem with the ethernet cable not connecting in the first
    place? It always works. What do the tech support guys normally do when
    they tell me to keep the router on so they can 'check it'?

    PS: I'm using Ubuntu 32-bit

  • How is it ensured that magnetic or electric fields from devices like transformers or fans nearby do not harm hard drives?

    - by matnagel
    Fans and transformers inside the server case create magnetic and electric
    fields. Electric fields can be shielded easily, but what about magnetic
    fields? They can only be shielded with high-cost materials like mu-metal
    (http://en.wikipedia.org/wiki/Mu-metal). If a hard drive is installed too
    close to an intense transformer field, how is the magnetically stored
    information on the ferromagnetic surfaces of the disk kept safe? Even if
    drives are shielded, where are the limits? Is there some technical
    investigation or recommendation from manufacturers about this? (I never
    heard about any and never had a problem, but I am interested in some
    facts. Those are much preferred over what you believe or a habit you
    developed. Please try to give some solid information.) I have built and
    repaired many servers, and sometimes I did put the hard drive on top of
    the power supply.

    Edit: This question is not about frequencies that could affect the drive
    via its power or data connectors; those are electronically decoupled, and
    that's another question.

    Edit 2: The Wikipedia page states that the motor inside the drive is
    shielded with mu-metal, so it is obvious that manufacturers have to take
    care of this. This question is about such influences from outside the
    drive.

  • How can I preview the contents of books sold on Amazon in Internet Explorer 10?

    - by Aceman
    I run 64-bit Windows 7 and have both Internet Explorer 10 and a newly
    downloaded copy of Google Chrome. I downloaded the latter when I found
    that I could not preview the contents of books sold on Amazon (using its
    "Look Inside" feature) and wanted to get a handle on how extensive the
    problem was (i.e., whether it occurred in other browsers too). It turns
    out that this isn't a problem in Chrome. However, I have grown accustomed
    to IE over the years and would prefer to resolve this problem instead.
    Here's an example of my attempt to 'Search Inside' a book on sale at
    Amazon:

        http://www.amazon.com/Intellectuals-Race-Thomas-Sowell/dp/0465058728/ref=sr_1_1?s=books&ie=UTF8&qid=1382301764&sr=1-1&keywords=thomas+sowell#reader_0465058728

    I understand that I can't add a screenshot (i.e., a JPG) to this, as is
    possible on some technical support websites, but to describe what this
    looks like in IE: the area where the text would appear is a blank white
    page with horizontal lines where the text would ordinarily be. I have
    just examined this posting of mine via Chrome and the contents of the
    selected page are clearly visible.

  • How do you get AWS VPC EC2 instances to be able to see the AWS APIs?

    - by Peter Mounce
    We're spinning up infrastructure inside an AWS VPC via CloudFormation.
    We're using auto-scaling groups to bring up VPC EC2 instances (so we
    don't bring up instances directly; ASGs manage that). Inside a VPC, EC2
    instances only have a private IP; they cannot see the outside world
    without further work. When these instances spin up, we have some
    bootstrap tasks that require talking to the various AWS APIs. We also
    have some ongoing tasks that require AWS API traffic. How are you
    tackling this apparent chicken-and-egg problem? We've read about:

        - NAT instances - but we don't like this so much because it's another
          layer in our stack.
        - Assigning Elastic IPs to each VPC instance that needs to talk - but
          a) they all do, b) since we're using ASGs we don't know which
          instances to assign EIPs to at provision time, and c) we'd need to
          set up something to monitor those ASGs and assign EIPs when
          instances are terminated and replaced.
        - Spinning up an instance (actually a load-balanced pair, probably
          spanning AZs) to act as an AWS API proxy for all API traffic.

    I guess I'm wondering whether there's some kind of back door we can open
    that allows our VPC EC2 instances access to the AWS API endpoints, but
    nothing else, with cheap setup complexity and without adding another
    network-hop layer to our infrastructure for serving requests.
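
    Of the options listed, the NAT instance is the conventional answer, and
    wiring it up is only two API calls once the instance exists. A hedged
    sketch with the AWS CLI (all IDs below are placeholders):

        # NAT instances forward traffic that isn't addressed to them, so the
        # source/destination check must be off:
        aws ec2 modify-instance-attribute --instance-id i-0abc1234 --no-source-dest-check

        # Point the private subnets' default route at the NAT instance:
        aws ec2 create-route --route-table-id rtb-0abc1234 \
            --destination-cidr-block 0.0.0.0/0 --instance-id i-0abc1234

    This keeps the ASG-managed instances untouched - they pick up API access
    from the routing table rather than from per-instance EIPs.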

  • lighttpd: why does using a port >= 9000 not work properly?

    - by yejinxin
    I had a lighttpd server which works normally. I can access the website
    from outside (non-localhost) via http://vm.aaa.com:8080. Let's just
    assume that it's a simple static website, without PHP or MySQL. Now I
    want to copy this website as a test one (using another port) on the same
    machine, and I do not want to use a virtual host. So I just copied the
    whole file tree of the original server, including lighttpd's bin/, conf/,
    htdocs/ and lib/ folders, and made the required changes, including
    editing lighttpd.conf.

    Now what confuses me: if I change the port to a number less than 9000, it
    works perfectly. But if the port is changed to a number equal to or
    greater than 9000, lighttpd can start, but I cannot access the new
    website from OUTSIDE, while I CAN access it from INSIDE (I mean on the
    same LAN or localhost). The access log from INSIDE is like below:

        vm.aaa.com:9876 10.46.175.117 - - [08/Oct/2012:13:18:47 +0800] "GET / HTTP/1.1" 200 15 "-" "curl/7.12.1 (x86_64-redhat-linux-gnu) libcurl/7.12.1 OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6"

    The command I used to start lighttpd is:

        bin/lighttpd -f conf/lighttpd.conf -m lib/ -D

    My lighttpd.conf is like:

        server.modules = (
            "mod_access",
            "mod_accesslog",
        )
        var.rundir           = "/home/work/lighttpd_9876"
        server.port          = 9876
        server.bind          = "0.0.0.0"
        server.pid-file      = var.rundir + "/log/lighttpd.pid"
        server.document-root = var.rundir + "/htdocs/"
        var.cronolog_path    = "/home/work/lighttpd_9876/cronolog/sbin/cronolog"
        server.errorlog      = ...
        accesslog.filename   = ...
        ...

    So why is this happening? I've tried several different ports, still the
    same. Aren't all ports between 8000 and 65535 the same?
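
    Since the server answers fine from the LAN, lighttpd itself is probably
    innocent; the usual suspect is a firewall between the outside clients and
    the VM that only passes a whitelist of ports. A few checks to narrow it
    down (port 9876 as in the question):

        # On the server: is lighttpd really bound on all interfaces?
        netstat -tlnp | grep 9876

        # On the server: does a local firewall rule single out high ports?
        iptables -L -n -v

        # From an outside host: does the TCP handshake even complete?
        nc -vz vm.aaa.com 9876

    If nc times out from outside while LAN clients connect, the block is
    upstream of the VM (a corporate or perimeter firewall), not in lighttpd.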

  • Storing secure keys on Ubuntu web server

    - by Sencha
    I'm running Ubuntu 12.04 Precise with a DUNG (Django, Unix, Nginx &
    Gunicorn) environment, and my app (as well as various config files) is
    stored in a Python virtual environment inside /srv, which the www-data
    user has access to. The nginx and gunicorn processes all run as www-data.

    My web app requires secure credentials, which I am storing in an
    environment.sh file. This file contains various exports and is run using
    source before the gunicorn processes execute. My concern is the location
    of the environment.sh file and its permissions. Will it be okay storing
    this file inside the /srv folder where www-data has access to it? Or
    should it be stored somewhere else and owned by root, such as
    /var/myapp/environment.sh?

    Also, regarding the www-data user: if any of my web processes (which run
    as www-data) are compromised and someone gains access to them, does that
    mean the user could potentially read any file on the system, even if they
    can't write? Including my secure keys?
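
    On the second question: a compromised www-data process can read exactly
    what the www-data user can read - world-readable files plus anything
    owned by or group-readable to www-data - not every file on the system.
    That still argues for keeping the credentials file as tight as possible.
    A minimal sketch (the path is the hypothetical one from the question):

        # Owned by root, readable by the www-data group only, writable by
        # nobody but root; gunicorn can still source it, other users cannot.
        chown root:www-data /var/myapp/environment.sh
        chmod 640 /var/myapp/environment.sh

    The trade-off: anything www-data can read is readable by an attacker who
    owns a www-data process, so secrets that only a privileged launcher needs
    should be loaded before dropping to www-data instead.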

  • LXC, Port forwarding and iptables

    - by Roberto Aloi
    I have an LXC container (10.0.3.2) running on a host. A service is
    running inside the container on port 7000. From the host (10.0.3.1,
    lxcbr0), I can reach the service:

        $ telnet 10.0.3.2 7000
        Trying 10.0.3.2...
        Connected to 10.0.3.2.
        Escape character is '^]'.

    I'd love to make the service running inside the container accessible to
    the outer world. Therefore, I want to forward port 7002 on the host to
    port 7000 on the container:

        iptables -t nat -A PREROUTING -p tcp --dport 7002 -j DNAT --to 10.0.3.2:7000

    Which results in (iptables -t nat -L):

        DNAT  tcp  --  anywhere  anywhere  tcp dpt:afs3-prserver to:10.0.3.2:7000

    Still, I cannot access the service from the host using the forwarded
    port:

        $ telnet 10.0.3.1 7002
        Trying 10.0.3.1...
        telnet: Unable to connect to remote host: Connection refused

    I feel like I'm missing something stupid here. What things should I
    check? What's a good strategy to debug these situations? For
    completeness, here is how iptables are set on the host:

        iptables -F
        iptables -F -t nat
        iptables -F -t mangle
        iptables -X
        iptables -P INPUT DROP
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -t nat -A POSTROUTING -o lxcbr0 -j MASQUERADE
        iptables -t nat -A PREROUTING -p tcp --dport 7002 -j DNAT --to 10.0.3.2:7000
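
    One likely explanation for exactly this symptom: the nat PREROUTING chain
    only sees packets arriving on a network interface. A connection
    originated on the host itself (the telnet to 10.0.3.1:7002 above)
    traverses the nat OUTPUT chain instead, so the DNAT rule never fires for
    it. A sketch of the companion rule for locally generated traffic:

        # DNAT host-originated connections to 10.0.3.1:7002 as well:
        iptables -t nat -A OUTPUT -p tcp -d 10.0.3.1 --dport 7002 \
            -j DNAT --to-destination 10.0.3.2:7000

    Testing from a separate machine on the outside is also worthwhile, since
    that path does go through PREROUTING and may already work as configured.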

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have set up to act as a
    firewall/gateway for one of our offices. Most resources a remote user
    would come looking for exist inside. I've implemented the usual deal -
    basic inside networks with outbound NAT, one primary outside interface
    with some secondary public IPs in the PAT pool for public-facing
    services, a couple of site-to-site IPsec links to other branches, etc. -
    and I'm working now on VPN. I have the WebVPN (clientless SSL VPN)
    working and even traversing the site-to-site links. At the moment I'm
    leaving a legacy OpenVPN AS in place for thick-client VPN.

    What I would like to do is standardize on an authentication method for
    all VPN, then switch to the Cisco's IPsec thick-VPN server. I'm trying to
    figure out what's really possible for authentication for these VPN users
    (thick-client and clientless). My organization uses Google Apps, and we
    already use dotnetopenauth to authenticate users for a couple of internal
    services. I'd like to be able to do the same thing for thin and thick
    VPN. Alternatively, a signature-based solution using RSA public keypairs
    (ssh-keygen type) would be useful to identify user@hardware.

    I'm trying to get away from legacy username/password auth, especially if
    it's internal to the Cisco (just another password set to manage and for
    users to forget). I know I can map against an existing LDAP server, but
    we have LDAP accounts created for only about 10% of the user base (mostly
    developers with Linux shell access). I guess what I'm looking for is a
    piece of middleware which appears to the Cisco as an LDAP server but
    interfaces with the user's existing OpenID identity. Nothing I've seen in
    the Cisco suggests it can do this natively. RSA public keys would be a
    runner-up, and much better than standalone or even LDAP auth. What's
    really practical here?

  • PhpMyAdmin 500 Internal Server Error on Nginx/php5-fpm/Debian

    - by ThrownAway
    I downloaded phpMyAdmin a while ago and am having a hard time getting it
    to work. Requesting localhost/phpmyadmin gives a 500 Internal Server
    Error response, but there's nothing in the error log. These are the steps
    I took:

        1. Downloaded the newest phpMyAdmin and unzipped all the files to
           /var/vhosts/phpmyadmin/www/
        2. Created a new php5-fpm pool and a server block in nginx
        3. Changed the owner of all the files inside phpmyadmin/
        4. Tried requesting localhost/phpmyadmin and localhost/phpmyadmin/setup

    phpMyAdmin is running inside a chroot, and all the files are owned by
    www-data, so it shouldn't be a permission error. I made a new PHP file in
    the same directory to produce an error and it logs just fine, so it has
    to be just phpMyAdmin. Here's my php5-fpm pool:

        [phpmyadmin]
        listen = /var/vhosts/phpmyadmin/tmp/.php.sock;
        user = www-data
        group = www-data
        chroot = /var/vhosts/phpmyadmin/
        chdir = /
        php_admin_value[error_reporting] = E_ALL
        php_admin_value[error_log] = error.log
        php_admin_flag[log_errors] = on
        php_admin_flag[display_errors] = on
        php_value[session.save_handler] = files
        php_value[session.save_path] = /tmp

    And the nginx server block:

        server {
            listen 80;
            root /var/vhosts/phpmyadmin/www;
            server_name pma.domain;
            location / {
                try_files $uri $uri/ /index.html;
                autoindex on;
            }
            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_pass unix:/var/vhosts/phpmyadmin/tmp/.php.sock;
                fastcgi_param SCRIPT_FILENAME /www$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param DOCUMENT_ROOT /www;
            }
            index index.html index.htm index.php;
            try_files $uri $uri/ =404;
        }

    Any ideas what could be wrong? Why is it not producing any errors even
    though I've forced them on?
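
    One thing worth ruling out with this pool config: every path PHP sees is
    resolved inside the chroot. session.save_path = /tmp therefore means
    /var/vhosts/phpmyadmin/tmp on the real filesystem, and error_log =
    error.log lands relative to the chroot too - so the log may exist, just
    not where you are looking. A hedged checklist, assuming the layout above:

        # Does the session directory exist inside the chroot, writable by the pool?
        ls -ld /var/vhosts/phpmyadmin/tmp
        chown www-data:www-data /var/vhosts/phpmyadmin/tmp
        chmod 770 /var/vhosts/phpmyadmin/tmp

        # Look for the pool's error log under the chroot root, not /var/log:
        find /var/vhosts/phpmyadmin -name 'error.log' -ls

    phpMyAdmin depends heavily on sessions, so a missing or unwritable
    session.save_path inside the chroot is a classic source of a silent 500.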

  • Symbolic link modification on HP-UX

    - by kalpesh
    Hi David Zaslavsky, recently I was working on modifying the symbolic
    links for a particular set of files. While searching on the internet I
    saw your post, and I am trying to use the script you had posted:

        find /home/user/public_html/qa/ -type l \
            -lname '/home/user/public_html/dev/*' -printf \
            'ln -nsf $(readlink %p|sed s/dev/qa/) $(echo %p|sed s/dev/qa/)\n' \
            script.sh

    I tried to modify your script for my problem in an HP-UX environment, but
    it seems that the -lname option does not work on HP-UX. Do you know
    something equivalent that I can use?

    Just to give you an idea of my problem, I want to change all the symbolic
    links inside a particular folder:

        New symbolic link: /base/testusr/scripts
        Old symbolic link: /base/produsr/scripts

    Folder "A" contains more than 100 different files having soft links which
    point to this path: /base/produsr/scripts. What I want is for the files
    inside folder A to point to this soft link instead:
    /base/testusr/scripts. I am trying to achieve this on HP-UX and would
    really appreciate your help.
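
    Since -lname and -printf are GNU find extensions, one portable workaround
    is to list the symlinks with plain find and read each link's target from
    ls -l. A sketch in POSIX sh (untested on HP-UX; the ls parsing assumes no
    " -> " inside file names, and the folder path is a placeholder):

        #!/bin/sh
        DIR=/path/to/folder/A            # hypothetical; set to the real folder
        find "$DIR" -type l | while read link; do
            target=`ls -l "$link" | sed 's/.* -> //'`
            case "$target" in
            /base/produsr/scripts*)
                new=`echo "$target" | sed 's|/base/produsr|/base/testusr|'`
                rm "$link" && ln -s "$new" "$link"
                ;;
            esac
        done

    The rm-then-ln pair replaces each matching link in place, which avoids
    relying on GNU ln's -n flag.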

  • WAMP Installation - Multiple PHP Version Issues

    - by Pete171
    I have installed WAMP because I am attempting to modify an application
    which uses Zend Optimizer (I cannot use Z.O. with PHP 5.3+, which is why
    I decided to install WAMP). I downloaded the latest version - WampServer
    2.1 - which comes bundled with PHP 5.3.5. I then downloaded a PHP version
    that would be compatible with Z.O. - 5.2.9 - and a compatible Apache
    version, 2.0.63.

    My problem: PHP scripts run fine, but anything with MySQL does not work.
    Running the testmysql.php script returns the fatal error:

        Fatal error: Call to undefined function mysql_connect() in C:\wamp\www\testmysql.php on line 2

    I have looked in the php.ini files of both PHP versions, and I'm fairly
    sure the relevant information is there. At least, there are parts inside
    that mention MySQL! Perhaps somebody could clarify exactly what
    information should be present? Also, when visiting a page that calls
    phpinfo(), I noticed that the 'Loaded Configuration File' was pointing to
    C:\wamp\bin\php\php5.3.5\php.ini, even though I have enabled the older
    PHP version. I've stopped and started Apache too, and that hasn't made a
    difference. Is anybody able to offer any assistance? Anything at all
    would be great; I'm not very good at messing around with Apache.
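
    The phpinfo() line is the key clue: Apache is still loading PHP 5.3.5's
    php.ini, so whatever is enabled in the 5.2.9 ini never takes effect. For
    Apache 2.0 with mod_php, the module and ini directory are both chosen in
    httpd.conf. A hedged sketch (the DLL and directory names are assumptions
    based on a standard WAMP layout - verify them against your install):

        # httpd.conf - point Apache at the 5.2.9 build:
        LoadModule php5_module "c:/wamp/bin/php/php5.2.9/php5apache2.dll"
        PHPIniDir "c:/wamp/bin/php/php5.2.9/"

        # php.ini in that directory - make sure the MySQL extension loads:
        extension_dir = "c:/wamp/bin/php/php5.2.9/ext"
        extension=php_mysql.dll

    After restarting Apache, phpinfo() should report the 5.2.9 ini as the
    Loaded Configuration File, and mysql_connect() should then exist.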

  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup:

        Private network vboxnet1: 10.0.7.0/24
        1 host: Ubuntu desktop
        1 VM: Ubuntu server (VirtualBox)

    Addressing layout:

        HOST:             10.0.7.1
        VM:               10.0.7.101
        VM MAC NAMESPACE: 10.0.7.102

    On the VM, I ran the following commands:

        ip netns add mac                         # create a new namespace
        ip link add link eth0 mac0 type macvlan  # create a new macvlan interface
        ip link set mac0 netns mac

    In the mac namespace, inside the VM:

        ip link set lo up
        ip link set mac up
        ip addr add 10.0.7.102/24 dev mac0

    So that we basically end up with (like Inception?):

        +------------------------+
        | Host: 10.0.7.1         |
        |                        |
        | +--------------------+ |
        | | VM: 10.0.7.101     | |
        | |                    | |
        | | +----------------+ | |
        | | | NS: 10.0.7.102 | | |
        | | |                | | |
        | | +----------------+ | |
        | +--------------------+ |
        +------------------------+

    What works:

        - ping between Host and VM
        - ping between NS and NS
        - dhclient from NS

    What does not work:

        - ping between NS and VM
        - ping between NS and Host

    Where I started to go nuts:

        - tcpdump on the host (the real machine) actually shows ARP requests
          AND replies
        - tcpdump in the NS shows ARP requests sent to the host
        - tcpdump on the VM makes the whole mess work (!) - ping starts to
          get answers when tcpdump is started on the VM ?!?

    So, I bet you were eager for it, my question is: how do I make it work? I
    suspect something's wrong with ARP on the macvlan inside the NS but can't
    figure out what exactly... By the way, I did the same experiments with
    the mac0 interface directly on the VM (no namespace) and it worked
    flawlessly.
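
    The "works while tcpdump runs" detail is a strong hint: tcpdump puts the
    captured interface into promiscuous mode, so frames that the NIC (or the
    hypervisor's virtual switch) would normally discard suddenly get through.
    macvlan child interfaces have their own MAC addresses, and both the VM's
    NIC and VirtualBox must be willing to pass frames for those extra MACs.
    Two things to try, the first on the VM, the second in the VirtualBox GUI:

        # Recreate the macvlan in bridge mode (reachability between parent
        # and children depends on the mode) and force promisc on the parent:
        ip link add link eth0 mac0 type macvlan mode bridge
        ip link set eth0 promisc on

    In VirtualBox, set the adapter attached to vboxnet1 to Promiscuous Mode
    "Allow All" (VM Settings > Network > Advanced); otherwise the hypervisor
    drops frames destined for MACs it doesn't know belong to the VM.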

  • What can I do with a home server?

    - by Joel Coehoorn
    I have an old 700 MHz Pentium III at home running Windows 2000 Server,
    with a home router set up to pass incoming requests to it and a DynDNS
    account set up so it's easy to find. Right now I'm using it for a number
    of things:

        - Shared folders + backup inside the home network
        - Shared printer inside the home network
        - Domain controller, just because I feel like it and because it's
          useful to me as practice to keep those "enterprise" administration
          skills
        - Web server
        - FTP remote access for my files. I abandoned this for security
          reasons, but it's still worth leaving visible
        - Remote Desktop into the home network (thinking about adding VPN
          service)
        - SVN repository
        - MySQL - will be moving to SQL Server 2008 Standard soon
        - After I upgrade my wife's laptop from Home to Pro later this year,
          it will also become a domain controller
        - It's the only place I still have access to Internet Explorer 6 any
          more without setting up a new virtual machine, so I use it for
          testing code with that browser

    The question is: what else could I be doing with this machine?

    Update - additional ideas based on the suggestions:

        - Media server/DVR
        - Build server
        - PBX
        - SSH proxy server
        - Continuous integration server
        - Personal OpenID provider

    Update 2 - just a note that this server was recently upgraded to an Atom
    330 with 2 GB RAM and a bigger hard drive. For all that's slow for a
    "modern" CPU, it should still be much faster than the old Pentium III,
    and the expected power savings should make the upgrade essentially free
    over the course of the next year or two. Also, it's now running Windows
    Server 2008.

  • Ubuntu Server Wireless connection issue - replaced router but kept ESSID

    - by Stevo
    I have an Ubuntu Server 12.04 machine which was connected to my wireless
    network with no problem. I replaced the wireless router but kept the
    ESSID and password the same. All other devices on the network have
    connected correctly. However, the Ubuntu server will not route correctly.
    It connects to the wifi router and gets a DHCP-served IP address, but it
    will not route anything; I cannot ping the router from the server. The
    contents of /etc/resolv.conf are updated with the information from the
    router (the hostname has been served).

    I know there is nothing wrong with the router, the server, the wireless
    card, etc. I'm assuming there's some cached setting that associates the
    old router with the ESSID and causes the issue. I've got a lot of other
    devices connected to the router, so I don't want to change the ESSID. How
    do I fix this?

    EDIT: outputs (abbreviated, as I've got no cut and paste):

        netstat -rn
        Kernel IP routing table
        Destination   Gateway       Genmask         Flags  MSS  Window  irtt  Iface
        0.0.0.0       192.168.0.1   0.0.0.0         UG     0    0       0     wlan0
        192.168.0.0   0.0.0.0       255.255.255.0   U      0    0       0     wlan0
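
    The routing table looks sane, so the next layer down to check is ARP and
    the wireless association itself - a stale ARP entry mapping 192.168.0.1
    to the old router's MAC would produce exactly "DHCP works, ping doesn't".
    Diagnostics to run on the server (wlan0 as in the question):

        arp -n                       # does 192.168.0.1 map to the NEW router's MAC?
        arp -d 192.168.0.1           # drop the entry and let it re-resolve
        iwconfig wlan0               # confirm the Access Point BSSID is the new router
        sudo dhclient -r wlan0 && sudo dhclient wlan0   # clean DHCP release/renew

    Also check /etc/network/interfaces for a hard-coded wireless-ap (BSSID)
    line left over from the old router; that would pin the association to
    hardware that no longer exists.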

  • Vim clobbering scrollback buffer outside of screen

    - by dotancohen
    If I'm not in a screen session, then when exiting Vim I get a bash prompt
    below the remnants of the Vim window. A side effect of this is that my
    scrollback buffer is clobbered, especially if I have paged through a long
    file in Vim. The problem only occurs if I'm not in screen; inside a
    screen window, Vim exits to show the bash prompt and the previous lines
    just as before.

    I tried adding set t_ti= t_te= to my .vimrc to fix the problem, but the
    only effect it had was to break Vim such that the problem occurred inside
    screen as well as outside, so I removed the line. For good measure, I do
    have altscreen on in .screenrc.

    This is on Ubuntu Server 12.04.1 LTS, with Bash 4.2.24, Screen 4.00 and
    Vim 7.3 (not vim-tiny), accessed over SSH from Cygwin version NT-6.1-WOW64
    on a Windows 7 laptop. Thanks.

    EDIT: Note that from the same Cygwin install I can SSH into a different
    server (CentOS) and there Vim does not clobber the scrollback buffer.
    Therefore, I do not suspect a Cygwin issue. The CentOS machine does not
    have screen installed, and I did not have to add set t_ti= t_te= to
    .vimrc.
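
    Whether Vim restores the previous screen contents on exit is governed by
    the alternate-screen capabilities (smcup/rmcup) in the terminfo entry the
    server selects for your $TERM, which is why two servers can behave
    differently over the same Cygwin terminal. A quick way to compare the two
    hosts:

        echo $TERM                                  # run on both servers
        infocmp -1 "$TERM" | grep -E 'smcup|rmcup'  # present on CentOS, absent on Ubuntu?

    If the Ubuntu entry lacks (or mangles) smcup/rmcup for the TERM Cygwin
    advertises, exporting a TERM whose entry carries them - or installing the
    matching terminfo - should let Vim use the alternate screen again without
    touching t_ti/t_te.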

  • Moving a lot of small files between servers using rsync

    - by Adirael
    Hello guys, I'm moving a lot of files (about 2 million) between two
    servers in different locations using rsync over ssh. It seems to work
    fine, but I just realised I'm losing some files in the process. I have
    server 1, with the original data, and server 2, with the copy. Server 1
    runs CentOS 5 and server 2 runs Ubuntu 10. I'm doing the transfer on
    server 2's command line like this:

        rsync -e ssh -avzn usr@server1:/remote/path /local/path

    The first file movement I did using tar, but I didn't think of piping it
    through ssh and it failed because the disk on server 1 was almost full,
    so I transferred it anyway (it was about 200GB) and got about 80% of the
    files. Then I piped another tar with the rest of the files (they're in
    folders; I've got 100 folders with about 30 subfolders each, with files
    inside) and now I've got everything on server 2.

    I wanted to be sure, so my two options were getting the md5sum of all the
    files and checking them, or running an rsync on server 2 against server
    1, which is what I did. It found some missing stuff, and now it says
    there's nothing more to do (DRY RUN). But I've got at least two files
    that are missing inside a subfolder. I ran that same rsync on that
    folder, but still dry run. Am I doing something wrong? Thanks, and sorry
    for the wall of text.
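
    One detail in the command above answers the last question: the n in
    -avzn is rsync's --dry-run flag, so that invocation only reports what it
    would transfer - it never copies anything, which is why the missing files
    stay missing. A sketch of the real pass plus a verification pass:

        # Actually transfer (note: no 'n'); trailing slashes copy directory
        # contents rather than creating an extra path level:
        rsync -e ssh -avz usr@server1:/remote/path/ /local/path/

        # Optional second pass comparing file contents, not just size/mtime:
        rsync -e ssh -avzc usr@server1:/remote/path/ /local/path/

    The -c pass is slow on 2 million files (it checksums both sides), but it
    is the rsync equivalent of the md5sum comparison you were considering.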

  • Cannot write to directory after taking ownership

    - by jeff charles
    I had a directory on an internal hard drive that was created in an old
    Windows 7 install. After re-installing my operating system, when I try to
    create a new directory inside that directory, I get an Access Denied
    message. This isn't a protected directory, just a random directory I
    created at the drive root (that drive was not the C drive in either
    install).

    I tried to take ownership by opening the folder properties, going to the
    Security tab, clicking Advanced, going to the Owner tab, clicking Edit,
    selecting my user account, checking "Replace owner on subcontainers and
    objects", and clicking Apply. There were no error messages and I closed
    the dialogs. I rebooted, checked the owner on that folder and a couple of
    subfolders, and it appears to be set correctly. I am still getting an
    Access Denied message, however, when trying to create a directory in it.

    I've also tried using attrib -R . in an admin command prompt to remove
    any possible read-only attribute inside the directory, but am still
    unable to create a directory using a non-admin prompt (it does work in an
    admin prompt). Is there anything I can do to get write access to that
    folder and its contents in a non-elevated context without disabling UAC?
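
    Ownership alone doesn't grant rights - the ACL still carries the old
    install's entries, and the new account's SID isn't in it. Re-granting
    Full Control after taking ownership usually resolves this. A sketch from
    an elevated prompt (the drive letter, path and account name are
    placeholders):

        takeown /F D:\olddir /R /D Y
        icacls D:\olddir /grant YourUser:(OI)(CI)F /T

    (OI)(CI)F makes the grant inherit to subfolders and files; /T applies it
    to everything already there. After that, the non-elevated account should
    be able to create directories again.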

  • Reverse proxy for mailserver (SMTP + HTTP for web client)

    - by ba
    I'm looking at doing some reverse-proxy work for a mail server with a
    corresponding web client. Both servers are running on the same machine;
    this is not a server with a high load. :)

    The solution I've discussed with friends is having the mail server/web
    client on our internal network, and putting a reverse proxy in the DMZ to
    service both the SMTP and web-client HTTP traffic to the mail server on
    the internal network. From what I understand this is the recommended
    secure solution?

    So far, for the SMTP-proxy part I've thought of using Postfix, which will
    receive mail, apply Spamhaus and similar anti-spam measures and, if it
    all checks out, send the mail to the mail server on the inside. The mail
    server on the inside will send all outgoing mail to the proxy, which will
    then send it out on the internet.

    For the web client I'm not sure exactly which software I should be
    running on the proxy machine. I've been thinking about using Squid - but
    that's basically based on the fact that I know Squid is an HTTP proxy.
    The web client data will be sent out over SSL. Reading around here on
    Server Fault I've seen other people using Apache with mod_proxy +
    mod_security for similar situations.

    Am I thinking correctly for this solution? What software would you guys
    use, and with which modules? Thanks in advance for the help! :)
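
    For the web-client half, the Apache mod_proxy approach mentioned above
    needs only a handful of directives. A minimal sketch (hostnames and
    certificate paths are hypothetical; Squid can reverse-proxy too, but it
    is primarily a forward/cache proxy, so Apache or nginx is the more common
    choice for this role):

        <VirtualHost *:443>
            ServerName webmail.example.org
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/webmail.pem
            SSLCertificateKeyFile /etc/ssl/private/webmail.key
            SSLProxyEngine on                  # backend is also HTTPS
            ProxyPass        / https://mail.internal.lan/
            ProxyPassReverse / https://mail.internal.lan/
        </VirtualHost>

    SSL then terminates (or re-encrypts) on the DMZ proxy and the internal
    mail server is never directly exposed; Postfix plays the equivalent relay
    role for SMTP in both directions.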

  • Cisco IOS ACL: Don't permit incoming connections just because they are from port 80

    - by cjavapro
    I am going mostly from memory here and may not be correct on all of this.
    On a Cisco 851 (IOS) that uses a BVI or a bridge-route (the servers on
    the inside are configured with static and public IP addresses), I would
    apply two access lists (both ending with deny ip any any log) on
    FastEthernet4 (the WAN port): one for FA4 in and another for FA4 out.
    FA4 out would have a line like:

        access-list 110 permit 98.76.54.0 0.0.0.255 gt 1023 any eq http

    I think this means: hosts from 98.76.54.* with a source port of at least
    1024 can connect to any other machine with destination port 80. So then I
    have to allow the response to the HTTP connection. FA4 in would have a
    line like:

        access-list 120 permit any eq http 98.76.54.0 0.0.0.255 gt 1023

    Now the problem with that is that anybody on the outside can set their
    source port to 80 and then connect to any inside port that is at least
    1024. How do we prevent this and require the incoming data to be a
    response to the outgoing data?
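
    The classic IOS answer for this exact gap is the established keyword,
    which matches only TCP segments with the ACK or RST bit set - i.e.
    packets belonging to a conversation an inside host already started; a
    bare SYN from outside with source port 80 won't match. A sketch (note
    that extended ACLs also want the protocol keyword, so permit tcp rather
    than a bare permit):

        access-list 120 permit tcp any eq www 98.76.54.0 0.0.0.255 gt 1023 established

    On IOS images with firewalling (CBAC/ip inspect, or the zone-based
    firewall), stateful inspection is the stronger alternative, since
    established only inspects TCP flags, not actual connection state.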

  • VPN service into 192 network

    - by tophersmith116
    I'm thinking about setting up a security testing lab. I work on a
    switched network, and that just makes for unnecessary headaches when
    doing testing. I'd like to create a 192.168.x.x network with a few
    machines inside for DBs, app servers, etc. I will need a pivot machine
    that connects to both the outer network and the 192 network (for
    automation purposes). But I'd like to be able to connect into the 192
    network with my own machine from the outer network as the "attacking"
    machine (rather than have dedicated attack machines inside the 192
    network). Therefore, I'd like the pivot server to be a VPN server as
    well, so that my machine can VPN into the 192 network from the outer
    network.

    First off, is this even possible? Can I have a single computer with two
    NICs where a VPN service allows remote connections into the 192 network?
    Secondly, I'd like to have multiple outer clients connect to the VPN.
    Does anyone have any suggestions? I've used Hamachi before with good
    results, but I've also seen some good stuff from OpenVPN.
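
    This is possible and is in fact OpenVPN's routed-server bread and butter:
    the dual-homed pivot runs the server, pushes a route to the lab subnet,
    and forwards packets between its two NICs. A minimal sketch of the
    relevant server.conf lines plus the forwarding switch (the addresses are
    hypothetical; pick your own lab subnet):

        # /etc/openvpn/server.conf on the pivot
        port 1194
        proto udp
        dev tun
        server 10.8.0.0 255.255.255.0              # VPN client address pool
        push "route 192.168.50.0 255.255.255.0"    # hand clients a route to the lab

        # and on the pivot's OS, enable routing between the NICs:
        sysctl -w net.ipv4.ip_forward=1

    Multiple concurrent clients work out of the box; the lab machines just
    need a route back to 10.8.0.0/24 via the pivot (or SNAT on the pivot's
    inside interface).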
