Search Results

Search found 4906 results on 197 pages for 'ssh tunnel'.

Page 150/197

  • PHPUnit and autoloaders: Determining whether code is running in test-scope?

    - by pinkgothic
    Premise: I know that writing code to act differently when a test is run is hilariously bad practice, but I may have come across a scenario in which it is actually necessary. Specifically, I'm trying to test a very specific wrapper for HTML Purifier in the Zend Framework - a View Helper, to be exact. The HTML Purifier autoloader is necessary because it uses different logic from the autoloaders we otherwise have.

    Problem: require()-ing the autoloader at the top of my View Helper class gives me the following in test scope: "HTML Purifier autoloader registrar is not compatible with non-static object methods due to PHP Bug #44144; Please do not use HTMLPurifier.autoload.php (or any file that includes this file); instead, place the code: spl_autoload_register(array('HTMLPurifier_Bootstrap', 'autoload')) after your own autoloaders." Replacing the require() with spl_autoload_register(array('HTMLPurifier_Bootstrap', 'autoload')) as advertised means the test runs fine, but the View Helper dies a terrible death claiming: Zend_Log[3707]: ErrorController caught LogicException "Passed array does not specify an existing static method (class 'HTMLPurifier_Bootstrap' not found)" (Our test folder structure is slightly different from our Zend folder structure by necessity.)

    Question(s): After tinkering with it, I'm thinking I'll need to pick an autoloader-loading strategy depending on whether things are in test scope or not. Do I have another option for including HTMLPurifier's autoloading routine in both cases that I'm not seeing due to tunnel vision? If not, do I have to differentiate between the test environment and the production environment with my own code (e.g. APPLICATION_ENV) - or does PHPUnit support this godawful hackery of mine natively by setting a constant whose existence I could check with defined(), or similar shenanigans? (My Google-fu here is weak! I'm probably just doing it wrong.)

    Read the article

  • failed to enable x11 forwarding

    - by Hunt
    I am trying to enable X11 forwarding to my server, which is running FreeBSD 7.1. I have PuTTY installed on Windows, in which I have enabled X11 forwarding by ticking "Enable X11 forwarding" and setting the X display location to localhost:0. After that I started PuTTY and checked whether X11 forwarding is enabled by typing echo "$DISPLAY" (or echo $DISPLAY), but I get the following error: DISPLAY: Undefined variable. I have also installed Xmanager, but there I get this error instead: "The X11 forwarding request was rejected! To solve this problem, please turn on the X11 forwarding features of the remote SSH server." Can anyone suggest how to get rid of this?
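
    Both symptoms point at the server side rather than the client. A minimal sketch of enabling X11 forwarding in OpenSSH's sshd on the FreeBSD box, run as root (assuming the stock /etc/ssh/sshd_config and that xauth is installed on the server; adjust paths if sshd comes from ports):

      grep -E 'X11Forwarding|X11DisplayOffset' /etc/ssh/sshd_config
      # set: X11Forwarding yes   (and optionally X11DisplayOffset 10)
      /etc/rc.d/sshd restart            # restart sshd on FreeBSD
      # reconnect with PuTTY, then check the display (csh/tcsh syntax, matching the
      # "Undefined variable" error above):
      echo $DISPLAY                     # should now show something like localhost:10.0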

    Read the article

  • firehol (firewall) with bridge: how to filter

    - by Leon
    I have two interfaces: eth0 (public address) and lxcbr0 with 10.0.3.1. I have an LXC guest running with IP 10.0.3.10. This is my firehol config:

      version 5

      trusted_ips=`/usr/local/bin/strip_comments /etc/firehol/trusted_ips`
      trusted_servers=`/usr/local/bin/strip_comments /etc/firehol/trusted_servers`

      blacklist full `/usr/local/bin/strip_comments /etc/firehol/blacklist`

      interface lxcbr0 virtual
          policy return
          server "dhcp dns" accept

      router virtual2internet inface lxcbr0 outface eth0
          masquerade
          route all accept

      interface any world
          protection strong
          # Outgoing, these protocols are allowed to everywhere
          client "smtp pop3 dns ntp mysql icmp" accept
          # These (incoming) services are available to everyone
          server "http https smtp ftp imap imaps pop3 pop3s passiveftp" accept
          # Outgoing, these protocols are only allowed to known servers
          client "http https webcache ftp ssh pyzor razor" accept dst "${trusted_servers}"

    On my host I can connect only to "trusted servers" on port 80. In my guest I can connect to port 80 on every host. I assumed that firehol would block that. Is there something I can add/change so that my guest(s) inherit the rules of the eth0 interface?
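
    The guest's traffic is handled by the virtual2internet router block, whose "route all accept" forwards everything, so the restrictions in the "interface any world" block never apply to forwarded packets. firehol ultimately compiles to iptables, so the intended policy - guests may only reach trusted web servers - corresponds roughly to FORWARD rules like the sketch below (192.0.2.10 stands in for one trusted server; the firehol-native fix would be to narrow the router block instead of "route all accept"):

      # hypothetical raw-iptables equivalent, for illustration only
      iptables -I FORWARD 1 -i lxcbr0 -o eth0 -p tcp --dport 80 -d 192.0.2.10 -j ACCEPT
      iptables -I FORWARD 2 -i lxcbr0 -o eth0 -p tcp --dport 80 -j DROP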

    Read the article

  • Trouble editing ~/.bash_profile: -bash: $'\r': command not found

    - by Dave
    I installed Cygwin on Windows 7. Using Notepad, I edited my ~/.bash_profile file to add on to the PATH variable:

      PATH="${PATH}:/cygdrive/c/apache-ant-1.8.2/bin"

    Now, when I SSH in to my Windows machine, I get this error:

      -bash: $'\r': command not found
      -bash: $'\r': command not found
      -bash: $'\r': command not found
      -bash: $'\r': command not found
      -bash: $'\r': command not found
      -bash: /home/dev/.bash_profile: line 39: syntax error: unexpected end of file

    and my PATH is not set. Anyone know how I can correct this?
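
    Those $'\r' errors are the classic sign of Windows CRLF line endings, which Notepad adds; bash then sees a stray carriage return at the end of each line. A sketch of stripping them from within Cygwin (dos2unix may need to be installed from the Cygwin setup tool, so a sed fallback is shown too):

      dos2unix ~/.bash_profile          # if the dos2unix package is installed
      # or, with plain sed:
      sed -i 's/\r$//' ~/.bash_profile
      source ~/.bash_profile            # reload and confirm PATH is set
      echo "$PATH"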

    Read the article

  • Ubuntu server very slow out of the blue sky (Rails, passenger, nginx)

    - by snitko
    I run Ubuntu Server 8.04 on Linode with multiple Rails apps under Passenger + nginx. Today I've noticed it takes quite a lot of time to load a page (5-10 seconds), and it's not only websites - SSH seems to be affected too. Having no clue why this might be happening, I started to check different things. I checked how the log files are rotated, and whether there's enough free disk space and memory. I also checked the I/O rate; here's the output:

      $ iostat
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 0.17    0.00    0.02    0.57    0.16   99.07

      Device:    tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
      xvda      2.25        39.50        16.08     147042      59856
      xvdb      0.00         0.05         0.00        192          0
      xvdc      2.20        25.93        24.93      96530      92808
      xvdd      0.01         0.12         0.00        434         16
      xvde      0.04         0.23         0.35        858       1304
      xvdf      0.37         0.31         4.12       1162      15352

    Rebooting didn't help either. Any ideas where I should be looking?
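
    The iostat numbers look healthy (low %iowait and %steal), so disk is probably not the culprit. A generic checklist sketch for a suddenly slow VPS - nothing here is specific to this setup, just the usual suspects (swapping, CPU steal spikes, slow DNS making both the apps and sshd feel sluggish):

      free -m                      # is the box deep into swap?
      vmstat 1 5                   # watch si/so (swapping) and st (steal) for a few seconds
      top -b -n 1 | head -20       # anything pegging the CPU?
      time nslookup example.com    # broken/slow DNS resolution delays many services
      tail -n 50 /var/log/nginx/error.log /var/log/syslog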

    Read the article

  • Can I recover a nano process from a previous terminal?

    - by davidparks21
    My system crashed while I was in a nano session with unsaved changes. When I log back in via SSH I see the nano process still running when I do a ps:

      davidparks21@devdb1:/opt/frugg_batch$ ps -ef | grep nano
      1001     31714 29481  0 18:32 pts/0    00:00:00 nano frugg_batch_processing
      1001     31905 31759  0 19:16 pts/1    00:00:00 grep --color=auto nano
      davidparks21@devdb1:/opt/frugg_batch$

    Is there a way I can bring the nano process back under my control in the new terminal? Or any way to force it to save remotely (from my new terminal)?
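
    One option worth trying is reptyr, a tool that reattaches a running process to the current terminal via ptrace; it may or may not be packaged for the Ubuntu release in use, and on newer kernels the Yama ptrace restriction has to be relaxed first. A sketch using the PID from the ps output above:

      sudo apt-get install reptyr          # or build from source if it is not packaged
      # newer kernels block ptrace of non-child processes by default; relax it temporarily:
      echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
      reptyr 31714                         # nano should reappear; Ctrl+O to save, Ctrl+X to exit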

    Read the article

  • FreeNX 0.7.3 under CentOS 6.3 - Negotiating link parameters

    - by Frank
    For some days now I have been trying to get FreeNX (CentOS package 0.7.3) running under CentOS 6.3. It is the problem found on many websites: the first login is successful; after that, all login attempts fail with the "Negotiating link parameters" error. A simple SSH login with the same username to the server is successful. For the installation I followed the HowTo at http://wiki.centos.org/HowTos/FreeNX. The strange thing is that the changelog of FreeNX 0.7.3 says this bug was fixed. Has anybody been successful in running FreeNX under CentOS without this problem, and do you know how to fix it? Frank

    Read the article

  • CentOS PAM+LDAP login and host attribute

    - by pianisteg
    My system is CentOS 6.3, OpenLDAP is configured well, and PAM authentication works fine. But after turning pam_check_host_attr to yes, all LDAP logins fail with the message "Access denied for this host". hostname on the server returns the correct value, and the same value is listed in the user's profile. "pam_check_host_attr no" works fine and allows everyone with a correct uid/password.

    A piece of /var/log/secure:

      Sep 26 05:33:01 ldap sshd[1588]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=my-host user=my-username
      Sep 26 05:33:01 ldap sshd[1588]: Failed password for my-username from 77.AA.BB.CC port 58528 ssh2
      Sep 26 05:33:01 ldap sshd[1589]: fatal: Access denied for user my-username by PAM account configuration

    Two other servers (CentOS 5.7 and Debian) authenticate against this LDAP server correctly, even with pam_check_host_attr yes! I didn't edit /etc/security/access.conf; it is empty, only default comments. I don't know what to do! How do I fix this?
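
    pam_check_host_attr compares the client machine's hostname against the host attribute values stored on the user's LDAP entry, so the usual causes are a missing or mismatching host value (FQDN vs. short name) or an objectClass that doesn't allow the attribute. A sketch of checking and adding the value with the OpenLDAP client tools (server, DN and hostname below are placeholders):

      # what host values does the directory actually return for this user?
      ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" \
          "(uid=my-username)" host

      # add the exact hostname the CentOS 6.3 box reports (try both FQDN and short name):
      ldapmodify -x -H ldap://ldap.example.com -D "cn=Manager,dc=example,dc=com" -W <<'EOF'
      dn: uid=my-username,ou=People,dc=example,dc=com
      changetype: modify
      add: host
      host: client.example.com
      EOF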

    Read the article

  • How to confirm php enabled on ubuntu server

    - by Shishant
    Hello, I am not much into Linux. I am trying to set up a server through SSH. I installed Apache, PHP and MySQL with this command:

      sudo aptitude install apache2 php5-mysql libapache2-mod-php5 mysql-server

    but I think PHP is not enabled on the server. When I run the command below I get this response:

      $ which apache2ctl
      /usr/sbin/apache2ctl

    but when I check

      $ which php

    I get no response.

      $ locate php5
      /etc/apparmor.d/abstractions/php5
      /usr/share/ubuntu-serverguide/html/C/php5.html

    available apache2 modules aptitude package manager
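
    An empty result from "which php" only means the command-line binary (php5-cli) isn't installed; the Apache module installed by libapache2-mod-php5 is a separate thing. A sketch of verifying the module and, if wanted, adding the CLI (package and service names follow the aptitude line in the question; /var/www is the default docroot on older Ubuntu releases):

      apache2ctl -M | grep -i php                    # the module should show up as php5_module
      sudo a2enmod php5                              # enable it if it is missing
      sudo /etc/init.d/apache2 restart
      echo '<?php phpinfo();' | sudo tee /var/www/info.php    # quick end-to-end test page
      sudo aptitude install php5-cli                 # optional: makes "which php" work too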

    Read the article

  • Running game server and webserver on EC2

    - by mazzzzz
    Hey guys, I have a web host and an EC2 server (to run a game server on). The problem is that I want to access/modify the EC2 server's files with PHP admin programs. I looked into a lot of options for having the web host communicate with the EC2 server (SSH, etc.), but none of them panned out. My question is: if I were to install a lightweight webserver (think lighttpd) on my EC2 server, how badly would it hurt the game server's performance? I was leaning away from this solution, even though the webserver (on the EC2 server) wouldn't get many hits (fewer than 100 a day). Thanks for your thoughts, Max

    Read the article

  • How do I use a virtualbox guest machine as a gateway?

    - by Igor Zinov'yev
    I have a certain problem. I am working on an Ubuntu machine, but I have to use a Windows 2003 Server guest to connect to a Stonegate VPN to be able to manage our client's website. I have already asked whether I could connect to a Stonegate VPN from Ubuntu, but so far got no answer, and I couldn't connect to it using Network Manager's strongSwan plugin. So I want to use my Win2003 guest as a gateway to be able to SSH to the remote server. Is that possible? Thank you very much in advance; if this is possible in any way, it will save me a lot of trouble!

    Read the article

  • Hostname problems in CentOS 5.5

    - by spoon16
    I just set up a CentOS 5.5 machine on my local network and attempted to modify the hostname by editing the /etc/sysconfig/network file. When I'm logged in locally the change to the hostname is reflected and seems to be working fine. When I open an SSH session via PuTTY from Windows, this is what I see at the prompt:

      [root@? ~]# cat /etc/sysconfig/network
      NETWORKING=yes
      NETWORKING_IPV6=yes
      HOSTNAME=mini.local
      [root@? ~]# sysctl kernel.hostname
      kernel.hostname = ?
      [root@? ~]# hostname
      ?
      [root@? ~]# hostname -f
      hostname: Unknown server error

    A couple of other symptoms may be helpful in troubleshooting: I can ping the CentOS box from my Windows machine via IP but not by hostname. Also, my Netgear router does not display the hostname when I view the "Connected Devices"; I do see the MAC address and the proper IP listed, though. How can I make it so that the hostname is properly propagated throughout my network?
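
    Editing /etc/sysconfig/network only takes effect at boot; the running kernel's hostname (what sysctl kernel.hostname shows) has to be set separately, and hostname -f additionally needs the name to resolve. A sketch, run as root (the 192.168.1.50 address is illustrative):

      hostname mini.local                                   # set the running hostname now
      echo "192.168.1.50   mini.local mini" >> /etc/hosts   # so "hostname -f" can resolve it
      hostname && hostname -f                               # both should now answer
      # seeing the name on the rest of the LAN (ping by name, the router's device list)
      # depends on DNS/NetBIOS/DHCP hostname registration, not on this file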

    Read the article

  • How can I find a computer on my network that is doing mass mailings?

    - by Alex Ciarlill
    I was notified by my ISP that one of my machines is sending out spam. This happened about three months ago on a Windows machine running Cygwin that was hacked via an SSH vulnerability; the attackers set up IIS and SMTP. I cleaned out that machine and all of those services are disabled, so I think it is okay. I am wondering if there is any other way to identify which machine the spam could be coming from. The ISP has NO useful information such as source port, destination port, destination IP... nothing. I am running DD-WRT on my router, a Windows 7 PC and a Windows XP PC.
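
    Since the router runs DD-WRT, one approach is to log outbound SMTP on the router itself and see which LAN address originates it. A sketch using iptables logging from the DD-WRT shell (syntax is standard iptables, though whether the LOG target is available depends on the DD-WRT build):

      # on the router (telnet/ssh into DD-WRT):
      iptables -I FORWARD -p tcp --dport 25 -j LOG --log-prefix "OUTBOUND-SMTP: "
      # wait a while, then see which internal IP shows up as SRC=
      dmesg | grep OUTBOUND-SMTP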

    Read the article

  • bash script - spawn, send, interact - commands not found error

    - by Sandeepan Nath
    In my shell script I am trying to remove the password prompt for the scp command (as given in http://stackoverflow.com/questions/459182/using-expect-to-pass-a-password-to-ssh/459225#459225), and this is what I have so far:

      #!/usr/bin/expect
      spawn scp $DESTINATION_PATH/exam.tar $SSH_CREDENTIALS':/'$PROJECT_INSTALLATION_PATH
      expect "password:"
      send $sshPassword"\n";
      interact

    On running the script, I am getting the errors:

      spawn: command not found
      send: command not found
      interact: command not found

    I was also getting the error expect: command not found; then I realised the path to expect was not correct and expect was not installed at all. So I did yum install expect and corrected the path, and that error was gone. But I am still not able to remove the other three errors.
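
    Those three errors are what bash prints when it, rather than expect, interprets the file - typically because the script is started with "sh script.sh" or sourced, so the #!/usr/bin/expect line is ignored; expect also uses Tcl syntax, so variables are handled with set and send "...\r" rather than shell-style expansion. A minimal sketch (paths, host and the environment-variable password are placeholders; SSH keys are the better long-term fix):

      #!/bin/bash
      SRC=/tmp/exam.tar
      DEST='user@example.com:/opt/project'
      export SCP_PASSWORD='secret'        # illustration only; prefer key-based auth

      cat > /tmp/copy.exp <<'EOF'
      #!/usr/bin/expect -f
      set src  [lindex $argv 0]
      set dest [lindex $argv 1]
      spawn scp $src $dest
      expect "password:"
      send "$env(SCP_PASSWORD)\r"
      expect eof
      EOF

      chmod +x /tmp/copy.exp
      /tmp/copy.exp "$SRC" "$DEST"        # run it directly (or: expect -f /tmp/copy.exp ...)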

    Read the article

  • using Linux vncviewer

    - by Darkoni
    Hi! When I connect to the VNC server using Wine on Linux:

      $ wine vncviewer.exe

    I have to enter:

      VNC Server: 1.1.1.21
      Proxy/Repeater: 195.29.18.33:1234

    and then, when I connect, the window title shows: 1.1.1.21:5900 (195.29.18.33:1234). My question is: how do I connect using the native vncviewer? What do I put in VNC_VIA_CMD?

      $ export xlocalPort=1234
      $ export xremoteHost=1.1.1.21
      $ export xremotePort=5900
      $ export xgateway=195.29.18.33
      $ export VNC_VIA_CMD="/usr/bin/ssh -f -L $xlocalPort:$xremoteHost:$xremotePort $xgateway sleep 20"
      $ vncviewer $xremoteHost -via $xgateway

    and I get the error: unable to connect to socket: Connection refused (111). I was trying to help myself with the page http://www.tightvnc.com/vncviewer.1.php. Please help, because I need to use the "native" Linux vncviewer installed by:

      $ yum install tigervnc
      tigervnc.i686 0:1.0.90-0.13.20100420svn4030.fc13
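
    Two hedged notes, since they depend on what 195.29.18.33 actually is. First, -via builds an SSH tunnel, so it only works if the gateway runs an SSH server; the Windows viewer's "Proxy/Repeater" field suggests it may instead be a VNC repeater, which plain -via cannot use. Second, VNC_VIA_CMD is a template that vncviewer fills in itself through the $L, $H, $R and $G variables, so it should be exported with those names unexpanded (single quotes), for example:

      # default-style template from the tightvnc/tigervnc man page; note the single quotes
      export VNC_VIA_CMD='/usr/bin/ssh -f -L "$L":"$H":"$R" "$G" sleep 20'
      # gateway goes after -via, then host[:display] as seen from the gateway
      vncviewer -via 195.29.18.33 1.1.1.21:0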

    Read the article

  • Input devices stopped working during system upgrade

    - by amorfis
    Hi, I was upgrading Ubuntu on my server (but Ubuntu is the desktop version) when the mouse and keyboard stopped working :( The screen went black (screensaver) and now I can't do anything locally. I don't know at what stage of the upgrade it stopped working; it is probably waiting for me to answer some question. The keyboard and mouse were connected through a KVM; connecting them directly doesn't help. Both are USB. What I can do is connect to the machine by SSH. Can I somehow see and answer the upgrade's questions over SSH and finalize the upgrade?
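
    Over SSH it is usually possible to find out whether the upgrade process is still alive and, if it has died, to let dpkg finish the interrupted configuration; the exact steps depend on how the upgrade was started, so treat this as a generic sketch:

      ps aux | egrep 'dpkg|apt|update-manager|do-release-upgrade'   # is anything still running?
      sudo screen -ls                      # release upgrades sometimes run inside screen
      # if the upgrader is gone, finish what it left half-done (any remaining debconf
      # questions will then be asked in this SSH session):
      sudo dpkg --configure -a
      sudo apt-get -f install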

    Read the article

  • How do I set permissions structure for multiple users editing multiple sites in /var/www on Ubuntu 9

    - by Michael T. Smith
    I'm setting up an Ubuntu server that will have 3 or 4 VirtualHosts that I want users to be able to work in (add new files, edit old files, etc.). I currently plan on storing the sites in /var/www but wouldn't be opposed to moving them. I know how to add new users and I know how to add new groups. I'm unsure of the best way to handle users only being able to edit some sites. I read over the answers in this question, so I was thinking I could set up a group and add users to that group, but then they'd all have essentially the same permissions. Am I just going to have to assign each user specific permissions, or is there a better way of handling this? Added: I should also note that each user will log in via SSH/SFTP. The users would never need to do anything else on the server.
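
    A common pattern is one group per site plus the setgid bit on that site's directory, so group membership alone decides who can edit what. A sketch for a single site ("site1" and "alice" are placeholders; repeat per site):

      sudo addgroup site1devs
      sudo adduser alice site1devs                     # give alice access to site1 only
      sudo chown -R root:site1devs /var/www/site1
      sudo chmod -R 2775 /var/www/site1                # the 2 (setgid) keeps the group on new files
      # optionally set "umask 002" for these users so new files are group-writable by default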

    Read the article

  • How do I Connect a 30yr-old Tandy 1400LT laptop to the internet?

    - by Clemens Bergmann
    Just for the fun of it, I want to get an old Tandy 1400LT laptop - small monochrome display, two floppy drives, an RS-232C connector and a "printer" (parallel) connector - connect the thing to the internet, and use it as an SSH terminal. How would I connect it to the internet? The software should be no problem, as it is 386 hardware; there should be a small Linux distribution which can run on it. But how would I physically connect the hardware? It has no Ethernet port. Does anyone have experience with serial/parallel-to-Ethernet converters?
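
    The classic answer for machines with only a serial port is a serial PPP (or SLIP) link to a modern Linux box, which then routes for the old laptop; no converter hardware is strictly needed, just a null-modem cable. A sketch of the modern-PC end (addresses and device names are placeholders; the Tandy side needs a PPP- or SLIP-capable client):

      # on the Linux gateway, with a null-modem cable on /dev/ttyS0:
      sudo pppd /dev/ttyS0 38400 192.168.7.1:192.168.7.2 \
           local noauth proxyarp persist
      # enable forwarding so the laptop's traffic can reach the LAN/internet:
      echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward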

    Read the article

  • Passing all traffic through Cloudflare

    - by Nick
    I am new to Linux system administration and I am experimenting with iptables, trying to learn how to really lock down a system with them. One thing a friend of mine recommended was that there is a way to pass all incoming traffic through Cloudflare, so even if attackers resolve the server IP they still can't (D)DoS it directly. This is exactly what they said: "Simply config your servers iptables to only allow incoming connections from CloudFlares IP ranges then set it to allow only your IP/IP range to connect on port 21 (SSH)". Could someone help me with the commands I'd need to run on Ubuntu to get this effect?
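
    A minimal sketch of that policy in plain iptables. Notes: SSH normally listens on port 22 (not 21, which is FTP); Cloudflare's current ranges should be taken from https://www.cloudflare.com/ips/ rather than the single example range below; 198.51.100.7 stands in for the admin's own IP; and rules like these do not survive a reboot without something like iptables-persistent:

      # add the ACCEPT rules first, then flip the default policy, so the SSH session
      # these commands are run from is not cut off
      iptables -A INPUT -i lo -j ACCEPT
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A INPUT -p tcp --dport 22 -s 198.51.100.7 -j ACCEPT         # SSH from the admin IP only
      # repeat the next line for every range published at cloudflare.com/ips:
      iptables -A INPUT -p tcp -m multiport --dports 80,443 -s 103.21.244.0/22 -j ACCEPT
      iptables -P INPUT DROP                                                # default-deny everything else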

    Read the article

  • Ghosting context menu clicks in WinXP

    - by Swish
    Let me preface by saying I have a lot of windows open most of the time, although not resource-intensive ones: just browsers, SSH sessions, a music player, an FTP client, Notepad++, IM clients, etc. Anyway, I get a lot of weird visual "ghosting"-type effects. For example, when right-clicking and then selecting an option from a context menu, the selected item will remain in view until I right-click somewhere on the desktop. The same thing happens when selecting items from the File, Edit, etc. menus in various programs. I'm assuming this is just the result of a less-than-high-quality video card (NVIDIA GeForce FX 5200); all the other hardware in the machine is newer and higher quality - that specific video card was added after the fact for multiple monitors. I have looked all over the web for solutions and have increased the number of GDI handles for Windows, reduced the hardware acceleration on the card, etc. Any suggestions other than replacing the card?

    Read the article

  • Squid traffic tunneled through VPN

    - by NerdyNick
    What I'm trying to do is have a Squid proxy run on one machine alongside a VPN connection. What I want is for all traffic going through the Squid proxy to use the VPN for its outbound connections, i.e. Desktop -> (Squid proxy -> VPN). The goal is to allow my desktop selective tunneling through the VPN, so that instant messaging and the like, which do not need to run through the VPN, can use my normal connection. Typically I would go through an SSH proxy, but currently I am forced to use a VPN to gain entry into the office, and a Squid proxy seemed like the easiest approach for what I need. EDIT: I realize I forgot to actually state what problem I'm running into. I have Squid set up and verified that it works, but once I connect to the VPN, all requests to Squid get accepted yet Squid is unable to make the request over the VPN, so the client ends up just sitting there.
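
    Without knowing the VPN type it is hard to be definitive, but a common fix when a proxy host's outbound traffic should follow a VPN is to bind Squid's outgoing connections to the VPN interface's address with tcp_outgoing_address (a standard squid.conf directive). The interface name and address below are placeholders, and DNS may also need to resolve over the VPN for requests to succeed:

      ip addr show tun0                         # confirm the address the VPN assigned (e.g. 10.8.0.6)
      echo 'tcp_outgoing_address 10.8.0.6' | sudo tee -a /etc/squid/squid.conf
      sudo /etc/init.d/squid restart            # or "squid3", depending on the package
      # sanity check from the desktop:
      curl -x http://proxy-host:3128 -s -o /dev/null -w '%{http_code}\n' http://example.com/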

    Read the article

  • Steps to take when technical staff leave

    - by Tom O'Connor
    How do you handle the departure process when privileged or technical staff resign or get fired? Do you have a checklist of things to do to ensure the continuing operation and security of the company's infrastructure? I'm trying to come up with a nice canonical list of things that my colleagues should do when I leave (I resigned a week ago, so I've got a month to tidy up and GTFO). So far I've got:

      - Escort them off the premises
      - Delete their email inbox (set all mail to forward to a catch-all)
      - Delete their SSH keys on server(s)
      - Delete their MySQL user account(s)
      - ...

    So, what's next? What have I forgotten to mention, or what might be similarly useful? (Endnote: why is this off-topic? I'm a systems administrator, and this concerns continuing business security; this is definitely on-topic.)
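
    For the server-side portion of such a checklist, the individual steps are easy to script per host. A sketch for one Linux box ("tom" and the key comment string are placeholders, and the MySQL account name will differ per site):

      sudo usermod -L tom                                # lock the Unix password
      sudo chage -E 0 tom                                # expire the account immediately
      sudo rm -f /home/tom/.ssh/authorized_keys          # remove their own key file
      # their public key may also live in other accounts' authorized_keys:
      sudo grep -l 'tom@' /home/*/.ssh/authorized_keys /root/.ssh/authorized_keys 2>/dev/null
      mysql -u root -p -e "DROP USER 'tom'@'localhost';"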

    Read the article

  • Problem upgrading kernel on debian 3.1

    - by exhuma
    Hi, I have a quite old box in a remote server farm, so I have no direct access - only remote SSH (and, via SSH, a serial console). I haven't updated this box in ages. Now, whenever I want to install a new package, a dependency on glibc appears. Unfortunately, the install of glibc depends on a 2.6 kernel and I am running a venerable 2.4 kernel (one more reason to upgrade). The problem is that the install of a new kernel has an indirect (via locales) dependency on glibc. So, to install glibc, I need a new kernel; for a new kernel, I need to upgrade glibc. Essentially I am blocked. What's the best way to proceed considering I have no "hardware" access? Here's a quick transcript of the upgrade process:

      [green:~]% sudo aptitude install linux-image-686
      Reading Package Lists... Done
      Building Dependency Tree
      Reading extended state information
      Initializing package states... Done
      Reading task descriptions... Done
      The following packages are unused and will be REMOVED:
        gcc-4.3-base
      The following NEW packages will be automatically installed:
        dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686
        linux-image-2.6.18-6-686 module-init-tools yaird
      The following packages have been kept back:
        adduser apache2 apache2-mpm-prefork apache2-utils apache2.2-common apt
        apt-utils aptitude autoconf autotools-dev awstats base-files base-passwd
        [...snip...]
        util-linux vacation vim vim-common wamerican wbritish wget whiptail whois
        wwwconfig-common zlib1g
      The following NEW packages will be installed:
        dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686
        linux-image-2.6.18-6-686 linux-image-686 module-init-tools yaird
      The following packages will be upgraded:
        hotplug libc6
      2 packages upgraded, 8 newly installed, 1 to remove and 277 not upgraded.
      Need to get 0B/22.7MB of archives. After unpacking 52.1MB will be used.
      Do you want to continue? [Y/n/?]
      Writing extended state information... Done
      Preconfiguring packages ...
      (Reading database ... 34065 files and directories currently installed.)
      Preparing to replace libc6 2.3.6.ds1-13 (using .../libc6_2.7-18lenny2_i386.deb) ...
      Checking for services that may need to be restarted...
      Checking init scripts...
      WARNING: init script for postgresql not found.
      [ --- libc6 config screen appears here --- ]
      WARNING: POSIX threads library NPTL requires kernel version 2.6.8 or later.
      If you use a kernel 2.4, please upgrade it before installing glibc.
      The installation of a 2.6 kernel _could_ ask you to install a new libc first,
      this is NOT a bug, and should *NOT* be reported. In that case, please add etch
      sources to your /etc/apt/sources.list and run:
        apt-get install -t etch linux-image-2.6
      Then reboot into this new kernel, and proceed with your upgrade
      dpkg: error processing /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb (--unpack):
        subprocess pre-installation script returned error exit status 1
      Errors were encountered while processing:
        /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)
      Ack! Something bad happened while installing packages. Trying to recover:
      dpkg: dependency problems prevent configuration of locales:
        locales depends on glibc-2.7-1; however:
          Package glibc-2.7-1 is not installed.
      dpkg: error processing locales (--configure):
        dependency problems - leaving unconfigured
      Errors were encountered while processing:
        locales
      Reading Package Lists... Done
      Building Dependency Tree
      Reading extended state information
      Initializing package states... Done
      Reading task descriptions... Done

    Now, if I follow the instructions as prompted, I get the following. Note that I am using aptitude instead of apt-get to benefit from its better dependency tracking. I did try with apt-get first, but that led to the same problem.

      [green:~]% sudo aptitude install -t etch linux-image-2.6.26-2-686
      Reading Package Lists... Done
      Building Dependency Tree
      Reading extended state information
      Initializing package states... Done
      Reading task descriptions... Done
      E: Unable to correct problems, you have held broken packages.
      E: Unable to correct dependencies, some packages cannot be installed
      E: Unable to resolve some dependencies!
      Some packages had unmet dependencies. This may mean that you have requested an
      impossible situation or if you are using the unstable distribution that some
      required packages have not yet been created or been moved out of Incoming.
      The following packages have unmet dependencies:
        linux-image-2.6.26-2-686: Depends: initramfs-tools (>= 0.55) but it is not
        installable or yaird (>= 0.0.13) but it is not installable or
        linux-initramfs-tool which is a virtual package.

    Any ideas?
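
    One hedged reading of the last error: the lenny kernel (2.6.26) was requested, but the libc6 preinst message earlier in the transcript actually suggests the etch 2.6 kernel, whose initramfs tooling still lives in etch. A sketch of that route (untested; it assumes etch entries really are present in /etc/apt/sources.list as the message asks):

      sudo aptitude update
      # install the etch-era kernel shown earlier in the transcript, together with the
      # ramdisk tool it depends on, in one transaction:
      sudo aptitude install -t etch initramfs-tools linux-image-2.6.18-6-686
      sudo shutdown -r now          # reboot into the 2.6 kernel
      # then retry the interrupted libc6/locales upgrade:
      sudo aptitude install libc6 locales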

    Read the article

  • Repartition Ubuntu by command line?

    - by DisgruntledGoat
    On my server the filesystem includes these partitions:

      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda6             4.6G  929M  3.5G  21% /
      /dev/sda5              76M   20M   53M  27% /boot
      /dev/sda8             449G  199M  426G   1% /home
      /dev/sda7             4.6G  4.4G     0 100% /var

    (Output from df -ah.) I'm storing the web sites and databases under /var and, as you can see, it has filled up. The /home folder just has basic user directories and nothing else, so I'd like to repartition the server so that /dev/sda8 is about 5GB, with the rest going to /dev/sda7. What's the easiest way to do this via the command line (i.e. SSH)?
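
    Shrinking /home and growing /var in place means unmounting and resizing adjacent partitions, which is risky to do over SSH. A lower-risk alternative with the same effect is to move the bulky /var data onto the large /home filesystem and bind-mount it back; paths and service names below are placeholders for whatever actually lives in /var:

      sudo /etc/init.d/apache2 stop && sudo /etc/init.d/mysql stop   # stop whatever writes there
      sudo mkdir -p /home/srv
      sudo rsync -a /var/www /var/lib/mysql /home/srv/
      sudo mv /var/www /var/www.old && sudo mv /var/lib/mysql /var/lib/mysql.old
      sudo mkdir /var/www /var/lib/mysql
      echo '/home/srv/www    /var/www        none  bind  0 0' | sudo tee -a /etc/fstab
      echo '/home/srv/mysql  /var/lib/mysql  none  bind  0 0' | sudo tee -a /etc/fstab
      sudo mount -a
      sudo /etc/init.d/mysql start && sudo /etc/init.d/apache2 start
      # once everything checks out, remove the .old copies to reclaim space on /var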

    Read the article

  • OpenLDAP PAM authen does not support SSHA on FreeBSD10

    - by suker200
    Does OpenLDAP PAM authentication not support SSHA? Hi everyone, I have lost a day figuring out why my FreeBSD 10 machine cannot authenticate SSH users via LDAP: pam_ldap and nss_ldap do not seem to support SSHA passwords, even though OpenLDAP supports the SSHA method. I have checked /usr/local/etc/ldap.conf; it only lists these pam_password methods: clear, crypt, nds, racf, ad, exop. If I switch to CRYPT, I can authenticate successfully. I would be very appreciative of any pointers or suggestions for making PAM on FreeBSD 10 support SSHA - is there any way, or is it not possible? Info: the LDAP server is 389 DS on CentOS; the LDAP client is FreeBSD 10. What I have so far: LDAP authentication between CentOS and CentOS is OK; CentOS (LDAP server) to FreeBSD fails (it works if I use crypt). Thanks and BR, Suker200
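
    A hedged note: pam_password in ldap.conf governs how password changes are sent to the directory, not how logins are verified; for login, pam_ldap normally performs an LDAP bind, so the server-side SSHA hash should be invisible to the client. The usual recommendation is therefore pam_password exop, which asks the server to hash new passwords itself (keeping SSHA), while login failures are better debugged with a direct bind test. Server name and DN below are placeholders:

      # keep SSHA on the server; let it hash password changes via the Password Modify exop
      printf 'pam_password exop\n' | sudo tee -a /usr/local/etc/ldap.conf
      # verify that a plain LDAP bind (what pam_ldap does at login) works from FreeBSD:
      ldapwhoami -x -H ldap://ldap.example.com \
          -D "uid=testuser,ou=People,dc=example,dc=com" -W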

    Read the article
