Search Results

Search found 90811 results on 3633 pages for 'hyper v server 2012 r2'.


  • Awstats mailing a non-existent user, causing exim4 to go nuts

    - by Chris
    I've taken over managing a server set up by someone who is now uncontactable. I've managed to work out most of the faults and changes needed, but this one is stumping me. Awstats is running on the machine and sends a message via exim4 to a local user every time it runs an update. That user account has been deleted, so the exim4 main log is filling up with delivery errors, which hinders meaningful log analysis for anything else and uses up quite a lot of space (it grew to 22 GB unattended, panic!). I've been through all the conf files in /etc/awstats and can't find any mention of this user account. Google just turns up results about using awstats to parse exim4 log files. So the question is: where is this setting likely to be on Debian? Cheers in advance.

    Read the article
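
    A minimal sketch of where such a mail destination is often defined on Debian, assuming a cron-driven awstats setup; "olduser" is a placeholder for the deleted account:

      # awstats updates are usually driven by cron; mail goes to MAILTO or the job owner
      grep -r olduser /etc/cron* /etc/crontab /var/spool/cron 2>/dev/null
      # a system-wide alias or exim4 address mapping may also point at the deleted account
      grep olduser /etc/aliases /etc/email-addresses 2>/dev/null
      # finally, check the awstats wrapper scripts themselves
      grep -r olduser /usr/share/awstats /usr/local/awstats 2>/dev/null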

  • Multiple IPs on Juniper SRX100 Untrust Port

    - by Will
    I am having trouble getting multiple IP addresses on the untrust port. I have tried a few different methods, but can't seem to get it to work. Does anyone have a good tutorial that is not easily found, or could someone type up the steps? I don't mind doing it over SSH, but I would prefer the web interface. Thank you. The relevant parts of the current configuration are:

      fe-0/0/0 {
          unit 0 {
              family inet {
                  dhcp {
                      update-server;
                  }
              }
          }
      }
      routing-options {
          static {
              route 0.0.0.0/0 next-hop 96.11.173.81;
          }
      }

    Right now it's set up to receive settings from the cable modem through DHCP, but I think it's only getting one IP.

    Read the article
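
    A hedged sketch of one way to put several static addresses on the untrust interface from the Junos CLI; the addresses are placeholders, and switching from DHCP to static addressing is assumed:

      # in configure mode on the SRX; replace the addresses with those assigned by the ISP
      delete interfaces fe-0/0/0 unit 0 family inet dhcp
      set interfaces fe-0/0/0 unit 0 family inet address 96.11.173.82/28
      set interfaces fe-0/0/0 unit 0 family inet address 96.11.173.83/28
      set routing-options static route 0.0.0.0/0 next-hop 96.11.173.81
      commit check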

  • Process for configuring network settings on a headless rack mount device

    - by PherricOxide
    I'm with a small company that plans to sell a rack-mounted network appliance that is configured via a web interface (think of a router configuration page), and I'm wondering what the usual process is for the initial setup of such systems in large data-center-like environments. The main question is: if the system is headless, how do you get initial remote access to it? Do companies usually plug the server into a monitor/keyboard/mouse to configure the network settings before mounting it in a rack? How else would they learn the machine's IP address, other than via DHCP (the address can't be hard-coded because of the potential for IP conflicts)?

    Read the article
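
    One common pattern is to ship the appliance with a documented factory-default address and let the installer reach it from a laptop on the same switch; a minimal sketch, assuming a default of 192.168.1.1/24 (an assumption for illustration, not a vendor standard):

      # on the installer's laptop, add a temporary address in the appliance's default subnet
      sudo ip addr add 192.168.1.10/24 dev eth0
      # confirm the appliance answers, then finish configuration in its web UI
      ping -c 3 192.168.1.1
      # clean up afterwards
      sudo ip addr del 192.168.1.10/24 dev eth0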

  • AWS EC2: Why is my web folder called "html"?

    - by heathub
    (Q stands for Question.) My environment is 64-bit Amazon Linux (Q1: I don't know if it's Ubuntu- or Red Hat-based; is there any way to check?). I need to run PHP and MySQL, so I installed httpd (Q2: is httpd the same thing as Apache?), but the default page says: please upload files to the /var/www/html folder. Q3: This is the first time I've set up an AWS EC2 server myself; my previous experience is with hosting companies, where my web directory was called "www", "public_html" or "htdocs". Why is my folder "/var/www/html"? Did I install the wrong Apache?

    Read the article
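
    A short sketch of commands that answer Q1 and Q2, assuming the stock httpd package layout:

      # Q1: identify the distribution (Amazon Linux is Red Hat/CentOS-derived, not Ubuntu)
      cat /etc/os-release 2>/dev/null || cat /etc/system-release
      # Q2: "httpd" is the Red Hat-style package and service name for the Apache HTTP Server
      httpd -v
      # the default document root comes from the DocumentRoot directive
      grep "^DocumentRoot" /etc/httpd/conf/httpd.conf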

  • Why would you create two different subnets on the same physical network?

    - by xirtyllo
    I'm working at a messy location, and one of the things that is strange (to me) is that there are two different subnets on the same physical network. Specifically, some computers have addresses in 10.0.0.0/24 and others in 172.16.0.0/24. There is only one DHCP server, which hands out IPs in the 10.0.0.0/24 range, and there are two internet gateways, one at 172.16.0.1 and one at 10.0.0.1. For example, I can swap a PC from one subnet to the other just by changing its IP and gateway settings. I'm trying to understand why the network was built this way, and what the possible advantages and/or drawbacks of running two subnets on the same physical network might be. Any thoughts?

    Read the article
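
    A small sketch illustrating how the setup hangs together: because both subnets share one broadcast domain, a single host can talk to both simply by holding an address in each (addresses below are placeholders):

      # primary address in the DHCP-managed subnet
      ip addr show dev eth0
      # add a secondary address so the same NIC is also reachable on the other subnet
      sudo ip addr add 172.16.0.50/24 dev eth0
      # the second gateway is now reachable without any routing changes
      ping -c 2 172.16.0.1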

  • PHP make install seems to end abruptly and does not update libphp5.so

    - by matt74tm
    I'm trying to compile PHP 5.3.3 and, after a lot of ups and downs, I finally got 'make' to finish, followed by 'make install', which shows only this:

      root@server [/tmp/php-5.3.3]# make install
      Installing PHP SAPI module:       cgi
      Installing PHP CGI binary:        /usr/bin/
      Installing PHP CLI binary:        /usr/bin/
      Installing PHP CLI man page:      /usr/share/man/man1/
      Installing shared extensions:     /usr/lib64/20090626/
      Installing build environment:     /usr/lib64/build/
      Installing header files:          /usr/include/php/
      Installing helper programs:       /usr/bin/
        program: phpize
        program: php-config
      Installing man pages:             /usr/share/man/man1/
        page: phpize.1
        page: php-config.1
      /tmp/php-5.3.3/build/shtool install -c ext/phar/phar.phar /usr/bin
      ln -s -f /usr/bin/phar.phar /usr/bin/phar
      Installing PDO headers:           /usr/include/php/ext/pdo/

    It does not look like it's done, because /usr/lib64/httpd/modules/libphp5.so still shows an old date:

      -rwxr-xr-x 1 root root 3193768 Mar 31  2010 libphp5.so

    Read the article
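
    The install log lists only the cgi SAPI, which suggests the build was never configured to produce the Apache module at all; a hedged sketch of reconfiguring with the apache2handler SAPI, assuming apxs lives at /usr/sbin/apxs (adjust the path and keep whatever other options the original build used):

      cd /tmp/php-5.3.3
      ./configure --with-apxs2=/usr/sbin/apxs   # plus the other options used previously
      make && make install                      # apxs then installs the new libphp5.so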

  • Remote Access Problems with DRAC 5 on Dell PowerEdge 1950

    - by Darin Peterson
    Today I received my first Dell PowerEdge 1950 server with a DRAC 5 card. On my local network my Linux systems use static configurations, for instance:

      iface eth0 inet static
          address 192.168.1.210
          netmask 255.255.255.0
          network 192.168.1.0
          broadcast 192.168.1.255
          gateway 192.168.1.1
          dns-nameservers 8.8.8.8 8.8.4.4

    For the DRAC card, I configured the LAN like this:

      address 192.168.1.215
      netmask 255.255.255.0
      gateway 192.168.1.1

    For the advanced LAN settings I used the name servers 8.8.8.8 and 8.8.4.4. I've tried many different IP addresses, but cannot communicate with the card. Does anyone know whether I have a configuration issue, or whether the card might be bad?

    Read the article
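
    A few hedged checks from a Linux host on the same switch, to separate a cabling/link problem from a DRAC configuration problem (the DRAC 5 uses its own dedicated management port, which is the port that has to be cabled):

      # is anything answering on the address the DRAC was given?
      ping -c 3 192.168.1.215
      # does the card at least respond to ARP on the local segment?
      sudo arping -I eth0 -c 3 192.168.1.215
      # if it answers, the DRAC web UI normally listens on HTTPS
      curl -k -I https://192.168.1.215/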

  • Ping / Trace Route

    - by dbasnett
    More and more I see programmers developing web-based applications based on what I call a "ping-then-do" mentality. An example would be an application that pings the mail server before sending mail. In a rather heated debate on another forum I made this statement: "If you are going to write programs that use the internet, you should at least have a basic idea of the fundamentals. The desire to ping-then-do tells me that many who are, don't." On this forum and at Stack Overflow I see numerous questions about ping / trace route and wonder why. If it is acceptable to have a discussion about it here, I would like to hear what others think. If not, I assume it will be closed rapidly.

    Read the article

  • User not found for cn=config in OpenLDAP?

    - by Nick
    We're running OpenLDAP on Ubuntu 10.04. I'm able to access and use the front end with cn=admin,dc=ourcompany,dc=com and my password, but I'm unable to change the server's configuration (like loglevel) stored in cn=config, because I don't seem to have a valid user/password for that backend. Some examples:

      # ldapsearch
      SASL/DIGEST-MD5 authentication started
      Please enter your password:
      ldap_sasl_interactive_bind_s: Invalid credentials (49)
              additional info: SASL(-13): user not found: no secret in database

    or

      # ldapadd -x -D "cn=admin,cn=config" -W -f "my.ldif"
      Enter LDAP Password:
      ldap_bind: Invalid credentials (49)

    How do I create a user for the cn=config backend?

    Read the article
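
    On Debian/Ubuntu the cn=config database is, by default, accessible to the local root user over the ldapi socket via SASL EXTERNAL, so no separate password is needed; a short sketch (the loglevel.ldif file name is a placeholder):

      # read the config tree as root via the ldapi socket
      sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config
      # the same identity can apply changes, e.g. a loglevel LDIF
      sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f loglevel.ldif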

  • How can I resolve all external addresses to an internal address?

    - by Darian
    I am currently setting up a Linux server as a WiFi access point. Whenever someone connected to the hotspot tries to load a page, they should be forced onto a single page. Note: this network won't have internet access! For example: a user tries to access www.google.com and it resolves to 192.168.1.200 (or example.domain). I've read that dnsmasq can be used to redirect all external addresses to an internal address, but I haven't had any luck. Does anyone have an example dnsmasq config? I have also read that this can be done through a proxy.

    Read the article
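
    A minimal dnsmasq sketch for this kind of catch-all resolution, assuming dnsmasq is the only DNS server handed out to clients and 192.168.1.200 hosts the landing page (the interface name is an assumption):

      # /etc/dnsmasq.conf
      # answer every query for every domain with the landing-page address
      address=/#/192.168.1.200
      # listen only on the access-point interface
      interface=wlan0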

  • What are some good, free tools to run automated security audits for PHP code?

    - by James Simpson
    I've been looking for some time now and have come up short. The most promising tool I found was Spike PHP, which seems to no longer work. I'm looking to scan my code for potential risks of SQL injection, XSS, etc. I've gone through most of my code manually, but with a few hundred thousand lines of code, I'm sure I missed things. If possible, I'd prefer tools that can be downloaded and run against the code on my local machine rather than installed on the live server (though that isn't a hard requirement).

    Read the article

  • Set up sshd to handle multiple key pairs

    - by Warlax
    Hey guys, I am trying to set up my sshd to accept users that do not have a system user account. My approach is to use DSA public/private key pairs. I generated a key pair:

      $ ssh-keygen -t dsa

    I copied id_dsa.pub to the server machine where sshd runs and appended the line from id_dsa.pub to ~/.ssh/authorized_keys of the single existing system account I will use for every 'external' user. I then tried to ssh into that machine as the 'external' user and failed miserably. What am I missing here? Thanks.

    Read the article
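
    A short sketch of the pieces that usually have to line up, assuming "shared" is the existing system account and the key lives in ~/.ssh/id_dsa on the client; note that the login name on the ssh command line must be the system account, since sshd cannot authenticate a user that has no account:

      # permissions sshd insists on before it will trust the key files
      chmod 700 ~shared/.ssh
      chmod 600 ~shared/.ssh/authorized_keys
      # from the client, authenticate as the shared account with the generated key;
      # -v shows which key is offered and why the server rejects it
      ssh -i ~/.ssh/id_dsa -v shared@server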

  • Performance difference between compiled and binary linux distributions/packages

    - by jozko
    I searched a lot on the internet and couldn't find an exact answer. There are distros like Gentoo (or FreeBSD) which do not ship binaries, only source code for packages (ports), while the majority of distros use binary packages (Debian, etc.). First question: how much of a speed increase can I expect from compiled packages, in particular from real-world packages like Apache or MySQL, e.g. in queries per second? Second question: does a binary package mean it does not use any CPU instructions introduced after the first 64-bit AMD CPU? With 32-bit packages, does it mean the package will run on a 386 and basically does not use most modern CPU instructions? Additional info:

      - I am not talking about a desktop, but a server environment.
      - I don't care about compile time.
      - I have several servers, so a speed increase of more than 15% would make source-based packages worthwhile.
      - Please, no flame wars.

    Thank you very much.

    Read the article

  • Creating a proper VPN tunnel when both LANs have the same addressing

    - by meta
    I was following this tutorial http://wiki.debian.org/OpenVPN#TLS-enabled_VPN and this one http://users.telenet.be/mydotcom/howto/linux/openvpn.htm to create an OpenVPN connection to my remote LAN. But both examples assume that the two LANs have different address ranges (e.g. 192.168.10.0/24 and 192.168.20.0/24; see this image: i.stack.imgur.com/2eUSm.png). Unfortunately, in my case both the local and the remote LAN use 192.168.1.0/24. I am able to connect to the OpenVPN server itself (I can ping it and log in with ssh), but I can't see other devices on the remote LAN (let alone access them via a browser, which was the point in the first place). Could the addressing overlap be the reason? If not, how do I define routes so I can ping other devices on the remote LAN?

    Read the article
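
    The overlap is the usual culprit: the client's kernel treats 192.168.1.0/24 as local and never sends that traffic into the tunnel. One workaround sometimes used (sketched here as an assumption, not a tested recipe) is to reach the remote LAN under an unused alias subnet and translate it back with iptables NETMAP on the OpenVPN server; 192.168.100.0/24 and the default OpenVPN tunnel subnet 10.8.0.0/24 are placeholders:

      # client config: route the alias subnet into the tunnel
      route 192.168.100.0 255.255.255.0

      # on the OpenVPN server (which needs net.ipv4.ip_forward=1):
      # rewrite the alias subnet to the real remote LAN on the way in,
      # and masquerade VPN clients so replies come back through the server
      iptables -t nat -A PREROUTING -d 192.168.100.0/24 -j NETMAP --to 192.168.1.0/24
      iptables -t nat -A POSTROUTING -o eth0 -s 10.8.0.0/24 -j MASQUERADE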

  • How to directly send *.php.html files to the browser without passing through PHP? (Apache)

    - by Cédric Girard
    Hi, on my Apache/PHP server, the file test.php.html is parsed by PHP, while test.html is not. PHPDoc creates a lot of *.php.html files with an XML header that is a pain for the PHP parser. How do I tell Apache not to pass *.php.html files to PHP and just send them back to the browser? My php.conf file:

      <IfModule prefork.c>
        LoadModule php5_module modules/libphp5.so
      </IfModule>
      <IfModule worker.c>
        LoadModule php5_module modules/libphp5-zts.so
      </IfModule>

      AddHandler php5-script .php
      AddType text/html .php

    What can I do? Thanks, best regards, Cédric

    Read the article
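
    A hedged sketch of the usual fix: with AddHandler, any extension in the file name can trigger the handler, so test.php.html matches .php as well; restricting the handler to names that end in .php avoids that (mod_php 5 handler name taken from the config above):

      # replace "AddHandler php5-script .php" with a match on the final extension only
      <FilesMatch "\.php$">
          SetHandler php5-script
      </FilesMatch>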

  • Where to begin with IPv6 [closed]

    - by Willem de Vries
    I am fairly familiar with setting up IPv4 networks for larger server configurations, and now I want to start familiarizing myself with doing the same for IPv6. I have been Googling for the second night in a row for things like "IPv6 network design" and "IPv6 for dummies". So far most of what turns up goes on about why IPv6 exists and the amazing number of addresses we now have. I am looking for practical material instead, for example: What would be a good way to assign addresses? As I understand it, DHCP shouldn't be the default course of action. How do the other assignment methods work together with DNS configuration? What would be a good or standard way of dividing the network into subnets (database, application and web servers spread over multiple domains/applications and somewhat intertwined)? In short, I would like to find good resources with practical information: books, web pages, etc.

    Read the article
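
    As a concrete taste of the non-DHCP route the question alludes to: stateless address autoconfiguration (SLAAC) only needs a router advertising a /64 prefix; a minimal radvd sketch using the documentation prefix 2001:db8::/32 (prefix and interface name are placeholders):

      # /etc/radvd.conf
      interface eth0
      {
          AdvSendAdvert on;            # announce this router on the LAN
          prefix 2001:db8:0:1::/64     # hosts derive their own addresses from this prefix
          {
              AdvOnLink on;
              AdvAutonomous on;
          };
      };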

  • How do I set index priority in nginx so index.html loads before WordPress' .php files?

    - by orbitalshocK
    Hello there, gents. I'm an absolute beginner with Linux, the CLI, nginx and WordPress. I'm trying to make a 'coming soon' landing page that takes priority over the main WordPress installation I just set up. I want .html to load before .php, or at least to learn the best-practice approach to this. I just realized I could also use WordPress' generic "under construction" page and modify it; I'm sure it has one, or that there's a plugin. Stats:

      - Linode 1024, Ubuntu 12.04
      - nginx 1.6.1
      - single WordPress installation (for now), set up using EasyEngine, but I'll probably redo the nginx configuration for my Linode specifically

    I managed to find instructions on setting this priority in the httpd config for Apache 2, but not the equivalent documentation for nginx. If it's not on the first page of Google, then Server Fault needs the question answered! Viva la Server Fault first-page results!

    Read the article
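
    A minimal nginx sketch of the usual mechanism: the index directive is tried left to right, so listing index.html first serves the landing page whenever that file exists (paths and server settings are placeholders for the WordPress site described above):

      server {
          listen 80;
          root /var/www/example.com/htdocs;
          # index.html wins while it exists; delete it to fall through to WordPress
          index index.html index.php;

          location / {
              try_files $uri $uri/ /index.php?$args;
          }
      }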

  • Is it possible to disable the retry mechanism in Exim?

    - by Tony Meyer
    I have a very simple Exim configuration that just forwards all mail to a set of destination addresses. When immediate delivery to an address fails, the message is added to the queue (and then processed by the retry rules). I want to change this so that if immediate delivery fails, the message is :blackhole:d. (It's OK if a bounce is generated instead, as I'll just redirect the bounce to the :blackhole:.) This needs to happen for temporary failures (4xx) as well as permanent ones (5xx). I understand this means that if delivery can't be completed immediately the message will be permanently and irretrievably lost; in this particular context, that isn't a problem. Reading this over, it sounds suspiciously like "how can I improve my spamming Exim server". That really isn't what this is for, and if you can figure out a way I can prove that, I'm happy to do so!

    Read the article
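
    One approach sometimes suggested, sketched here as an assumption to verify against the Exim specification rather than a tested recipe: when no retry rule matches, Exim treats a deferred (4xx) delivery as a permanent failure, so an empty retry section makes every failure bounce immediately; the bounce can then be discarded by a router that matches the empty envelope sender:

      # retry section: intentionally left without rules
      begin retry

      # routers section: discard the resulting bounces before the normal forwarding router
      discard_bounces:
        driver = redirect
        senders = :            # only bounce messages have an empty envelope sender
        data = :blackhole: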

  • Running command transparently over ssh

    - by jnsg
    By transparently I mean forwarding of:

      - stdin, stdout and stderr
      - standard signals (SIGHUP or SIGINT would be great for a start)

    As an example, consider these invocations of a (pointless) local and remote command:

      $ `cat - > /dev/null; sleep 10` < /local/file
      $ ssh user@host "cat - > /dev/null; sleep 10" < /local/file

    I can interrupt the first one with ^C just fine. But if I try that during the second one, it only affects ssh, leaving the command running on the remote server if cat has already finished. I know about launching ssh with -t, but then I can't send data via stdin. Is this possible with ssh alone at all?

    Read the article

  • Restricting access to nginx virtual hosts through their domain names only

    - by Mo J. Mughrabi
    I have finished setting up nginx for virtual hosting; this is what my config files look like:

      server {
          listen 80;
          server_name domain.com;
          access_log /home/domain.com/prod_webapp/logs/access.domain.com.log;
          error_log /home/domain.com/prod_webapp/logs/error.domain.com.log;

          location /static {
              root /home/domain.com/prod_webapp/mocorner/ph/;
          }
          location / {
              try_files $uri @uwsgi;
          }
          location @uwsgi {
              include uwsgi_params;
              uwsgi_pass unix:/tmp/domain_uwsgi.sock;
          }
      }

    On the same machine I have domain1.com and domain2.com, and when I access each of them I get its content, which is great. My problem is that when I access the server by IP address I also get one of the virtual-host sites, even though I disabled the default site (removed its symbolic link from the sites-enabled folder). Any suggestions?

    Read the article
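
    A minimal sketch of the usual nginx answer: with no default_server defined, nginx hands bare-IP and unknown-Host requests to the first matching server block, so adding a catch-all default that refuses them keeps the named vhosts domain-only (the file name is a placeholder):

      # e.g. /etc/nginx/sites-enabled/00-default
      server {
          listen 80 default_server;
          server_name _;
          return 444;   # nginx-specific: close the connection without a response
      }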

  • My IPSec VPN isn't allowing me to connect

    - by jbondhus
    I'm following this guide to create an IPsec VPN on a Debian server. I followed all the steps, and it still isn't working: if I try connecting to it, I get an error. I've looked for logs, but I'm not sure where they would be, other than /var/log. My aim is to be able to browse the internet through my home connection and access my home network as if it were my local network (e.g. reach my home file server and printers, and screen-share my home computers remotely, without a complicated port-forwarding setup).

    Read the article
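
    A small sketch of where IPsec daemons usually log on Debian, assuming an Openswan/strongSwan-style setup (pluto and charon are those projects' IKE daemons):

      # IKE negotiation messages normally land in the general syslog facilities
      sudo grep -iE 'pluto|charon|ipsec' /var/log/syslog /var/log/auth.log | tail -n 50
      # current state of the daemon and its connections (strongSwan)
      sudo ipsec statusall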

  • Configure apache to reverse proxy for specific name

    - by Phrogz
    I have a working intranet server that:

      1. Properly serves some content from http://hqmktgwb01/
      2. Is properly configured to reverse proxy http://hqmktgwb01/dashstats to a round-robin of localhost:3000 - localhost:3003
      3. Also has the DNS name dashstats (pointing at the same IP)

    The current working configuration file can be found here: http://pastie.org/1426082

    I would like to modify the configuration so that:

      4. http://dashstats/ performs the same reverse proxying as http://hqmktgwb01/dashstats

    I (naively) modified the config like this: http://pastie.org/1426047 (added lines 90-98), but it is not a valid Apache config. Please help me modify the original config file to accomplish 1-4 above.

    Read the article
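
    A hedged sketch of one common way to get item 4: keep the existing host as-is and add a name-based virtual host for dashstats whose root proxies to the same backend; since the linked pastes aren't reproduced here, the balancer name below is an assumption standing in for the existing round-robin definition:

      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName dashstats
          ProxyPreserveHost On
          # balancer://dashstats-cluster is assumed to be the existing
          # <Proxy balancer://...> round-robin over localhost:3000-3003
          ProxyPass        / balancer://dashstats-cluster/
          ProxyPassReverse / balancer://dashstats-cluster/
      </VirtualHost>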

  • Using client certificates with wget

    - by Doc
    I cannot get wget to use client certificates. The documentation mentions the --certificate flag, and its use is clear: I point it at the PEM version of the client certificate. But when I connect I get the following error:

      HTTP request sent, awaiting response... Read error (error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure; error:140940E5:SSL routines:SSL3_READ_BYTES:ssl handshake failure) in headers.
      Giving up.

    An SSL handshake failure means the client did not supply a correct client cert, yet the same client cert works in a browser. Note: when I disable client authentication on the server, wget can connect. Note: using curl has been suggested, but I'd like to avoid switching.

    Read the article
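
    A hedged sketch of the flags that usually have to go together: unless the PEM file also contains the private key, wget needs --private-key as well, and that is the piece most often missing (file names are placeholders):

      wget --certificate=client.pem --certificate-type=PEM \
           --private-key=client.key --private-key-type=PEM \
           --ca-certificate=ca.pem \
           https://example.com/protected/resource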

  • Is it possible to export Windows event logs from multiple servers to a non-Windows host, without running a log collector on each of the Windows servers?

    - by Taylor Matyasz
    I want to export event logs from Windows to a non-Windows host. I was considering Logstash, but that would seem to require installing and running Logstash on each server. Is it possible to do this without having to run it on all of the servers? I am hoping to consolidate the information from the different servers to make searching and reporting much easier. If not, what would you recommend as the best way to export to a non-Windows host in real time? Thank you.

    Read the article
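
    One hedged pattern is to let built-in Windows Event Forwarding concentrate the logs on a single Windows collector first, so only that one machine needs a shipper such as Logstash; the built-in pieces are sketched below (subscriptions are then defined in Event Viewer or with wecutil):

      :: on the designated Windows collector: enable the Windows Event Collector service
      wecutil qc
      :: on each source server (or via Group Policy): enable WinRM, which forwarding rides on
      winrm quickconfig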

  • Configuring a Jetty web application on a different port

    - by sHz
    Hi folks, I'm brand new to Jetty. Is it possible to have Jetty listening on port 8080, but serve a specific web application under, say, /var/jetty/webapps/<appname> (the default location on CentOS) on port 10000 instead, so that http://localhost:10000/ = http://localhost:8080/<appname>? If so, what configuration changes would be required to make this work without an additional proxy server? I've googled away but haven't found a solution (perhaps I've missed something obvious?).

    Read the article
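
    A hedged sketch of the mechanism Jetty offers for this, assuming Jetty 6/7-era XML to match the CentOS packaging mentioned above (check the exact class names against the installed version): add a second, named connector in jetty.xml and pin the webapp's context to it with an @connector-name virtual host, which also lets it mount at /:

      <!-- jetty.xml: an extra connector on port 10000 -->
      <Call name="addConnector">
        <Arg>
          <New class="org.mortbay.jetty.nio.SelectChannelConnector">
            <Set name="port">10000</Set>
            <Set name="name">appOnly</Set>
          </New>
        </Arg>
      </Call>

      <!-- contexts/appname.xml: bind the webapp to that connector at the root path -->
      <Configure class="org.mortbay.jetty.webapp.WebAppContext">
        <Set name="contextPath">/</Set>
        <Set name="war">/var/jetty/webapps/appname</Set>
        <Set name="virtualHosts">
          <Array type="String"><Item>@appOnly</Item></Array>
        </Set>
      </Configure>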
