Search Results

Search found 91112 results on 3645 pages for 'media server'.


  • Multiple VM environment for developing/testing

    - by Hippo
    I was asked to create a setup for automated deployment, configuration, and installation/updates of websites. A bunch of small websites will be bundled on one server; if more websites come up, a new server will be created. I decided to use Chef for this task. All servers will be running Ubuntu at the same version and configuration. The actual question: everything needs to be tested properly before starting live deployment, so what is the best virtualisation tool to run multiple (5 - 10) virtual machines on an Ubuntu laptop? Requirements: easy setup; fast cloning/snapshots of VMs; all VMs should be easily connected to the internet and able to communicate with each other; open-source/free would be great. So far I have looked into: VirtualBox (more for desktop virtualisation, cloning not possible, every new machine needs to be installed) and VMware Player. Any suggestions? If there are any questions about what I am doing, please comment on this question and I will answer as soon as possible. This question is not about the actual set up, it is about a nice working environment.
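    For reference, recent VirtualBox releases do expose cloning and snapshots from the command line, which covers the clone/snapshot requirement above. A minimal sketch only, assuming VirtualBox 4.1 or later with VBoxManage on the path; the VM names are placeholders:

      # clone a prepared base VM, snapshot it, and boot it headless
      VBoxManage clonevm chef-base --name node1 --register
      VBoxManage snapshot node1 take clean
      VBoxManage startvm node1 --type headless
      # roll back to the clean state after a test run
      VBoxManage controlvm node1 poweroff
      VBoxManage snapshot node1 restore clean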

    Read the article

  • apache2: ssl_error_rx_record_too_long when visiting port 80? help!

    - by John
    I have an Ubuntu 10 x64 server edition machine. I got a second IP and configured /etc/network/interfaces like so (actual IPs and gateways removed):

      [code]
      auto lo
      iface lo inet loopback

      iface eth0 inet dhcp
      auto eth0
      auto eth0:0
      iface eth0 inet static
      address [ my first IP ]
      netmask 255.255.255.0
      gateway [ my first gateway ]
      iface eth0:0 inet static
      address [ my second IP ]
      netmask 255.255.255.0
      gateway [ my second gateway ]
      [/code]

    /etc/apache2/ports.conf:

      [code]
      Listen 80
      NameVirtualHost [ my first IP ]:80
      NameVirtualHost [ my second IP ]:80

      # If you add NameVirtualHost *:443 here, you will also have to change
      # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
      # to <VirtualHost *:443>
      # Server Name Indication for SSL named virtual hosts is currently not
      # supported by MSIE on Windows XP.
      Listen 443
      NameVirtualHost [ my first IP - some site is running SSL successfully using it ]:443
      Listen 443
      [/code]

    /etc/apache2/sites-enabled/mysite.conf:

      [code]
      ServerName mysite.com
      Include /var/www/mysite.com/djangoproject/apache/django.conf
      [/code]

    Then when visiting http[mysite].com:80 or http[mysite].com (:// removed because serverfault doesn't allow me to post hyperlinks), I get:

      [code]
      An error occurred during a connection to [mysite].com.
      SSL received a record that exceeded the maximum permissible length.
      (Error code: ssl_error_rx_record_too_long)
      [/code]

    My guess is that the configuration file is not being picked up, and Apache is therefore looking for the default-ssl file, which is not in sites-enabled. If I were to configure that file properly, it seems I would successfully connect to whatever default directory is specified in the default-ssl file. But I want to connect to my website. Any ideas? Thanks in advance!
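    A quick way to see how Apache has actually parsed these vhosts, and what is really answering on port 80, is the following (a sketch; substitute the real hostname):

      # list the vhosts Apache has loaded and which addresses/ports they bind to
      apache2ctl -S
      # inspect the raw response on port 80 - an SSL-looking reply here would explain the browser error
      curl -v http://mysite.com/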

    Read the article

  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of 2 instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP. I am trying to decide which is the better policy: store the MySQL DB on a separate instance, or store the MySQL DB on an attached EBS volume (from what I know, auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI - is this view correct?).

    Regarding the AMI, I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am also looking to use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and the web app that I need to be aware of? Are there any communication issues using the encrypted filesystem on the instance housing the web app? I was going to launch a second instance, or attach a second volume in the second availability zone, to act as a standby for the database - I'm just looking for some suggestions about how to get the two DBs to talk. Will this be a big task?

    Regarding security updates, is it best to create a recent snapshot and just relaunch, allowing Amazon to install updates on launch, or is the yum update mechanism a suitable alternative? Is it better practice to relaunch instead of installing updates that force a restart? I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place - is this reasonable? I just figure it is a better policy than having unnecessary applications included in an AMI that I intend to use.

    My plan for backup is to create periodic snapshots of the web app and DB instances (if I use an additional EBS volume instead of separate instances, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes). Thanks in advance for suggestions and advice. I am new to EC2 and I may have described unnecessary overkill, but I want to try to implement what can be considered a best-practice solution, so all advice is appreciated.
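    The periodic EBS snapshots and the "bake an AMI, then relaunch" update policy described above can both be scripted. A rough sketch, assuming the AWS CLI is installed and configured; all IDs and names are placeholders:

      # snapshot the EBS volume holding the database
      aws ec2 create-snapshot --volume-id vol-1234abcd --description "nightly db backup"
      # bake a fresh AMI from a patched, running app-server instance
      aws ec2 create-image --instance-id i-1234abcd --name "app-server-$(date +%Y%m%d)"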

    Read the article

  • Cannot establish a network connect to VMWare Fusion VM

    - by twolfe18
    I am running Snow Leopard 10.6.2 (not the server edition) with VMware Fusion 3.0.0 and I am trying to get my Ubuntu 9.10 x86_64 VM working. I am using a bridged connection, and I have an internet connection FROM the Ubuntu VM (I can download updates, ping websites, etc.), but I cannot connect TO the Ubuntu box from any other device on my network. I am trying to get Mongrel up on the Ubuntu VM for some Rails stuff, but it's not working. I know Mongrel/Rails is not the problem because if I start the server on the Ubuntu VM, background the process, and then wget the index page, it works. I just cannot connect to the site from another IP. I have tried using a static IP and a DHCP IP configuration on the Ubuntu VM; neither works for incoming connections, though both work for outgoing ones. I have port scanned the Ubuntu VM, and it appears that all ports are closed. However, the Ubuntu VM does respond to pings. I noticed a similar question here: http://serverfault.com/questions/99757/setting-up-a-bridged-network-with-vmware-fusion, but no answer. Any ideas?
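    Two quick checks on the Ubuntu VM narrow this down (a sketch; port 3000 is assumed from the usual Mongrel default):

      # is Mongrel listening on all interfaces (0.0.0.0) or only on 127.0.0.1?
      sudo netstat -tlnp | grep :3000
      # is a host firewall dropping inbound connections?
      sudo ufw status verbose
      sudo iptables -L -n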

    Read the article

  • Incomplete Apache logging

    - by Manz
    I have a problem with Apache running on a Linux server: it doesn't log entire error messages. Take the PHP "Undefined index" notices below, for example. Some lines from the error.log file:

      [Thu Nov 29 05:29:06 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: lin
      [Thu Nov 29 05:29:06 2012] [warn] mod_fcgid: stderr: 9
      [Thu Nov 29 05:31:30 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: link in /var/www/html/sit
      [Thu Nov 29 06:01:18 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: link in /var
      [Thu Nov 29 06:06:09 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined
      [Thu Nov 29 06:06:15 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index:
      [Thu Nov 29 06:13:04 2012] [warn] mod_fcgid: stderr: PH
      [Thu Nov 29 07:14:16 2012] [warn] mod_fcgid: stderr: PHP Notice: Undef
      [Thu Nov 29 07:32:16 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: link in /var/www/ht
      [Thu Nov 29 07:34:26 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: link
      [Thu Nov 29 07:34:30 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: link in /var/www/html/site.com/
      [Thu Nov 29 07:41:10 2012] [warn] mod_fcgid: stderr: PHP Notice: Und
      [Thu Nov 29 07:41:11 2012] [warn] mod_fcgid: stderr: PHP Notice: Und
      [Thu Nov 29 07:41:12 2012] [warn] mod_fcgid: stderr: PHP Notice: Und
      [Thu Nov 29 08:14:20 2012] [warn] mod_fcgid: stderr: PHP Notice: Undef
      [Thu Nov 29 12:36:54 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: li
      [Thu Nov 29 12:37:04 2012] [warn] mod_fcgid: stderr: PHP Notice: Unde
      [Thu Nov 29 12:46:52 2012] [warn] mod_fcgid: stderr: PHP Notice: Undefined index: link in /var/www/htm
      [Thu Nov 29 13:00:33 2012] [warn] mod_fcgid: stderr: line 35
      [Thu Nov 29 13:10:55 2012] [error] [client XXX.XX.XX.XX] File does not exist: /var/www/h

    Some lines are incomplete and the error message is cut off mid-path. Does anyone know why Apache is saving incomplete error messages?
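    One way to capture the full notices regardless of how mod_fcgid relays chunks of stderr is to have PHP write errors to its own log file instead. A sketch; the log path is a placeholder and must be writable by the PHP/FastCGI user:

      ; php.ini (or a pool-specific ini file)
      log_errors = On
      error_log = /var/log/php_errors.log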

    Read the article

  • Centos: installing SVN says I don't have perl 1.17, but I have 5.8 installed

    - by Emerson
    I'm trying to install SVN on a CentOS virtual machine, using the command the CentOS wiki recommends (http://wiki.centos.org/HowTos/Subversion):

      yum install mod_dav_svn subversion

    It gives me a few errors:

      --> Finished Dependency Resolution
      mod_dav_svn-1.4.2-4.el5_3.1.i386 from base has depsolving problems
      --> Missing Dependency: httpd-mmn = 20051115 is needed by package mod_dav_svn-1.4.2-4.el5_3.1.i386 (base)
      subversion-1.4.2-4.el5_3.1.i386 from base has depsolving problems
      --> Missing Dependency: perl(URI) >= 1.17 is needed by package subversion-1.4.2-4.el5_3.1.i386 (base)
      Error: Missing Dependency: perl(URI) >= 1.17 is needed by package subversion-1.4.2-4.el5_3.1.i386 (base)
      Error: Missing Dependency: httpd-mmn = 20051115 is needed by package mod_dav_svn-1.4.2-4.el5_3.1.i386 (base)

    The thing is that I have Perl 5.8 installed:

      root@server [~]# rpm -q perl
      perl-5.8.8-32.el5_5.2

    I also don't know why it says httpd-mmn is missing; I definitely have Apache installed. From what I read (www.sitepoint.com/forums/showthread.php?t=485683), it seems I would need to recompile Apache. Any ideas? Update: I also tried to install Subversion via WHM (11.28.35) and it gives me the same error. By the way, WHM says: CENTOS 5.5 i686 virtuozzo on server.
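    Note that perl(URI) is an RPM capability supplied by the perl-URI package (the URI CPAN module), not by the perl interpreter itself, and httpd-mmn is a capability advertised only by the distribution's httpd RPM. A couple of checks, as a sketch:

      # which package supplies the perl(URI) capability, and is it installed?
      yum provides "perl(URI)"
      rpm -q --whatprovides "perl(URI)"
      # does the installed Apache advertise httpd-mmn? (a source-built Apache, e.g. cPanel's, will not)
      rpm -q --provides httpd | grep httpd-mmn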

    Read the article

  • Why does `rpm` show 3 httpd packages, and which one provides the real httpd?

    - by Stefan Lasiewski
    I ran yum update on my CentOS 5 webserver a few days ago. Today I noticed that I have 3 httpd-* rpms! How did I end up with three RPMs for httpd? (My other servers only have one httpd rpm.) I want to make sure that my server has a patched, updated version of /usr/sbin/httpd. How can I tell which one of these packages provides the httpd binary at /usr/sbin/httpd?

      [root@node1 ~]# rpm -q httpd
      httpd-2.2.3-76.el5.centos
      httpd-2.2.3-78.el5.centos
      httpd-2.2.3-83.el5.centos
      [root@node1 ~]# /usr/sbin/httpd -V |grep version
      Server version: Apache/2.2.3
      [root@node1 ~]# rpm -q httpd-2.2.3-76.el5.centos --list |grep -w /usr/sbin/httpd
      /usr/sbin/httpd
      /usr/sbin/httpd.event
      /usr/sbin/httpd.worker
      [root@node1 ~]# rpm -q httpd-2.2.3-78.el5.centos --list |grep -w /usr/sbin/httpd
      /usr/sbin/httpd
      /usr/sbin/httpd.event
      /usr/sbin/httpd.worker
      [root@node1 ~]# rpm -q httpd-2.2.3-83.el5.centos --list |grep -w /usr/sbin/httpd
      /usr/sbin/httpd
      /usr/sbin/httpd.event
      /usr/sbin/httpd.worker
      [root@node1 ~]# rpm -q --provides httpd |grep -w httpd
      config(httpd) = 2.2.3-76.el5.centos
      httpd-mmn = 20051115
      httpd = 2.2.3-76.el5.centos
      config(httpd) = 2.2.3-78.el5.centos
      httpd-mmn = 20051115
      httpd = 2.2.3-78.el5.centos
      config(httpd) = 2.2.3-83.el5.centos
      httpd-mmn = 20051115
      httpd = 2.2.3-83.el5.centos

    Update: answering Mark Wagner's questions:

      [root@node1 ~]# rpm -q -f /usr/sbin/httpd
      httpd-2.2.3-76.el5.centos
      httpd-2.2.3-78.el5.centos
      httpd-2.2.3-83.el5.centos
      [root@node1 ~]# rpm -V httpd-2.2.3-83.el5.centos
      S.5..... c /etc/logrotate.d/httpd
      S.5..... c /etc/rc.d/init.d/httpd
      ....L... /var/www
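    Duplicate rpms of the same package usually mean a yum transaction was interrupted mid-update. A sketch of the usual cleanup, using tools from the yum-utils package (verify on a non-production box first):

      yum install yum-utils
      yum-complete-transaction          # finish any interrupted transaction
      package-cleanup --dupes           # list the duplicate packages
      package-cleanup --cleandupes      # remove the older duplicates
      rpm -qa --last | grep httpd       # confirm which httpd rpm(s) remain, newest first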

    Read the article

  • Ruby on Rails (Redmine) on Apache - 503 Error

    - by andrewtweber
    I am running a Ruby on Rails application called Redmine. It's been working fine, but today it's giving a 503 Service Temporarily Unavailable error. (It was initially set up by an employee who is now gone.) I checked the error log and it says:

      [Mon Nov 21 11:03:30 2011] [error] (111)Connection refused: proxy: HTTP: attempt to connect to 127.0.0.1:3000 (127.0.0.1) failed
      [Mon Nov 21 11:03:30 2011] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)

    Here's a chunk of my Apache config:

      <VirtualHost *:80>
        ServerName redmine.{domain}.com
        RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
        RewriteRule ^/(.*)$ balancer://redminecluster%{REQUEST_URI} [P,QSA,L]
      </VirtualHost>

      <Proxy balancer://redminecluster>
        BalancerMember http://127.0.0.1:3000
      </Proxy>

    I found this link: http://www.redmine.org/boards/2/topics/20561 which suggests I simply need to "start the redmine server." I've tried /etc/init.d/redmine start, which gives me this output:

      => Booting Mongrel
      => Rails 2.3.11 application starting on http://0.0.0.0:3000

    The contents of /etc/init.d/redmine:

      cd /var/redmine
      sudo ruby script/server -d -e production

    One thing I immediately notice is that it says 0.0.0.0 instead of 127.0.0.1. In addition, running top or ps -ef shows no record of a "mongrel" or "redmine" process. I've also tried restarting Apache before and after starting Redmine. Not sure where to go from here.
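    The 503 comes from Apache failing to reach the backend on 127.0.0.1:3000, so the useful checks are whether anything is actually listening there and why Mongrel exits after daemonizing. A sketch, reusing the commands from the init script above:

      # is anything listening on the balancer's backend port?
      netstat -tlnp | grep :3000
      # run the app in the foreground (drop -d) so startup errors are printed instead of swallowed
      cd /var/redmine && ruby script/server -e production
      # log/production.log under /var/redmine is also worth checking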

    Read the article

  • Networking problems in VMWare with wireless bridge

    - by Robert Koritnik
    Barebone data:

      virtualization: VMWare Workstation 6.5 (latest)
      Host: Windows Server 2008 x64
      Guest: Windows Server 2008 x86
      Host network adapter: wireless
      Guest network adapter 1: over Bridge VMNet (automatic)
      Guest network adapter 2: over Host only VMNet

    Problem: when I surf the net within the VM, my internet connection just gets stalled (not dropped). It doesn't experience any timeout whatsoever, it just stops downloading/communicating. For instance: I start downloading a file with a browser (IE/FF/CR, doesn't matter) and I have to pause/restart the download when the speed drops to 0. I could wait indefinitely but the connection won't pick up automatically. What did I miss in my network configuration? Update 1: I've tested this in various combinations. This works fine when the host is connected via Ethernet. But when connected via Wi-Fi, the connection on the guest behaves as described above. It connects fine. It gets a valid IP from DHCP... Everything is cool as long as you don't start doing some intensive network traffic (i.e. download a 2MB file). In that case it starts downloading and stops after a while. Speed just drops to 0 B/s... Sometimes it picks back up, sometimes it doesn't. The connection still stays up and works. I can ping around with no problem.

    Read the article

  • Xvnc4 started from xinetd only displays empty gray X screen

    - by Scott Thomason
    I'm attempting to set up an Ubuntu 10.10 box so that anyone can connect to port 5900 and be greeted by the gdm login manager. To do so, I added a vnc entry in /etc/services and I am starting Xvnc4 using this xinetd config file:

      service vnc
      {
          protocol        = tcp
          socket_type     = stream
          wait            = no
          user            = nobody
          server          = /usr/bin/Xvnc
          server_args     = -geometry 1000x700 -depth 24 -broadcast -inetd -once -securitytypes None
      }

    This kind of works... I can start multiple sessions, all to port 5900, and I get an X screen. The problem is that I only get an empty, gray X screen with no applications started. I know that when you run vncserver from the command line it looks in your ~/.vnc/ directory for your passwd and xstartup files, and I think what I want to do is put "gnome-session" into the xstartup file. However, which xstartup file? The running user is "nobody", who obviously doesn't have a ~/.vnc/ directory. I tried a /root/.vnc/xstartup file and a ~scott/.vnc/xstartup file and it doesn't look like they were even read. I changed the xinetd vnc service so that it would "strace" Xvnc4. I looked through all the "open" lines and didn't get a clue as to what file it was trying to read for xstartup. Can anyone help? I just want a terminal server where the user is presented with a gdm login screen.
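    For what it's worth, the common pattern for "gdm greeter on an inetd-spawned Xvnc" is to skip xstartup entirely and have the X server ask the local display manager for a login screen over XDMCP. A rough sketch only, assuming gdm on 10.10 honours /etc/gdm/custom.conf; details vary by gdm version:

      # in the xinetd entry: query the local display manager instead of broadcasting
      server_args = -geometry 1000x700 -depth 24 -inetd -once -securitytypes None -query localhost

      # /etc/gdm/custom.conf - enable XDMCP so gdm answers that query
      [xdmcp]
      Enable=true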

    Read the article

  • Odd IIS FTP Failure

    - by Monkey Boson
    We're running a script on our production box that zips up our database and FTPs it to a backup box every night. Our production box is running Red Hat Enterprise 5. Our backup box is running Windows XP Pro / IIS 5.1. Both machines are on the same VLAN (not sure if this is important). The backup file usually clocks in at around 3GB. Every now and again (~5% of the time), the backup script fails. The shell script on the "client side" - which looks at return codes - never identifies any problem, since ftp always returns 0. On the "server side", IIS writes out a log that looks like this:

      #Software: Microsoft Internet Information Services 5.1
      #Version: 1.0
      #Date: 2009-08-08 07:04:25
      #Fields: time c-ip cs-method cs-uri-stem sc-status sc-win32-status
      07:04:25 192.168.111.235 [15]USER backup 331 0
      07:04:25 192.168.111.235 [15]PASS - 230 0
      07:05:54 192.168.111.235 [15]created backup_20090808.zip 426 10035
      07:06:16 192.168.111.235 [15]QUIT - 426 0

    Now, I know that 426 means "Connection closed, transfer aborted", which is sort of a catch-all for "IIS was not happy". The real puzzler is the Win32 status code: 10035 (WSAEWOULDBLOCK - resource temporarily unavailable). My understanding is that this code is normal when using non-blocking socket calls, which would almost certainly be used by any FTP server implementation. My first guess, that it might be a timeout issue, doesn't make sense, since we're only talking about a few minutes here and the timeout was left at the default 900 s. Does anybody have any ideas about what is causing this problem, and how it may be fixed? Thanks!
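    Independent of the root cause, the client script can at least be made to notice the failure: the stock ftp client exits 0 regardless, but curl returns a non-zero exit code when an FTP upload aborts. A sketch; host, credentials and retry count are placeholders:

      curl -T "backup_$(date +%Y%m%d).zip" --user backup:PASSWORD --retry 3 \
           ftp://backup-box/ || { echo "backup upload failed" >&2; exit 1; }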

    Read the article

  • Alternate way to connect a vpn through a MIFI

    - by questor
    This has gotten to be a major problem at our company and, depending on who I ask, the problem either does not really exist (manufacturer and vendor) or is insoluble (according to most users, including techs who know how to prove their point). The problem involves getting a normal Windows 7 system to connect to a normal Server 2008 R2 server over a cellular router (usually called a MiFi). A very few brands/models appear to work, but the majority cannot make the connection. Since it is a cellular device, there are many variables that come into play, and I wondered if anyone had ever found a consistent way to either make one work or else prove to the providers that their equipment is at fault. They all specifically state "VPN use" in the sales brochures, but few if any work, and those that do are not reliable. From a standpoint of pure knowledge, I just wondered if anyone knew the real reason why they fail. PPTP, L2TP, IPsec: it doesn't matter. I have not tried Shrew or OpenVPN and am using strictly MS Windows protocols. Plenty of Google searches back up my complaints, but none seem to be any closer to knowing why they fail, just that they do. This is a "quest for knowledge" question. I don't expect a solution, just a reason for the problem if anyone has any ideas.

    Read the article

  • 24TB RAID 6 configuration

    - by Phil
    I am in charge of a new website in a niche industry that stores lots of data (10+ TB per client, growing to 2 or 3 clients soon). We are considering ordering about $5000 worth of 3TB drives (10 in a RAID 6 configuration and 10 for backup), which will give us approximately 24 TB of production storage. The data will be written once and remain unmodified for the lifetime of the website, so we only need to do a backup one time. I understand basic RAID theory, however I am not experienced with it. My question is, does this sound like a good configuration? What potential problems could this setup cause? Also, what is the best way to do a one-time backup? Have two RAID 6 arrays, one for offsite backup and one for production? Or should I backup the RAID 6 production array to a JBOD? EDIT: The data server is running Windows 2008 Server x64. EDIT 2: To reduce rebuild time, what would you think about using two RAID 5's instead of one RAID 6?
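    For reference, the quoted capacity follows directly from RAID 6's two-disk parity overhead: (10 - 2) drives x 3 TB = 24 TB usable. The two-RAID-5 alternative mentioned at the end gives the same total, 2 x (5 - 1) x 3 TB = 24 TB, but split across two arrays and with only one disk of redundancy in each.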

    Read the article

  • How do I install the main repositories for RHEL6

    - by eisaacson
    We've set up RHEL6 on a new server. As far as we can tell, our subscription is all set up properly. However, when I run yum repolist, it doesn't show any repositories, and /etc/yum.repos.d/redhat.repo is empty. I tried pasting in the content from another RHEL6 server's redhat.repo, but as soon as I run yum, it wipes it out again. I just need to get the basic Red Hat repositories set up so I can install packages.

    EDIT: Using the GUI, I went to System > Administration > Red Hat Subscription Manager. Under the 'Products' tab, it did not show any products.

    EDIT: When I run yum update, here's what I get:

      # yum update
      Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
      This system is receiving updates from Red Hat Subscription Management.
      Setting up Update Process
      No Packages marked for Update

    When I log in to the Red Hat customer portal, it shows the subscription as active.

    EDIT: To make sure I wasn't having a subscription issue, I re-registered and re-subscribed. I get all the same results.

      # subscription-manager register --force
      # subscription-manager subscribe --pool=*redacted*

    EDIT: contents of /etc/yum.conf:

      [main]
      cachedir=/var/cache/yum/$basearch/$releasever
      keepcache=0
      debuglevel=2
      logfile=/var/log/yum.log
      exactarch=1
      obsoletes=1
      gpgcheck=1
      plugins=1
      installonly_limit=3

    contents of /etc/yum/pluginconf.d/rhnplugin.conf:

      [main]
      enabled = 0
      gpgcheck = 1
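    A few subscription-manager and yum checks usually show whether an entitlement is actually attached and whether any repo definitions exist at all. A sketch:

      subscription-manager identity           # confirm the system is registered
      subscription-manager list --consumed    # confirm a subscription is attached to this system
      yum repolist all                        # list every repo yum knows about, enabled or not
      yum clean all                           # clear cached metadata, then retry yum repolist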

    Read the article

  • backupexec 12.5 not following symlinks on linux agent

    - by Peter Carrero
    OK, we are at a loss here trying to back up a Linux box to a Backup Exec server... We have a Backup Exec 12.5 server and a "Backup Exec for Windows Servers linux agent" (sigh) running on one of our Linux boxes. When a backup runs, we get exceptions reported for our symbolic links. It says something like:

      BACKUP- \\<servername>\[ROOT] File \\<servername>\[ROOT]/<foldername>/<symlink> is in the backup selection list but was not found.

    Looking at the selection list, the symlink shows as a 1k file in BUE. Tools > Options > Backup has "Backup files and directories by following symbolic links/junction points" selected. The same checkboxes are selected under Job Setup > Job Properties > Edit Template > Advanced. Additionally, all the checkboxes are checked under Tools > Options > Linux, Unix, and Macintosh and under Job Setup > Job Properties > Edit Template > Linux, Unix, and Macintosh. These checkboxes read: "Preserve change time", "Follow local mount points", "Follow remote mount points", "Backup contents of soft-linked directories" and "Lock remote files", but apparently changing those options produces the same result. Any help on how to get BUE to make a proper backup would be greatly appreciated. Thanks.
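    "In the backup selection list but was not found" is also what you would expect if some of those symlinks are dangling (their targets no longer exist), so it is worth ruling that out on the Linux side before blaming the agent. A sketch; replace the path with the actual selection root:

      # list broken (dangling) symlinks under the backed-up tree
      find /path/in/selection -xtype l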

    Read the article

  • Segmentation fault on login to mysql

    - by numberwhun
    I recently did a fresh install of Ubuntu on my laptop (HP dv7, AMD dual core with 4 GB RAM). I am setting up my development environment and tools, and one of the first things I worked on was getting MySQL installed. This was my configure statement, with options:

      ./configure --prefix=/usr/local/mysql --with-big-tables --with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock --with-named-curses-libs=/lib/libncurses.so.5.7

    After make; make install, I did the post-configuration, such as setting the root password and installing the mysqld daemon in its rightful place. My issue is when I try to log in to mysql to start using it; the following shows what happens:

      $ mysql -u root -p
      Enter password:
      Welcome to the MySQL monitor.  Commands end with ; or \g.
      Your MySQL connection id is 1
      Server version: 5.1.42 Source distribution

      Segmentation fault

    I have searched Google extensively and the MySQL bugs database, and I have yet to find anything that matches my issue. Here is the contents of my my.cnf file, in case you want to see it:

      $ cat /etc/my.cnf
      [mysqld]
      basedir=/usr/local/mysql
      datadir=/usr/local/mysql
      socket=/usr/local/mysql/tmp/mysql.sock

      [mysql.server]
      user=mysql
      #basedir=/var/lib

      [client]
      socket=/usr/local/mysql/tmp/mysql.sock

      [mysqld_safe]
      err-log=/usr/local/mysql/logs/mysqld.log
      pid-file=/var/run/mysqld/mysqld.pid

    I am really hoping that someone here can tell me what has gone wrong with my installation. I welcome and look forward to all responses. Thank you in advance!
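    Since the banner prints and then the client itself segfaults, two quick experiments narrow it down: rule out the option files, and grab a backtrace. A sketch (the gdb steps assume gdb is installed; the hand-picked --with-named-curses-libs path is a plausible suspect to check):

      # does the client still crash with all option files ignored?
      /usr/local/mysql/bin/mysql --no-defaults -u root -p
      # get a backtrace of the crash
      gdb --args /usr/local/mysql/bin/mysql -u root -p
      (gdb) run
      (gdb) bt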

    Read the article

  • Static file serving only works if root is a subfolder under public

    - by lulalala
    I am trying to serve static cache files using nginx. There are index.html files under the rails_root/public/cache directory. I tried the following configuration first, which doesn't work:

      root <%= current_path %>/public;
      # $uri always contains one slash (the first slash but not the last)
      try_files /cache$uri/index.html /cache$uri.html @rails;

    This gives the error:

      [error] 4056#0: *13503414 directory index of "(...)current/public/" is forbidden, request: "GET / HTTP/1.1"

    I then tried:

      root <%= current_path %>/public/cache;
      # $uri always contains one slash (the first slash but not the last)
      try_files $uri/index.html $uri.html @rails;

    and to my surprise this works. Why can I do the latter but not the former, since they point to the same location? The permissions of the folders are:

      775 public
      755 cache
      644 index.html

    The thing is that my favicon sitting under public/ is served correctly:

      # asset server
      server {
        listen 80;
        server_name assets.<%= server_name %>;
        expires max;
        add_header Cache-Control public;
        charset utf-8;
        root <%= current_path %>/public;
      }
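    One thing worth checking: a try_files whose last argument is a named location should fall through to @rails rather than ever producing a "directory index ... forbidden" error, so that error hints the request for "/" is being handled by a location that does not see this try_files at all. A sketch of making the rule explicit in the location that serves "/" (the proxy target is an assumption, not from the original config):

      location / {
        try_files /cache$uri/index.html /cache$uri.html @rails;
      }

      location @rails {
        proxy_pass http://127.0.0.1:3000;   # placeholder Rails upstream
      }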

    Read the article

  • Issues with sustained traffic with PFSense

    - by Farseeker
    Last week we had to replace our pfSense firewall because it had a catastrophic hardware failure. All but one of the NICs were taken out of the old server and put into the new one. The one NIC that was not moved was the LAN NIC, as this is on-board. The other NICs are all WAN connections and they must all be present (i.e. I can't disable one just for the sake of testing). After re-installing pfSense and restoring our backup of the configuration, everything came back online just fine; however, on the new hardware any download that takes longer than about 10 seconds just times out in the middle. Example 1: downloading from Microsoft.com goes at about 900k/sec and times out after about 10 seconds (thus, just under 10 MB of content). Example 2: downloading from cnet.com goes at about 300k/sec and times out after about 10 seconds (thus, about 3 MB of content). By "times out" I mean that the download just stops, and you have to pause/resume to get the next part done, repeat and rinse until the download is complete. However it's not consistent: sometimes it's 10 seconds, sometimes it's 4 seconds, and sometimes you can't even load a heavy HTML page because the page never finishes. I assume this is most likely because pfSense does not like the onboard NIC, as this is the primary difference between the two servers. It's recognised as NFE0, there's no room in the server for any more NICs, and I don't have any dual-port NICs handy to experiment with a different LAN connection. I've never had to troubleshoot this sort of issue before. Can anyone give me some pointers about where to start? Linux is not my forte so please be kind!
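    Stalls that only appear with one particular NIC driver are often down to hardware offload features, so a cheap first experiment is to turn them off on the onboard nfe interface. A sketch; the equivalent GUI checkboxes live under System > Advanced, and exact flag support varies by pfSense/FreeBSD version:

      # from a pfSense shell: disable checksum offload and TSO on the onboard NIC
      ifconfig nfe0 -txcsum -rxcsum -tso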

    Read the article

  • Where can I find the user in this IIS error: 'Login failed for user IIS APPPOOL\Web2'?

    - by Jack
    I encounter the following error:

      Cannot open database "testbase" requested by the login. The login failed.
      Login failed for user 'IIS APPPOOL\Web2'.
      Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
      Exception Details: System.Data.SqlClient.SqlException: Cannot open database "testbase" requested by the login. The login failed. Login failed for user 'IIS APPPOOL\Web2'.

    So, where can I give this user Web2 permission? (By the way, the server does not have a user called Web2, but there is a folder called Web2 located in the wwwroot folder.) I searched for answers, but all of the following failed:

      [1] Add the user IUSR to the folder and give it read permission.
      [2] http://www.codekicks.com/2008/11/cannot-open-database-northwind.html
      [3] http://blog.sqlauthority.com/2009/08/20/sql-server-fix-error-cannot-open-database-requested-by-the-login-the-login-failed-login-failed-for-user-nt-authoritynetwork-service/
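    'IIS APPPOOL\Web2' is the virtual identity of the Web2 application pool, not a Windows account you will find in Computer Management and not the wwwroot folder, so the permission has to be granted inside SQL Server rather than on the file system. A sketch of the usual T-SQL, assuming IIS and SQL Server run on the same machine; adjust the database roles to what the application actually needs:

      -- run against the SQL Server instance that hosts testbase
      CREATE LOGIN [IIS APPPOOL\Web2] FROM WINDOWS;
      USE testbase;
      CREATE USER [IIS APPPOOL\Web2] FOR LOGIN [IIS APPPOOL\Web2];
      EXEC sp_addrolemember N'db_datareader', N'IIS APPPOOL\Web2';
      EXEC sp_addrolemember N'db_datawriter', N'IIS APPPOOL\Web2';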

    Read the article

  • Can you make a Windows network default user profile NOT apply to a certain operating system?

    - by Jordan Weinstein
    I would like to create a network default user profile for Windows 7 only. This is on a Windows 2003 domain with servers ranging from Windows 2000 to 2008 R2 and Windows XP on the workstation side. We're about to do a full migration to Windows 7 and I'd like to start using the network default user profile functionality, as we're not migrating user profiles over; we want everyone to start clean. I followed the simple steps from this page: http://support.microsoft.com/kb/973289 under the heading "How to turn the default user profile into a network default user profile in Windows 7 and in Windows Server 2008 R2", but the problem is that this profile would then also apply to a new user/admin logging into a 2008 server. That's no good. Anyone have any ideas on how to limit what actually uses that network profile? I was thinking about setting deny permissions for all my admin/service accounts on the "\\dcserver\netlogon\Default User.v2" folder, but then it might time out and cause other problems. Haven't tried yet, as that seems like a bad way of making this work.

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem with most shops is backups: their size, and the window in which you have to back up the data. What we are working with: a VMware vSphere 4.1 cluster; a PS4000XV EqualLogic storage array (1.6 TB volume dedicated to backup-to-disk); a physical backup server with a single LTO4 drive; Backup Exec 2010 R3 with the Exchange, SQL, Active Directory and VMware agents; dual gigabit MPIO connections between all devices (storage array, backup server, VM hosts). What we would like to accomplish: an efficient backup-to-disk-to-tape solution where all of our VMs are backed up to the storage array first, and only once they are completely backed up to the array are they replicated to tape. In the event we needed to recover, we would be able to do so directly from tape. Where we are currently: of the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time; as soon as a job finishes backing up to disk, the same job starts to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of restoration I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • Exim and receiving email with large recipient lists

    - by AceJordin
    I have Exim4 running on Debian, configured to receive mail for multiple domains. Exim is set to forward all email received for one of the domains to another box. That box is configured with a catch-all mailbox that everything goes into. My issue is that when an email with a large recipient list is sent to the domain (all recipients at the same domain, but different users), Exim receives the single email over multiple connections. This means that the catch-all mailbox receives multiple copies of the single email, each containing the full recipient list. For example, I was able to reproduce it by sending an email from my Gmail account that contained 500 recipients (e.g. [email protected]; [email protected]; [email protected]; etc., for a total of 500). Exim received the message as 20 messages (25 recipients per message; this appears to be a Gmail server setting). So the catch-all mailbox received 20 messages, each containing all 500 addresses. I'm pretty sure I understand why this is happening, but is there any way I can configure Exim to only receive it once, or to combine the copies into one? Is there anything that can be done on my end, or am I at the mercy of the sending email server? This is causing havoc with a process that polls the catch-all mailbox and parses each recipient of each email.

    Read the article

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to vagrant; could someone who knows more about it (and puppet) explain how vagrant deals with the SSL certs needed when making vagrant testing machines that process the same node definition as the real production machines? I run puppet in master/client mode, and I wish to spin up a vagrant version of my puppet production nodes, primarily to test new puppet code against. If my production machine is, say, sql.domain.com, I spin up a vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner, and give a puppet.puppet_node entry of "sql.domain.com" so that it gets the same puppet node definition. On the puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the vagrant machine and the real one get that node entry on the puppet server. Finally, I enable auto-signing for *.vagrant.domain.com in puppet's autosign.conf, so the vagrant machine gets signed. So far, so good... However: if one machine on my network gets rooted, say, unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old puppet SSL cert off of it and then re-running puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the sql machine! One solution I can think of is to avoid autosigning, and instead put the known puppet SSL cert for the real production machine into the vagrant shared directory and have a vagrant ssh job move it into place. The downside is that I end up with all my SSL certs for each production machine sitting in one git repo (my vagrant repo) and thereby on each developer's machine - which may or may not be an issue, but it doesn't sound like the right way of doing this. tl;dr: how do other people deal with vagrant and puppet SSL certificates for development or testing clones of production machines?
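    For concreteness, the setup described above corresponds to roughly this Vagrantfile (v1 syntax; the box name and the puppet master hostname are placeholders):

      Vagrant::Config.run do |config|
        config.vm.box       = "ubuntu-base"                # placeholder base box
        config.vm.host_name = "sql.vagrant.domain.com"

        config.vm.provision :puppet_server do |puppet|
          puppet.puppet_server = "puppet.domain.com"       # placeholder puppet master
          puppet.puppet_node   = "sql.domain.com"          # request the production node definition
        end
      end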

    Read the article

  • no disk io but iowait very high

    - by Dan
    There is no disk I/O going on. Results of iotop:

      Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
      TID  PRIO  USER      DISK READ  DISK WRITE  SWAPIN   IO<     COMMAND
      1    be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  init [3]
      1930 be/4  named     0.00 B/s   0.00 B/s    0.00 %   0.00 %  named -u ~d/run-root
      1931 be/4  named     0.00 B/s   0.00 B/s    0.00 %   0.00 %  named -u ~d/run-root
      1932 be/4  named     0.00 B/s   0.00 B/s    0.00 %   0.00 %  named -u ~d/run-root
      1933 be/4  named     0.00 B/s   0.00 B/s    0.00 %   0.00 %  named -u ~d/run-root
      1810 be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  sh /usr/b~user=mysql
      9795 be/4  apache    0.00 B/s   0.00 B/s    0.00 %   0.00 %  httpd
      8004 be/4  apache    0.00 B/s   0.00 B/s    0.00 %   0.00 %  httpd
      3226 be/4  postfix   0.00 B/s   0.00 B/s    0.00 %   0.00 %  tlsmgr -l -t unix -u
      8154 be/4  apache    0.00 B/s   0.00 B/s    0.00 %   0.00 %  httpd
      9759 be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  find -name php.ini
      9249 be/4  apache    0.00 B/s   0.00 B/s    0.00 %   0.00 %  httpd
      1756 be/4  postfix   0.00 B/s   0.00 B/s    0.00 %   0.00 %  psa-pc-re~@localhost
      1863 be/4  mysql     0.00 B/s   0.00 B/s    0.00 %   0.00 %  mysqld --~mysql.sock
      3123 be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  crond
      1758 be/4  postfix   0.00 B/s   0.00 B/s    0.00 %   0.00 %  psa-pc-re~@localhost
      1865 be/4  mysql     0.00 B/s   0.00 B/s    0.00 %   0.00 %  mysqld --~mysql.sock
      1592 be/4  sw-cp-se  0.00 B/s   0.00 B/s    0.00 %   0.00 %  sw-cp-ser~ver/config
      7612 be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  sshd: root@pts/0
      7614 be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  sftp-server
      7615 be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  -bash
      1602 be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  sshd
      8003 be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  httpd

    But iowait is very high. iostat reports:

      avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                 0.83   0.00     0.13    13.83    0.00  85.20

      Device:  tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn

    The server runs like a snail. What could be wrong here? Thanks.
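    High iowait with zero throughput usually means something is blocked waiting on a device (a dying disk, a stuck NFS mount, or a saturated host if this is a container) rather than moving data. Two quick ways to see what is actually blocked, as a sketch:

      # list tasks in uninterruptible (D) sleep - these are what drive iowait up
      ps -eo state,pid,comm | awk '$1 == "D"'
      # watch the blocked-process count (the "b" column) and swap activity over a few seconds
      vmstat 1 5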

    Read the article

  • xampp apache on windows 7 returns http header only

    - by bumperbox
    I am having issues with XAMPP running on Windows 7 RC32. I type in a localhost address and get a header back only, no page content. Some days it works fine; other days I can't get it to work after multiple attempts, reboot or otherwise. The request doesn't even get put into the access log, which seems unusual. Here is the log file at startup, in case that helps. Any ideas?

      [Wed Sep 09 12:27:08 2009] [notice] Digest: generating secret for digest authentication ...
      [Wed Sep 09 12:27:08 2009] [notice] Digest: done
      [Wed Sep 09 12:27:09 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9 configured -- resuming normal operations
      [Wed Sep 09 12:27:09 2009] [notice] Server built: Dec 10 2008 00:10:06
      [Wed Sep 09 12:27:09 2009] [notice] Parent: Created child process 2500
      [Wed Sep 09 12:27:10 2009] [notice] Digest: generating secret for digest authentication ...
      [Wed Sep 09 12:27:10 2009] [notice] Digest: done
      [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Child process is running
      [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Acquired the start mutex.
      [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting 250 worker threads.
      [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting thread to listen on port 443.
      [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting thread to listen on port 80.
      [Wed Sep 09 12:27:15 2009] [notice] Parent: child process exited with status 255 -- Restarting.
      [Wed Sep 09 12:27:15 2009] [notice] Digest: generating secret for digest authentication ...
      [Wed Sep 09 12:27:15 2009] [notice] Digest: done
      [Wed Sep 09 12:27:16 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9 configured -- resuming normal operations
      [Wed Sep 09 12:27:16 2009] [notice] Server built: Dec 10 2008 00:10:06
      [Wed Sep 09 12:27:16 2009] [notice] Parent: Created child process 3252
      [Wed Sep 09 12:27:17 2009] [notice] Digest: generating secret for digest authentication ...
      [Wed Sep 09 12:27:17 2009] [notice] Digest: done
      [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Child process is running
      [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Acquired the start mutex.
      [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting 250 worker threads.
      [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting thread to listen on port 443.
      [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting thread to listen on port 80.
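    The "child process exited with status 255 -- Restarting" line shows the worker dying right after startup; on Windows, common culprits are another program already holding port 80/443 (Skype, IIS and the like) or a misbehaving Apache/PHP module. A quick first check for port conflicts from cmd.exe, as a sketch:

      netstat -ano | findstr ":80"
      netstat -ano | findstr ":443"
      REM look up the process that owns a suspicious PID (1234 is a placeholder)
      tasklist /FI "PID eq 1234"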

    Read the article
