Search Results

Search found 16784 results on 672 pages for 'white box'.


  • compile kernel 2.6.34 for Ubuntu Lucid for xen dom0 / pvops

    - by andreash
    Hi there, I'd like to compile a recent Linux kernel (2.6.34) for my Ubuntu 10.04 Lucid Lynx AMD64 box, mainly because I'd like to use it as a dom0 kernel with the recent Xen 4. There's plenty of documentation on the web about how to compile a kernel 'Debian style'. But I think it would be nice to start with an 'official' Ubuntu config, so as not to miss anything important and have to recompile over and over again. So what I'd like to do is compile 2.6.34, but starting from the 'official' /boot/config-2.6.32-XX from Ubuntu Lucid. The question is: how do I best do that? If I just take the config from 2.6.32, the new features from 2.6.33/34 won't be in the config. So what I'd like to do is somehow merge the 2.6.34 config with the original 2.6.32 one from Ubuntu. How can I best do that? Does it even make sense? Is there an easier way to achieve what I want? Thanks for your insight! A. PS: I just found a linux-image-2.6.32-bpo.4-xen-amd64 package on backports.org, but no information about it. Would it work as a dom0 kernel on Lucid?
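    A minimal sketch of the usual merge workflow, assuming the 2.6.34 source tree is unpacked in ~/linux-2.6.34 (paths are illustrative): seed the new tree with the Lucid config and let make oldconfig prompt only for the options added since 2.6.32.

        cd ~/linux-2.6.34
        cp /boot/config-2.6.32-XX .config    # start from the official Ubuntu Lucid config
        make oldconfig                       # asks only about options new in 2.6.33/34
        make menuconfig                      # optional: review and enable the Xen dom0/pvops bits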


  • how to automatically mount ~/Private using ecryptfs when logging in via ssh pubkey

    - by andreash
    Rationale: I want to be able to automatically make backups to a remote machine, which will be encrypted with ecryptfs. The title says it all: I set up ecryptfs-utils on my Debian Squeeze box, and set up one user to use it via ecryptfs-setup-private. When I log in via SSH using password authentication, the ~/Private directory automatically gets mounted. How can I get ~/Private to also be mounted automatically when logging in via SSH using public key authentication? Obviously, the best solution would be if ecryptfs could somehow 'use' the SSH public key to en/decrypt the data (I know that the user's password would then no longer be able to en/decrypt the data; this would be acceptable). Probably, this will not work. So perhaps somehow call ecryptfs-mount-private via SSH before logging in via public key? Probably, then I would need to somehow pipe the passphrase through the SSH connection, right? So I would need to store it on the source machine's file system. Not nice either. Any other ideas?
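    In the meantime, a rough interactive workaround sketch (host and user names are made up): unlock ~/Private on the remote box over SSH before pushing the backup, then unmount afterwards. It still asks for the passphrase once per run, so it doesn't solve the fully unattended case.

        ssh -t backupuser@squeeze.example 'ecryptfs-mount-private'    # prompts for the login passphrase
        rsync -a /data/ backupuser@squeeze.example:Private/
        ssh backupuser@squeeze.example 'ecryptfs-umount-private'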


  • Diagnosing SAN connectivity issues (RHEL5)

    - by Matthew
    We are currently utilizing GFS2 to share a SAN LUN between 3 servers. However, due to a feature problem with the vendor software we are using, we currently have the volume unmounted on two of the boxes, and are instead exporting the GFS2 filesystem via NFS from the first one (the software requires some weird locking mechanics that GFS2 doesn't support). As of this morning, NFS was no longer able to read/write to the volume from any of the servers, including the NFS server. I then tried checking the normal mount (the directory that is exported on the NFS server) and I received a weird input/output error just trying to cd into it. When I tried running multipath, I got a DM error; however, multipath -l worked just fine. I tried unmounting the GFS2 volume, and the CLI hung. I ran init 0, which killed most services, but then the shutdown appeared to hang. I logged in via out-of-band access (HP iLO) and saw that the shutdown was hung trying to unmount GFS2 volumes. My main priority was getting the box back online, so after about 5 minutes of waiting I did a hard reset. I am now trying to figure out what went wrong. What are the correct logs to investigate? I've never run into SAN issues like this before. The SAN is connected via 2 fibre connections. Any help would really be appreciated. Everything appears to be up and functional now.
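    A hedged starting point for the post-mortem (the grep patterns are illustrative, not exhaustive): on RHEL5 the SCSI/FC path errors end up in the kernel ring buffer and /var/log/messages, and the multipath state usually tells the rest of the story.

        grep -iE 'scsi|fibre|multipath|dm-|gfs2' /var/log/messages   # path failures, I/O errors, fencing
        dmesg | grep -iE 'scsi|sd[a-z]|multipath'                    # what the kernel saw at the time
        multipath -ll                                                # current path/LUN status per HBA
        cat /proc/scsi/scsi                                          # devices the HBAs still see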


  • Deploying concrete5 on nginx

    - by Nithin
    I have a concrete5 site that works 'out of the box' on an Apache server. However, I am having a lot of trouble running it in nginx. The following is the nginx configuration I am using:

        server {
            root /home/test/public;
            index index.php;

            access_log /home/test/logs/access.log;
            error_log /home/test/logs/error.log;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ index.php;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            # pass the PHP scripts to FastCGI server listening on unix socket
            #
            location ~ \.php($|/) {
                fastcgi_pass unix:/tmp/phpfpm.sock;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    I am able to get the homepage but am having problems with the inner pages, which display an "Access denied" error. Possibly the rewrite is not working; in effect I think it's querying and trying to execute PHP files directly instead of going through the concrete5 dispatcher. I am totally lost here. Thank you for your help, in advance.
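    One variation that is commonly suggested for concrete5 pretty URLs (a sketch only, not verified against this exact setup): send misses to /index.php with the query string attached, so requests for inner pages reach the concrete5 dispatcher instead of being matched as plain files.

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }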


  • How will I support 100,000 requests an hour?!

    - by tylerl
    I know this question is a little strange but I got lucky with an idea and I need some numbers to use for when I try to make a deal with a company. I'm wondering how much it'll cost me to run a site that's heavy on PHP and gets between 70,000 and 100,000 requests an hour on something like Rackspace's Cloud Servers. I have no idea how many servers I need or how much RAM each one should have. There will be a decent number of images on the site (probably something like 10,000 in the first couple weeks) and the site runs on about 2,500 lines of PHP code. I figure I should sign up for a CDN of some kind, although CDN In A Box is all I've heard of and I'm not sure it's necessary for a site that's already on a cloud platform. I've obviously never done anything like this before so I'm just looking to get an estimation of what I need for this massive site... Also, I use a database and I was wondering how that works - would I dedicate one of the cloud servers to running the database or would I need to put the database into each of the cloud servers? Thanks in advance...
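    For rough sizing (back-of-envelope only; the real answer depends on how heavy each PHP request is and on caching): the hourly figure is less scary once converted to a per-second rate.

        100,000 requests/hour / 3,600 seconds/hour  ~ 28 requests/second on average
        assume peak traffic is 2-3x the average     -> roughly 60-90 requests/second to plan for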


  • vhost configuration for owncloud

    - by Razer
    I'm using apache2 for hosting owncloud. I configured a vhost file for owncloud, but every time I go on the site my browser downloads a ruby file. Here is my vhost configuration:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName http://rsserver.fritz.box
            DocumentRoot /var/www/owncloud/

            <Directory /var/www/owncloud/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Apache error log tells me:

        [Sat Jun 16 20:46:04 2012] [error] [client xx.xx.xx.xx] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/owncloud/core/templates/403.php

    mod_rewrite is enabled. Where is the problem?
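    A hedged first thing to try (a sketch, not a confirmed fix): the error means that FollowSymLinks/SymLinksIfOwnerMatch is off in the scope where the RewriteRule is evaluated; an .htaccess that sets Options without the + prefix silently drops it. Re-enabling it explicitly where the rewrite rules live is the usual remedy:

        # e.g. at the top of /var/www/owncloud/.htaccess (assumed location)
        Options +FollowSymLinks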


  • Move an existing RAID 5 array from Ubuntu to Gentoo

    - by Cocoabean
    I have a 64-bit Ubuntu machine with a 4-disk RAID 5 using software RAID (md). I've been able to boot an Ubuntu LiveCD and recognize the array with a simple mdadm -A /dev/md0. It was easy to mount after that and nothing had to rebuild. I'm installing Gentoo on this box now (multi-boot, non-RAID root partition) and I have md auto-detect turned on in the kernel. When I boot Gentoo I get "invalid superblock magic on sdd" for each of the drives in the array. I boot back to Ubuntu and they mount no problem. I tried copying the mdadm.conf that works in Ubuntu to Gentoo, and then ran mdadm -A /dev/md0, but it reports that there is no array named md0. I don't want to lose data (obviously) and I don't want to have to let the RAID rebuild every time I switch between OSes. Any help is appreciated. Both are using mdadm 3.1.4 and both are running 64-bit kernels. mdadm -D /dev/md0 from Ubuntu yields: http://pastebin.com/5gj2QNkV

    UPDATE: After rebooting I noticed that it still complains about invalid superblocks, but cat /proc/mdstat shows an inactive /dev/md127 with the same disks as my RAID. I want to mount it, but I don't want to get stuck waiting for a rebuild or destroy it inadvertently. Here is a pastebin of mdadm -D /dev/md127 on Gentoo: http://pastebin.com/gDCWn0Rn

    UPDATE II: dmesg output about 'invalid raid superblocks': http://paste.ubuntu.com/885471/ fdisk -l from Ubuntu (/dev/md0 does not have any partitions but I do have it mounted and accessible): http://paste.ubuntu.com/885475/
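    A hedged sketch of how the inactive md127 is often picked up again without a rebuild (member device names are assumptions; check the --examine output before assembling anything). Worth noting: the in-kernel RAID auto-detect only recognises old 0.90 superblocks, so "invalid superblock magic" at boot is expected if the array uses 1.x metadata, and assembling from userspace (or an initramfs) is the usual route.

        mdadm --examine /dev/sd[abcd]             # assumed members; confirms metadata version and array UUID
        mdadm --stop /dev/md127                   # release the half-assembled array
        mdadm --assemble /dev/md0 /dev/sd[abcd]   # reassemble; no resync if the members agree
        mdadm --detail --scan >> /etc/mdadm.conf  # persist the ARRAY line on the Gentoo side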


  • Conky starts above windows in Ubuntu Maverick

    - by DesertIvy
    Hey guys, I did not run into this problem until I upgraded my Ubuntu box to Maverick Meerkat (10.10). Basically, whenever I start my computer, conky runs as expected, except it gets drawn over any windows that I load (see screenshot). To fix this for a single session, I simply restart conky by running killall conky; conky in a terminal. Conky gets re-drawn below active windows (namely, only appearing on my desktop), and does not have the border/drop-shadow, but I have to do this every time I start a new session. Is there a simple way to fix this? I have a small shell script that I run on startup, but it does not seem to solve the problem.

        #!/bin/bash
        sleep 10 && conky;
        sleep 5 && killall conky; conky;

    Below is the non-text part of my .conkyrc file.

        # Conky settings #
        background yes
        update_interval 1
        cpu_avg_samples 2
        net_avg_samples 2
        override_utf8_locale yes
        double_buffer yes
        no_buffers yes
        text_buffer_size 2048
        #imlib_cache_size 0
        temperature_unit fahrenheit

        # Window specifications #
        own_window yes
        own_window_type override
        own_window_transparent yes
        own_window_hints undecorate,sticky,skip_taskbar,skip_pager,below
        border_inner_margin 0
        border_outer_margin 0
        minimum_size 200 250
        maximum_width 200
        alignment tr
        gap_x 220
        gap_y 280

        # Graphics settings #
        draw_shades no
        draw_outline no
        draw_borders no
        draw_graph_borders no

        # Text settings #
        use_xft yes
        xftfont caviar dreams:size=8
        xftalpha 0.5
        uppercase no
        temperature_unit celsius
        default_color FFFFFF

        # Lua Load #
        lua_load ~/.lua/scripts/clock_rings.lua
        lua_draw_hook_pre clock_rings
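    One variation that is often suggested for this symptom on compositing desktops (a sketch only; behaviour differs between window managers): run conky in a normal window kept below others instead of an override-redirect window, which some window managers stack incorrectly at startup.

        # in the "Window specifications" section of ~/.conkyrc
        own_window_type normal       # instead of override; 'desktop' is the other common choice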


  • Static noise in headphones

    - by John Murdoch
    I have an Asus P6T based system. I was using the on-board sound (plugging the Logitech X-230 2.1 analog speakers into the green "front speakers" 3.5mm analog output, then plugging my headphones into that). I was quite happy with the sound quality (didn't hear any static noise if the volume was turned down to my normal listening level). Then about a week ago I started getting terrible static noise from the left channel, and no normal audio on that left channel. The right channel had more static noise than usual but did carry a bit of sound. I tried using the AC'97 header on the front of my case but that seemed to have no signal. I decided my on-board sound card had gone bad and bought an internal sound card to replace it (StarTech 7.1Ch PCI). This fixed the "no sound from left channel" problem, but I had much more audible static noise. I decided the card was low quality and/or had interference from all the other things happening inside the computer case, and bought a Sweex SC016 external USB sound card. But even with that I have static noise in the headphones. Positioning the USB sound card differently doesn't seem to help. Trying the other analog outputs (e.g., surround) doesn't help. The static noise in all cases is proportional to the volume. I have tried different headphones, but the situation is the same, though perhaps the flavour of the static noise changes slightly. So what are my options?
    a) Get another, more expensive, external USB sound card, hoping the quality will improve?
    b) Get another, more expensive, internal sound card (PCIe 1x perhaps), hoping the quality will improve?
    c) Get a dedicated DAC box?
    d) Get some Hi-Fi earphones?
    Suggestions? tl;dr - Two different sound cards both still have static noise in headphones.


  • How to install DBD::mysql on OS X Server 10.6?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (mac mini server). But I'm missing the mysql headers apparently. Since mysql is already part of OS X Server 10.6, I would like to NOT install anything else (no fink or darwin ports installs), just whatever's needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere? And if so, where? (again: I don't want to install another version of mysql on the box, want to use the version it came with). Is there a way to install DBD::mysql without compiling any C files? This is the error I get (the actual error is much longer, but these are the most meaningful bits; this is the first error reported):

        Checking if your kit is complete...
        Looks good
        Unrecognized argument in LIBS ignored: '-pipe'
        Note (probably harmless): No library found for -lmysqlclient
        Multiple copies of Driver.xst found in:
          /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
          /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/
        at Makefile.PL line 907
        Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level) installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
        Writing Makefile for DBD::mysql
        cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
        cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
        cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
        cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
        gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c
        In file included from dbdimp.c:20:
        dbdimp.h:22:49: error: mysql.h: No such file or directory
        dbdimp.h:23:45: error: mysqld_error.h: No such file or directory
        dbdimp.h:25:49: error: errmsg.h: No such file or directory
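    A hedged sketch of the usual way to point the build at headers that aren't on the default include path (the --mysql_config switch is a standard DBD::mysql Makefile.PL option; the path below is an assumption, since Apple shipped the MySQL headers in different places on different Server releases and they may need to be installed separately):

        perl Makefile.PL --mysql_config=/usr/local/mysql/bin/mysql_config   # adjust to wherever mysql_config actually lives
        make
        make test
        sudo make install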


  • SSL in IIS 7 on a subdomain in a web farm

    - by justjoshingyou
    I have been having one of the most frustrating days in my entire IT career. I am trying to install an SSL certificate on a subdomain in a web farm. http://shop.mydomain.com needs to ALWAYS be forced to https://shop.mydomain.com. I have a temporary cert issued from VeriSign for shop.mydomain.com, and I have installed the cert on the server. The website for shop.mydomain.com is set as a host header in IIS, with the DNS entry pointed to the same IP as mydomain.com - which is our load balancer. I actually have 2 load balancers (as needed by our ISP). One redirects all traffic on port 80 out to the different servers on port 80. The other pushes out port 443 to the servers on port 443. shop.mydomain.com is to be the only site protected by SSL at this time. When I add the binding and navigate to https://shop.mydomain.com it pops up with a warning about the cert being invalid (assumed because this is a test cert), and then it sends the user to http. So I checked the "Require SSL" box, and now it redirects to http://shop.mydomain.com/default.aspx and displays an ASP.NET 404 error message (not the IIS 404 error). I tried removing the site's binding on port 80 as well, with no luck. I am nearly ready to crawl under my desk into the fetal position. How on earth do I make this work? I can't even get it to work on one machine, let alone in the load balanced environment.


  • Google chrome not accepting any security certificates

    - by Jerry
    I've recently developed a problem with Google Chrome that's really annoying. I'm using Firefox at the moment with no problems whatsoever, and it's the same with IE, so it's safe to say this problem is specific to Chrome. The problem is that it's not accepting security certificates from certain sites. I suppose the best place to start would be Google itself. I can't search. The Google search page will load, but when I type some search term into the search box and hit 'search' I get the message: "You attempted to reach www.google.com, but the server presented an invalid certificate. You cannot proceed because the website operator has requested heightened security for this domain." No matter what the search term is, this is the result. Also when I try to log in to Facebook - same message. YouTube works, as do many other sites that I know present security certs, so I'm baffled. I've searched and there are other people who have had similar issues, but I can't find a solution anywhere. The most common answer I'm picking up for this is to "check your system time", but I can safely say that it's not my system time. If anyone knows what is going on, I'd very much appreciate being informed. It's not super urgent as I can use Firefox to access those places Chrome won't, but it IS super annoying because I can usually sort out issues like this in no time.


  • XP Client for NFS failure dialog on startup, but drive mapping works

    - by Matt Bennett
    I'm mounting an NFS share to some Windows machines using the tools that come in the Services for UNIX Administration toolkit. I've set up the User Name Mapping service to use local passwd and group files. I had to manually start the User Name Mapping service, and then created an 'advanced map' from the XP machine's user to a UID that exists on my NFS server, like so:

        Windows User: Matt Bennett
        UNIX Domain: PCNFS
        UNIX User: mattbennett
        UID: 10250
        Primary: *

    I can map a network drive without any issues, and it correctly identifies the UID and GID to use, but when I reboot I get this message: "An error occurred while connecting to the NFS server. Make sure that the Client for NFS service has started. If the problem persists make sure Client for NFS service can communicate with User Name Mapping or PCNFS server." After dismissing the dialog, the machine finishes booting and the network drive is there in My Computer with the title "Disconnected Network Drive", but when I open it I can see the network share without a problem, and then it drops the 'disconnected' from its title. It seems like the services are starting in the wrong order or something, so the first attempt to connect fails but subsequent ones work as expected. There don't seem to be any symptoms apart from the dialog box, but obviously something's not quite right. What have I done wrong? Thanks, Matt.
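    A hedged sketch of one way to express that ordering to Windows (the service names below are guesses and must be checked first; also note that sc config overwrites the existing dependency list rather than appending to it):

        REM find the real service names first
        sc query state= all | findstr /i "nfs map"
        REM make Client for NFS wait for User Name Mapping (hypothetical service names)
        sc config NfsClnt depend= MapSvc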


  • Development on Windows 7; Web server on Linux - How to share Apache web root?

    - by TheKeys
    I've got a LAMP server that I want to use as a local web server. I've got a Windows 7 machine that I want to use as my development machine. The machines will be on the same LAN (or the Windows box will be VPNed into the LAN). My question is, what is the best way of sharing the web root of the LAMP server so that I can edit the files on the remote Windows 7 machine, and how do I go about configuring this on the Linux machine? (Fedora 16) I would like the solution to be as easy to use as possible, with preferably no extra steps required to save/edit/upload files from my IDE on my Windows 7 machine. I'm thinking either a Samba or NFS share is the way to go, but I'm concerned I'm going to run into issues with permissions and unix/windows file handling. Is one better than the other for my use case, or is there a better alternative solution? I'm currently using Windows 7 Professional, which doesn't have NFS support, but I would upgrade to Ultimate (which does have NFS support) if it's the best solution.
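    A minimal Samba sketch for the Fedora side (share name, path and account are assumptions; Samba avoids the Windows NFS client question entirely, since Windows 7 Professional can map it natively). First a share definition appended to /etc/samba/smb.conf:

        [webroot]
            path = /var/www/html
            valid users = devuser
            read only = no

    then:

        sudo yum install -y samba
        sudo smbpasswd -a devuser              # hypothetical account used from the Windows box
        sudo systemctl restart smb.service

    Permissions are the usual catch: the share account needs write access to the docroot (for example via group membership), otherwise saves from the IDE will fail even though the share mounts fine.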


  • Comparing Nginx+PHP-FPM to Apache-mod_php

    - by Rushi
    I'm running Drupal and trying to figure out the best stack to serve it: Apache + mod_php or Nginx + PHP-FPM. I used ApacheBench (ab) and Siege to test both setups and I'm seeing Apache performing better. This surprises me a little bit since I've heard a lot of good things about Nginx + PHP-FPM. My current Nginx setup is pretty much out of the box, and the same goes for PHP-FPM. What optimizations can I make to speed up the Nginx + PHP-FPM combo over Apache and mod_php? In my tests using ab, Apache is outperforming Nginx significantly (higher requests/second and finishing tests much faster). I've googled around a bit, but since I've never used Nginx, PHP-FPM or FastCGI, I don't exactly know where to start. PHP v5.2.13, Drupal v6, latest PHP-FPM and Nginx compiled from source; Apache v2.0.63.

    ApacheBench, Nginx + PHP-FPM:

        Server Software: nginx/0.7.67
        Server Hostname: test2.com
        Server Port: 80
        Concurrency Level: 25
        ---> Time taken for tests: 158.510008 seconds
        Complete requests: 1000
        Failed requests: 0
        Write errors: 0
        ---> Requests per second: 6.31 [#/sec] (mean)
        Time per request: 3962.750 [ms] (mean)
        Time per request: 158.510 [ms] (mean, across all concurrent requests)
        Transfer rate: 181.38 [Kbytes/sec] received

    ApacheBench, Apache using mod_php:

        Server Software: Apache/2.0.63
        Server Hostname: test1.com
        Server Port: 80
        Concurrency Level: 25
        --> Time taken for tests: 63.556663 seconds
        Complete requests: 1000
        Failed requests: 0
        Write errors: 0
        --> Requests per second: 15.73 [#/sec] (mean)
        Time per request: 1588.917 [ms] (mean)
        Time per request: 63.557 [ms] (mean, across all concurrent requests)
        Transfer rate: 103.94 [Kbytes/sec] received
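    A hedged first check before tuning anything (a sketch; config paths vary when PHP-FPM is built from source): make sure the two stacks are actually comparable, i.e. that the FPM build has an opcode cache loaded and that the pool is large enough for the ab concurrency of 25.

        php -m | grep -iE 'apc|accel'                              # is an opcode cache loaded in the PHP build FPM uses?
        grep -E 'pm\.|max_children' /usr/local/etc/php-fpm.conf    # pool size vs. test concurrency (path assumed)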


  • LSI MegaRAID LINUX got Optimal after degradation but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller got degraded, but after some time the RAID status changed back to Optimal:

        Adapter 0 -- Virtual Drive Information:
        Virtual Drive: 0 (Target Id: 0)
        Name :
        RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
        Size : 2.727 TB
        Mirror Data : 2.727 TB
        State : Optimal
        Strip Size : 256 KB
        Number Of Drives per span:2
        Span Depth : 3
        Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Default Access Policy: Read/Write
        Current Access Policy: Read/Write
        Disk Cache Policy : Disk's Default
        Encryption Type : None
        Is VD Cached: No

    But now I'm getting this RAID BIOS POST message:

        Your battery is either charging, bad or missing, and you have VDs configured for write-back mode. Because the battery is not currently usable, these VDs will actually run in write-through mode until the battery is fully charged or replaced if it is bad or missing.

    (Image: http://cl.ly/image/1h1O093b1i2d) So could a battery issue have caused the problem? The battery information looks like this:

        BatteryType: iBBU
        Voltage: 4001 mV
        Current: 0 mA
        Temperature: 22 C
        Battery State : Operational
        BBU Firmware Status:
          Charging Status : None
          Voltage : OK
          Temperature : OK
          Learn Cycle Requested : No
          Learn Cycle Active : No
          Learn Cycle Status : OK
          Learn Cycle Timeout : No
          I2c Errors Detected : No
          Battery Pack Missing : No
          Battery Replacement required : No
          Remaining Capacity Low : No
          Periodic Learn Required : No
          Transparent Learn : No
          No space to cache offload : No
          Pack is about to fail & should be replaced : No
          Cache Offload premium feature required : No
          Module microcode update required : No

    Where could the problem be? I've disabled the alarms, but I get them if they're enabled, and I don't know how to find the root of the problem.
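    A hedged sketch of the usual follow-up with the MegaCli tool (the binary is sometimes installed as MegaCli64 or under /opt/MegaRAID/MegaCli/; adjust accordingly). A battery relearn cycle temporarily forces write-through, which matches the cache policy difference shown above, so the BBU state and the controller event log are reasonable places to look first.

        MegaCli -AdpBbuCmd -GetBbuStatus -aALL                # detailed BBU state, charge and learn-cycle info
        MegaCli -AdpBbuCmd -GetBbuCapacityInfo -aALL          # remaining capacity / replacement indicators
        MegaCli -AdpEventLog -GetEvents -f events.log -aALL   # controller events around the degrade and rebuild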


  • Can I have 2Gbit over 1Gbit Nics

    - by Daniel
    So this really baffles me. Apparently, because 1Gbit can transmit data in both directions simultaneously, it should be possible to get 2Gbit of data transfer on a single NIC (1Gbit send and 1Gbit receive). People claim that because 1Gbit is (almost always) full-duplex, it is exactly 2Gbit in total. My intuition and electrical background tell me that something is not right here: 4 twisted pairs with 250Mbit capacity each gives 1Gbit, unless it really is possible to transfer data in both directions simultaneously. I did a test with iperf, Ubuntu Server 12.04 <-- MacBook Pro, both with decent CPU speed. Testing the speed of the connection individually, on the Mac I can see 112MB/s regardless of which direction the data is going. On Ubuntu, with vnstat and ifstat, I got 970Mbit speeds. Now, launching iperf in server mode on both machines at the same time and sending data using 2 iperf clients shows that the Ubuntu box is, for example, sending at 600Mbit and receiving 350Mbit, which adds up to pretty much a 1Gbit link. So to me there is no magical 2Gbit. Can someone confirm that, or tell me why I'm wrong? Another thing that confuses me is the fact that a 24-port switch has, for example: Throughput up to: 50.6 Mpps, Switching capacity: 68 Gbps, Switch fabric speed: 88 Gbps. Which would suggest they can handle 2Gbit per port.
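    A hedged way to measure both directions at once from a single client (iperf 2 syntax; the host name is a placeholder), which removes the need to coordinate two separate client/server pairs:

        iperf -s                          # on the Ubuntu box
        iperf -c ubuntu-box -d -t 30      # on the MacBook: -d runs the send and receive tests simultaneously

    On a true full-duplex link each direction tops out around 940Mbit of TCP payload on its own, so "2Gbit" only ever means 1Gbit each way at the same time, never 2Gbit in a single direction.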


  • Can't ping my Window 7 machine from within a Windows XP virtual machine

    - by Jonathan Conway
    I have Windows 7 installed as my primary operating system, on a laptop that's on my home network (wireless). I'm using Microsoft Virtual PC 2007 SP1 to run a virtual machine of Windows XP SP3, in which I want to access the Windows 7 instance, both to browse a shared folder and access the local Apache server. So far I can ping my Windows 7 IP address (IPv4) and access the apache server through the web browser through HTTP. However using my machine name never seems to work. Pinging it fails, and I can't access my apache server using it either. The problem seems to be something to do with my machine's name being registered under IPv6 rather than IPv4. I'm at a loss what to do. Should I try to set up IPv6 on the virtual machine? Not sure how to go about that. Or maybe I should somehow get my machine name on Windows 7 to work with IPv4? Although I think it already does, because I can ping it from a separate box (running Ubuntu), which is only registered under IPv6.
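    A hedged workaround sketch inside the XP guest (the address and name below are placeholders): since IPv4 connectivity works and only name resolution seems to fail, pinning the Windows 7 machine's IPv4 address to its name in the guest's hosts file is a quick test.

        REM run in a cmd prompt inside the XP VM; replace with the real address and machine name
        echo 192.168.1.10   win7-laptop >> %SystemRoot%\system32\drivers\etc\hosts
        ping win7-laptop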


  • Cooling for a small server room

    - by John Zwinck
    I have a server room about 12 feet square with an unfinished ceiling (exposed ducts and wiring). It houses a few servers (about ten, 1U and 2U) and some networking gear (four 1U switches, three routers, three modems, two cable boxes). With the door closed, it runs around 80 degrees Fahrenheit with half the servers turned on. When I turned on all the servers it reached 86 before I chickened out and propped the door open. The room is adjacent to air-conditioned office space, but does not itself have dedicated air conditioning. The ventilation for this room seems to be limited to one duct coming in at ceiling level, with a powered fan to draw air in, and one duct at ceiling level to allow air to flow out (it seems like it may just go into the drop ceiling cavity in the adjacent room). The adjacent office space stays fairly cool, but I'd prefer not to leave the door propped open all the time. There is both 110v and 208v service in the room, and plenty of power available. But there are no windows, and no floor drains (in a pinch we might be able to run a condensation hose through a small hole we'd drill in the wall to a nearby sink area, but only if absolutely necessary). I've considered portable A/C units, but I'm not sure on sizing and a lot less sure how we would run the exhaust hose(s). I suppose we could point one at the existing room exhaust duct (air return), but substantially modifying the duct is probably a no-no. I've also considered installing a fan box in the door of the room, but I'm concerned that this will only drop the temperature a little. Even right now, with all the equipment on, the room is at 83 degrees with the door open. And the main building A/C turns off daily at 6 PM to conserve energy, so the adjacent room temperature rises at night. How would you cool this room? Let's say the goal is to bring the temperature with everything running from a steady state of around 90 degrees down to 75 (equivalently, to offset the heat produced by ten 1U servers).
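    A rough heat-load sketch for sizing (the per-box draw figures are assumptions; measuring at the PDU or UPS is better): cooling capacity is usually quoted in BTU/hr, and 1 watt of IT load is about 3.4 BTU/hr of heat to remove.

        10 servers x ~300 W each (assumed) = 3,000 W  ~ 10,200 BTU/hr
        network gear, say ~500 W           =   500 W  ~  1,700 BTU/hr
        total                                         ~ 12,000 BTU/hr, i.e. roughly one 12,000 BTU/hr (1-ton) portable unit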


  • Trying to use a SmartHost with my Exchange 2010 server

    - by Pure.Krome
    Hi folks, I'm trying to use a SmartHost with my Exchange 2010 server. SmartHost details:

        Secure SMTPS: securemail.internode.on.net 465   <-- Note: that's port 465
        Configure your existing SMTP settings (in your email program) to:
          - use authentication (enter your Internode username and password, enter your username as [email protected]).
          - enable SSL for sending email (SMTPS).

    So I've added the smart host details to my Org Config -> Hub Transport. I then used PowerShell to add the port:

        Set-SendConnector "securemail.internode.on.net" -port 465

    I've then added my username/password (as suggested above) to the SmartHost as Basic Authentication (with no TLS). Then I try sending an email and I get the following error message:

        451 4.4.0 Primary target IP address responded with: "421 4.4.2 Connection dropped due to ConnectionReset."

    So I'm not sure how to continue. I also tried ticking the TLS box but I still get the same error. If I don't use SMTPS (secure SMTP, on port 465) and use basic SMTP on port 25 with no Authentication, email gets sent. Any ideas? EDIT: Btw, I can telnet to that server on port 465 from my mail server, just to make sure I'm not getting firewall'd, etc.
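    A hedged note with a sketch (not a confirmed fix): Exchange send connectors negotiate TLS via STARTTLS on a normal SMTP session and don't speak implicit SSL, which is what a dedicated SMTPS port like 465 expects, so the connection reset on 465 is consistent with that mismatch. If the provider also offers a STARTTLS submission port, the usual shape would be something like the following (connector name reused from above; the port is an assumption about the provider):

        Set-SendConnector "securemail.internode.on.net" -Port 587 -RequireTLS $true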


  • Automounting Active Directory home drives on a Linux server on login

    - by Ethan
    I've got a CentOS 5.7 box authenticating against Active Directory through PBIS Open (the new LikeWise Open), which works well. Now I'm trying to get the server to automount the user's AD home directory, located at //ad.server.dom/shares/home directories (yeah, there's a space in the path; I didn't set this up). Each user has a directory in there with the same name as the user. I've tried to get pam_mount working, but it has a series of issues on RedHat and friends, and I can't seem to get that working. The directory does need to be automounted for the server to perform its role. My reading on automount seems to suggest that there's no way to get it to do its thing with authentication, though I'm happy to be proved wrong. I've looked at this resource, but it requires RedHat (and thus CentOS) version 6 or higher, and newer packages than I have. I can manually (as root) mount the AD directory using the command:

        mount.cifs "//ad.server.dom/Shares/home directories/testuser" /home/local/AD/testuser/nfs_mount/ -o username=testuser

    and when I log in as testuser, I can see all of the sample files in the nfs_share directory. Any tips towards the right direction would be highly appreciated. This is going to be on a server at a college, so it needs to be fairly stable, and it would lead towards more Linux adoption there.
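    A hedged stopgap sketch that sometimes stands in for pam_mount on EL5 (paths and the service account are assumptions): mount the whole home-directory share once at boot with a credentials file, rather than per user at login. It sidesteps the per-login authentication problem but loses per-user access control, so it may or may not be acceptable here.

        # /etc/cifs.creds (chmod 600), holding a hypothetical service account:
        #   username=svc_homes
        #   password=********
        #   domain=AD
        mount -t cifs "//ad.server.dom/Shares/home directories" /home/local/AD \
              -o credentials=/etc/cifs.creds,uid=testuser,gid=testuser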


  • How do I configure nVidia drivers on a Portable Ubuntu setup?

    - by Nicholas Flynt
    I've been pulling my hair out over this one for a couple of days now, and Google is no help. I've created a wonderful (until this issue) portable copy of Ubuntu Linux that will boot on mostly anything, by using a USB enclosure for my laptop's 80GB SATA drive. So far so good: it boots and runs on everything, and on non-nVidia card setups it was even detecting the drivers, or letting me install the required drivers for hardware acceleration and compiz. Because, you know, the wobble windows are the most awesome thing ever. Anyway, my desktop machine had an nVidia card, so I'm thinking, sure, I'll just install the nVidia drivers like before and everything will work happily. Not so -- now the desktop and any other nVidia cards work great, but it seems to have completely disabled any other graphics cards. When the kernel module detects that an nVidia card isn't present, it shoots up this nasty little dialog box giving me the option to boot into "low graphics" mode, which doesn't even allow me to use the correct screen resolution, much less see the installed graphics card and try to configure a driver for it. Is there any way to configure Ubuntu (with the dreaded nVidia kernel module) so that it can use nVidia's drivers when an nVidia card is present, and default to the normal (not low-graphics) setup in other cases, so that it has a fair chance of using what's actually present? I'm not afraid to muck around with config files, I just don't know the underlying system well enough to feel comfortable diving in without a push in the right direction. Thanks guys!


  • Regular issue with keys on temp tables

    - by Christian
    We run a large forum with lots of reads and writes, particularly to the posts and topics tables, which are both InnoDB. Last week I started doing 12-hourly backups with innobackupex because mysqldump just takes forever (7+ million rows in the posts table). It seems that something doesn't like these backups, because I have a recurring problem every other day. The symptoms:
    - The front page of the site starts throwing errors.
    - The logs start showing errors like: Error: 126 - Incorrect key file for table '/tmp/mysql/#sql_4e87_14.MYI'; try to repair it
    - The /tmp/ dir fills up and we start getting "Error: 1030 - Got error 28 from storage engine" in the logs.
    The only way to fix it is to run OPTIMIZE TABLE on each of the posts and topics tables. I'm trying all I can to stop MySQL using disk for temp tables, but I'd have more problems than this if it used all my memory as well. My my.cnf is here: https://gist.github.com/cbiggins/0aa26f6defb7a14541d7 The box has 32GB memory and I don't come near that usually; currently at 15GB in use. Thanks in advance. Update 1: Despite the conf looking like there is replication, there isn't. This is a standalone instance.
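    A hedged pair of checks that usually narrows this down (a sketch only; the right thresholds depend on the my.cnf linked above): see how often implicit temporary tables are spilling to disk, and whether /tmp is simply too small for them plus the backup activity.

        mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%'"            # disk vs. in-memory temp tables
        mysql -e "SHOW GLOBAL VARIABLES LIKE '%tmp_table_size%'"     # in-memory temp table ceiling
        mysql -e "SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size'"  # the effective limit is the smaller of the two
        df -h /tmp                                                   # is the filesystem itself the bottleneck?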


  • HTB.init / tc behind NAT

    - by Ben K.
    I have an Ubuntu 10 box that I'm trying to set up as a bandwidth-shaping router. The machine has one WAN interface, eth0, and two LAN interfaces, eth1 and eth2. NAT is configured using MASQUERADE as described at InternetConnectionSharing. I'm mostly concerned with shaping outbound traffic from the LAN interfaces -- in the end, I'd like to end up with a hard 768Kbps limit per LAN interface (rather than a limit on eth0 pooled across all interfaces). I installed HTB.init and, riffing on the examples, tried to set this up on eth1 by putting three files into /etc/sysconfig/htb:

        /etc/sysconfig/htb/eth1
            DEFAULT=30
            R2Q=100

        /etc/sysconfig/htb/eth1-2.root
            RATE=768Kbps
            BURST=15k

        /etc/sysconfig/htb/eth1-2:30.dfl
            RATE=768Kbps
            CEIL=788Kbps
            BURST=15k
            LEAF=sfq

    I can /etc/init.d/htb start and /etc/init.d/htb stats and see information that /seems/ to suggest it's working... but when I try pulling a large file via the WAN interface the shaping clearly isn't in effect. Any suggestions? My guess is it has something to do with where the shaping falls in the NAT chain, but I really have no idea where to begin troubleshooting this.

    Update: Here's my /etc/init.d/htb list output; it seems to make sense -- the default rate for eth1 is 768Kbps?

        ### eth0: queueing disciplines
        qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0
        qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec

        ### eth0: traffic classes
        class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b
        class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b

        ### eth0: filtering rules
        filter parent 1: protocol ip pref 100 u32
        filter parent 1: protocol ip pref 100 u32 fh 800: ht divisor 1
        filter parent 1: protocol ip pref 100 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:30 match 00000000/00000000 at 12 match 00000000/00000000 at 16

        ### eth1: queueing disciplines
        qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0
        qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec

        ### eth1: traffic classes
        class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b
        class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b
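    A hedged sanity check with raw tc (interface names follow the question): HTB only shapes egress, so classes on eth1 limit traffic going out to the LAN clients (their downloads), while uploads from the LAN leave via eth0 and have to be shaped there (or with IFB/ingress policing). The per-class counters show whether packets are actually landing in the 768Kbps class at all.

        tc -s class show dev eth1     # does the byte counter on class 1:30 grow during a big download?
        tc -s qdisc show dev eth1
        tc -s class show dev eth0     # LAN uploads exit here after NAT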


  • Debugging a Drobo that chokes Windows 7x64 When Plugged In

    - by Pridkett
    I've had a love-hate relationship with my Drobo for a long time. After two years of using it on a Linux box, I moved it over to a Windows 7 machine where it seemed to work just fine for a long time, but it was under very light usage -- mainly backups that never actually happened. Recently I began using it for additional backup services (through CrashPlan, which is great). This means the Drobo gets a lot more usage. It also means that something interesting happens: the Drobo can choke my system on startup. Here's what I mean:
    - Start computer without Drobo plugged in, CrashPlan and Drobo Dashboard services disabled: 105s
    - Start computer with Drobo plugged in, CrashPlan disabled, Drobo Dashboard enabled: 250s (and 1 CPU at 100% for a very long time, Drobo churning)
    - Start computer with Drobo plugged in, CrashPlan and Drobo Dashboard disabled: 250s (1 CPU at 100% for a very long time, Drobo churning)
    - Start computer with Drobo plugged in, CrashPlan and Drobo Dashboard enabled: 300s (1 CPU at 100% for a very long time, Drobo churning)
    If I yank the USB plug on the Drobo, the CPU usage goes down to nothing very quickly. The slow startup in the fourth scenario is because CrashPlan is trying desperately to load stuff up on the H: drive before it gives up, so I've disabled it for the time being. So here's my question: what the heck is going on when I plug the Drobo in? I've fired up Process Explorer and can see that the System process is hogging the CPU; specifically it's an ntoskrnl.exe/KdPollBreakIn thread that's going ape. Is this something that's wrong with Drobo? Windows? Any idea on how to find out? If it matters, here's the tech info: Athlon 64 X2 4400, 2GB RAM, Win7 Ultimate, Drobo USB (2x1TB, 2x320GB).
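    A hedged thing to check (a sketch, not a diagnosis): KdPollBreakIn is the kernel debugger's break-in poll, so a System thread spinning there whenever a particular USB device is attached makes it worth confirming that kernel debugging hasn't been left enabled (for example against a USB debug target).

        REM look for "debug    Yes" in the Windows Boot Loader entry
        bcdedit /enum {current}
        REM if it is on, disable kernel debugging (elevated prompt, then reboot)
        bcdedit /debug off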

