Search Results

Search found 48823 results on 1953 pages for 'run loop'.


  • Can Resource Governor for SQL Server 2008 be scripted?

    - by blueberryfields
    I'm looking for a method to automatically adjust Resource Governor settings in real time. Here's an example: imagine that I have 10 applications, each hitting a different database on the same database machine. During normal operations they do not hit the database very hard, so I might want each one to have 10% CPU power reserved. Occasionally, though, one or two of them might spike and run an operation which could really use the extra power to finish faster. I'd like to be able to adjust the settings to compensate (say, reducing the non-spiking apps to 3% and splitting the difference between the spiking apps). This is a poor man's method of dynamically adjusting resource allocation and priorities. Scripts (or something script-like) are preferred, since the requirement is for these meta-level adjustments to be possible in real time as well.
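
    Resource Governor is in fact controllable from plain T-SQL, so a SQL Agent job or an external monitoring script can rewrite pool limits on the fly. A minimal sketch (the pool names here are hypothetical):

        -- Shrink the quiet apps and boost the spiking one; RECONFIGURE applies
        -- the new limits immediately, without a service restart.
        ALTER RESOURCE POOL quiet_apps_pool  WITH (MAX_CPU_PERCENT = 3);
        ALTER RESOURCE POOL spiking_app_pool WITH (MAX_CPU_PERCENT = 35);
        ALTER RESOURCE GOVERNOR RECONFIGURE;

    A job that polls per-pool usage (sys.dm_resource_governor_resource_pools) and issues statements like these gives you exactly the real-time rebalancing described above.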


  • Can't deploy rails 4 app on Bluehost with Passenger 4 and nginx

    - by user2205763
    I am at Bluehost (dedicated server) trying to run a Rails 4 app. I asked to have my server re-imaged, specifying that I did not want rails, ruby, or passenger installed automatically, as I wanted to install the latest versions myself using a version manager (Bluehost by default offers Rails 2.3, Ruby 1.8, and Passenger 3, which won't work with my app). I installed ruby 1.9.3p327, rails 4.0.0, and passenger 4.0.5; I can verify this by typing "ruby -v", "rails -v", and "passenger -v" (also "gem -v"). I made sure to install these not as root, so that I don't get a 403 Forbidden error when trying to deploy the app. I installed Passenger by typing "gem install passenger", and then installed the nginx Passenger module (into "/nginx") with "passenger-install-nginx-module". I am trying to run my Rails app on a subdomain, http://development.thegraduate.hk (I am using the subdomain to show my client progress on the website). In Bluehost I created that subdomain and had it point to "public_html/thegraduate". I then created a symlink from "rails_apps/thegraduate/public" to "public_html/thegraduate" and verified that the symlink exists.

    The problem: when I go to http://development.thegraduate.hk, I get a directory listing; there is nothing resembling a Rails app. I have not added a .htaccess file to /rails_apps/thegraduate/public, as that was never specified in the installation of Passenger (it was meant to be "install and go"). When I type "passenger-memory-status", I get 3 things:

        - Apache processes (7)
        - Nginx processes (0)
        - Passenger processes (0)

    So it appears that nginx and Passenger are not running, and I can't figure out how to get them to run (I'm not looking to run Passenger as a standalone server). Here is my nginx.conf file (/nginx/conf/nginx.conf):

        #user  nobody;
        worker_processes  1;

        #error_log  logs/error.log;
        #error_log  logs/error.log  notice;
        #error_log  logs/error.log  info;

        #pid        logs/nginx.pid;

        events {
            worker_connections  1024;
        }

        http {
            passenger_root /home/thegrad4/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/passenger-4.0.5;
            passenger_ruby /home/thegrad4/.rbenv/versions/1.9.3-p327/bin/ruby;

            include       mime.types;
            default_type  application/octet-stream;

            #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
            #                  '$status $body_bytes_sent "$http_referer" '
            #                  '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log  logs/access.log  main;

            sendfile        on;
            #tcp_nopush     on;

            #keepalive_timeout  0;
            keepalive_timeout  65;

            #gzip  on;

            server {
                listen       80;
                server_name  development.thegraduate.hk;
                root         ~/rails_apps/thegraduate/public;
                passenger_enabled on;

                #charset koi8-r;
                #access_log  logs/host.access.log  main;

                location / {
                    root   html;
                    index  index.html index.htm;
                }

                #error_page  404              /404.html;

                # redirect server error pages to the static page /50x.html
                #
                error_page   500 502 503 504  /50x.html;
                location = /50x.html {
                    root   html;
                }

                # proxy the PHP scripts to Apache listening on 127.0.0.1:80
                #
                #location ~ \.php$ {
                #    proxy_pass   http://127.0.0.1;
                #}

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
                #
                #location ~ \.php$ {
                #    root           html;
                #    fastcgi_pass   127.0.0.1:9000;
                #    fastcgi_index  index.php;
                #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
                #    include        fastcgi_params;
                #}

                # deny access to .htaccess files, if Apache's document root
                # concurs with nginx's one
                #
                #location ~ /\.ht {
                #    deny  all;
                #}
            }

            # another virtual host using mix of IP-, name-, and port-based configuration
            #
            #server {
            #    listen       8000;
            #    listen       somename:8080;
            #    server_name  somename  alias  another.alias;
            #    location / {
            #        root   html;
            #        index  index.html index.htm;
            #    }
            #}

            # HTTPS server
            #
            #server {
            #    listen       443;
            #    server_name  localhost;
            #    ssl                  on;
            #    ssl_certificate      cert.pem;
            #    ssl_certificate_key  cert.key;
            #    ssl_session_timeout  5m;
            #    ssl_protocols  SSLv2 SSLv3 TLSv1;
            #    ssl_ciphers  HIGH:!aNULL:!MD5;
            #    ssl_prefer_server_ciphers  on;
            #    location / {
            #        root   html;
            #        index  index.html index.htm;
            #    }
            #}
        }

    I don't get any errors, just the directory listing. I've tried to be as detailed as possible; any help would be greatly appreciated, as I've been stumped for the past 3 days and scouring the web has not helped (my issue seems to be specific to me). If there are any details I forgot to specify, just ask.

    ** ADDITIONAL INFORMATION **

    Going to development.thegraduate.hk/public/ will correctly display the index.html page in /rails_apps/thegraduate/public. However, changing root in the routes.rb file to "root = 'home#index'" does nothing.
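
    Two things in the config above are worth flagging as likely culprits (hypotheses, since I can't see the server): nginx never expands "~" in a root path, and the explicit "location / { root html; }" block shadows the Rails document root, so even a running nginx would serve its own html directory. On top of that, Apache is clearly the process answering on port 80 (seven Apache processes, zero nginx), and nginx cannot bind to a port Apache already holds. A minimal sketch of the server block, using the home directory that already appears in passenger_root:

        server {
            listen       80;   # Apache must be stopped or moved off :80 first
            server_name  development.thegraduate.hk;
            # absolute path; nginx does not expand "~"
            root         /home/thegrad4/rails_apps/thegraduate/public;
            passenger_enabled on;
            # no "location / { root html; }" block here: it would override
            # the Rails root for every request
        }

    After editing, start nginx explicitly (the default binary for this prefix is /nginx/sbin/nginx) and re-run passenger-memory-status to confirm nginx and Passenger processes appear.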


  • Super-silent (mid tower) case and fan combo

    - by Dennis G.
    I want to build an HTPC for music/video/Blu-ray playback (no gaming). I don't need an expensive HTPC case; I just want to go with a standard mid-tower case. However, I want it to be super silent, so it doesn't make any annoying fan/disk noises while I watch movies. Ideally, it shouldn't make any noticeable noise at all. I understand that choosing a board, CPU, and graphics card that run cool and don't consume a lot of power is important for designing a quiet machine, and I think I have that covered. However, there are so many choices of cases, fans, and power supplies that it's hard to get started. What are your recommendations for a case/fan (CPU + case)/power supply combination that runs absolutely silently and can cool a standard Intel system with a low-power (possibly passively cooled) graphics card? I'm usually a fan of Antec cases; would an Antec Mini P180 be a good starting point? If so, which case fans, CPU fan, and power supply would you recommend?


  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-Port PCI-X to SATA RAID Controller with 12 7200 RPM 1.5 TB disks and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet Controller. uname -a output: 2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux. The server is a file server running Nginx with the stub status module enabled, so I can see the current number of connections. The problem presents itself when I have a high number of simultaneous connections in a writing state: usually around 350, though at this very moment it's at 590 and the server is almost unusable and stuck at 230 Mbit/s. If I run top and hit 1 to see per-core usage, all 4 cores show around 99% IO wait; if I run iotop, the nginx workers are the only processes producing any read load, currently at around 25 MB/s. I have each of the workers bound to its own core. Initially I figured it was just the disks being the problem, but I've run fsck and smartmontools checks and found no errors. I also ran an iozone test, which you can see the result of here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa

    Additionally, when the number of connections is low I have no problem getting good speed: if I wget over the local network it easily hits 60 MB/s. Right now I just tried putting a file in /dev/shm, symlinked a file from the public dir to it, and used wget over the local network, and only got 50 KB/s. Also, if I try to cp /dev/shm/test /root/test it quickly copies around 740 MB and then slows down heavily, again with iotop reporting 99% iowait. I'm not really sure how to go about figuring out what the problems are. It could be a natural disk limitation, but then the file served from /dev/shm ought to transfer fast; that suggests a network limit, but the network is fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required, let me know and I'll try to get it. Thanks.
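
    A few generic ways to separate "the array is saturated" from "something else is wrong"; the device names below are assumptions for the Areca array:

        # Per-device utilization and latency while the stall is happening:
        iostat -x 1 10    # near-100% %util and high await = saturated spindles
        vmstat 1 10       # the 'b' column counts processes blocked on IO

        # Readahead on the RAID device (often far too small for big sequential reads):
        blockdev --getra /dev/sda
        blockdev --setra 4096 /dev/sda

        # IO scheduler; deadline frequently behaves better than cfq for
        # many-concurrent-readers file serving:
        cat /sys/block/sda/queue/scheduler
        echo deadline > /sys/block/sda/queue/scheduler

    The /dev/shm test is the interesting datapoint: a tmpfs-backed file needs no disk reads at all, so if it is also slow under load, one plausible reading is that the nginx workers themselves are stuck in blocking disk reads for other requests and cannot serve anything quickly, in which case more workers (rather than anything disk-related) may help.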


  • How do I choose which Ethernet Adapter to bridge in VMPlayer

    - by Catherine MacInnes
    I am running vmplayer 3.1.0 on Ubuntu. The host machine has four ethernet adapters that are configured to run on four different subnets. I need to run four VMs, each with a single ethernet adapter bridged onto a specific one of the physical ethernet adapters. Does anyone know how to do this? Am I simply exceeding the capabilities of vmplayer, and do I have to go to one of the other VMware products? If so, which one? Note that I have no need to create additional VMs; these are VMs that are being given to me by companies that want us to develop software for their products.
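
    One avenue worth trying before moving to Workstation, offered as an assumption since I haven't verified it on Player 3.1 specifically: the .vmx file accepts a custom connection type that points at a numbered host vmnet, and the vmnet-to-physical-adapter bridge mapping is part of VMware's host network configuration (the Virtual Network Editor, vmware-netcfg, where it is available). A sketch for one guest:

        # guest .vmx fragment (hypothetical): bridge this NIC to a specific
        # vmnet rather than the auto-bridged vmnet0; vmnet2 must be bridged
        # to the desired physical adapter (e.g. eth2) on the host side
        ethernet0.present = "TRUE"
        ethernet0.connectionType = "custom"
        ethernet0.vnet = "/dev/vmnet2"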


  • Windows screen shots via command-line SSH session

    - by Geoff Fritz
    I've browsed the handful of "screen capture" queries here, but I was unable to find anything which addressed my specific need. I'm looking for a command-line tool that I can run via a remote SSH connection (by way of the cygwin sshd daemon). There are several to choose from, but the few I've tried (ImageMagick, nircmd, and MiniCap) all result in a blank screen. I assume that this is due to the remotely logged in user not having a proper graphical console session running. The goal here is to automate screen capture and retrieval of the main system console (what one would see looking at the physical monitor) through an ssh script from a Unix host:

        ssh user@windowshost "screencap --output /tmp/console.jpg"
        scp user@windowshost:/tmp/console.jpg /some/destdir

    Note that these must be done on demand, so polling a remote directory that has snapshots dumped periodically will not work. Bonus points for programs that are open source and have a portable install (so I don't need to RDP/VNC into the machine to run a graphical installer).


  • qtcreator keyboard problem: azerty/qwerty

    - by Allen
    I picked up the Qt SDK and I've been trying it out. I'm working in English, but I have a French AZERTY keyboard. When I run Qt Designer standalone, all is fine. When I run Qt Creator, or Qt Designer from within Qt Creator, things behave as though I had a QWERTY keyboard, so I have to hit the "q" key to get an "a", etc. Qt Creator is version 1.3.1 and it is using Qt 4.6.2. My PC is running XP. What is going on, and what can be done? Any ideas? Editing a program in this setup is, of course, impossible! Luckily I discovered I can use an external editor, so I can use xemacs, which would be my first choice anyway, but this keyboard problem is a real nuisance. Thanks.


  • Does Ubuntu 12.04.1 come with everything I need for using virtual servers and are the tools efficient?

    - by orokusaki
    I noticed that Ubuntu 12.04.1 comes with Xen, OpenStack, KVM, and other virtualization-related tools. I have used VMware in the past. If I were to use Xen for virtualization, would I see considerable performance loss, since Xen is run on the host OS? Is it even run on the host OS, or is it like VMware ESXi, where the hypervisor is installed below any Linux OS on the machine (bare-metal, I guess, is the word)? Do you have any recommendations on what sort of setup to use with these built-in tools? I have 2 physical servers, side by side. Each will need a VM used for Postgres and a VM used as an app server. One will be a failover for the other.
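
    For two hosts each carrying a Postgres VM and an app-server VM, the KVM stack that ships in the 12.04 repositories is the path of least resistance (Xen works too, but KVM is the in-kernel default). A quick sketch, with names, sizes, and paths as placeholders:

        # CPU must expose hardware virtualization for KVM (a count > 0 is good):
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # The stack as packaged in Ubuntu 12.04:
        sudo apt-get install qemu-kvm libvirt-bin virtinst

        # One guest for Postgres (repeat similarly for the app server):
        sudo virt-install --name pg1 --ram 4096 --vcpus 2 \
            --disk path=/var/lib/libvirt/images/pg1.img,size=40 \
            --cdrom /srv/isos/ubuntu-12.04-server-amd64.iso

    Unlike ESXi, KVM is a kernel module inside the host OS rather than a bare-metal hypervisor, but because guests run with hardware virtualization the overhead is modest for most server workloads.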


  • webserver running as nobody cannot resolve domain names

    - by jalal
    If I try to run the following through the web server:

        <?php echo file_get_contents("http://www.yahoo.com/index.html"); ?>

    I get a "php_network_getaddresses: getaddrinfo" error. If I run the same file from the shell with:

        php test.php

    then I get the expected file output. This indicates to me that the 'nobody' user, which the webserver runs as, is not able to resolve the domain name, but the shell user can. Any ideas on how to fix this? Further info: CentOS 6, cPanel install, Apache, PHP running as DSO. BTW, I've tried disabling the firewall to no effect.
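
    A couple of checks to confirm the per-user theory before digging further; these are generic CentOS tools, nothing cPanel-specific:

        # Reproduce the lookup as the web server's user; failure here while the
        # same line works as your login user confirms it's per-user:
        sudo -u nobody php -r 'var_dump(gethostbyname("www.yahoo.com"));'

        # The resolver config the C library uses for every user:
        cat /etc/resolv.conf

        # Look for OUTPUT rules matching port 53; some rulesets only allow
        # certain UIDs to make outbound DNS queries:
        iptables -L OUTPUT -nv | grep -w 53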


  • Is there a way to prevent output from backgrounded tasks from covering the command line in a shell?

    - by Chris Pick
    I would like to be able to run tasks in the background of a shell and not have their output to stdout or stderr cover the command line at the bottom. I frequently need to run other commands to interact with the background processes, and would like to do so from the same shell, without having to open another terminal or use a multiplexer like screen to split the terminal. Ideally there would be some setting that I just don't know about (I commonly use bash or ksh), but a new or different shell or a script would be fine by me. I'm open to any suggestions and appreciate any help, thanks.
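
    There is in fact a terminal setting for exactly this: with tostop set, a background job receives SIGTTOU on its first attempt to write to the terminal and is suspended instead of scribbling over the prompt. A minimal sketch:

        # Stop background jobs when they try to write to the terminal:
        stty tostop

        # A chatty job now suspends itself on its first write...
        some_noisy_command &    # hypothetical long-running, chatty job

        # ...and its output appears only when you bring it to the foreground:
        fg

    The blunter alternative is redirecting each job's output up front (cmd > job.log 2>&1 &) and tailing the log when you actually want to see it.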


  • Problems getting Squirrelmail and passenger working on apache

    - by Kenneth
    I'm trying to set things up so that I run SquirrelMail and Passenger on the same Apache server, with one URL pointing to SquirrelMail and everything else handled by Passenger. I've gotten far enough that both SquirrelMail and Passenger run fine by themselves, but when Passenger is running it handles all URLs. So far I've tried using Alias and Redirect to point a webmail/ URL at SquirrelMail's directory, but that does not work. Here is my httpd.conf file:

        <VirtualHost *:80>
            ServerName not.my.real.server.name
            DocumentRoot /var/www/sinatra/public

            # Does not work:
            #Redirect webmail/ /usr/share/squirrelmail/
            #<Directory /usr/share/squirrelmail>
            #    Require all granted
            #</Directory>

            <Directory /var/www/sinatra/public>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
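
    Two observations, hedged since I can't test against this exact setup: Redirect and Alias both need a leading slash on the URL path ("webmail/" won't match anything), and Passenger has a per-location kill switch, PassengerEnabled, meant for exactly this kind of carve-out. A sketch of the vhost with both applied:

        <VirtualHost *:80>
            ServerName not.my.real.server.name
            DocumentRoot /var/www/sinatra/public

            # Serve SquirrelMail at /webmail and keep Passenger's hands off it
            Alias /webmail /usr/share/squirrelmail
            <Location /webmail>
                PassengerEnabled off
            </Location>
            <Directory /usr/share/squirrelmail>
                Order allow,deny
                Allow from all
            </Directory>

            <Directory /var/www/sinatra/public>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>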


  • PHP: Symlink in public_html cannot be accessed through browser

    - by Rachel
    I have a tester.php file which I want to run in the browser, and I have created a symlink to it in my public_html folder. But when I try to run it, it doesn't work and gives me the following error message:

        Access forbidden!
        You don't have permission to access the requested object. It is either
        read-protected or not readable by the server. If you think this is a
        server error, please contact the webmaster.
        Error 403
        web.upc03.dev.com
        Sun Apr 4 22:41:23 2010
        Apache

    I am not sure why I am getting this error message; I have checked all the file permission settings and they seem to be fine. My file permissions are lrwxrwxrwx for tester.php. Should this be done another way, or is this not the proper approach?
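
    The lrwxrwxrwx on the link itself is a red herring; what matters is whether Apache is allowed to follow the link and whether every directory on the way to the target is executable by the Apache user. Two hedged checks (the target path is a placeholder):

        # Show the permissions of every path component of the link and,
        # separately, of its target; any directory missing "x" for the web
        # server's user yields a 403:
        namei -m ~/public_html/tester.php
        namei -m /real/location/of/tester.php

        # The directory also needs symlink-following enabled in Apache; with
        # SymLinksIfOwnerMatch the link and its target must share an owner:
        #   Options FollowSymLinks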


  • Running PHP 5.2 FastCGI + Apache on CentOS 5 issue

    - by Goran
    I am trying to set up 2 versions of PHP on CentOS 5.9 using this tutorial: http://linuxplayer.org/2011/05/intall-multiple-version-of-php-on-one-server. I have installed the default PHP 5.4.19, and I was trying to set up a second PHP, 5.2.17, to be run with FastCGI, following the second part of the tutorial completely. However, when I try to open http://web2.example.com it returns a 500 error. In the Apache log there are only 2 lines, which repeat:

        [notice] mod_fcgid: call /var/www/web2/index.php with wrapper /usr/local/php52/bin/fcgiwrapper.sh
        [notice] mod_fcgid: process /var/www/web2/index.php(25250) exit(server exited), terminated by calling exit(), return code: 255

    Please note that I had to add .php at the end of the FCGIWrapper line, because Apache would not start without it:

        FCGIWrapper /usr/local/php52/bin/fcgiwrapper.sh .php

    Also please note that http://web1.example.com with PHP 5.4.19 is working absolutely fine. Please help. Thank you very much in advance.
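
    Exit code 255 generally means the php-cgi binary (or the wrapper) is dying before it ever serves a request, so the fastest diagnostic is to run both by hand. A sketch; the wrapper contents shown are an assumption based on what that tutorial's wrappers typically look like, not taken from the post:

        # Does the 5.2 binary start at all? Missing shared libraries or a bad
        # php.ini show up right here:
        /usr/local/php52/bin/php-cgi -v

        # Run the wrapper the way mod_fcgid would, tracing each command:
        sh -x /usr/local/php52/bin/fcgiwrapper.sh

        # A typical wrapper for this layout (assumed):
        #   #!/bin/bash
        #   PHPRC=/usr/local/php52/etc
        #   export PHPRC
        #   exec /usr/local/php52/bin/php-cgi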


  • raspberry pi for web programming/development

    - by Mark Dee
    I'm into web development, and my machine (AMD Phenom II, 8GB RAM) is running Ubuntu 13.04. I love my current setup, but I kinda miss some Windows software like MS Office or the Adobe suites, and running them in VirtualBox doesn't feel as snappy to me. So I'm thinking of buying a cheap new machine where I would install Linux and do my development work, and have my current machine run Windows. I just found this thing called the Raspberry Pi, which is really cheap and uses very little power (I think), which makes it good for downloading stuff overnight. So, does it make sense to buy a Raspberry Pi and make it my primary dev machine, with Windows being the secondary (for Adobe and browser testing, of course)? Basically, I want to know if the Raspberry Pi meets the following requirements:

        - It should run ArchLinux
        - Sublime Text 3
        - python, ruby
        - nginx, nodejs
        - Deluge or Transmission (well, maybe just those; no need for video and music players)


  • How can I get past a BSOD 07b when booting from a VHD?

    - by Dan
    I thought I'd be clever and use disk2vhd at the end of my contract to back up my machine, so I could easily restore it when I started my new contract, no matter what the buffoons had done with it in the meantime. I didn't count on them losing it. I'm trying to boot this new machine from the VHD. I get the Windows logo and then a 0x7B blue-screen error (something to do with the disk), and it won't boot even in safe mode. The cure apparently is to run sysprep, but I can't run that unless I can boot into the VHD, so... I can mount the disk; is there a way to modify it offline?
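
    STOP 0x7B is "inaccessible boot device": the image has the storage driver for the new machine's disk controller disabled, and the offline fix is to re-enable the generic controller drivers inside the mounted VHD's registry. A sketch from an elevated prompt, assuming the VHD is mounted as V:, the system is Windows 7-era, and ControlSet001 is the current control set:

        :: Load the offline SYSTEM hive from the mounted VHD
        reg load HKLM\OfflineSys V:\Windows\System32\config\SYSTEM

        :: Start=0 makes a driver load at boot; intelide covers common
        :: IDE-mode controllers, msahci covers AHCI-mode SATA
        reg add HKLM\OfflineSys\ControlSet001\services\intelide /v Start /t REG_DWORD /d 0 /f
        reg add HKLM\OfflineSys\ControlSet001\services\msahci /v Start /t REG_DWORD /d 0 /f

        :: Write the changes back and detach
        reg unload HKLM\OfflineSys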


  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM) then performance could only be worse with it enabled, due to Windows pre-emptively writing to it. At best, performance would be the same. I can't see how it could possibly be improved by writing data you never need to read.

    So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile? Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions: something like "here are the benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM", or "according to this MS KB article, problem X occurs if you disable the pagefile". So far the only reasons I've seen mentioned are:

        - Even if you think you have enough RAM, you might run out. OK, but for the purposes of this question let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Or whatever - but it's enough for 100% of what I do, 100% of the time. Another way to think of it: if I have x MB physical RAM and y MB pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB physical RAM and no pagefile?

        - Windows is "used to" having a paging file and it might not function as reliably (from Understanding the Impact of RAM on Overall System Performance). That's rather vague and I find it hard to believe, given that MS has provided the option to disable the pagefile.

        - Windows knows what it's doing better than you. No - it doesn't know that I won't run more programs or load more data, but I do.


  • "vagrant up" fails with "NS_ERROR_CALL_FAILED" error [on hold]

    - by TahitiPetey
    I am following the basic "Getting Started" guide: http://docs.vagrantup.com/v2/getting-started/index.html. I ran vagrant init <etc> followed by vagrant up, but it fails with an "NS_ERROR_CALL_FAILED" error. Enabling debug logging with VAGRANT_LOG=debug vagrant up, I get the following error output:

        ERROR vagrant: /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/driver/base.rb:316:in `execute'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/driver/version_4_2.rb:165:in `import'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/import.rb:15:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/handle_box_url.rb:72:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/check_accessible.rb:18:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `block in run'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/util/busy.rb:19:in `busy'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `run'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/call.rb:51:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/plugins/providers/virtualbox/action/check_virtualbox.rb:17:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/warden.rb:34:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/builder.rb:116:in `call'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `block in run'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/util/busy.rb:19:in `busy'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/action/runner.rb:61:in `run'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/machine.rb:147:in `action'
        /Applications/Vagrant/embedded/gems/gems/vagrant-1.2.2/lib/vagrant/batch_action.rb:63:in `block (2 levels) in run'
        INFO interface: error: There was an error while executing `VBoxManage`, a CLI used by Vagrant
        for controlling VirtualBox. The command and stderr is shown below.

        Command: ["import", "/Users/me/.vagrant.d/boxes/precise32/virtualbox/box.ovf"]

        Stderr: 0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
        Interpreting /Users/me/.vagrant.d/boxes/precise32/virtualbox/box.ovf...
        OK.
        0%...
        Progress object failure: NS_ERROR_CALL_FAILED

    My system setup info:

        - Vagrant 1.2.2
        - VirtualBox 4.2.14 (also tried 4.2.10, same error)
        - Mac OS X 10.8.3
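
    The trace shows Vagrant simply shelling out to VBoxManage, and it is VirtualBox's import that dies, so it can help to take Vagrant out of the loop. A couple of hedged steps:

        # Re-run the exact failing command by hand; VirtualBox often prints a
        # more specific error than NS_ERROR_CALL_FAILED outside of Vagrant:
        VBoxManage import ~/.vagrant.d/boxes/precise32/virtualbox/box.ovf

        # If the import fails here too, the downloaded box may be corrupt;
        # remove it and let vagrant up fetch it again:
        vagrant box remove precise32 virtualbox
        vagrant up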


  • How can I configure Firefox to assume I have less memory?

    - by WoLpH
    Firefox has a few different settings that automatically get tuned based on the system RAM. This is all great if you're running nothing besides Firefox, but when you're running half a dozen apps at the same time and they all assume they can take a decent chunk of memory, it just kills the box. Example settings:

        http://kb.mozillazine.org/Browser.sessionhistory.max_total_viewers
        http://kb.mozillazine.org/Browser.cache.memory.capacity

    How can I make Firefox configure all these settings as if I only had 512MB of memory instead of 4GB (or whatever number, but you get the idea)? I am running Ubuntu 12.04 with Firefox 14. Current workarounds:

        - Running a Windows XP virtual machine with 512MB of RAM. It actually runs smoothly, and takes less memory (including Windows) than Firefox (or Chrome, for that matter) running standalone.
        - Installing the 32-bit version of Firefox (apt-get install firefox:i386); its base memory usage is only about 50% of what it is with the 64-bit build.
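
    Both prefs are documented on the mozillazine pages linked above, and pinning them in a user.js file overrides the RAM-based auto-sizing. The values below are illustrative for a "pretend I have 512MB" setup, not tested recommendations:

        // user.js in the Firefox profile directory
        user_pref("browser.cache.memory.capacity", 16384);        // in KB; auto-sized when -1
        user_pref("browser.sessionhistory.max_total_viewers", 2); // back/forward pages kept in RAM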


  • Start screen with bash command

    - by Jeje
    I need to start screen with some bash command to execute. I'm trying:

        screen -S test -d -m bash -c './test.php'

    but with no result; the screen session doesn't appear. What's more, let's say I need to start something like this:

        vlc -I ncurses --http-reconnect http://ip/ --sout '#duplicate{dst=std{access=http{user=,pwd=},mux=ts,dst=:51001}}' --ttl=255 --loop --repeat
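
    One likely explanation, offered as a hypothesis: the session does start, but bash -c exits the moment ./test.php finishes (or fails to launch), and screen tears the session down with it, so screen -ls shows nothing by the time you look. Keeping a shell alive after the command makes the session stick around:

        # Session survives after the command ends, so you can reattach and
        # read any error output:
        screen -S test -d -m bash -c './test.php; exec bash'

    A long-running command like the vlc invocation needs no bash -c wrapper at all: screen can exec it directly (screen -S stream -d -m vlc ...), with the quoted --sout argument passed through unchanged.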


  • Flash player, HD videos and games are choppy

    - by Aimad Majdou
    I have a problem with Flash Player: HD videos from YouTube or Vimeo, and Flash games, do not play smoothly. I'm using Flash Player 11, Windows 7 SP1, and my graphics card is an Intel GMA 4500. Device Manager shows that all drivers are installed on my computer, so I don't have any driver problems. When I run Google Chrome, Resource Monitor shows 15% ~ 40% CPU usage and 40% used physical memory, but when I watch a video on YouTube or play a Flash game, it shows 70% - 90% CPU usage. Also, when I play an HD video locally (frame width 1920, frame height 1080), it shows 80% ~ 100% CPU usage. Before I reinstalled Windows 7, HD videos and Flash games played smoothly; I didn't have any problems with them! I hope all this information is enough to answer my question.


  • Can 'screen' grab an existing process and tie itself to it?

    - by warren
    Scenario:

        - Started a process that's going to take "a while" to complete, outside of screen
        - Need to leave the terminal / network hiccups
        - Process lost

    Would be nice if:

        - Started a process outside of screen
        - Realize error
        - Run screen <magic-goes-here> and it grabs the active process to itself

    From the man pages and --help info, I don't see a way this can be done. Is this possible directly with screen? If not, is it possible to change the owning shell of a process, so that the bash (or other shell of your choosing) instance inside screen can run a command which changes the parent shell of the initial process to itself from the originator?
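
    screen itself can't adopt a foreign process, but this is exactly what reptyr, a separate ptrace-based tool, does. A sketch assuming a Debian/Ubuntu-style host:

        # reptyr re-attaches a running process to the current terminal:
        sudo apt-get install reptyr   # source: https://github.com/nelhage/reptyr

        screen                        # start the screen session first
        reptyr 12345                  # 12345 = PID of the already-running job

        # Kernels with the Yama LSM block ptrace of non-children by default;
        # allow it temporarily if reptyr complains:
        echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope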


  • Linux or Windows for a server?

    - by Matt
    I'm a Linux guy when it comes to (web) servers, for the following reasons:

        - Legally free
        - Fast software updates (unless you're running CentOS :)
        - Powerful CLI management of services
        - Easy to secure (in terms of users and groups)
        - Web server software is, well, built for Linux: Apache, PHP, Python, etc. are Linux programs that get ported to Windows (I'm 90% sure of this)

    Unless the web server needed to run ASP, I wouldn't use Windows. My boss' IT friend is a Windows guy, though. He recently got a server set up in the office to run Microsoft Exchange and some other shit. What I'm asking is: if he wanted to start running websites on this thing, what would be good reasoning to convince him otherwise? He's not very bright in terms of IT, and the IT friend is all Windows, so it's two against one here. What would you say to running a Windows web server?

