Search Results

Search found 4561 results on 183 pages for 'production'.

  • Is it bad to have a very full hard drive on a high traffic database server?

    - by MikeN
    Running an Ubuntu server with MySQL as a high-traffic production database server. Nothing else is running on the machine except the MySQL instance. We store daily database backups on the DB server. Is there any performance hit, or any other reason, why we should keep the hard disk relatively empty? If the disk is filled to 86%+ with the database and all of the backups, does it hurt performance at all? Would the DB server perform any worse running at 86-90%+ of disk capacity than running with only a 10% full disk? The total disk size on the server is over 1 TB, so even 10% of the disk should be enough for basic O/S swapping and such.

    Read the article

  • DRAC for remote OS install w/o Virtual Media

    - by The Diamond Z
    I have a few Dell servers in a remote DC, and our ISP has been very kind about doing OS installs for us. However, as we move to production and multiple DCs, I'd like to be able to do the installs/re-installs internally, and DRAC Enterprise w/SDRAM seems ideal. My question is: how do you get your install ISOs onto the SDRAM? Can I just copy them from a local DVD (temporary USB hookup) or via FTP? What's the advantage of the SDRAM over just buying a USB dongle (to leave plugged into the server) and installing a bootable install ISO on it? We're a virtual org generally using DSL (2 Mb) connections to the DC over the Internet, so using 'Virtual Media' isn't viable for us.

    Read the article

  • What's the best practice for keeping track of Microsoft solution stack hotfixes and patches?

    - by melaos
    I'm currently working on a product that is built on Microsoft stacks such as SQL Server, Entity Framework, WCF, C#, and BizTalk Server. Recently we've been running into weird issues on our production servers, and while troubleshooting them we're somewhat lost. We're looking into antivirus exclusions and cumulative updates for BizTalk Server, but my question is: what's the best way to keep track of all the hotfixes? Do we just check for them, i.e. search for them only when we have issues? I've searched online and found that there's a Microsoft Baseline Analyzer tool, and also a Microsoft blog that is updated on a weekly basis and lists all the recent hotfixes. Is there a better or best way? Thanks for my ignorant question.
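
    As a small, hedged aid for the inventory side of this (it does not cover BizTalk cumulative updates specifically), PowerShell can list the hotfixes already installed on each production server, which at least gives a baseline to compare against whatever list you decide to track:

        # List installed Windows hotfixes/updates with their KB IDs and install dates
        Get-HotFix | Sort-Object InstalledOn | Format-Table HotFixID, Description, InstalledOn -AutoSize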

    Read the article

  • How to route traffic through a VPN tunnel?

    - by Gabriel
    The problem with our server is that we need to use the bug-ridden and awful AT&T network client, which causes our server to bluescreen once every 24 hours. Does anyone know how (or have a good guide on how) to quickly set up a workstation running Windows Server 2008 R2 as a proxy server? This spare workstation would run the AT&T client and act as a bridge between our server and the server that can only be reached via the AT&T VPN software. That way our production server would not crash so often (or at all), and the workstation can happily crash whenever it wants to.
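
    For what it's worth, a minimal sketch of the routing piece of that plan (all addresses below are made up): once the workstation has the AT&T VPN up and has IP forwarding/RRAS enabled, the production server can be pointed at it with a persistent static route for the VPN-only destination:

        :: On the production server: reach the VPN-only network (10.20.30.0/24, hypothetical) via the workstation (192.168.1.50, hypothetical)
        route -p add 10.20.30.0 mask 255.255.255.0 192.168.1.50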

    Read the article

  • Permissions to run a SharePoint 2010 Application Pool?

    - by Michael Stum
    I'm currently in the process of setting up a SharePoint 2010 farm. In my dev environments, I have one account that is local admin, farm administrator, and runs all application pools. For the production environment, I would like to follow security best practices and run the web applications (at least two: the main portal and My Sites) under separate domain accounts. It's been some time since I worked with IIS, and I remember there were issues with non-admin users accessing files in C:\inetpub. On the other hand, SharePoint "automagically" sets most permissions anyway. Does anyone have experience with which permissions I need to grant the domain accounts, at minimum, in order for this to work?
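
    SharePoint takes care of most of this itself when the accounts are registered as managed accounts, but as a hedged example of the kind of manual grant that occasionally turns out to be needed, icacls can give an app-pool account Modify rights on a specific IIS directory (the path and account below are hypothetical):

        :: Grant the app-pool domain account Modify on a virtual directory, inherited by subfolders and files
        icacls "C:\inetpub\wwwroot\wss\VirtualDirectories\80" /grant "CONTOSO\svc-sp-portal":(OI)(CI)M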

    Read the article

  • Server 2008 unresponsive after SP2 install

    - by Dan
    I have a dev server that has an exact image of a production web server. The prod server only has SP1 installed on it. When I first fired up the dev box, the first thing I did was install SP2 and let it be. Almost every morning when I came in, the server was unusable: it would respond to ping, but RDP and the web site running on it were down. The screen saver was bouncing around on the screen, so it wasn't hard-locked, but it was unresponsive to keyboard and mouse. I have to hard shut it down, and when it comes back up, the only thing in the event viewer is the unexpected shutdown, nothing else. I've since taken a fresh image of my prod box and put it on the dev server without installing SP2, and the dev box is humming along perfectly. I should also note that this is Server 2008 Web, 64-bit. Has anyone else seen anything like this?

    Read the article

  • SSH reverse tunnel to monitor and manage remote devices

    - by acid_crucifix
    I have a distributed set of devices running Ubuntu 12.04 that I am distributing to clients, and I would like to manage them remotely. They may not have fixed IPs and may be behind firewalls. What I am planning to do is have the devices (permanently connected to the net) poll a request URL and, based on the response, open a reverse tunnel to my server so that I can access them through that tunnel. Most of what I read about reverse tunnels over SSH covers single-use cases, and very little covers heavy production usage. Is there some reason for this: security issues, or stability? Any help would be much obliged.
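
    A minimal sketch of the reverse-tunnel piece, assuming a dedicated low-privilege account on the management server and keepalives to cope with flaky connections (hostnames, users, and ports below are made up):

        # On the device: open a reverse tunnel so port 20022 on the server forwards back to the device's SSH
        ssh -N -R 20022:localhost:22 \
            -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -o ExitOnForwardFailure=yes \
            tunnel@mgmt.example.com

        # On the management server: reach the device through the tunnel
        ssh -p 20022 deviceuser@localhost

    For long-running production use, something like autossh (or an upstart/init job that restarts the ssh client) is commonly wrapped around the command above so the tunnel re-establishes itself after drops.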

    Read the article

  • How to suppress "Not collecting exported resources without storeconfigs"?

    - by Andy Shinn
    I'm getting the following in my Puppet master syslog over and over:

        Sep 27 11:52:05 puppet1 puppet-master: Not collecting exported resources without storeconfigs
        Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs
        Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs

    I'm not actually using storeconfigs:

        [ashinn@puppet1 ~]$ cat /etc/puppet/puppet.conf
        [agent]
            server = puppet.mydomain.com
            environment = production
            report = true
        [main]
            logdir = /var/log/puppet
            vardir = /var/lib/puppet
            ssldir = /var/lib/puppet/ssl
            rundir = /var/run/puppet
            factpath = $vardir/lib/facter
            pluginsync = true
            certname = puppet1.mydomain.com
        [master]
            modulepath = $confdir/environments/$environment/modules
            manifest = $confdir/environments/$environment/manifests/site.pp
            templatedir = $confdir/templates
            autosign = $confdir/autosign.conf
            ssl_client_header = SSL_CLIENT_S_DN
            ssl_client_verify_header = SSL_CLIENT_VERIFY
            report = true
            reports = hipchat

    Is there any way I can suppress these messages? Where do they actually come from?
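
    These warnings are typically logged when catalog compilation encounters an exported-resource collector (the spaceship <<| |>> syntax) while storeconfigs is disabled, so a quick, hedged check is to grep the modules in the production environment for collectors (the path below is inferred from the puppet.conf above):

        # Look for exported-resource collectors in the production environment's modules
        grep -rn '<<|' /etc/puppet/environments/production/modules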

    Read the article

  • ESXi 5.1 - Unable to register host

    - by deanvz
    I downloaded and successfully installed ESXi 5.1. I am, however, unable to get the licence key I received installed. An error occurs when assigning the specified licence key: "The system Memory is not satisfied with the 32 GB of Maximum memory limit. Current with 80.00 GB of Memory." Is there no way around this? A quick Google search revealed that this is a common problem with no real answer or resolution. The only workaround is to remove the physical RAM chips, but as this is going to be in production I don't want to do that, since it would mean downtime when I have to reinsert the memory.

    Read the article

  • Running SQL 2008 on a VM

    - by chris.w.mclean
    We are pondering setting up a SQL 2008 instance inside a VM for a production environment. All our SQL instances use iSCSI over gigabit Ethernet to talk to a NAS, as would this new instance. Is there any reason this is a bad idea, or are there any considerations to make this work well? The VM would be running on Xen 5.5, or we could set it up in Hyper-V if there's a compelling case for that. The VM's VHD would be stored on a different NAS than the one the SQL storage is on.

    Read the article

  • Help me argue for developing software on a physical computer rather than via a remote desktop

    - by s5804
    Remote desktops are great, often a blessing, and cost effective (compared to leasing expensive lines). I am not arguing against remote desktops, just that if one has the choice between a remote desktop and a physical computer, I would choose the latter. Also note that I am not arguing for or against remote work practices; in my case I am required to be physically present in the office when developing software.

    Background: I work in a company whose main business is not software development, so the company IT policies are mainly focused on security and on efficiently deploying and maintaining thousands of computers for users. The typical employee runs typical Office applications, like a word processor. Because safety/stability is such a big priority, every non-production system/application must be deployed into a physically separate network, called the test network, and software development of course also belongs in the test network. To access the test network, the company has created a standard policy which dictates that access shall go only via a remote desktop client: from one's production computer, one opens a remote desktop session to a virtual computer located in the test network, and on that virtual desktop one can access/run/install all development tools, like the Eclipse IDE. The other solution is a dedicated physical computer that is physically connected only to the test network. Both solutions are available in the company.

    I have tested both approaches and found running Eclipse IDE and SQL Developer in the remote desktop client to be sluggish (keystrokes are delayed), commands like Alt-Tab take me out of the remote client, and screen resolution and colors are different, just to mention a few issues. So there is nothing technically wrong with the remote client; it is just not optimal and, frankly, demotivating. Now, with the new policies in place, the plan is to remove the physical computers connected to the test network.

    I am looking for help to argue why software developers should have a dedicated physical development computer in order to be productive and cost effective. Remember that we are physically in the office, and note that we are talking about approximately 50 computers out of 2000 employees, so the extra budget is relatively small; this is more about policy than cost. Please note that there are lots of similar setups in other companies that work great thanks to perfectly tuned systems; in my case, however, it is sluggish, and it would cost more money to troubleshoot and fine-tune the performance than to keep a few physical computers. As a business case we have argued that productivity will go down by 25%, though my feeling is that the reality is probably closer to 50%. This business case isn't really accepted, and I find it very difficult to defend it to managers who have never used a rich IDE in their lives, never mind developed software. Further, the test network and remote client have no guaranteed service level, so they are down for a few hours per month with the lowest priority on the fix list. Help is appreciated.

    Read the article

  • How to split a file on Windows 2003 using an MS-supported tool

    - by Rune
    Is it possible to split a large file into smaller files on Windows 2003 using a tool provided/supported/sanctioned by Microsoft? I see that there are a lot of freeware tools (various zip tools) for this task, but I need to move files off of a production server and would therefore like to avoid tools I don't know whether I can trust. I would much prefer a tool included in the Windows Server 2003 Resource Kit Tools or something along those lines. Does such a tool exist? Thank you.

    Read the article

  • Do I need a web service for watershed.ustream.tv?

    - by Corey
    I am looking to use the watershed.ustream.tv service for a one-time event. Do I really need my own web service? In the control panel it says "WARNING: the test web service will always approve all requests so do NOT use it in a production environment!". I'm not looking for any advanced features; I'm not even using the chat functionality. All I want to do is broadcast a funeral service without any advertising. I've looked for support on the Watershed site, but I can't find any. Any help or advice would be appreciated.

    Read the article

  • Can a working Tomcat 6 webapp be turned into a usable .war file?

    - by Bill Cole
    Problem: I have a working webapp on a FreeBSD 8.1 Tomcat 6 test server that I need to move to a production system. The developer who last touched it (and had root on that server) has moved on and isn't helpful. The running app seems to have been deployed from a CVS server that is now unavailable. My thinking is that I would like to find a way to wrap the working webapp into a proper .war so that I can deploy it on a pristine host and (after testing) send the existing system to a very deep bitbucket. But I'm not having luck finding a way to do that. I'm a sysadmin not a developer and don't work much with Tomcat systems so I may be (likely am) overlooking something blindingly simple. I gather that I may be able to just tar up the deployed directory and untar it on the new machine, but I have a nagging feeling that there are pitfalls in that.
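
    For what it's worth, a deployed (exploded) webapp directory can usually be repackaged into a .war with the jar tool that ships with the JDK; a minimal sketch, assuming the app lives under /usr/local/tomcat/webapps/myapp (the path is hypothetical):

        # Package the exploded webapp directory back into a standard .war archive
        cd /usr/local/tomcat/webapps/myapp
        jar -cvf /tmp/myapp.war .

    The resulting archive can then be dropped into the webapps/ directory of a pristine Tomcat 6 instance for testing; anything the old build put outside the webapp directory (JNDI resources in server.xml, shared libraries in lib/) would still have to be carried over by hand.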

    Read the article

  • How do I make a backup of a live server?

    - by Jurily
    At my new job, I have a production server with the following qualities:

      - Windows (XP I think), ancient hardware
      - Absolutely vital database
      - No backups whatsoever
      - Everyone in the company has full admin rights; the passwords are stored in a .txt on the global share
      - No installers, except for the OS
      - The machine itself is sitting on a wooden shelf 5 feet above the ground against an external wall with frequent truck traffic on the other side; the shelf is already bent from the constant load
      - Hasn't been rebooted in $DEITY knows how long; my predecessor wasn't even sure if it would survive it
      - UPS is installed, but since everything is hooked up to it, it would last 10 minutes tops
      - No spare parts or hardware budget

    How do I make a full backup with minimal impact on the server? I'm not sure how close it is to a total meltdown. For all I know, plugging in a USB stick could kill the company, and of course it will be all my fault, since "it was running fine before you touched it". The ideal solution would be a VM, so I have a test environment as well (separate of course).
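
    One low-impact option that lines up with the VM idea (a hedged suggestion only, given how fragile the box sounds): Sysinternals Disk2vhd snapshots a live Windows system into a VHD via Volume Shadow Copy, so the capture can run while the server keeps serving. A minimal sketch, assuming the output goes to an external drive mounted as E::

        :: Capture all local volumes of the running system into a single VHD (uses VSS snapshots)
        disk2vhd.exe * E:\backup\ancient-server.vhd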

    Read the article

  • Apache freezing: how to detect which virtual host is getting hit?

    - by mr-euro
    I have a production server that in the last 24 hours has been hard rebooted 4 times due to freezes. Ping is fine, but all other services time out (Apache, SSHd, etc.). I have now diagnosed it as Apache running out of memory due to an exorbitant number of child processes forking within seconds of Apache starting; stopping Apache just after rebooting keeps the server stable again. My two questions are: 1) Is there a way to detect which of the vhosts is suddenly being hammered without looking into each vhost's access log one by one? 2) Is there a way to quickly enable/disable vhosts without commenting (#) them all out in httpd.conf?
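
    For the first question, a quick-and-dirty sketch (assuming each vhost writes its own access log under /var/log/httpd/) is to watch which log file grows fastest in the seconds after Apache starts:

        # Watch which vhost access log is growing fastest right after Apache starts
        watch -n 2 'wc -l /var/log/httpd/*access*log | sort -rn | head'

    Apache's mod_status with ExtendedStatus On can also show which vhost each busy worker is serving, if /server-status stays reachable long enough to look.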

    Read the article

  • What version of Windows Server 2008 to get

    - by dragonmantank
    One of the projects I'm working on looks like it's going to need to migrate from CentOS 5.4 to something else (we need to run PostgreSQL 8.3+, and CentOS/RHEL only supports 8.1), and one of the options is Windows Server. Since 2008 R2 is out, that's what I'm looking at. I'll need to run Postgres and Tomcat and don't really require anything Windows offers like IIS (if I can run Server Core, even better!). The other kicker is that it will be virtualized through VMware ESXi 4.0, so we have three separate boxes: development, quality, and production servers. From a licensing standpoint, am I good enough with just the Web Server edition? Am I right in assuming that will be three licenses? Or should I just jump up to Enterprise so that I get 4 VM licenses?

    Read the article

  • Why the different coarse-threaded screws?

    - by Luke
    I'm seeing more and more of these screws (pictured below), which are almost triangular. I find I can only put them into power supplies and PCI(e) cards in cases, but they will break/strip away if I put them into a hard drive or a standoff for a motherboard. Notice the triangular shape on it? On the Root Access chat I started asking, but there's no concrete answer yet. I don't assume it's a production flaw, as I've seen hundreds of them and replaced them with the "proper" round screws. They are coarse-threaded, not fine-threaded (i.e. not the kind used for a DVD drive or floppy drive). What are they for, and why do we need them instead of the regular round ones?

    Read the article

  • How to determine keyboard variation when the manufacturer changes it

    - by Maksee
    When I decided to purchase a Toshiba Z830, I specifically checked in photos that the keyboard was good for me (wide Enter, left Shift, and Backspace); you can query it at images.google.com, and on most photos they're all wide. When I finally bought it (Z830-A2S), the keyboard was different: the Enter is narrow and the left Shift is "split" into Shift and backslash keys (probably 5% of the photos at images.google.com). Is it normal for manufacturers to change this during the production cycle, or can this vary between different contractors? But the main point: is it possible to determine this from the full model name, or from somewhere else, without visiting a store?

    Read the article

  • Why can't I defragment my SQL 2008 .mdf file?

    - by LesterDove
    I am defragmenting a badly (95%) fragmented drive upon which large (35 GB) SQL Server 2008 .mdf files live. After defragmenting and viewing the exception report, I see that the production .mdf file that I'm most interested in could not be defragmented. I initially figured it was because MSSQL had an exclusive lock on the file, so I detached it and tried again. No luck: this particular .mdf file could not be defragmented. What am I missing? Most online references suggest that I should be able to file-defragment an .mdf. A note: yes, I'm talking about file defragmentation, not index defragmentation, which is already being done routinely and which I'll re-run after this. Thanks!
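
    If the built-in defragmenter keeps skipping the file, one commonly used alternative (a hedged suggestion, not a guaranteed fix) is Sysinternals Contig, which analyzes and defragments a single named file; it would need to run while the database is detached or the SQL service is stopped, and the path below is hypothetical:

        :: Report how fragmented the data file is, then try to defragment just that file
        contig.exe -a "D:\Data\Production.mdf"
        contig.exe -v "D:\Data\Production.mdf"

    Bear in mind that a 35 GB file can only be laid out contiguously if the volume actually has a large enough run of free space, which a heavily fragmented drive may not offer in one piece.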

    Read the article

  • Only ONE Outlook 2010 installation gets "Cannot connect to Exchange server" when setting up a new profile

    - by Johnny PDEX
    Exchange 2010, one-server installation (small production environment, I know it's not best practice).

      - OWA connectivity has been confirmed, and Autodiscover is configured and working properly for EVERY other installation
      - Other user accounts were tested on the problem Outlook; none can connect
      - Windows Firewall is pre-configured by Group Policy, the only modifications being related to remote management; the firewall has also been disabled during the diagnostic period
      - Network discovery and file sharing are enabled on the workstation as well
      - Windows 7 Professional, latest updates installed

    Driving me nuts. Help, Server Fault?

    Read the article

  • Scripting the Mongo shell

    - by cKendrick
    On my production stack, I have a front-end server and a Mongo server. I would like to be able to set up a cron job on the front-end server to create some logs daily. I wrote a script that does this: ./mongo server:27017/dbname --quiet my_commands.js If I run it from the Mongo server as above, it works fine. However, I would like to be able to run it from the front-end server. When I try to do that, I get: -bash: mongo: command not found Since mongo is not installed on the front-end server, it gives me that error. Is it possible to somehow bind mongo to the mongo on the Mongo server?
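
    Two hedged options come to mind: install just the mongo shell client package on the front-end box, or keep the script on the Mongo server and invoke it over SSH from the front-end's crontab. A sketch of the SSH approach (hostnames and paths are assumptions, and it requires passwordless key-based SSH between the two machines):

        # Front-end crontab entry: run the shell script on the Mongo server every night at 01:00
        0 1 * * * ssh mongoserver "mongo localhost:27017/dbname --quiet /home/ckendrick/my_commands.js" >> /var/log/mongo_daily.log 2>&1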

    Read the article

  • Windows 2008 Enterprise License can't activate Standard Edition

    - by starchx
    We downloaded and installed the Windows 2008 Standard R2 edition months ago, and the server is now in production. We signed up for the Microsoft Partnership Action Pack subscription last week and got a license for the Windows 2008 Enterprise edition. I am trying to activate the Standard edition with the Enterprise key as advised here: http://serverfault.com/questions/318968/upgrade-domain-controller-sku-from-server-2008-r2-standard-to-enterprise, but it failed. Is it because the Windows 2008 we have is different (downloaded from the MS site with an eval license)? Thanks. tim

    Read the article

  • Deactivate SYN flooding mechanism

    - by mlaug
    I am running a server that provides a service on port 59380. There are more than 1000 machines out there connecting to that service, and once I need to restart the service, all those machines reconnect at the same time. That caused some trouble, as I saw this entry in kern.log: "TCP: Possible SYN flooding on port 59380. Sending cookies. Check SNMP counters." So I changed the sysctl net.ipv4.tcp_syncookies to 0, because the endpoints do not handle TCP SYN cookies correctly, and finally restarted my network to put the change into production. The next time I had to restart the service, the following message was logged: "TCP: Possible SYN flooding on port 59380. Dropping request. Check SNMP counters." How can I prevent the system from doing such things? All the necessary countermeasures are done by iptables...
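
    A hedged sketch of the usual knob to turn here: with SYN cookies disabled, the kernel starts dropping connection attempts once the SYN/accept backlog overflows, so raising the backlog limits (and making sure the service itself passes a large backlog to listen()) may let it absorb the reconnect storm instead. The values below are illustrative only:

        # Enlarge the SYN backlog and the accept queue so a burst of reconnects is queued rather than dropped
        sysctl -w net.ipv4.tcp_max_syn_backlog=8192
        sysctl -w net.core.somaxconn=8192
        # Persist across reboots (assumes this distribution reads /etc/sysctl.conf)
        echo 'net.ipv4.tcp_max_syn_backlog = 8192' >> /etc/sysctl.conf
        echo 'net.core.somaxconn = 8192' >> /etc/sysctl.conf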

    Read the article

  • Separating memory-intensive processes from CPU-intensive processes

    - by Jeevan Dongre
    I am a developer working for an e-commerce company, running an e-commerce application built with Ruby on Rails (Spree Commerce). I am presently running 2 medium instances in production: one is a high-memory instance with 3.8 GB RAM and a single-core CPU, and the other is a high-CPU instance with a dual-core CPU; AWS calls them m1.medium and c1.medium, respectively. My question is: is it possible to separate the processes according to whether they are CPU-intensive or memory-intensive, so that all the CPU-intensive processes run on the high-CPU instance and all the memory-intensive processes run on the high-memory instance? Is any tool available to identify those processes? Kindly give me some heads up. Thank you.
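
    As a first, hedged step toward identifying which processes are memory-heavy versus CPU-heavy on each box, the stock procps tools are usually enough; a quick sketch:

        # Top memory consumers on this instance
        ps aux --sort=-%mem | head -n 15
        # Top CPU consumers on this instance
        ps aux --sort=-%cpu | head -n 15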

    Read the article
