Search Results

Search found 2950 results on 118 pages for 'critical skill'.

Page 60/118

  • Secure Server Distro

    - by Drama
    Hello, I have a root server (i7/24GB/1TB) running Ubuntu 10.04 LTS as my OS. After some security audits (OpenVAS, Retina, etc.) I see that Ubuntu isn't the most secure system for a semi-corporate environment. It's updated from many sources, of course from the Ubuntu security repo too. But nevertheless I could exploit my OpenSSL install with an exploit from August/September. There are some critical updates needed which Ubuntu does not provide. I have been using Debian and Ubuntu for almost 5 years, but now I have doubts. What distro is secure and up to date from your point of view? How can I make the server more secure? Should every software module be moved out to its own VM? I am not new to server hardening: my packages are up to date, I read the Ubuntu Security Notices, and I have no unneeded services installed on my server. Thanks.
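    One concrete step on the "keep the security repo applied" side, sketched under the assumption of a stock Ubuntu 10.04 box with the unattended-upgrades package available (file names and the pocket name should be checked against your own /etc/apt):
      sudo apt-get install unattended-upgrades
      # enable the daily apt run; dpkg-reconfigure unattended-upgrades may create this file for you
      printf '%s\n' 'APT::Periodic::Update-Package-Lists "1";' 'APT::Periodic::Unattended-Upgrade "1";' | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
      # then, in /etc/apt/apt.conf.d/50unattended-upgrades, restrict automatic installs
      # to the security pocket, e.g.:
      #   Unattended-Upgrade::Allowed-Origins { "Ubuntu lucid-security"; };
    This does not change which distro is "most secure", but it shortens the window between a USN being published and the fix landing on the box.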

    Read the article

  • X11 for apache user

    - by fuenfundachtzig
    We are using inkscape to convert SVG images uploaded to our server via a web form. For this, inkscape offers a batch mode via the -z option, but this batch mode has a flaw: when inkscape is run by the apache user, it breaks saying
      $ inkscape -z -W drawing.svg
      X11 connection rejected because of wrong authentication.
      The application 'inkscape' lost its connection to the display localhost:11.0; most likely the X server was shut down or you killed/destroyed the application.
    If you do the same as a normal user you also get errors:
      Xlib: connection to "localhost:11.0" refused by server
      Xlib: PuTTY X11 proxy: MIT-MAGIC-COOKIE-1 data did not match
      (inkscape:24050): Gdk-CRITICAL **: gdk_display_list_devices: assertion `GDK_IS_DISPLAY (display)' failed
      301.27942
    But at least inkscape gives the correct answer (here, the number stating the width of the image). Does somebody know how to make this also work for the apache user? Does it make sense to authorize apache to use X (and if so, how)? In any case it doesn't feel like the right solution...
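    If the goal is only to let the conversion run without a real display, one workaround sketch (assuming the xvfb package is available on the server) is to hand inkscape a throwaway X server instead of authorizing apache against the real one:
      sudo apt-get install xvfb
      # -a picks a free display number; run as the apache user to reproduce the web-form case
      sudo -u www-data xvfb-run -a inkscape -z -W /path/to/drawing.svg
    The www-data user name and the SVG path are placeholders; on some distributions the web server user is called apache or httpd instead.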

    Read the article

  • chmod -R 777 /. - RHEL 5.5

    - by user1263746
    A shell script test went bad and issued chmod -R 777 /. against the system instead of chmod -R 777 ./, and as expected it wiped the critical permission metadata. We have turned off the system and it will not function properly the next time it is turned on. I am told that
      rpm --setperms -a
      rpm --setugids -a
    should at least fix the permissions of the packages maintained by rpm. Is it worth doing? And is there any script available which will copy the permissions from an identical system, to at least get the box working? The box is running RHEL 5.5. Thanks!
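    For what it's worth, a sketch of the rpm-based repair plus copying modes from a healthy twin box; the paths are examples only, and this is best attempted from single-user mode:
      # restore owner/group first, then modes, for everything rpm knows about
      rpm --setugids -a
      rpm --setperms -a
      # for files rpm does not own, dump permissions and ownership on the identical healthy system ...
      getfacl -R --absolute-names /etc /var /home > perms.acl
      # ... copy perms.acl across, then replay it on the damaged box
      setfacl --restore=perms.acl
    Even so, setuid bits and application-specific modes may still need spot checks, and a rebuild is usually the safer end state for a box that ran world-writable for a while.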

    Read the article

  • Domino 8.5.3: Modify Subject of Incoming Email

    - by Void
    I am a newbie at managing and developing for Domino. Recently, I received a request from other teams at work to set up a filter or agent for incoming mail. These are the requirements:
    1. Look for incoming mail addressed to #CRITICAL (a multipurpose, internal group containing a list of engineers).
    2. For mail matching point 1, append "For Immediate Action: " to the front of the Subject.
    Some restrictions I have:
    - Only the Domino server is under my charge; I am not to touch the network side or other servers.
    - No 3rd-party software may be installed.
    I have gone through the configuration of the Domino server, and the closest thing I have to filtering email is the Router/SMTP Restrictions... Rules. But this is not able to fulfill point 2 in any way. Is this even possible using just Domino server settings, or through agents?

    Read the article

  • Populating cells with data from another spreadsheet after just keying in a few letters

    - by Wendy Griffin
    I have 1 workbook with 2 spreadsheets. On Spreadsheet 2, column A contains a long list of company names, and columns B-H contain critical information about each company. Spreadsheet 1 contains all of the same columns as Spreadsheet 2, plus some others. What I'm trying to achieve is that when you start to type the first 3 characters of a company name on Spreadsheet 1, it shows a drop-down of the companies (as listed on Spreadsheet 2) that share the first 3-5 letters, and you select one. Upon selecting a company name, all of the corresponding company information would populate the other columns on Spreadsheet 1 automatically. This is to avoid copying a row from Spreadsheet 2 and pasting it into Spreadsheet 1. Any help with this would be greatly appreciated. Cheers!

    Read the article

  • What tools can be used to monitor a web application? Beyond "doesn't 404"

    - by Freiheit
    I have an internal web application that has recently gone through a major version upgrade. I would like to monitor this application over the weekend and look for 'soft' errors. I will still need to spot-check things by hand, but there are some common failure patterns that I think I can automate. Examples include data with bad formatting, blank rows in tables (indicating missing non-critical data), patterns in identifiers ("TEST" means one of my devs left a testing feed on), etc. I think there are applications out there that can be scripted to do things like:
    1. Log in
    2. Go to $URL
    3. Select the 3rd link in $LIST or $PATTERN
    4. Check the HTML from that link for $PATTERNS
    5. Email a report
    Are these goals sane? What applications/tools can help with this?
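    As a rough illustration of how far plain curl and grep can take those steps before reaching for a dedicated tool (URLs, credentials, and patterns below are placeholders):
      # log in and keep the session cookie
      curl -s -c /tmp/app-cookies -d 'user=monitor&pass=secret' https://app.example.com/login > /dev/null
      # fetch a page and scan it for known "soft" failure patterns
      page=$(curl -s -b /tmp/app-cookies https://app.example.com/report)
      echo "$page" | grep -q 'TEST'      && findings="$findings testing-feed-present"
      echo "$page" | grep -q '<td></td>' && findings="$findings blank-table-cells"
      # mail whatever was found (assumes a local MTA and the mailx client)
      [ -n "$findings" ] && echo "$findings" | mail -s "weekend app check" ops@example.com
    Browser-driving tools such as Selenium or Watir are a better fit for the "select 3rd link in $LIST" style navigation than curl is.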

    Read the article

  • How to set up a staging apt repository to securely manage upgrades

    - by andreash
    Hello, I would like to be able to run an automatic apt-get upgrade (once per hour) on our servers (Ubuntu 10.04), so that I don't have to do it manually on all of them (about 15). However, for production machines that's not a good idea... So here's my plan: set up a local repository for all 'approved' updates of critical packages. I would then push updated packages from upstream to our local repo after testing them, and all servers could automatically (apt-cron?) upgrade from this repository. So my question is this: how do I configure apt on the clients so that they use the local repository for all packages which exist there, and the upstream one for all other packages? Does this actually make sense? Or am I missing something? Anyway, thanks for your insight! Andreas.
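    One way this is commonly wired up is with APT pinning on the clients; here is a sketch, assuming the staging repo is served from a host named apt.internal.example.com (a placeholder) and that the clients' apt reads /etc/apt/preferences.d/:
      # contents of /etc/apt/preferences.d/internal-staging (or append to /etc/apt/preferences)
      Package: *
      Pin: origin "apt.internal.example.com"
      Pin-Priority: 1001
      # then refresh the package lists
      sudo apt-get update
    With a priority above 1000 the pinned origin wins even if upstream carries a newer version, while packages absent from the local repo still come from the normal Ubuntu mirrors.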

    Read the article

  • Securing a local server physically

    - by Daniele
    We are an online business. We have a very powerful server with hard disk mirroring in our office that we are using for a variety of internal business-critical functions. We want to keep that machine in our office, but we want to make sure it is as secure as possible (within reason). Obviously we are already backing it up every day off-site. My question is more about not-too-expensive physical measures to protect the machine against thieves and disasters such as fire. What would you suggest?

    Read the article

  • eclipse IDE won't start in ubuntu 10.04 64bit with sun-java

    - by aeischeid
    I get this error when I try to run it from the CLI:
      ** (Eclipse:2318): CRITICAL **: menu_proxy_module_load: assertion `dbusproxy != NULL' failed
      # A fatal error has been detected by the Java Runtime Environment:
      # SIGSEGV (0xb) at pc=0x00007f34511a4e74, pid=2318, tid=139863603410704
      # JRE version: 6.0_18-b18
      # Java VM: OpenJDK 64-Bit Server VM (14.0-b16 mixed mode linux-amd64)
      # Derivative: IcedTea6 1.8
      # Distribution: Ubuntu lucid (development branch), package 6b18-1.8-0ubuntu1
      # Problematic frame: C [libglib-2.0.so.0+0x41e74] g_main_context_prepare+0x164
      # An error report file with more information is saved as: /home/aaron/opt/eclipse/hs_err_pid2318.log
      # If you would like to submit a bug report, please include instructions how to reproduce the bug and visit: https://bugs.launchpad.net/ubuntu/+source/openjdk-6/
      # The crash happened outside the Java Virtual Machine in native code. See problematic frame for where to report the bug.
    I have sun-java installed and set in my PATH:
      ~: echo $JAVA_HOME
      /usr/lib/jvm/java-6-sun/
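    Since the crash is inside the OpenJDK VM, one workaround sketch is to force Eclipse onto the Sun JVM that is already installed (the path below is the one quoted above):
      ./eclipse -vm /usr/lib/jvm/java-6-sun/bin/java
    or, to make it stick, add the same thing to eclipse.ini, keeping -vm and its value on two separate lines above -vmargs:
      -vm
      /usr/lib/jvm/java-6-sun/bin/java
    Note that the Eclipse launcher does not consult JAVA_HOME when picking a VM, so setting it in the shell is not enough on its own.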

    Read the article

  • Remote desktop session ends abruptly with a "protocol error"

    - by Jon
    Intermittently we get a problem where a remote desktop session will be disconnected with the error message “Because of a protocol error, this session will be disconnected. Please try connecting to the remote computer again.” We are getting this with one server only, which is running Windows Server 2008; we are connecting from Windows 7 clients. The session itself stays running, you just get disconnected, and you can try to reconnect. Sometimes you get in for a while, then it will kick you out. We have tried connecting using Cord on a Mac and this works fine, so it's not as if the session itself is corrupted. One problem is that there are some critical applications running under the session (I know, let's not discuss the idiocy of that), so we cannot reset the session in any way during the working day, which means any diagnostics must have minimal impact. Thanks, Jon

    Read the article

  • Windows 7 Wakeup from Sleep then Shutdown

    - by Kevin Hua
    Is this caused by a low battery? I have an Asus UX31A, and I left it unattended. As usual, after 5 minutes it went into sleep mode with the lid still open. I came back a couple of hours later and noticed the laptop was off. I was still able to turn it on by manually pressing the power button (the battery was at 2%, though), but I don't understand why it shut off. If it was in sleep mode and the battery was near depletion, wouldn't that result in a "Windows has recovered from an unexpected shutdown" message upon boot? Does Windows have a mechanism to wake from sleep at a super-critical battery level in order to shut down all programs and power down the system completely? I noticed that upon booting, Firefox didn't complain about an improper shutdown. Here are my power settings, by the way; powercfg -h off has been run, so hibernation is off. And here is the event log: http://www.mediafire.com/download.php?lbfhl21g0nj2adi

    Read the article

  • Load balancing with Cisco router

    - by you8301083
    I have a Cisco router with two bonded T1's which are set up as a VPN to the main office. We need more bandwidth but can't get other connections (or it's too costly), so I would like to have a DSL connection installed. This DSL connection will run over a VPN to the same main office, but it won't be bonded with the T1's, so it won't act as part of a single connection. Since the three circuits won't act as a single connection (basically there would be two connections: 2 T1's + 1 DSL), we would have to split the network in half, but I don't want to do that. Instead, would it be possible to send all HTTP/HTTPS over the DSL connection but send all mission-critical data (such as voice/Active Directory) over the T1's? I basically want to send specific ports over DSL and everything else over the T1's, without permanently assigning half of the users' traffic to the DSL and the rest to the T1's.

    Read the article

  • Puppet causes endless restarts of CUPS (how does one prevent this)

    - by Purfideas
    Hi! It makes sense, and is in fact suggested on this site, to have a critical file change trigger a service restart with Puppet meta-parameters (such as notify or subscribe). For example:
      ## file definition for printers.conf
      file { "/etc/cups/printers.conf":
        [snip],
        source => "puppet:///module/etc/cups/printers.conf",
      }
      ## service definition for cups
      service { 'cups':
        ensure    => running,
        subscribe => File['/etc/cups/printers.conf'],
      }
    But in the case of CUPS, this triggers an endless loop of restarts; the logic works like this:
    1. Change the puppetmaster's version of /etc/cups/printers.conf.
    2. The puppetmaster pushes the new version to the client, triggering a cups restart.
    3. The cupsd restart insists on putting its own time stamp at the top of printers.conf ('Written by cupsd...').
    4. This change will be seen as out of date, so after runinterval we return to (1).
    Is there a way to suppress cupsd's need to time stamp the file? Or is there a Puppet trick that could help here? Thanks!
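    One Puppet-side angle, offered only as a sketch: manage a staged copy of the file and let an exec sync the real one, comparing the two while ignoring cupsd's header line so the agent stops seeing the timestamp as drift. The staging path /etc/cups/printers.conf.puppet and the comparison helper below are assumptions, not CUPS or Puppet conventions:
      #!/bin/bash
      # exit 0 when the live printers.conf matches the staged copy once the
      # "Written by cupsd" line is ignored, non-zero when a sync is needed
      strip() { grep -v '^# Written by cupsd' "$1"; }
      diff <(strip /etc/cups/printers.conf) <(strip /etc/cups/printers.conf.puppet) >/dev/null
    Wired into an exec resource's unless (with the exec doing the copy and notifying Service['cups']), the file resource then manages only the staged path and no longer fights cupsd over the header.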

    Read the article

  • File replication among EC2 instances

    - by Peuge
    I am pretty new to AWS, so please excuse my ignorance. We want a setup with a SQL DB instance plus web server instances. However, we would like the web servers to sit behind an ELB, thus allowing us to use Auto Scaling. My question, however, is how do we replicate the web app across instances? Say for example we have two web servers running and we need to make a critical update to the web app; ultimately we would only want to upload to one instance, not both. Is it even best practice to store your web app on the instance, or are there better ways to store and share the app between instances?
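    As a stopgap, the simplest push-style sketch is just to fan the files out from one place (host names and paths are placeholders):
      for host in web1.example.com web2.example.com; do
        rsync -az --delete ./webapp/ "deploy@${host}:/var/www/webapp/"
      done
    With Auto Scaling in the picture, though, it is generally cleaner to keep the app in a shared location (for example S3, or baked into the AMI) and have each instance pull it from a boot script, so machines launched by the scaler come up with the current code without anyone uploading to them directly.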

    Read the article

  • Broad Band LAN connection through existing 6 wire phone line ( no phone connected )

    - by Paul Taylor
    I have an (up to, but never achieved) 10 Mb broadband signal coming into my house along the telephone line. The modem then distributes the broadband signal via a LAN connection and a Wi-Fi signal to my computers and iPad. My workshop desk is 54 yards (approx. 50 m) downhill from the modem, in a separate building, which is too far to get a good direct signal. The inside corner of the workshop (by a window) receives a weak signal. I have a disconnected telephone cable consisting of 6 wires going from the house to the workshop. The telephone is no longer used in the workshop; we use our mobile number for business calls, so a broadband signal using the cable would not have to share it with a phone. What would be the most economical and efficient way to get the broadband signal to the workshop? I am writing in the hope that, since the telephone didn't need all six wires, an Ethernet connection might be similar and not need all eight wires for a LAN connection. In theory I could use the telephone wire to pull an Ethernet cable through the underground pipe, but I doubt the cable would survive the strain. The broadband connection in the workshop does not need to provide network facilities. My skill level is high enough to do house wiring, plumbing, and gas fitting, so given sound advice and a wiring diagram or instructions I can probably work out the rest myself.

    Read the article

  • Problems executing check_mysql plugin

    - by Brian
    I'm attempting to test out the check_mysql plugin for Nagios using the command line. It works great when I'm just checking localhost, but when I try to specify a different server (using the -H argument) I keep getting the following error message:
      CRITICAL - Unable to connect to mysql://nagios_user@{remoteServerHostname}/ - Access denied for user 'nagios_user'@'{localServerHostName}' (using password: YES)
    It looks like it's trying to connect to the database using the local server name even though I've specified a different host name. I've already set up this user on the remote database and it has the correct permissions. I'm just trying to figure out why the script keeps inserting the local server name instead of the remote one.
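    For what it's worth, in that message MySQL is echoing back the host it saw the client connect from, which suggests the remote server is being reached but has no grant matching nagios_user at the monitoring box. A sketch of the missing grant, with host name and password as placeholders and the privilege level kept minimal:
      # run against the remote MySQL server
      mysql -u root -p -e "GRANT USAGE ON *.* TO 'nagios_user'@'monitoring.example.com' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"
      # then re-test from the monitoring server
      ./check_mysql -H remote.example.com -u nagios_user -p secret
    If the existing grant was created for 'nagios_user'@'localhost' only, the plugin will work locally but fail exactly like this from any other host.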

    Read the article

  • Is there any way to stop Windows 8 from restarting for an update?

    - by William Hilsum
    My Windows 8 machine is going to restart shortly... I have tried a few techniques such as shutdown /a, without much luck. Short of pausing the Windows Update service, is there any way to stop this restart from happening?
    --- update ---
    Looks like the "restart" is all talk and no action! The time has gone and I am presented with this: So, it goes along with the top answer here (which I shall shortly accept).
    --- update again ---
    It did reset!!! My battery went critical and it turned itself off. I turned it back on (out of suspend) once it was plugged in, I logged in, and the first thing it did was automatically restart - closing my open tabs in Chrome :( Oh well...

    Read the article

  • nagios: trouble using check_smtps command

    - by ethrbunny
    I'm trying to use this command to check on port 587 for my postfix server. Using nmap -P0 mail.server.com I see this:
      Starting Nmap 5.51 ( http://nmap.org ) at 2013-11-04 05:01 PST
      Nmap scan report for mail.server.com (xx.xx.xx.xx)
      Host is up (0.0016s latency).
      rDNS record for xx.xx.xx.xx: another.server.com
      Not shown: 990 closed ports
      PORT     STATE SERVICE
      22/tcp   open  ssh
      25/tcp   open  smtp
      110/tcp  open  pop3
      111/tcp  open  rpcbind
      143/tcp  open  imap
      465/tcp  open  smtps
      587/tcp  open  submission
      993/tcp  open  imaps
      995/tcp  open  pop3s
      5666/tcp open  nrpe
    So I know the relevant ports for smtps (465 or 587) are open. When I use openssl s_client -connect mail.server.com:587 -starttls smtp I get a connection with all the various SSL info. (Same for port 465). But when I try libexec/check_ssmtp -H mail.server.com -p587 I get:
      CRITICAL - Cannot make SSL connection.
      140200102082408:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:699:
    What am I doing wrong?
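    For context: port 587 (submission) speaks plain SMTP that is upgraded with STARTTLS, which is exactly what the openssl -starttls smtp test exercises, while check_ssmtp expects TLS from the very first byte, as on port 465, hence the "unknown protocol" failure. A sketch of the usual split, assuming the plugin build includes check_smtp with a -S/STARTTLS option:
      ./check_smtp -H mail.server.com -p 587 -S     # submission: plain SMTP upgraded via STARTTLS
      ./check_ssmtp -H mail.server.com -p 465       # smtps: TLS from the first byte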

    Read the article

  • Unable to ping remote server Nagios

    - by williamsowen
    We've recently set up Nagios on one of our Amazon EC2 instances to act as a monitoring server for our other instances. nrpe was installed on our staging server stager and appears to be working fine:
      monitoring_server~: /usr/lib/nagios/plugins/check_nrpe -H xx.xx.xx.xx -p 5666
      NRPE v2.12
    The issue is that when viewing the remote server stager within the Nagios admin screen, it appears to be 'DOWN'. The check_ping command reveals:
      monitoring_server~: /usr/lib/nagios/plugins/check_ping -H 'xx.xx.xx.xx' -w 5000,100% -c 5000,100% -p 1
      PING CRITICAL - Packet loss = 100%|rta=5000.000000ms;5000.000000;5000.000000;0.000000 pl=100%;100;100;0
    Can anyone provide some direction on how to get this working? Not sure what else to do.
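    Since NRPE on TCP 5666 gets through but ping does not, the first thing worth ruling out is the instance's security group dropping ICMP, which EC2 groups do by default. A sketch of allowing it from the monitoring server, with the group name, source address, and tooling all placeholders (older ec2-api-tools and the newer aws CLI shown side by side):
      # classic EC2 API tools
      ec2-authorize stager-group -P icmp -t -1:-1 -s xx.xx.xx.xx/32
      # or, if the aws CLI is available
      aws ec2 authorize-security-group-ingress --group-name stager-group --protocol icmp --port -1 --cidr xx.xx.xx.xx/32
    Alternatively, the host check for stager can be redefined to use check_tcp or check_nrpe against 5666 instead of check_ping, which avoids opening ICMP at all.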

    Read the article

  • FoxPro 2.6 DOS on Windows 7 64-bit

    - by Rolando
    I support a company that has a very old, mission-critical FoxPro for DOS 2.6 (FPD) application. For various reasons the company didn't adapt or migrate their app, which, ironically, has been running even better under Windows XP (and 32-bit Windows 7) because the OS allowed new features like more reliable networking, distributed printing, and email integration. Unfortunately for this company, most new machines now come with a 64-bit version of Windows 7, which is incompatible with their FPD app. I know the writing is on the wall this time: the only long-term solution is to migrate their app. But I wonder if anyone can suggest a temporary alternative path which doesn't involve either downgrading 64-bit Windows to 32-bit or running the app on a virtualized 32-bit XP.

    Read the article

  • Resilient Linux Mail Server Setup

    - by Coops
    How would people design a resilient mail server setup with Linux? At the application level, the system needs to provide both incoming and outgoing mail services (i.e. SMTP & IMAP), along with filtering and archive storage (the archive part isn't critical yet, so we'll probably look at that later). On top of this, what is required is a resilient system, i.e. one which will handle individual server failures without interrupting service. As such I would call this a high-availability mail system, in contrast to a high-performance mail setup; in our case the volume of mail being handled isn't the important factor, it's simply that it stays online. Having not approached this problem before, the first thing I thought of was a clustered file system (GFS/Gluster/etc.), combined with Heartbeat to fail over a floating IP to another box in the case of a server failure. Combined with Postfix & Dovecot, does this sound feasible to people?

    Read the article

  • Kaspersky processing error Explorer.exe (recycle bin)

    - by aeternus828
    I get daily critical errors from Kaspersky involving Explorer.exe... The file in question is almost always in the Recycle Bin, or something on the desktop. Here is an example error detail:
      Event type: Processing error
      Application\Name: EXPLORER.EXE
      Application\Path: C:\WINDOWS\
      Application\Process ID: 2364
      Application\Options: C:\windows\Explorer.EXE
      Component: File Anti-Virus
      Result\Description: Processing error
      Object: C:\$Recycle.Bin\S-1-5-21-1403139956-787289773-2644151291-500\$RIKKQKS
      Object\Type: File
      Object\Path: C:\$Recycle.Bin\S-1-5-21-1403139956-787289773-2644151291-500\
      Object\Name: $RIKKQKS
      Reason: Read error
    Google searches didn't offer much insight, so I thought I'd ask here if anyone has encountered a similar situation. Not sure if it's a bug, something to be concerned about, or an easy fix, etc. I usually just empty the recycle bin for a temporary fix, but would like to get to the root of the error. Thoughts?

    Read the article

  • Ganglia: divide colors by roles

    - by com
    Sorry for the silly question; I am still a newbie with Ganglia. In Ganglia I monitor a few important metrics for MySQL (seconds behind master, etc.). In addition I have a few bunches of MySQL servers (every bunch has its own tasks, but all of the bunches should be tested for seconds behind master). I am interested in whether it is possible to show all the metrics on one page with different colors for different bunches. Right now, in the metric "seconds behind master", I see all MySQL servers with colors for the different states (red is critical, gray is OK). Can I set a graph's color according to its bunch? Thanks!

    Read the article

  • Internet Explorer 8 only running as process not application

    - by Lord Peter
    Internet Explorer 8 on XP SP3 starts without a browser window. Task Manager doesn't show the application, but iexplore.exe is listed twice in the process list. Process Explorer reports "no visible windows found for this process" when I try to "bring to front" in the iexplore.exe properties dialog. I have reinstalled (twice), run full scans with MBAM/MSSE/SpyBot etc., re-registered ieproxy.dll (another Google-inspired tip!), run without add-ons (the -extoff switch), and still have the same problem. I recently uninstalled VMWare Player and wondered whether the problem is somehow related to the VM network adapter, but Firefox still works perfectly. This is one of my home machines, not critical, and it is backed up, so I will restore if I have to. But any and all suggestions will be gratefully received. It would be nice to understand what might have happened, and perhaps others may benefit from any knowledge that comes to light.

    Read the article

  • Setting up and moving a distribution group to a public folder in Exchange 2003

    - by Sevdarkseed
    I'm looking for some advice on something that seems simple but that I haven't done before. Currently we use a distribution group called Orders with an email address that is forwarded to four people in the company; the same goes for another one, Quotes. The problem is, of course, that no one knows what has been answered, and all the email gets worked into the individual users' mailboxes, so I'm thinking that a public folder, accessible only by one department, would be the answer. I'm not sure what the best way to set this up would be, or how to move/convert the current distribution list over to a public folder. This is a very critical email address for the company, so I'm trying to be sure that there is zero downtime for it. What would be the best way to go about creating a public folder and converting/forwarding/moving/(whatever) the current email address over to that folder?

    Read the article
