Search Results

Search found 27912 results on 1117 pages for 'computer security'.


  • Is it okay to use an SSH key with an empty passphrase?

    - by mozillalives
    When I first learned how to make SSH keys, the tutorials I read all stated that a good passphrase should be chosen. But recently, when setting up a daemon process that needs to ssh to another machine, I discovered that the only way (it seems) to have a key I don't need to unlock at every boot is to create one with an empty passphrase. So my question is: what are the concerns with using a key that has no passphrase?
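
    A common mitigation (my sketch, not something from the question) is to pair the passphrase-less key with restrictions in the remote authorized_keys file, so that even a stolen key can only run one command from one address; the paths, IP, and command below are hypothetical:

      # generate a passphrase-less key used only by the daemon
      ssh-keygen -t rsa -N "" -f /home/daemon/.ssh/daemon_key

      # on the remote machine, prefix the key's authorized_keys entry with options
      # that pin the source address and force a single command:
      from="10.0.0.5",command="/usr/local/bin/sync.sh",no-pty,no-port-forwarding ssh-rsa AAAA... daemon@host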


  • SFTP: How to keep data out of the DMZ

    - by ChronoFish
    We are investigating solutions to the following problem: we have external (Internet) users who need access to sensitive information. We could offer it to them via SFTP, which would give us a secure transport method. However, we don't want to keep the data on the server, as it would then reside in the DMZ. Is there an SFTP server with "copy on access" behavior, such that if the box in the DMZ were compromised, no actual data would reside on it? I am envisioning an SFTP proxy or SFTP passthrough. Does such a product currently exist?
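
    One improvised passthrough (an assumption of mine, not a known product) is to have the DMZ box mount the internal data over SSH, so nothing rests on the DMZ disk itself; all hostnames and paths here are illustrative:

      # on the DMZ host: mount the internal share read-only via sshfs
      sshfs transfer@internal-host:/export/sensitive /srv/sftp/data -o ro,reconnect,allow_other

      # then serve it to external users via a chrooted, sftp-only account
      # in /etc/ssh/sshd_config:
      #   Match User extuser
      #       ChrootDirectory /srv/sftp
      #       ForceCommand internal-sftp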


  • Deny directory browsing in a Proftpd / Ubuntu Installation

    - by skylarking
    I used this guide to set up a ProFTPD installation on an Ubuntu 8.04 server. It works well, but the generic user (userftp) can run ls and change to any directory, browsing freely from the root / upwards. I added the line /bin/false to /etc/shells in the hope that this would prevent it, but it did not. I really only want the userftp account to be able to upload to the generic /home/FTP-Shared directory and to do nothing else on the server. How is this accomplished? This is a headless Ubuntu box, and I am using the CLI only; no GUI admin tools.
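
    The usual ProFTPD answer (a sketch against a stock proftpd.conf, not taken from the guide mentioned above) is to chroot FTP logins with DefaultRoot, so userftp lands in the shared directory and cannot climb out of it:

      # /etc/proftpd/proftpd.conf (excerpt)
      DefaultRoot /home/FTP-Shared    # jail FTP sessions inside the shared directory
      RequireValidShell off           # keep logins working with /bin/false as the shell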


  • Protect Windows VPN from Unauthorized Users

    - by kobaltz
    I have a VPN connection that I use while away from home to remote into my home network. I would use a zero-config solution like Hamachi, but I need access from my mobile device. Therefore, I have my Windows Home Server acting as the VPN server and accepting incoming connections. Both the username and password are strong. However, I'm worried about brute-force attacks against my network. Is there something else I should do to protect against unauthorized access attempts? I'm familiar with Linux's fail2ban, but wasn't sure if something similar exists for Windows.
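
    Windows has no direct fail2ban analogue built in, but the account-lockout policy at least throttles brute forcing (a minimal sketch; the thresholds are arbitrary examples, and locking out your only admin account is a real risk):

      rem lock an account for 30 minutes after 5 failed passwords
      net accounts /lockoutthreshold:5 /lockoutduration:30 /lockoutwindow:30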


  • Hide/Replace Nginx Location Header?

    - by Steven Ou
    I am trying to pass a PCI compliance test, and I'm getting a single "high risk vulnerability". The problem is described as: "Information on the machine on which a web server is located is sometimes included in the header of a web page. Under certain circumstances that information may include local information from behind a firewall or proxy server, such as the local IP address." It looks like Nginx is responding with:

      Service: https
      Received: HTTP/1.1 302 Found
      Cache-Control: no-cache
      Content-Type: text/html; charset=utf-8
      Location: http://ip-10-194-73-254/
      Server: nginx/1.0.4 + Phusion Passenger 3.0.7 (mod_rails/mod_rack)
      Status: 302
      X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.7
      X-Runtime: 0
      Content-Length: 90
      Connection: Close

      <html><body>You are being <a href="http://ip-10-194-73-254/">redirected</a>.</body></html>

    I'm no expert, so please correct me if I'm wrong, but from what I gathered, the problem is that the Location header is returning http://ip-10-194-73-254/, which is a private address, when it should be returning our domain name (which is ravn.com). So I'm guessing I need to either hide or replace the Location header somehow? I'm a programmer and not a server admin, so I have no idea what to do. Any help would be greatly appreciated! Also, might I add that we're running more than one server, so the configuration would need to be transferable to any server with any private address.
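
    A hedged guess at a fix: give each server block an explicit canonical server_name and have nginx build redirects from it rather than from the machine's own hostname. Only ravn.com comes from the question; the rest is illustrative, and note this helps only for redirects nginx itself generates; if the Rails app builds the URL, the Host header the app receives is what needs correcting:

      server {
          listen 443;
          server_name ravn.com;          # canonical public name
          server_name_in_redirect on;    # use server_name, not the local address, in Location headers
          # ... the rest of the existing vhost (passenger settings etc.) unchanged
      }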


  • How can I audit a Linux filesystem for files which have been changed or added within a specific time

    - by Bcos
    We are a website design/hosting company running several sites on a Linux server using Joomla 1.5.14, and recently someone was able to exploit a vulnerability in the RW Cards component to write arbitrary files and modify existing files on our filesystem, enabling them to do some nasty things to our customers' sites. We have removed the vulnerable modules from all sites but are still seeing some problems. We suspect that they still have some scripts installed, and we need a way to audit anything that has been changed or added in the last 10 days. Is there a command or script we can run to do this?
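
    find's time predicates are the usual starting point (a sketch; adjust the mount points and the day count). Checking ctime as well as mtime matters, because a modification timestamp is trivial to forge with touch, while the inode-change time is harder to hide:

      # files modified within the last 10 days
      find / -xdev -type f -mtime -10

      # files whose inodes changed (created, chmod'ed, renamed) within the last 10 days
      find / -xdev -type f -ctime -10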


  • Encrypt shared files on an AD domain

    - by Walter
    Can I encrypt shared files on a Windows server and allow only authenticated domain users to access them? The scenario is as follows: I have a software development company, and I would like to protect my source code from being copied by my programmers. One problem is that some programmers use their own laptops for developing the company's software. In this scenario it's impossible to prevent developers from copying the source code to their laptops. So I thought about the following solution, but I don't know if it's possible to implement: encrypt the source code so that it is accessible (decrypted) only while developers are logged into the AD domain, i.e. if they are not logged into the AD domain, the source code would remain encrypted and be useless. Can this be implemented? What technology should be used?


  • Managing service passwords with Puppet

    - by Jeff Ferland
    I'm setting up my Bacula configuration in Puppet. One thing I want to do is ensure that each password field is different. My current thought is to hash the hostname with a secret value, which would ensure that each file daemon has a unique password, and that password can be written to both the director configuration and the file server. I definitely don't want to use one universal password, as that would permit anybody who might compromise one machine to get access to any machine through Bacula. Is there another way to do this other than using a hash function to generate the passwords? Clarification: this is NOT about user accounts for services. This is about the authentication tokens (to use another term) in the client/server files. Example snippet:

      Director {                       # define myself
        Name = <%= hostname %>-dir
        QueryFile = "/etc/bacula/scripts/query.sql"
        WorkingDirectory = "/var/lib/bacula"
        PidDirectory = "/var/run/bacula"
        Maximum Concurrent Jobs = 3
        Password = "<%= somePasswordFunction %>"   # Console password
        Messages = Daemon
      }
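
    For what it's worth, the hash idea itself can live directly in the ERB template as an HMAC of the hostname (a sketch; @secret stands for a private value you would keep out of any shared manifests):

      <% require 'openssl' -%>
      Password = "<%= OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha256'), @secret, @hostname) %>"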


  • Windows 2003 Server - Logon Failure error message in Event Viewer

    - by user45192
    Hi guys, I am receiving a lot of events logged in the Event Viewer with this message, and I notice it is always the same user ID that encounters the error. The user ID is used by an application to access the database; however, this account does not exist on this server. How do I trace the service/program using this user ID that causes these error messages?

      Reason=Unknown user name or bad password
      User Name=
      Domain=
      Logon Type=3
      Logon Process=NtLmSsp
      Authentication Package=NTLM
      Workstation Name=
      Caller User Name=-
      Caller Domain=-
      Caller Logon ID=-
      Caller Process ID=-
      Transited Services=-
      Source Network Address=-
      Source Port=-
      User=SYSTEM
      ComputerName=


  • certutil -ping fails with 30 seconds timeout - what to do?

    - by mark
    Dear ladies and sirs. The certificate store on my Win7 box is constantly hanging. Observe:

      C:\>1.cmd

      C:\>certutil -? | findstr /i ping
        -ping        -- Ping Active Directory Certificate Services Request interface
        -pingadmin   -- Ping Active Directory Certificate Services Admin interface

      C:\>set PROMPT=$P($t)$G

      C:\(13:04:28.57)>certutil -ping
      CertUtil: -ping command FAILED: 0x80070002 (WIN32: 2)
      CertUtil: The system cannot find the file specified.

      C:\(13:04:58.68)>certutil -pingadmin
      CertUtil: -pingadmin command FAILED: 0x80070002 (WIN32: 2)
      CertUtil: The system cannot find the file specified.

      C:\(13:05:28.79)>set PROMPT=$P$G

    Explanations: the first command shows that certutil has -ping and -pingadmin parameters; trying either of them fails after a 30 second timeout (the current time is visible in the prompt). This is a serious problem; it breaks all the secure communication in my app. If anyone knows how this can be fixed, please share. Thanks. P.S. 1.cmd is simply a batch file containing these commands:

      certutil -? | findstr /i ping
      set PROMPT=$P($t)$G
      certutil -ping
      certutil -pingadmin
      set PROMPT=$P$G


  • Encrypt shared files on an AD domain

    - by Walter
    Can I encrypt shared files on a Windows server and allow only authenticated domain users to access them? The scenario is as follows: I have a software development company, and I would like to protect my source code from being copied by my programmers. One problem is that some programmers use their own laptops for developing the company's software. In this scenario it's impossible to prevent developers from copying the source code to their laptops. So I thought about the following solution, but I don't know if it's possible to implement: encrypt the source code so that it is accessible (decrypted) only while developers are logged into the AD domain, i.e. if they are not logged into the AD domain, the source code would remain encrypted and be useless. How can this be implemented using EFS?
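
    If EFS is the specific mechanism, encryption is switched on per directory with the cipher utility (a sketch; the path is hypothetical). Note the caveat that EFS decrypts transparently for any logged-in user who holds the key, so a developer who can read the source can still copy the plaintext:

      rem encrypt a source tree and everything later added to it
      cipher /e /s:C:\src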


  • How to Suppress Repetition of Warnings That an Application Was Downloaded From the Internet on Mac OS X?

    - by Jonathan Leffler
    On Mac OS X, when I run Firefox (and Thunderbird, and ...) which I downloaded from Mozilla, the OS pops up a warning that the file was downloaded over the internet, giving the date on which it was downloaded. I have no problem with that warning on the first time I use a downloaded application - but the repeated warnings are a nuisance. Is there a way to suppress that dialogue box? Is there a way to avoid it appearing in the first place? (Some applications I download from a corporate intranet - those don't produce the equivalent warning; any idea what the criteria are for when the warning is generated?)
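
    The trigger is the com.apple.quarantine extended attribute that browsers attach to downloads (intranet copies that arrive without it would explain why they never warn); one hedged workaround is stripping the attribute from the application bundle:

      # remove the quarantine flag recursively from the app bundle (path is an example)
      xattr -r -d com.apple.quarantine /Applications/Firefox.app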


  • Setting permissions on user accounts

    - by Ron Porter
    We would like to lock a couple of accounts to prevent even domain admins from resetting the password without already knowing the current password. From what I can see in the permission sets, this looks possible. Everything I've found on the subject recommends against altering the default permissions, but doesn't go into detail about why. Assuming that domain admins retain the ability to reset passwords without knowing current passwords, is it reasonable to prevent password resets on the domain admin account and maybe a couple of others? If not, why not?
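
    The permission involved is the "Reset Password" control-access right, which dsacls can deny on an individual account (a hedged sketch; the DN and domain are placeholders, and a determined Domain Admin can still remove the deny ACE, so this is a speed bump rather than a lock):

      rem deny Domain Admins the Reset Password extended right on one account
      dsacls "CN=Build Service,CN=Users,DC=example,DC=com" /D "EXAMPLE\Domain Admins:CA;Reset Password"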


  • Why are some recovery tools still able to find deleted files after I purge the Recycle Bin, defrag the disk, and zero-fill the free space?

    - by Ivan
    As far as I understand, when I delete a file (without using the Recycle Bin), its record is removed from the file system table of contents (FAT/MFT/etc.), but the contents of the disk sectors which were occupied by the file remain intact until those sectors are reused to write something else. When I use some sort of erased-file recovery tool, it reads those sectors directly and tries to build up the original file. In this case, what I can't understand is why recovery tools are still able to find deleted files (with a reduced chance of rebuilding them, though) after I defragment the drive and overwrite all the free space with zeros. Can you explain this? I thought zero-overwritten deleted files could only be found by means of special forensic-lab magnetic scan hardware, and that those complex wiping algorithms (overwriting free space multiple times with random and non-random patterns) only make sense to prevent such a physical scan from succeeding, but practically it seems that a plain zero-fill is not enough to wipe all traces of deleted files. How can this be?


  • How to secure an Internet-facing Elastic Search implementation in a shared hosting environment?

    - by casperOne
    (Originally asked on StackOverflow, and recommended that I move it here.) I've been going over the documentation for Elastic Search; I'm a big fan, and I'd like to use it to handle the search for my ASP.NET MVC app. That introduces a few interesting twists, however. If the ASP.NET MVC application were on a dedicated machine, it would be simple to spool up an instance of Elastic Search and use the TCP transport to connect locally. However, I'm not on a dedicated machine for the ASP.NET MVC application, nor does it look like I'll move to one anytime soon. That leaves hosting Elastic Search on another machine (in the *NIX world), and I would probably go with shared hosting there. One of the biggest things lacking from Elastic Search, however, is that it doesn't support HTTPS and basic authentication out of the box. If it did, then this question wouldn't exist; I'd simply host it somewhere and make sure to have an incredibly secure password and HTTPS enabled (possibly with a self-signed certificate). But that's not the case. Given that, what is a good way to expose Elastic Search over the Internet in a secure way? Note, I'm looking for something that, hopefully, will not require writing code to provide shims for the methods that I want (in other words, writing forwarders).
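
    The usual stopgap (an assumption on my part, not something from the Elastic Search documentation) is to bind ES to localhost and put a reverse proxy in front of it that terminates HTTPS and enforces basic auth; everything below is illustrative:

      server {
          listen 443 ssl;
          server_name search.example.com;
          ssl_certificate     /etc/nginx/ssl/search.crt;   # self-signed is fine here
          ssl_certificate_key /etc/nginx/ssl/search.key;

          location / {
              auth_basic           "elasticsearch";
              auth_basic_user_file /etc/nginx/htpasswd;    # created with htpasswd
              proxy_pass           http://127.0.0.1:9200;  # ES bound to localhost only
          }
      }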


  • Web Server Users - Best Practice

    - by Toby
    I was wondering what is considered best practice when several developers/administrators require access to the same web server. Should there be one non-root user, with a secure username and password unique to the web server, which everyone logs in as, or should there be a username for each person? I am leaning towards a username for each person, to aid logging etc., but then does the same user keep the same credentials over several servers, or should at least their password change depending on the server they are on? Should any non-root user of the system be added to the sudoers file, or is it best practice to leave everyone off it and only let root perform certain tasks? Any help would be greatly appreciated.
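
    A minimal sketch of the per-person pattern (all names are placeholders): one account per administrator for an audit trail, with privilege escalation going through sudo so the root password itself is never shared:

      # one personal account per admin, added to the distribution's sudo group
      adduser alice
      usermod -aG sudo alice      # the group is 'wheel' on RedHat-family systems

      # and in /etc/ssh/sshd_config, disable direct root logins entirely:
      #   PermitRootLogin no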



  • Trouble getting started with the STEALTH monitoring package

    - by dlanced
    Is anyone here familiar with the Linux-based STEALTH package (for monitoring the filesystem integrity of client systems)? I'm trying to get started with a very simple configuration, but I'm running into trouble (this is running under Ubuntu 14.04):

      Config line `USE BASE/root/stealth/10.0.0.79' invalid
      STEALTH (2.11.02) started at Fri, 30 May 2014 15:25:00 +0000
      Program terminated due to non-zero exit value for
          -type f -exec /usr/bin/sha1sum {} \; (EOC Fri May 30 15:25:00 2014 127)

    Stealth is creating a binary tmp file in the Stealth server root and generating a "report" file in the start directory, but not much else. Regarding the "USE BASE...invalid" error, and just to be sure, I manually created the directories in /root, but it didn't help. And, by the way, I am running stealth with sudo. Everything seems to be configured correctly: I'm able to ssh into root@client from the stealth machine without a password. Here's my "policy" file (I've removed the email directives just for simplicity):

      DEFINE SSHCMD /usr/bin/ssh root@10.0.0.79 -T -q exec /bin/bash --noprofile
      DEFINE EXECSHA1 -xdev -perm +u+s,g+s ( -user root -or -group root ) \
          -type f -exec /usr/bin/sha1sum {} \;
      USE BASE/root/stealth/10.0.0.79
      USE SSH ${SSHCMD}
      USE DD /bin/dd
      USE DIFF /usr/bin/diff
      USE PIDFILE /var/run/stealth-
      USE REPORT report
      USE SH /bin/sh
      GET /usr/bin/sha1sum /root/tmp
      LABEL \nchecking the client's /usr/bin/find program
      CHECK LOG = remote/binfind /usr/bin/sha1sum /usr/bin/find
      LABEL \nsuid/sgid/executable files uid or gid root on the / partition
      CHECK LOG = remote/setuidgid /usr/bin/find / ${EXECSHA1}
      LABEL \nconfiguration files under /etc
      CHECK LOG = remote/etcfiles \
          /usr/bin/find /etc -type f -not -perm /6111 \
          -not -regex "/etc/(adjtime\|mtab)" \
          -exec /usr/bin/sha1sum {} \;

    Any ideas? Thanks.


  • Could it be that "chkrootkit" just doesn't like .hmac, .packlist, and .relocation-tag files?

    - by Danijel
    I just cleaned up my hacked CentOS server (hacked due to not having updated it since version 5.3). But "chkrootkit" still says this:

      Possible t0rn v8 \(or variation\) rootkit installed
      /usr/lib/.libfipscheck.so.1.1.0.hmac
      /usr/lib/.libgcrypt.so.11.hmac
      /usr/lib/.libfipscheck.so.1.hmac
      /lib/.libcrypto.so.0.9.8e.hmac
      /lib/.libssl.so.0.9.8e.hmac
      /lib/.libssl.so.6.hmac
      /lib/.libcrypto.so.6.hmac
      /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/auto/Text/Iconv/.packlist
      /usr/lib/perl5/5.8.8/i386-linux-thread-multi/.packlist
      /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi/auto/HTML-Tree/.packlist
      /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi/auto/Font/AFM/.packlist
      /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi/auto/MLDBM/Sync/.packlist
      /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi/auto/MLDBM/.packlist
      /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi/auto/FreezeThaw/.packlist
      /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi/auto/Apache/ASP/.packlist
      /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi/auto/HTML-Format/.packlist
      /usr/lib/gtk-2.0/immodules/.relocation-tag
      /usr/lib/python2.4/plat-linux2/.relocation-tag
      /usr/lib/python2.4/distutils/.relocation-tag
      /usr/lib/python2.4/config/.relocation-tag

    Could it be that "chkrootkit" just doesn't like .hmac, .packlist, and .relocation-tag files? Or are these really still infected?
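
    A hedged sanity check before assuming infection: ask the RPM database which package, if any, owns each flagged path, and then verify that package's files (the .hmac files, for instance, normally ship with the FIPS-enabled openssl packages):

      # which package owns a flagged file?
      pkg=$(rpm -qf /lib/.libcrypto.so.0.9.8e.hmac)
      echo "$pkg"

      # verify sizes, digests, and permissions of that package's files
      rpm -V "$pkg"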


  • Making Puppet manifests/modules available to a wide audience

    - by Kyle Smith
    Our team rolled puppet out to our systems over the last six months. We're managing all sorts of resources, and some of them have sensitive data (database passwords for automated backups, license keys for proprietary software, etc.). Other teams want to get involved in the development of (or at least be able to see) our modules and manifests. What have other people done to continue to have secure data moving through Puppet, while sharing the modules and manifests with a larger audience?
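
    One pattern from that era (a hedged sketch using Puppet's extlookup; the key name and paths are placeholders) is to leave only a lookup call in the shared manifests and keep the data directory itself out of the shared repository:

      # site.pp: point extlookup at a directory that stays private
      $extlookup_datadir    = "/etc/puppet/private-data"
      $extlookup_precedence = ["%{fqdn}", "common"]

      # shared manifest: only the lookup is visible, never the secret
      $bacula_director_password = extlookup("bacula_director_password")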


  • Can't connect back to the wireless network after the password was changed

    - by 7777
    Family changed the network password and some other network settings after new computers were brought into the house, because apparently they wouldn't work with what we had. Actually an off-site tech remotely changed it, and I have no idea what he did. My laptop detects the network (it shows up under available networks), but whenever I try to connect it says:

      Windows is unable to connect to the selected network. The network may no longer
      be in range. Please refresh the list of available networks, and try to connect again.

    I wish I could give more details, config settings, but frankly I have no idea what I'm looking for. This is XP (also, not a password issue, I know the password, it's just that I have no idea where to enter it, etc.)


  • How to control remote access to Sonicwall VPN beyond passwords?

    - by pghcpa
    I have a SonicWall TZ-210. I want an extremely easy way to limit external remote access to the VPN beyond just a username and password, but I do not wish to buy/deploy an OTP appliance, because that is overkill for my situation. I also do not want to use IPSec, because my remote users are roaming. I want the user to be in physical possession of something, whether that is a pre-configured client with an encrypted key or a certificate (.cer/.pfx) of some sort. SonicWall used to offer "Certificate Services" for authentication, but apparently discontinued that a long time ago. So, what is everyone using in its place? Short of the expensive "Fortune 500" solutions, how do I limit access to the VPN to only those users who have possession of a certificate file, or some other file, or something beyond passwords? Thanks.


  • Is SSL to the proxy good enough?

    - by Josh Smeaton
    We are currently trying to decide on how best to do SSL traffic in our environment. We have an externally facing Apache proxy server that is responsible for directing all traffic into our environment. It is also doing the SSL work for the majority of our servers. There are one or two IIS servers in particular that are doing their own SSL, but they are also behind the proxy. I'm wondering, is SSL to the proxy good enough? It would mean that traffic within our network travels in the clear and is readable, but is that such a big deal?
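
    If the internal leg does turn out to need protection, Apache's mod_proxy can re-encrypt traffic to the backends instead of forwarding plain HTTP (a sketch; the backend hostname is a placeholder):

      # in the proxy vhost: re-encrypt proxied requests to the backend
      SSLProxyEngine on
      ProxyPass        / https://internal-app.example.local/
      ProxyPassReverse / https://internal-app.example.local/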

