Search Results

Search found 11938 results on 478 pages for 'secure boot'.


  • Different approaches to share files over local network

    - by exTyn
    I know that I can use Google to find methods to share files over a local network [1]. But I have never shared files over a local network before, and I want to do it in a good, professional way. This could also make a good community wiki, I think.

    What I am asking is: what are the pros and cons of different methods of sharing files over a local network? In my case, I need to share files between Linux & Win 7, and I want it to be secure (= without access for anyone but me & the people in my room).

    Another question (connected with the above topic) is about playing music over the local network. Let's say I live with 2 other guys in a room, one of us has speakers, and we want to collaborate on creating playlists (e.g. everyone chooses 3 songs to be played). Is it possible? How would I do this?

    I am asking this question on SuperUser because it is connected with hardware & software (networking, connecting computers, software for managing playlists over the network, etc.). I think it is the most appropriate place for such a question (I have considered SO and SF).

    [1] And I have already done so! But I do not have experience in this field (sharing files over a local network), so I am asking about pros and cons.
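    For reference, a minimal sketch of one common option for the Linux & Win 7 case, assuming Samba on the Linux side (the share name, path, and user names are placeholders): a share restricted to named users keeps it off-limits to everyone else on the LAN.

        # /etc/samba/smb.conf -- hypothetical share, restricted to two users
        [roomshare]
            path = /srv/share
            valid users = me, roommate
            read only = no
            browseable = no

        # create Samba passwords for those accounts
        sudo smbpasswd -a me
        sudo smbpasswd -a roommate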

    Read the article

  • Multiple static WAN IP addresses to single LAN subnet

    - by Jessy Houle
    Below is my home network topology. I currently have 5 static IP addresses, 3 of which are in use by 3 routers. These routers in turn subnet internal networks and port forward. I use my SSL VPN appliance to remote home from work or on the road. At this point I can remotely administer my Windows Server. I know the network is set up wrong; I was matching existing hardware the best I knew how. http://storage.jessyhoule.com.s3.amazonaws.com/network_topology.jpg

    OK, this said, here is the problem... One of my websites on my Windows Server now needs to be secure (SSL using port 443). However, I'm already port forwarding port 443 to my VPN appliance. Furthermore, if I'm going to have to reconfigure the network, I would really like to be able to use the SSL VPN to remotely administer all machines.

    I mentioned this to a friend of mine, who said that what I was looking for was a firewall. He explained that a firewall would take in multiple static (WAN) IP addresses and still allow all internal devices to be on the same network. So, basically, I could give my SSL VPN appliance its very own static (WAN) IP address and yet have it on the same internal network (192.168.1.x) as all my other devices.

    The first question is: does this sound right? Secondly, would you suggest anything different? And, finally, what is the cheapest way to do this? I have started down the road of downloading/installing Untangle and Smoothwall to see if they will do the job, hoping they take multiple static (WAN) IP addresses.

    Thank you in advance for your answers. -Jessy Houle
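    What the friend describes is often implemented as one-to-one (1:1) NAT. A minimal sketch with Linux iptables, assuming eth0 is the WAN interface, 203.0.113.10 is the spare static IP, and 192.168.1.50 is the VPN appliance's LAN address (all placeholder values; firewall distributions like Untangle or Smoothwall expose the same idea through their UI):

        # map one public IP to one internal host (1:1 NAT)
        iptables -t nat -A PREROUTING  -d 203.0.113.10 -j DNAT --to-destination 192.168.1.50
        iptables -t nat -A POSTROUTING -s 192.168.1.50 -j SNAT --to-source 203.0.113.10
        # the WAN interface also needs the extra address bound to it
        ip addr add 203.0.113.10/29 dev eth0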

    Read the article

  • Mac OS X Disk Encryption - Automation

    - by jfm429
    I want to set up a Mac Mini server with an external drive that is encrypted. In Finder, I can use the full-disk encryption option. However, for multiple users, this could become tricky. What I want to do is encrypt the external volume, then set things up so that when the machine boots, the disk is unlocked so that all users can access it. Of course permissions need to be maintained, but that goes without saying. What I'm thinking of doing is setting up a root-level launchd script that runs once on boot and unlocks the disk. The encryption keys would probably be stored in root's keychain.

    So here's my list of concerns:

    1. If I store the encryption keys in the system keychain, then the file in /private/var/db/SystemKey could be used to unlock the keychain if an attacker ever gained physical access to the server. This is bad.
    2. If I store the encryption keys in my user keychain, I have to manually run the command with my password. This is undesirable.
    3. If I run a launchd script with my user credentials, it will run under my user account but won't have access to the keychain, defeating the purpose.
    4. If root has a keychain (does it?), then how would it be decrypted? Would it remain locked until the password was entered (like the user keychain), or would it have the same problem as the system keychain, with keys stored on the drive and accessible with physical access?

    Assuming all of the above works, I've found diskutil coreStorage unlockVolume, which seems to be the appropriate command, but where to store the encryption key is the biggest problem. If the system keychain is not secure enough, and user keychains require a password, what's the best option?
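    A minimal sketch of the launchd approach described above, assuming the volume's CoreStorage UUID is known and the passphrase is held in a keychain item (the label com.example.unlockdisk, the script path, and the keychain item name are all hypothetical; the keychain trade-offs listed above still apply to wherever the key actually lives):

        <!-- /Library/LaunchDaemons/com.example.unlockdisk.plist -->
        <?xml version="1.0" encoding="UTF-8"?>
        <plist version="1.0"><dict>
            <key>Label</key><string>com.example.unlockdisk</string>
            <key>RunAtLoad</key><true/>
            <key>ProgramArguments</key>
            <array><string>/usr/local/sbin/unlockdisk.sh</string></array>
        </dict></plist>

        # /usr/local/sbin/unlockdisk.sh
        #!/bin/sh
        # read the passphrase from the invoking user's (root's) keychain
        PASS=$(security find-generic-password -s ExternalDiskKey -w)
        diskutil coreStorage unlockVolume <LV-UUID> -passphrase "$PASS"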

    Read the article

  • PostgreSQL user authentication against PAM

    - by elmuerte
    I am trying to set up authentication via PAM for PostgreSQL 9.3. I already managed to get this working on an Ubuntu 12.04 server, but I am unable to get it working on a CentOS 6 install.

    The relevant pg_hba.conf line:

        host all all 0.0.0.0/0 pam pamservice=postgresql93

    The pam.d/postgresql93 file is the default config shipped with the official PostgreSQL 9.3 package:

        #%PAM-1.0
        auth include password-auth
        account include password-auth

    When a user tries to authenticate, the following is reported in the secure log:

        hostname unix_chkpwd[31807]: check pass; user unknown
        hostname unix_chkpwd[31808]: check pass; user unknown
        hostname unix_chkpwd[31808]: password check failed for user (myuser)
        hostname postgres 10.1.0.1(61459) authentication: pam_unix(postgresql93:auth): authentication failure; logname= uid=26 euid=26 tty= ruser= rhost= user=myuser

    The relevant content of the password-auth config is:

        auth required pam_env.so
        auth sufficient pam_unix.so nullok try_first_pass
        auth requisite pam_succeed_if.so uid >= 500 quiet
        auth required pam_deny.so
        account required pam_unix.so
        account sufficient pam_localuser.so
        account sufficient pam_succeed_if.so uid < 500 quiet
        account required pam_permit.so

    The problem is with pam_unix.so. It is unable to validate the password, and unable to retrieve the user info (when I remove the auth entry for pam_unix.so). The CentOS 6 install is only 5 days old, so it does not have a lot of baggage. unix_chkpwd is suid and has execute rights for everybody, so it should be able to check the shadow file (which has no privileges at all?).
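    One hedged way to test the PAM stack in isolation, assuming the pamtester utility is available (it is a separate package and may need installing): run it as the same unprivileged user the PostgreSQL server runs as, which reproduces the shadow-file access restriction that unix_chkpwd is complaining about.

        # run as the postgres user to mimic the server's privileges
        sudo -u postgres pamtester postgresql93 myuser authenticate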

    Read the article

  • Pros and Cons of a proxy/gateway server

    - by Curtis
    I'm working with a web app that uses two machines: a BSD server and a Windows 2000 server. When someone goes to our website, they are connected to the BSD server, which, using Apache's proxy module, relays the requests & responses between them and the web server on the Windows server. The idea (designed and deployed about 9 years ago) was that having the BSD server be what outside people connect to was more secure than exposing the Windows server running the web app. The BSD server is a bare-bones install with all unnecessary services & applications removed.

    These servers are about to be replaced, and the big question is: is a cut-down, bare-bones server necessary for security in this setup? From my research online I don't see anyone else running a setup like this (I don't see anyone questioning it, at least). If they have a server between the user and the web app server(s), it is caching, compressing, and/or load balancing. Is there anything I'm overlooking by letting people connect directly from the internet** to a Windows 2008 R2 server that's running the web application?

    ** there's a good hardware firewall between us and the internet, with only minimal ports open

    Thank you.
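    For reference, the relaying described above typically comes down to a few mod_proxy directives. A minimal sketch (the internal hostname is a placeholder; requires mod_proxy and mod_proxy_http):

        # httpd.conf on the front (BSD) box
        ProxyRequests Off          # reverse proxy only; never act as an open forward proxy
        ProxyPass        / http://internal-windows-host/
        ProxyPassReverse / http://internal-windows-host/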

    Read the article

  • Private staff network within public network

    - by pianohacker
    I'm the sysadmin at a small public library. Since I got here a few years ago, I've been trying to set up the network in a secure and simple way. Security is a little tricky: the staff and patron networks need to be separated. Even if I further isolated the public wireless, I'd still rather not trust the security of our public computers. However, the two networks also need to communicate; even if I set up enough VMs so they didn't share any servers, they need to use the same two printers at the very least.

    Currently, I'm solving this with some jerry-rigged commodity equipment. The patron network, linked together by switches, has a Windows server connected to it for DNS and DHCP, and a DSL modem for a gateway. Also on the patron network is the WAN side of a Linksys router. This router is the "top" of the staff network, and has the same Windows server connected on a different port, providing DNS and DHCP, and another, faster DSL modem (separate connections are very useful, especially as we heavily depend on some cloud-hosted software).

    tl;dr: We have a public network, and a NATed staff network within it. My question is: is this really the best way to do this? The right equipment would likely make my job easier, but anything with more than four ports and even rudimentary management quickly becomes a heavy hit on our budget.

    (My original question was about an ungodly frustrating DHCP routing issue, but I thought I'd ask whether my network was broken rather than asking about the DHCP problem and being told my network was broken.)

    Read the article

  • Client certificate based encryption

    - by Timo Willemsen
    I have a question about the security of a file on a webserver. I have a file on my webserver which is used by my web application. It's a bitcoin wallet: essentially a file with a private key in it, used to decrypt messages. My web application uses the file because it's used to receive transactions made through the bitcoin network.

    I was looking into ways to secure it. Obviously if someone has root access to the server, they can do the same as my application. However, I need to find a way to encrypt it. I was thinking of something like this, but I have no clue if it would actually work:

    1. Client logs in with some sort of client certificate.
    2. Web application creates a wallet file.
    3. Web application encrypts the file with the client certificate.
    4. If the application wants to access the file, it has to use the client certificate.

    So basically, if someone gets root access to the site, they cannot access the wallet. Is this possible, and does anyone know of an implementation of this? Are there any problems with it? And how safe would it be?
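    A minimal sketch of the encrypt-to-a-certificate idea using OpenSSL's S/MIME mode (file names are placeholders). Note the limitation implied by step 4: the client's private key must still be presented to the server whenever the app needs the wallet, which is exactly the window an attacker with root could exploit.

        # encrypt the wallet to the client's certificate (public key)
        openssl smime -encrypt -aes256 -binary -in wallet.dat -out wallet.enc -outform PEM client-cert.pem

        # decrypt it again -- requires the client's private key
        openssl smime -decrypt -binary -in wallet.enc -inform PEM -out wallet.dat -inkey client-key.pem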

    Read the article

  • How Can I Make Apache Stop Serving ALL Unknown File Types (like .php~)?

    - by user223304
    I am coming from IIS and moving to Apache, and recently found out that Apache by default serves up files with an unknown file extension as plain text. This can be an issue if a user uses certain programs that back up .php files as .php~. The .php~ file then becomes completely readable simply by navigating to it in a browser. To make matters worse, these .php~ files are often considered 'hidden' in the Linux environment, so some users may not even know they exist. Bots have been built around this fact that scour the internet looking for popular file-name backups and extracting potentially secure info from them.

    I already know how to stop serving up .php~ files or any other specific file extension. I also know not to use any editors that would save backup files like this. My question is: how can I stop this default Apache behavior of serving up ANY non-MIME file type at all? I just don't like this behavior and would like to stop it. I don't want it serving up .aspx~, .html~, .bob, .carl, no extension, or anything else that is not a real MIME type.

    I know that I can probably go and use a directive to first deny access to all file types, then add the ones I want to serve out one by one. But I'm wondering if there's an easier/quicker way. Thanks for any help.
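    A minimal sketch of the deny-all-then-whitelist approach mentioned above (Apache 2.4 syntax; the extension list is an example, not exhaustive, and Apache 2.2 would use Order/Deny/Allow instead of Require):

        # deny everything by default...
        <FilesMatch ".*">
            Require all denied
        </FilesMatch>
        # ...then re-allow only known extensions
        <FilesMatch "\.(php|html|css|js|png|jpe?g|gif)$">
            Require all granted
        </FilesMatch>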

    Read the article

  • Configure tomcat behind loadbalancer to respond on HTTP and HTTPS

    - by user253530
    I have 2 Tomcat machines behind a load balancer on Amazon EC2. Until now the load balancer was configured to respond only on HTTPS, so in order to access our services you would go to https://url. Tomcat was configured to listen on 8080, but the connector had additional params telling Tomcat that it is behind a proxy and that it should respond on HTTPS 443. The connector looks like this:

        <Connector scheme="https" secure="true" proxyPort="443" proxyHost="my.domain.name"
                   port="8080" protocol="HTTP/1.1" connectionTimeout="20000"
                   redirectPort="8443" useBodyEncodingForURI="true" URIEncoding="UTF-8" />

    What I would like to do is open port 80 on the load balancer and basically allow traffic on both HTTP and HTTPS. I've configured the load balancer to redirect all HTTP traffic to the Tomcat machines on port 8088. I was thinking I could define a new connector so that all HTTPS traffic goes to 8080 and HTTP to 8088. Unfortunately I did not succeed. Here is my connector:

        <Connector port="8088" protocol="HTTP/1.1" connectionTimeout="20000"
                   redirectPort="8443" useBodyEncodingForURI="true" URIEncoding="UTF-8" />

    Am I missing something? Thanks
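    One thing the second connector lacks, compared with the first, is the proxy metadata. A hedged sketch of an HTTP-side connector that mirrors the HTTPS one (proxyHost is the same placeholder as above; untested against this exact setup):

        <!-- plain-HTTP connector: tell Tomcat it sits behind the balancer on port 80 -->
        <Connector scheme="http" proxyPort="80" proxyHost="my.domain.name"
                   port="8088" protocol="HTTP/1.1" connectionTimeout="20000"
                   redirectPort="8443" useBodyEncodingForURI="true" URIEncoding="UTF-8" />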

    Read the article

  • How private is the Opera Turbo feature of Opera?

    - by Marcus V
    If I'm using Opera with the Opera Turbo feature turned on (always on, not set to "automatic"), can anyone see what sites I'm visiting (except Opera, of course...)? Opera Turbo uses a proxy server, so it should work that way, but as a not very technical person I'm not sure.

    Why do I want this? Well, nowadays, at least in my country, more and more (legal) open Wi-Fi connections are available. In those environments I like to have more privacy protection. I don't mind if they can see my IP address; I just want to hide as much as I can of what I am doing.

    BTW: I don't care that they can see the data transferred; it doesn't have to be that secret. I only want to hide the requested website links. BTW: I know that Opera Turbo only works with non-secure websites (HTTP), but that's fine for me; I only want it to work with those sites. BTW: I don't need this for illegal purposes; I only want it for privacy reasons.

    Read the article

  • lighttpd with multiple IPs, each with a UCC certificate and many hostnames

    - by Dave
    I'd like to get lighttpd working with UCC certificates, but I can't seem to figure out the correct syntax. Essentially, for each IP address, I have one UCC certificate and a bunch of hostnames.

        $SERVER["socket"] == "10.0.0.1:443" {
            ssl.engine = "enable"
            ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
            ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"
            $HTTP["host"] =~ "mywebsite.com" {
                server.document-root = "/var/www/mywebsite.com/htdocs"
            }
        }

    The above code works fine for one hostname, but as soon as I try to set up another hostname (note the same SSL cert):

        $SERVER["socket"] == "10.0.0.1:443" {
            ssl.engine = "enable"
            ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
            ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"
            $HTTP["host"] =~ "anotherwebsite.com" {
                server.document-root = "/var/www/anotherwebsite.com/htdocs"
            }
        }

    ...I get this error:

        Duplicate config variable in conditional 6 global/SERVERsocket==10.0.0.1:443: ssl.engine

    Is there any way I can put in a conditional so that ssl.engine is enabled only if it is not already enabled? Or do I have to put all my $HTTP["host"]s inside the same $SERVER["socket"] (which will make config file management more difficult for me), or is there some entirely different way to do it? This has to be repeated for multiple IPs too (so I'll have a bunch of $SERVER["socket"] == "10.0.0.2:443" etc.), each with one UCC cert and many hostnames.

    Am I going about this the wrong way entirely? My goal is to conserve IP addresses when I have many websites that are related and can share an SSL certificate, but still need their own SSL-accessible version from the appropriate hostname (instead of a single secure.mywebsite.com).
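    For reference, a hedged sketch of the single-socket layout mentioned above: one $SERVER["socket"] block per IP, with the SSL settings declared once and one host conditional per site (paths reuse the examples from the question):

        $SERVER["socket"] == "10.0.0.1:443" {
            ssl.engine  = "enable"
            ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
            ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"

            $HTTP["host"] =~ "mywebsite\.com" {
                server.document-root = "/var/www/mywebsite.com/htdocs"
            }
            $HTTP["host"] =~ "anotherwebsite\.com" {
                server.document-root = "/var/www/anotherwebsite.com/htdocs"
            }
        }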

    Read the article

  • Finding custom printer-specific options on *nix

    - by enbuyukfener
    Hey all, I have a printer set up using CUPS (FujiXerox Document Center 1100); it is named DC1100. There are capabilities of the printer that are not shown as options of the printer as listed by:

        lpoptions -l -d DC1100

    The output is below:

        PrintoutMode/Printout Mode: Draft *Normal High Photo
        InputSlot/Media Source: Upper Lower MultiPurpose LargeCapacity Manual *Standard
        PageSize/Page Size: *Letter A4 C5 C6 COM10 DL Executive Legal Monarch Statement
        PageRegion/PageRegion: Letter A4 C5 C6 COM10 DL Executive Legal Monarch Statement
        STP_Brightness/Brightness: 0.00 0.02 0.04 [snip] 2.00
        STP_Contrast/Contrast: 0.00 0.05 0.10 0.15 [snip] 4.00
        STP_ColorCorrection/Color Correction: Accurate Bright Density [snip] Uncorrected
        STP_DitherAlgorithm/Dither Algorithm: Adaptive EvenTone Fast [snip] VeryFast
        STP_EnableDensity/Density Enable: *Disabled Enabled
        STP_Density/Density Value: 0.1 0.2 0.3 0.4 [snip] 8.0
        STP_EnableGamma/Composite Gamma Enable: *Disabled Enabled
        STP_Gamma/Composite Gamma Value: 0.10 0.15 0.20 0.25 [snip] 4.00
        STP_LinearContrast/Linear Contrast Adjustment: *False True
        STP_Duplex/Double-Sided Printing: DuplexNoTumble DuplexTumble *None
        Resolution/Rendering Resolution: *FromPrintoutMode 150x150dpi 300x300dpi 600x600dpi
        OutputType/Output Type: *FromPrintoutMode BlackAndWhite Grayscale
        STP_ImageType/Image Type: *FromPrintoutMode Photo Graphics LineArt None Text TextGraphics
        STP_Resolution/Resolution: *FromPrintoutMode 150dpi 300dpi 600dpi

    I am particularly looking for options for:

    - "secure print" (possibly by setting a mode and setting a username)
    - stapling
    - hole punching

    Perhaps I need a vendor-specific driver/PPD file? If so, any pointers, as I have no idea where to look for one. I haven't been able to find one on the official site or on sites such as http://www.openprinting.org
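    If a vendor PPD does turn up, a hedged sketch of swapping it into the existing queue and re-listing what it exposes (the PPD path is a placeholder; finisher options like stapling usually only appear once the PPD declares them):

        lpadmin -p DC1100 -P /path/to/DC1100-vendor.ppd   # install the vendor PPD for this queue
        lpoptions -l -d DC1100                            # re-list options to see what the new PPD adds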

    Read the article

  • Cannot connect to remote mail server for sending emails in ASP.NET

    - by Dave
    I want to migrate a web application from a Windows Server 2003 to a Windows Server 2008 R2. All works fine except sending emails from the application. If I configure the application to use the SMTP server on "localhost" it works, but when changing it to the "real" host name (e.g. mail.example.org) no mail is sent. The error message says that the remote server needs a secure connection or SMTP authentication. But since it works when using "localhost" instead of the host name, I doubt that this is the problem. It's also unlikely to be a problem with the mail server; I tried it with another one as well. So to me it seems like the firewall is blocking the outgoing connection to the mail server. I tried to open port 25, but it still did not work. Maybe I just did it the wrong way.

    Update: To clarify my setup:

    - I have a Windows Server 2008 R2 with hMailServer installed (set up for some of the hosted domains).
    - For the website I'm talking about, I need to use an external mail server (totally different hosting provider).

    Apparently I was a bit off track. It seems like it works when connecting to the local mail server with either the host name "localhost" or "mail.somedomain.com" (where somedomain.com is set up in my mail server). But when using the host name of the external mail server ("mail.externaldomain.com"), it seems like it tries to connect to the local server again, although this domain is not set up in the mail server.

    Thanks to Evan Anderson for the tip to use telnet - why had I not thought of that myself? :-)

    Note: the website www.externaldomain.com is hosted on my server, but the DNS entries are maintained by the other hosting provider. "externaldomain.com" is the only entry which points to my server; all other records (MX, subdomains) point to the other server. So I think the question is now: how do I bring my server to connect to the external mail server? Do I have to configure this in my mail server, or is it a Windows Server thing?
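    A quick way to see where the name actually resolves and whether port 25 is reachable (a hedged diagnostic, run from the 2008 R2 box; the host name is the one from the question):

        nslookup mail.externaldomain.com     # does it resolve to the external IP, or back to this server?
        telnet mail.externaldomain.com 25    # a 220 banner means the remote SMTP server is reachable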

    Read the article

  • configs for several sites in apache with ssl

    - by elCapitano
    I need to secure two different sites in Apache. One of them should only be a proxy for a different server, which is running on port 8069. Now one (which is natively included in Apache) runs with SSL:

        <VirtualHost *:443>
            ServerName 192.168.1.20
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
            DocumentRoot /var/www/cloud
            ServerPath /cloud/
            #CustomLog /var/www/logs/ssl-access_log combined
            #ErrorLog /var/www/logs/ssl-error_log
        </VirtualHost>

    The other one is not running and not even registered. When I try to access it, I get an exception (ssl_error_rx_record_too_long):

        <VirtualHost *:443>
            ServerName 192.168.1.20
            ServerPath /erp/
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyVia On
            ProxyPass / http://127.0.0.1:8069/
            ProxyPassReverse / http://127.0.0.1:8069
            RewriteEngine on
            RewriteRule ^/(.*) http://127.0.0.1:8069/$1 [P]
            RequestHeader set "X-Forwarded-Proto" "https"
            SetEnv proxy-nokeepalive 1
        </VirtualHost>

    My wish is the following configuration:

        192.168.1.20        ->> unsecured local path to website
        192.168.1.20/cloud/ ->> secured local document path for cloud
        192.168.1.20/erp/   ->> secured proxy on port 80 for http://192.168.1.20:8069

    How is this possible? Is this even possible? Perhaps cloud.192.168.1.20 and erp.192.168.1.20 would be better?! Thank you
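    Since both blocks claim the same IP and port, one hedged alternative is a single :443 vhost that serves /cloud from disk and proxies only /erp (a sketch reusing the paths from the question, not tested against this setup):

        <VirtualHost *:443>
            ServerName 192.168.1.20
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key

            # /cloud is served locally
            Alias /cloud /var/www/cloud

            # /erp is reverse-proxied to the app on 8069
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        /erp/ http://127.0.0.1:8069/
            ProxyPassReverse /erp/ http://127.0.0.1:8069/
            RequestHeader set X-Forwarded-Proto "https"
        </VirtualHost>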

    Read the article

  • how to setup .ssh directory inside an encrypted volume on Mac OSX and still have public key logins?

    - by Vitaly Kushner
    I have my .ssh directory inside an encrypted sparse image, i.e. ~/.ssh is a symlink to /Volumes/VolumeName/.ssh. The problem is that when I try to ssh into that machine using a public key, I see the following error message in /var/log/secure.log:

        Authentication refused: bad ownership or modes for directory /Volumes

    Any way to solve this in a clean way?

    Update: The permissions on ~/.ssh and authorized_keys are right:

        > ls -ld ~
        drwxr-xr-x+ 77 vitaly staff 2618 Mar 16 08:22 /Users/vitaly/
        > ls -l ~/.ssh
        lrwxr-xr-x 1 vitaly staff 22 Mar 15 23:48 /Users/vitaly/.ssh@ -> /Volumes/Astrails/.ssh
        > ls -ld /Volumes/Astrails/.ssh
        drwx------ 3 vitaly staff 646 Mar 15 23:46 /Volumes/Astrails/.ssh/
        > ls -ld /Volumes/Astrails/
        drwx--x--x@ 18 vitaly staff 1360 Jan 12 22:05 /Volumes/Astrails//
        > ls -ld /Volumes/
        drwxrwxrwt@ 5 root admin 170 Mar 15 20:38 /Volumes//

    The error message says the problem is with /Volumes, but I don't see the problem. Yes, it is o+w, but it is also +t, which should be OK - but apparently isn't. The problem is I can't change the permissions on /Volumes (or rather shouldn't), but I do want public key login to work.

    First I thought of mounting the image somewhere other than /Volumes, but it is automounted on login by standard OS X mounting. I asked about it here: How to change disk image's default mount directory on osx. The only answer I got is "you can't" ;) I could hack my way around by writing some shell script that manually mounts the volume at a non-standard location, but that would be a gross hack. I'm still looking for a cleaner way to do what I need.
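    Two hedged sshd_config workarounds, since the check that fails is sshd's StrictModes walk up the authorized_keys path: relax the check, or point sshd at keys stored outside the encrypted volume (both are trade-offs, not endorsements, and the second means the public keys are not protected by the image, which may be acceptable since they are not secrets):

        # /etc/sshd_config (or /etc/ssh/sshd_config, depending on OS X version)

        # option 1: skip the ownership/mode checks entirely (removes a safety net)
        StrictModes no

        # option 2: keep authorized_keys on the system volume, outside the image
        AuthorizedKeysFile /etc/ssh/authorized_keys/%u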

    Read the article

  • How do I set up a shared directory on Linux?

    - by JR Lawhorne
    I have a Linux server I want to use to share files between users in my company. Users will access the machine with sftp or secure shell. Here is what I have:

        cd /home
        ls -l
        drwxrwsr-x 5 userA staff 4096 Jul 22 15:00 shared
        (other listings omitted)

    I want all users in the staff group to be able to create, modify, and delete any file and/or directory in the shared folder. I don't want anyone else to have access to the folder at all. I have:

    1. Added the users to the staff group by modifying /etc/group and running grpconv to update /etc/gshadow
    2. Run chown -R userA.staff /home/shared
    3. Run chmod -R 2775 /home/shared

    Now users in the staff group can create new files, but they aren't allowed to open the existing files in the directory for editing. I suspect this is due to the primary group id associated with each user, which is still set to the group created when the user was created. So the PGID of user 'userA' is 'userA'. I'd rather not change the primary group of the users to 'staff' if I can help it, but if it is the easiest option, I would consider it.

    And, a variation on a theme: I'd like to do this same thing with another directory, but also allow the apache user to read files in the directory and serve them. What's the best way to set this up?
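    One hedged way to get group-writable files regardless of each user's umask or primary group is a default ACL on the shared tree (requires ACL support on the filesystem; /home/shared-web is a hypothetical path for the web-served variant):

        # grant the staff group rwx on everything, both now and by default for new files
        setfacl -R -m g:staff:rwx /home/shared
        setfacl -R -d -m g:staff:rwx /home/shared

        # web-served variant: add a read-only entry for the apache user
        setfacl -R -m u:apache:rx /home/shared-web
        setfacl -R -d -m u:apache:rx /home/shared-web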

    Read the article

  • su not giving proper message for restricted LDAP groups

    - by user1743881
    I have configured PAM authentication on a Linux box to restrict login to one particular group. I have enabled pam and ldap through authconfig and modified access.conf like below:

        [root@test root]# tail -1 /etc/security/access.conf
        - : ALL EXCEPT root test-auth : ALL

    Also modified the sudoers file, to allow su for this group:

        [root@test ~]# tail -1 /etc/sudoers
        %test-auth ALL=/bin/su

    Now only this LDAP group's members can log in to the system. However, when I try su from any of these authorized users, it asks for a password, and then even though I enter the correct password it gives a message like "Incorrect password" and the login fails. /var/log/secure shows that the user does not have permission to get access, but then it should print a message like "Access denied", the way it does for console login. My functionality is working, but it's not giving proper messages. Could anyone please help with this?

    My /etc/pam.d/su file:

        [root@test root]# cat /etc/pam.d/su
        #%PAM-1.0
        auth sufficient pam_rootok.so
        # Uncomment the following line to implicitly trust users in the "wheel" group.
        #auth sufficient pam_wheel.so trust use_uid
        # Uncomment the following line to require a user to be in the "wheel" group.
        #auth required pam_wheel.so use_uid
        auth include system-auth
        account sufficient pam_succeed_if.so uid = 0 use_uid quiet
        account include system-auth
        password include system-auth
        session include system-auth
        session optional pam_xauth.so

    Read the article

  • Hosting a server for websites, ftp and random use at home?

    - by Zolomon
    I'm wondering what the best option is if I want to move all my hosted websites (from a hosting company) to a server at my own home. Basically, the needs I have are:

    - be able to host websites using PHP/ASP.NET (haven't really decided yet - both would be preferred!)
    - enable FTP so I can create accounts for my family members to access the server for file handling
    - SSH
    - SSL - for secure connections (this is something you have to buy/apply for per domain; not sure if there are any server-side settings that have to be made)
    - be able to stream video
    - remote desktop
    - host home-brew applications that can run as services
    - use either MySQL/SQLite/SQL Server for relational database storage

    What should I think about before I buy a server? What hardware will I need, and what will limit my server? I basically want to learn networking better, as I'm a software and web developer but haven't had the resources to acquire any serious toys until now. At the time of writing, most of my websites have 60 visits/day, so I don't suspect they will be very demanding. Is there something I haven't thought of that I should have? What OS would you suggest I run? FreeBSD vs Windows Server vs ?

    Read the article

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon Load Balancer. However, when I check the SSL with a site test tool, I get the following error:

        Error while checking the SSL Certificate!!
        Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server.

    I converted the crt file to pem using these commands from this tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    During setup of the Amazon Load Balancer, the only option I left out was the certificate chain (PEM encoded); however, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain?

    UPDATE: If you make a request to VeriSign they will give you a certificate chain. This chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (it is the topmost certificate) before adding the chain to the certificate chain box of your Amazon Load Balancer.

    If you are making HTTPS requests from an Android app, then the above instructions may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS: go here, click on the "retail ssl" tab, and then click on "secure site" "CA Bundle for Apache Server". Copy and paste these intermediate certs into the certificate chain box; just in case you have not found it, here is the direct link. If you are using GeoTrust certificates then the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.
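    A hedged sketch of assembling and sanity-checking a chain with OpenSSL (file names are placeholders; the order is the server certificate's issuer first, root last):

        # build the chain: intermediate(s) first, then the root
        cat intermediate.pem root.pem > chain.pem

        # verify the server certificate against the chain before uploading it
        openssl verify -CAfile chain.pem server-cert.pem

        # after deploying, confirm what the load balancer actually serves
        openssl s_client -connect www.example.com:443 -showcerts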

    Read the article

  • For enabling SSL for a single domain on a server with muliple vhosts, will this configuration work?

    - by user1322092
    I just purchased an SSL certificate to secure/enable only ONE domain on a server with multiple vhosts. I plan on configuring it as shown below (non-SNI). In addition, I still want to access phpMyAdmin securely via my server's IP address. Will the below configuration work? I have only one shot to get this working in production. Are there any redundant settings?

        --- apache ssl.conf file ---
        Listen 443
        SSLCertificateFile /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

        --- apache httpd.conf file ----
        ...
        DocumentRoot "/var/www/html"   # currently exists
        ...
        NameVirtualHost *:443          # new - is this really needed if "Listen 443" is in ssl.conf???
        ...
        # below vhost currently exists (the domain I wish to enable SSL on)
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain1.com
            ServerAlias 173.XXX.XXX.XXX
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

        # below vhost currently exists
        <VirtualHost *:80>
            ServerName domain2.com
            ServerAlias www.domain2.com
            DocumentRoot /home/web/public_html/domain2.com/public
        </VirtualHost>

        # new - I plan on adding this vhost block to enable SSL for domain1.com!
        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.domain1.com
            ServerAlias 173.203.127.20
            SSLEngine on
            SSLProtocol all
            SSLCertificateFile /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile /home/web/certs/domain1.private.key
            SSLCACertificateFile /home/web/certs/domain1.intermediate.crt
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

    As previously mentioned, I want to be able to access phpMyAdmin via "https://173.XXX.XXX.XXX/hiddenfolder/phpmyadmin", which is stored under "var/www/html/hiddenfolder".

    Read the article

  • installed mysql using yum but mysqld doesn't start: Fedora 16

    - by Sumit Singh Bir
    I installed mysql, mysql-server and mysql-libs. After installation I tried to start the mysql service using the command:

        systemctl start mysqld.service
        Failed to issue method call: Unit mysqld.service failed to load: Invalid argument. See system logs and 'systemctl status mysqld.service' for details.

    'systemctl status mysqld.service' said that it had an invalid argument. I then changed the content of mysqld.service to this:

        File: /etc/systemd/system/mysqld.service

        [Unit]
        Description=MySQL database server
        After=syslog.target
        After=network.target

        [Service]
        User=mysql
        Group=mysql
        ExecStart=/usr/sbin/mysqld --pid-file=/var/run/mysqld/mysqld.pid
        ExecStop=/bin/kill -15 $MAINPID
        PIDFile=/var/run/mysqld/mysqld.pid
        # We rely on systemd, not mysqld_safe, to restart mysqld if it dies
        Restart=always
        # Place temp files in a secure directory, not /tmp
        PrivateTmp=true

        [Install]
        WantedBy=multi-user.target

    Now I found that the "invalid argument" error was resolved and mysqld.service was loaded, but not enabled. Using 'systemctl start mysqld.service' worked fine... it worked! But enabling it with 'systemctl enable mysqld.service' or 'service mysqld start' did not do anything; the cursor just kept blinking after I pressed enter. The thing is that I dedicated this F16 install's HDD space to development work, and I cannot figure out a way forward... please, somebody, help.
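    One hedged suggestion when a hand-edited unit file behaves inconsistently: make systemd re-read its unit files, then enable and verify (all standard systemctl commands):

        systemctl daemon-reload                # pick up edits to /etc/systemd/system/mysqld.service
        systemctl enable mysqld.service        # create the multi-user.target wants symlink
        systemctl is-enabled mysqld.service    # should print "enabled"
        systemctl status mysqld.service        # check state and recent log output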

    Read the article

  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    I asked this at Stack Overflow, but I would like your opinion too, as it has as much to do with administration as it does with coding.

    We have a .NET 2-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005, 2008 and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that installation and configuration are as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission-critical by the IT department, so we need to keep their work down to a minimum.

    Now we are starting to get wishes for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have some requirements:

    - We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is that all communication is initiated from inside the customer's LAN.
    - As little firewall configuration as possible. The best is if we can run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
    - It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
    - Data moves both ways, but not in the same tables, so we don't have to support merges.
    - It would be nice if we don't have to roll our own framework completely.

    Looking forward to hearing your suggestions.

    Read the article

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which links to a directory containing the webroot of a site, and I'd like Apache to be able to easily see and manipulate these files.

    I'll admit: I'm not as familiar with Fedora as I'd like; I run Ubuntu on my home box, and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue down this route, and perhaps mount web directories in user home folders so that FTP could still be used to access them, even though Apache sees them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this?

    I'm using vsftpd right now to host web directories, which is why I'm liking the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access and such, to another post. I'm interested right now in just getting FTP and Apache to play nice in a multi-user environment.)

    PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user the FTP account is saving as, there are permission errors when FTP creates files, requiring the remote user to chmod the files to fix them. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP access default to giving this group read/write access to everything, as Apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
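    A hedged sketch of the SELinux and permission knobs that usually matter for this layout on Fedora of that era (the boolean names are standard targeted-policy ones; paths are placeholders):

        # let Apache read content under home directories, and let vsftpd write there
        setsebool -P httpd_enable_homedirs on
        setsebool -P ftp_home_dir on
        # label the web tree so Apache may serve it
        chcon -R -t httpd_sys_content_t /home/someuser/public_html

        # /etc/vsftpd/vsftpd.conf -- make uploads group-writable for a shared web group
        local_umask=002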

    Read the article

  • Juniper’s Network Connect ncsvc on Linux: “host checker failed, error 10”

    - by hfs
    I’m trying to log in to a Juniper VPN with Network Connect from a headless Linux client. I followed the instructions and used the script from http://mad-scientist.us/juniper.html. When running the script with the --nogui switch, the command that finally gets executed is:

        $HOME/.juniper_networks/network_connect/ncsvc -h HOST -u USER -r REALM -f $HOME/.vpn.default.crt

    I get asked for the password, a line “Connecting to…” is printed, but then the program silently stops. When adding -L 5 (most verbose logging) to the command line, these are the last messages printed to the log:

        dsclient.info state: kStateCacheCleaner (dsclient.cpp:280)
        dsclient.info --> POST /dana-na/cc/ccupdate.cgi (authenticate.cpp:162)
        http_connection.para Entering state_start_connection (http_connection.cpp:282)
        http_connection.para Entering state_continue_connection (http_connection.cpp:299)
        http_connection.para Entering state_ssl_connect (http_connection.cpp:468)
        dsssl.para SSL connect ssl=0x833e568/sd=4 connection using cipher RC4-MD5 (DSSSLSock.cpp:656)
        http_connection.para Returning DSHTTP_COMPLETE from state_ssl_connect (http_connection.cpp:476)
        DSHttp.debug state_reading_response_body - copying 0 buffered bytes (http_requester.cpp:800)
        DSHttp.debug state_reading_response_body - recv'd 0 bytes data (http_requester.cpp:833)
        dsclient.info <-- 200 (authenticate.cpp:194)
        dsclient.error state host checker failed, error 10 (dsclient.cpp:282)
        ncapp.error Failed to authenticate with IVE. Error 10 (ncsvc.cpp:197)
        dsncuiapi.para DsNcUiApi::~DsNcUiApi (dsncuiapi.cpp:72)

    What does “host checker failed” mean? How can I find out what it tried to check and what failed? The Host Checker Configuration Guide mentions that a $HOME/.juniper_networks/tncc.jar gets installed on Linux, but my installation contains no such file. From that I concluded that Host Checker is disabled for my VPN on Linux? Are the POST to /dana-na/cc/ccupdate.cgi and “host checker failed” connected, or independent? By running the connection over an SSL proxy I found out that the POST data is status=NOTOK. (Funny side note: the client of the oh-so-secure VPN does not validate the server’s SSL certificate, so it is wide open to MITM attacks…) So it seems that it’s the client that closes the connection, and not the server.

    Read the article

  • replacing buffalo linkstations with FreeNAS, overall backup strategy, am I on the right path?

    - by Shreko
    We've been using 2 Buffalo LinkStations of 320GB each for a shared directory and employees' server storage (around 20 employees): only documents (Word, Excel, CAD drawings, etc.) and database backups of the main application server (ERP, accounting). One Buffalo box serves as the main one, located in the server room next to the main application server; the other Buffalo box sits on the opposite side of the building (for fire protection) in a secure storage room and backs up the first one. We also have several external HDs that back up everything from the Buffalo box for an offsite backup.

    After 3.5 years of using these, capacity is the main limitation. I'm planning a replacement and would like to use FreeNAS (we already use m0n0wall with great success). I would like to keep it simple and continue a similar setup, building two low-power boxes with 1 HD (2TB) each. Is a low-power Atom mobo OK? Not sure about HDs: I've read somebody on this site mentioning the Seagate ES.2 as more reliable and better performing. How would those eco/green drives compare? We've been pretty happy with the speed of the Buffalo boxes, and I don't want my users to notice any slowdown. Any suggestion?

    Read the article
