Search Results

Search found 95574 results on 3823 pages for 'mac osx server'.


  • Can I get redundancy with a JBOD storage subsystem?

    - by Dat Chu
    I have a Promise Technology J610S, which is a JBOD subsystem. Is it possible for me to buy a SAS hardware RAID controller and provide some type of redundancy for these drives? I am unsure whether I will use Linux or Windows yet, so an answer covering both would be highly appreciated. One solution I thought of: if my J610S can export each drive as a separate target, my server will simply see 16 drives, and the RAID controller could then build a RAID 5 or RAID 6 array across them if I want.
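
    If the drives do end up visible to the operating system individually, Linux software RAID is one way to get redundancy without relying on the controller. A minimal sketch using mdadm, assuming the 16 JBOD drives appear as /dev/sdb through /dev/sdq (the device names are hypothetical):

        # Build a RAID 6 array across the 16 exported drives (survives two disk failures).
        mdadm --create /dev/md0 --level=6 --raid-devices=16 /dev/sd[b-q]

        # Watch the initial resync, then put a filesystem on the array.
        cat /proc/mdstat
        mkfs.ext4 /dev/md0

        # Persist the array definition so it assembles at boot
        # (the config path is /etc/mdadm/mdadm.conf on Debian-style distributions).
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    On the Windows side the rough equivalent would be a RAID-5 dynamic-disk volume in Disk Management, though the hardware RAID controller the question asks about keeps the setup OS-independent.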

    Read the article

  • Encrypt shared files on an AD domain

    - by Walter
    Can I encrypt shared files on Windows Server and allow only authenticated domain users to access them? The scenario is as follows: I run a software development company and I would like to protect my source code from being copied by my programmers. One problem is that some programmers use their own laptops to develop the company's software, so it is impossible to prevent them from copying the source code to their laptops. I thought about the following solution, but I don't know whether it can be implemented: encrypt the source code so that it is accessible (decrypted) only while developers are logged into the AD domain; if they are not logged into the AD domain, the source code would remain encrypted and be useless. Can this be implemented? What technology should be used?
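
    One technology in this space is EFS (Encrypting File System), where decryption keys are tied to user certificates that can be issued and recovered through the AD domain. A hedged sketch of encrypting a shared source tree with the built-in cipher tool (the share path is hypothetical); note that EFS alone does not stop a developer who is legitimately logged in from copying files out in decrypted form, which is closer to an AD RMS / rights-management problem:

        REM Encrypt the source tree; files remain readable only to users whose
        REM EFS certificates (managed via the AD domain) can unwrap the file keys.
        cipher /e /s:D:\Shares\SourceCode

        REM Review which files in the tree are marked as encrypted.
        cipher /s:D:\Shares\SourceCode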

    Read the article

  • port forwarding using 3 static IP addresses

    - by Danny
    I am new to configuring routers. We have purchased a Cisco RV016 business router that has multi-WAN capability. What we are attempting to do is take map services from 3 different servers, assign each server one of 3 static IP addresses, and then forward port 80 through the router, as a short-term alternative to building a proxy server. Is this possible? Right now we have a consumer-grade Cisco router with a static IP assigned and it works; we applied the same settings on the business router and cannot get to the Internet. If we set it to DHCP it works fine, but we want to forward ports on the static addresses, not use DHCP.

    Read the article

  • Spam mail through SMTP and user spoofing

    - by Josten Moore
    I have noticed that it's possible to telnet into a mail server that I own and send spoofed messages to other users. This only works for the domain that the mail server handles; I cannot do it for other domains. For example, let's say that I own example.com. If I run telnet example.com 25, I can successfully send a message to another user without authentication:

        HELO local
        MAIL FROM: [email protected]
        RCPT TO: [email protected]
        DATA
        SUBJECT: Whatever this is spam
        Spam spam spam
        .

    I consider this a big problem; how do I secure this?
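
    Accepting unauthenticated mail for your own domain is normal MTA behaviour (otherwise outside servers could never deliver to your users), but you can stop sessions from claiming local sender addresses without logging in. A hedged sketch for Postfix, assuming Postfix is the MTA here, which the question does not state:

        # /etc/postfix/main.cf
        smtpd_sasl_auth_enable = yes

        # Map each local sender address to the SASL login allowed to use it.
        smtpd_sender_login_maps = hash:/etc/postfix/sender_logins

        smtpd_sender_restrictions =
            # Reject MAIL FROM of a local address unless the session
            # authenticated as the owner of that address.
            reject_sender_login_mismatch

        smtpd_recipient_restrictions =
            permit_sasl_authenticated,
            permit_mynetworks,
            reject_unauth_destination

    Outside servers can still deliver to your users anonymously, as SMTP requires, but they can no longer claim a local From address on your own server; publishing SPF/DKIM records for the domain helps other sites reject the same trick.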

    Read the article

  • email archive for multiple users

    - by evanmcd
    Hi, I'm moving a web site from one server to another, and am realizing that I need to move the name servers for the domain as well (they point at the current host, not at the registrar). So, knowing that email service will stop as soon as I switch the DNS, I'm scrambling to figure out how to archive, and keep available, the email of people who have mostly been using webmail for the past few years and may not even have a computer on which to install a mail client and download their mail. What does one do in this situation? Thanks for any help offered! Evan

    Read the article

  • sudoers security

    - by jetboy
    I've set up a script to do Subversion updates across two servers (the localhost and a remote server), called by a post-commit hook run by the www-data user. /srv/svn/mysite/hooks/post-commit contains:

        sudo -u cli /usr/local/bin/svn_deploy

    /usr/local/bin/svn_deploy is owned by the cli user and contains:

        #!/bin/sh
        svn update /srv/www/mysite
        ssh cli@remotehost 'svn update /srv/www/mysite'

    To get this to work I've had to add the following to the sudoers file:

        www-data ALL = (cli) NOPASSWD: /usr/local/bin/svn_deploy
        cli ALL = NOEXEC:NOPASSWD: /usr/local/bin/svn_deploy

    Entries for both www-data and cli were necessary to avoid the error:

        post commit hook failed: no tty present and no askpass program specified

    I'm wary of giving any kind of elevated rights to www-data. Is there anything else I should be doing to reduce or eliminate any security risk?

    Read the article

  • MySQL slow query log logging all queries

    - by Blanka
    We have a MySQL 5.1.52 Percona Server 11.6 instance that suddenly started logging every single query to the slow query log. The long_query_time configuration is set to 1, yet suddenly we're seeing every single query (e.g. we just saw one that took 0.000563 s!). As a result, our log files are growing at an insane pace; we just had to truncate a 180 GB slow query log file. I tried setting the long_query_time variable to a really large number (1000000) to see if the logging stopped altogether, but got the same result.

        show global variables like 'general_log%';
        +------------------+--------------------------+
        | Variable_name    | Value                    |
        +------------------+--------------------------+
        | general_log      | OFF                      |
        | general_log_file | /usr2/mysql/data/db4.log |
        +------------------+--------------------------+
        2 rows in set (0.00 sec)

        show global variables like 'slow_query_log%';
        +----------------------------------------+-------------------------------+
        | Variable_name                          | Value                         |
        +----------------------------------------+-------------------------------+
        | slow_query_log                         | ON                            |
        | slow_query_log_file                    | /usr2/mysql/data/db4-slow.log |
        | slow_query_log_microseconds_timestamp  | OFF                           |
        +----------------------------------------+-------------------------------+
        3 rows in set (0.00 sec)

        show global variables like 'long%';
        +-----------------+----------+
        | Variable_name   | Value    |
        +-----------------+----------+
        | long_query_time | 1.000000 |
        +-----------------+----------+
        1 row in set (0.00 sec)
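
    One server setting worth ruling out (an assumption on my part, not something the question establishes) is log_queries_not_using_indexes: when it is ON, queries that use no index are written to the slow log regardless of long_query_time, which would explain sub-millisecond entries appearing there.

        show global variables like 'log_queries_not_using_indexes';
        -- If it is ON, fast but index-less queries go to the slow log
        -- no matter how high long_query_time is set.
        set global log_queries_not_using_indexes = OFF;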

    Read the article

  • PHP fopen fails - does not have permission to open file in write mode.

    - by George
    Hello. I have an Apache 2.17 server running on Fedora 13, and I want to be able to create a file in a directory, but I cannot. Whenever I try to open a file for writing with PHP's fopen(,'w'), it tells me that I don't have permission to do that. So I checked the httpd.conf file in /etc/httpd/conf/; it says user apache, group apache. So I changed ownership of my whole /www directory to apache:apache (chown -R apache:apache .*) and also ran chmod -R 777 *. Apart from knowing how terribly dangerous this is, it still gives me the same error, even though I am allowing public write!
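
    Since chmod -R 777 still fails on Fedora, SELinux is a likely suspect in addition to plain filesystem permissions (an assumption; the question does not mention its status). A quick check and a hedged fix, with the path standing in for the real document root:

        # Is SELinux enforcing?
        getenforce

        # Show the SELinux context on the target directory.
        ls -Zd /var/www/html/uploads

        # Relabel the directory so httpd is allowed to write to it.
        chcon -R -t httpd_sys_rw_content_t /var/www/html/uploads

    A label applied with chcon can be reverted by a filesystem relabel; the persistent route is semanage fcontext followed by restorecon.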

    Read the article

  • Remote X-windows between new RHEL5 and old Solaris 8

    - by joshxdr
    I have a very small lab network with three boxes: a modern x86-based RHEL3 box, an x86-based RHEL5 box, and a 1998-vintage SPARC Ultra 5 with Solaris 8. I can use ssh -X to run a program on the RHEL5 box and view its windows on the RHEL3 box; I believe this uses xauth and magic cookies? I have followed the X Windows HOWTO to set up xauth on the Solaris box, but so far no dice. I would like to be able to use the X server on the RHEL3 box with a client program on the Solaris box (program running on the Solaris host, windows appearing on the Linux host). Is there a trick to this, or have I made a mistake following the instructions for setting up xauth and the magic cookie?
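
    For reference, the manual cookie route looks roughly like this (a sketch; the host name rhel3box is a placeholder, and it assumes the RHEL3 X server accepts TCP connections on port 6000, i.e. it was not started with -nolisten tcp):

        # On the RHEL3 box: list the cookie the X server generated for display :0.
        xauth list
        #   rhel3box/unix:0  MIT-MAGIC-COOKIE-1  <hex-cookie>

        # On the Solaris 8 box: register the same cookie for the remote display,
        # point clients at it, and test with a simple client.
        xauth add rhel3box:0 MIT-MAGIC-COOKIE-1 <hex-cookie>
        DISPLAY=rhel3box:0
        export DISPLAY
        xclock &

    If the Solaris box runs an SSH daemon with X11 forwarding enabled, running ssh -X from the RHEL3 box to the Solaris box is the simpler route, since it handles the cookie and DISPLAY automatically.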

    Read the article

  • CGI error from PHP when running exec() on IIS

    - by Patrick
    Environment: Windows Server 2003 x64, PHP 5.2, IIS 6.0. The program Ink2Png.exe is set with Everyone: Read and Execute permissions, as is its dependency (microsoft.ink.dll). PHP Safe Mode is off. exec() is passed [the full exe path], a space, then [full path to another file]; that other file also has full read permissions, and the output directory has full write permissions. As soon as exec() is hit, the connection dies: the browser does not even receive a full set of HTTP headers, and it reports a CGI error. Examining the output, it appears the program was not even run. Any ideas? How can I figure out exactly what is happening and get it running again? EDIT: Also, it is a .NET application, if that is significant in any way.
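
    One way to narrow down whether the process starts at all is to capture the exit code and combined output instead of letting the request die silently. A hedged PHP sketch, with placeholder paths standing in for the ones the question elides:

        <?php
        // Quote both paths in case they contain spaces; merge stderr into stdout.
        $cmd = '"C:\\tools\\Ink2Png.exe" "C:\\data\\input.ink"';
        exec($cmd . ' 2>&1', $output, $exitCode);

        // If the connection still dies before any output reaches the browser,
        // write the result to a file instead of echoing it.
        file_put_contents('C:\\temp\\ink2png-debug.txt',
            "exit: $exitCode\n" . implode("\n", $output));

    If even the debug file never appears, the worker process is dying inside exec() itself, which points at permissions or a missing dependency for the IIS worker identity rather than at the program's own output.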

    Read the article

  • how to manage credentials/access to multiple ssh servers

    - by geoaxis
    I would like to write a script that can maintain multiple servers via SSH. I want to control authentication/authorization so that authentication is done by a gateway, and all other access is routed through this SSH server to internal services without any further authentication/authorization. So if user A can log into server_1, for example, he can then ssh to server_2 without any other authentication and do whatever he is allowed to do on server_2 (like shutting down MySQL, upgrading it, and restarting it via some remote shell script). The problem I am trying to solve is a deployment script for a Java EE system that involves databases and Tomcat instances; they need to be shut down and re-spawned. The requirement is a deployment script that needs as little human interaction as possible for both developers and operations.
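
    This is essentially a jump-host pattern: users authenticate once to the gateway with a key, and connections to the internal machines are tunnelled through it. A hedged sketch of the client-side configuration, with host names as placeholders (ProxyCommand with -W needs OpenSSH 5.4 or newer; older clients can substitute netcat):

        # ~/.ssh/config on the user's workstation
        Host gateway
            HostName gateway.example.com
            User usera
            ForwardAgent yes               # let the gateway reuse the local key

        Host server_2
            HostName server2.internal
            User deploy
            ProxyCommand ssh -W %h:%p gateway   # tunnel through the gateway

    With the deploy key authorized on both hosts, a command such as ssh server_2 'sh /opt/deploy/restart-tomcat.sh' (the script path is purely illustrative) runs without a second password prompt, which is what a low-interaction deployment script needs.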

    Read the article

  • Passenger package not found after adding phusion repo Ubuntu 12.04

    - by speshak
    I'm trying to install the official Passenger (and Nginx) packages from Phusion on an Ubuntu 12.04.3 server. I have the following in /etc/apt/sources.list.d/passenger:

        deb https://oss-binaries.phusionpassenger.com/apt/passenger precise main

    Even after running apt-get update, apt finds no passenger package. I did verify that the package info appears in /var/lib/apt/lists/oss-binaries.phusionpassenger.com_apt_passenger_dists_precise_main_binary-amd64_Packages, but at this point I'm at a loss as to why the package isn't available via apt-get. Some related packages (libapache2-mod-passenger, passenger-docs) are available; these seem to also exist in universe, but apt-cache show lists both locations.
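
    Two commands worth running to see where apt thinks the package should come from, plus a hedged pin that prefers the Phusion repository (the priority value is an assumption, not something the question establishes):

        # Which repositories provide the package, and which candidate is selected?
        apt-cache policy passenger
        apt-cache madison passenger

        # /etc/apt/preferences.d/phusion-passenger
        Package: *
        Pin: origin oss-binaries.phusionpassenger.com
        Pin-Priority: 600

    apt-cache policy in particular shows whether the Phusion entry is being considered at all and at what priority relative to universe.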

    Read the article

  • Ubuntu+Mono+Postgres+ASP.NET 4.0. No problem?

    - by wreck_of_u
    Would this be OK? I'm an ASP.NET developer planning to build "portable" web app servers based on Atom D510 mini-ITX boards. I have run Ubuntu 10 with MySQL alongside separate IIS machines (Win 2k3, 2k8) before with no problems, but now I'm thinking of packaging a web/db server into one small, cheap machine. Ubuntu/Mono/Postgres/ASP.NET seems like a good idea, but I'm not sure, and I have not actually tried it yet. Your thoughts?

    Read the article

  • Accessing a shared resource on a local computer from users in different physical locations

    - by Joe
    This sounds like an easy task to some, but it is a difficult one for me. The main requirement is to set something up across offices in different locations so that (1st question) users are able to log on to the domain without a VPN when they are in any of the offices. Additionally, (2nd question) how can they log on to the domain when they are on the road, for example in a Starbucks; what do they have to do to connect to the domain after the VPN connection succeeds? Also, it's my understanding that we can't share resources between computers on different network segments, so (3rd question) what is the best solution to bridge/combine two network segments (two offices in different locations) so that computers in different locations can see each other? Thank you in advance for any response.

    Read the article

  • How to make nginx only respond to one domain?

    - by larryzhao
    I am pretty new to nginx; I host my Rails application on nginx + Passenger. I want my website to be accessible through only one domain, so I set up my nginx conf like the following:

        server {
            listen 80;
            server_name mydomain.com www.mydomain.com;
            root /var/deploy/myapp/current/public;
            passenger_enabled on;

            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires 1y;
                add_header Cache-Control public;
            }
        }

    I specify the server_name directive, but it still answers any request that points to this IP, and I can see in the access.log that it answers to other domain names. Is there anything I am doing wrong?
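
    Since this is the only server block listening on port 80, nginx treats it as the default and uses it for any Host header; server_name only chooses between multiple blocks. A minimal sketch of a catch-all default block that drops unmatched requests (a standard nginx pattern, not something taken from the question):

        # Catch-all for requests whose Host matches no other server block.
        server {
            listen 80 default_server;
            server_name _;
            return 444;   # nginx-specific: close the connection without a response
        }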

    Read the article

  • How do I change the NGINX user?

    - by danielfaraday
    I have a PHP script that creates a directory and outputs an image to it. This was working just fine under Apache, but we recently decided to switch to NGINX to make better use of our limited RAM. I'm using the PHP mkdir() function to create the directory:

        mkdir(dirname($path['image']['server']), 0755, true);

    After the switch to NGINX, I'm getting the following warning:

        Warning: mkdir(): Permission denied in ...

    I've already checked all the permissions of the parent directories, so I've determined that I probably need to change the NGINX or PHP-FPM user, but I'm not sure how to do that (I never had to specify user permissions for Apache). I can't seem to find much information on this. Any help would be great! (Note: besides this little hang-up, the switch to NGINX has been pretty seamless; I'm using it for the first time and it literally took only about 10 minutes to get up and running. Now I'm just ironing out the kinks.)
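
    For reference, the nginx worker user is set in nginx.conf, and when PHP runs under PHP-FPM the pool configuration has its own user and group, which is what actually matters for mkdir(). A hedged sketch using common Debian/Ubuntu paths and the www-data account (both are assumptions; the question gives neither the distribution nor the desired user):

        # /etc/nginx/nginx.conf
        user www-data;

        # /etc/php5/fpm/pool.d/www.conf  (the PHP-FPM pool that executes the script)
        user = www-data
        group = www-data

    After changing either, reload nginx and restart PHP-FPM, and make sure the target directory is owned or writable by that user.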

    Read the article

  • WebSVN accept untrusted HTTPS certificate

    - by Laurent
    I am using WebSVN with a remote repository. This repository uses the https protocol. After configuring WebSVN, I get the following on the WebSVN web page:

        svn --non-interactive --config-dir /tmp list --xml --username '***' --password '***' 'https://scm.gforge.....'
        OPTIONS of 'https://scm.gforge.....': Server certificate verification failed: issuer is not trusted

    I don't know how to tell WebSVN to run the svn command so that it accepts and stores the certificate. Does someone know how to do it?
    UPDATE: It works! In order to keep things well organized, I updated the WebSVN config file to relocate the Subversion config directory to /etc/subversion, which is the default path on Debian:

        $config->setSvnConfigDir('/etc/subversion');

    In /etc/subversion/servers I created a group and associated the certificate to trust:

        [groups]
        my_repo = my.repo.url.to.trust
        [global]
        ssl-trust-default-ca = true
        store-plaintext-passwords = no
        [my_repo]
        ssl-authority-files = /etc/apache2/ssl/my.repo.url.to.trust.crt

    Read the article

  • Apache2, can't apply Directory access

    - by skomak
    Hi, I can't figure out how to deny access to a directory. Here is my config:

        <VirtualHost x.x.x.x:80>
            DocumentRoot /var/www/html/wwwhtml
            ServerName mydomain.com
            ServerAdmin [email protected]
            ErrorLog /var/log/httpd/mydomain_error.log
            TransferLog /var/log/httpd/mydomain_access_log

            Alias /test /var/www/html/wwwhtml/eventum
            <Directory /var/www/html/wwwhtml/eventum>
                Order deny,allow
                Deny from all
                #Allow from 192.168.0
            </Directory>

    I deny access to /test, but it doesn't work; on another server of mine it works perfectly. Do you know what could cause this problem and how to solve it? This is not the whole config, only the most important part. Could rewrite rules be causing it? Thanks in advance.

    Read the article

  • Datacenter Backup Strategy

    - by EasyEcho
    What are common approaches to backup solutions in remote data centers? I am already familiar with general backup principles and have a very good backup strategy for our local data center, but I am having great difficulty extending it to a remote data center. We currently do a full backup on Friday and differentials Monday through Thursday, rotate offsite Friday morning, and rinse and repeat week after week. BTW, we use disks and have been very happy with this approach. We could buy a large storage server and back everything up to it, but that solution doesn't give us an offsite copy. We could encrypt and upload to Amazon or some other online storage, but that would take a long time given the amount of data and would be rather expensive in bandwidth leaving the data center and received at Amazon. We could drive to the data center every Friday and continue to rotate disks as we do now, but that just seems old-fashioned. What am I missing? Are there better options?

    Read the article

  • Remote desktop to my KVM virtual machine

    - by user6
    I've got a dedicated server running Debian 6. I've set up a Windows 7 virtual machine using KVM, and now I'm trying to get Remote Desktop working. I'm guessing I have to do some port forwarding, since the virtual machine is on a NAT network. Remote Desktop is already enabled on it (another virtual machine can connect). I've tried iptables and countless virsh commands, and I'm not even sure what they did. Does anyone know how to get this working?
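
    The usual approach with libvirt's default NAT network is a DNAT rule on the host that forwards the RDP port to the guest. A hedged sketch, assuming the guest has the address 192.168.122.10 on the default virbr0 network and eth0 is the host's public interface (both are assumptions):

        # Forward TCP 3389 arriving on the host's public interface to the Windows guest.
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 3389 \
                 -j DNAT --to-destination 192.168.122.10:3389

        # Insert an ACCEPT rule ahead of libvirt's own FORWARD rules.
        iptables -I FORWARD -d 192.168.122.10 -p tcp --dport 3389 -j ACCEPT

    Rules added this way are lost on reboot, so they are usually re-applied from an interface up-script or a libvirt hook.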

    Read the article

  • Is this a valid backup strategy for MongoDB?

    - by James Simpson
    I've got a single dedicated server with a MongoDB database of around 10 GB. I need to do daily backups, but I can't have downtime on the database. Is it possible to use a replica set on a single disk (with 2 instances of mongod running on different ports), and simply take the secondary one offline and back up its data files to offsite storage such as S3 (journaling is turned on)? Or would using master/slave be better than a replica set? Is this viable, and if so, what potential problems could I have? If not, how should I think about making this work?
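
    For reference, one common pattern along those lines is to dump from, or briefly lock, the secondary while the primary keeps serving traffic. A hedged sketch, assuming the secondary mongod listens on port 27018 and keeps its files under /var/lib/mongodb (both are assumptions):

        # Dump the secondary's data without touching the primary.
        mongodump --port 27018 --out /backup/mongo-$(date +%F)

        # Alternatively: flush and lock the secondary, copy the raw files, unlock.
        mongo --port 27018 --eval "db.fsyncLock()"
        rsync -a /var/lib/mongodb/ /backup/mongo-files/
        mongo --port 27018 --eval "db.fsyncUnlock()"

    db.fsyncLock() needs a reasonably recent MongoDB; on older releases the equivalent is db.runCommand({fsync: 1, lock: 1}). Either output can then be pushed offsite to S3.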

    Read the article

  • Alternatives to Crashplan for VPS?

    - by Chloe
    I use SFTP Net Drive to mount a remote VPS so I can back it up. However, the scan has taken over 3 days! I ran 'ls -lR' from my desktop over the mounted network drive and it only took about 5 minutes to list all the files, and there are only about 5000 files and 2 GB. I know CrashPlan can run headless on the VPS itself, but that sounds like a pain to set up, and it takes so much memory on the server; the VPS doesn't have a lot of memory to spare, less than my desktop. Is there another program that can speak the CrashPlan backup protocol and has a command-line interface?

    Read the article

  • Avoid cache overflow in Atempo LiveBackup

    - by Vebjorn Ljosa
    When attempting the initial backup of a new client, Atempo LiveBackup seems to require a very large cache. For instance, a 20 GB cache is not enough to back up a computer that has 100 GB of data. It appears that LiveBackup adds new files to the cache at a faster rate than it can send them to the server, and when the cache fills up, the backup fails. Aside from removing most of the data from the computer and then adding it back gradually after the initial backup, is there a good workaround? Is it possible to make LiveBackup slow down its scan so that it does not fill the cache? Or is it possible to place the cache on an external drive?

    Read the article

  • Database backup regardless of backup made through a control panel?

    - by developer
    I know that all CMS or CMC platforms have some sort of walkthrough for their users to fully back up and restore the database or the whole website. While we can all perform such backups (there are even plugins that automate the whole procedure) and restore them when necessary (such as when we migrate to a new server), one can also back up the whole website by means of control panels such as DirectAdmin or cPanel. Now, I just want to know whether it is necessary for us to do database or website backups the way a specific CMC or CMS developer describes, even after we perform a whole-website backup in a control panel such as DirectAdmin. An example of such a CMS is Moodle; the Moodle Docs describe how to back up and restore Moodle here. So do we, out of necessity, have to make the backup the way described, or can we simply do it the control panel way? Thanks

    Read the article

  • Why does a standard Drupal 7 virtual host config cause 403 (Forbidden) in Apache2?

    - by drupality
    The virtual host declaration causing the problem (source):

        <VirtualHost *:80>
            ServerAdmin admin@d7
            DocumentRoot /vagrant/d7
            ServerName www.d7.local
            ServerAlias d7.local

            RewriteEngine On
            RewriteOptions inherit

            <Directory /vagrant/d7>
                Order allow,deny
                Allow from all
            </Directory>
            <Directory /vagrant>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Error logs:

        [Mon Nov 04 12:23:11.947082 2013] [authz_core:error] [pid 2471] [client 10.0.2.2:58238] AH01630: client denied by server configuration: /vagrant/d7/

    I have no idea why this doesn't work. With the above rules I get 403 Forbidden on the Drupal site and on the Apache welcome page (index.html) too. Output of ls -ld /vagrant/d7:

        drwxrwxrwx 1 vagrant vagrant 8192 Nov 4 10:05 /vagrant/d7
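
    The authz_core module in the error line indicates Apache 2.4, where the old Order/Allow directives no longer grant access on their own (they need mod_access_compat). A hedged sketch of the 2.4-style equivalent for the same directories:

        <Directory /vagrant/d7>
            Require all granted
        </Directory>
        <Directory /vagrant>
            Require all granted
        </Directory>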

    Read the article
