Search Results

Search found 11627 results on 466 pages for 'market share'.

Page 274/466

  • NFS inherit permissions from shared directory - Mac OS client

    - by devius
    Short question: Is there a way to have files on an NFS share on the client inherit the permissions of the shared directory?

    Scenario:

      - Ubuntu 12.04 server
      - Mac 10.7.4 client
      - shared directory has 775 permissions
      - created files on client have 644 permissions

    I tried setting ACLs with the setfacl command, as explained here, and it appears they are set on the server. getfacl returns this:

      # file: Documents/
      # owner: someguy
      # group: somegroup
      # flags: -s-
      user::rwx
      group::rwx
      other::r-x
      default:user::rwx
      default:group::rwx
      default:group:somegroup:rwx
      default:mask::rwx
      default:other::r-x

    However, when I create a new file on the Mac OS client it still has 644 permissions and not the 664 I would expect. Files created on the server have the expected permissions, and files created with another Ubuntu client also have the expected permissions.
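
    For reference, a minimal sketch of the server-side setup this aims for, assuming the export lives at /srv/Documents (path and group name are taken from the question, not verified):

      # setgid bit so new files take the directory's group
      sudo chmod g+s /srv/Documents
      # default ("d:") ACL entries are what newly created files inherit
      sudo setfacl -m d:u::rwx,d:g::rwx,d:g:somegroup:rwx,d:o::r-x /srv/Documents

    Note that an NFSv3 client still applies its own umask at create time, so a Mac with the default umask of 022 can produce 644 files regardless of the server's default ACL; loosening the client umask to 002 before creating a file is one way to test whether that is the culprit.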

    Read the article

  • How to determine if a device is SATA-driven and will be affected by the Intel Sandy Bridge issue?

    - by joelhaus
    Looking to buy a higher-end Windows 7 laptop, but I'm concerned about the issue with the Intel Sandy Bridge chipset; otherwise, my price range covers laptops within the latest (Sandy Bridge) generation of the Core i7 family. I understand that there is an issue with SATA ports 2-5. I use a Windows Home Server over a WiFi connection to share files and back up my PC; the other storage devices I will use (less frequently) are the built-in DVD-RW drive and various devices hooked up to the USB ports (i.e. Android devices, iPod, etc.). The question: will this setup be negatively impacted by the problem Intel reported about Sandy Bridge? Given this information, is it unwise to purchase a laptop that has this flaw? I really don't know how to determine whether a device is SATA-driven or not, so I'm hoping someone can shed some light on this too. Thanks!

    Read the article

  • Windows 2008 R2 File Sharing - 'Access denied' if groups are specified in ACL

    - by John Smith
    I am trying to move our old Windows 2003 file server to Windows 2008 R2. What I have noticed, however, is that entries for groups in the ACL are being ignored. For example, a user is part of a group in Active Directory. If I create a folder, enable full access for this group, then share the folder (and define sharing permissions for the group), users in that group do not get access to it. If I make an entry in the ACL for the user itself, it works perfectly. This even applies to my domain administrator account: if I create a folder, give full control to the local administrators / domain administrators group, and physically log on to the server, I still do not get access; I need to explicitly add my own name to proceed. I am not sure what the problem is, and looking it up in Google was no help. Any assistance will be greatly appreciated.
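
    For reference, a sketch of granting a group both NTFS and share permissions from an elevated prompt (domain, group, and path are placeholders):

      :: placeholders: MYDOMAIN, ProjectUsers, D:\Shares\Projects
      icacls D:\Shares\Projects /grant "MYDOMAIN\ProjectUsers:(OI)(CI)M"
      net share Projects=D:\Shares\Projects /grant:"MYDOMAIN\ProjectUsers",CHANGE

    Two things commonly produce exactly these symptoms: group membership is only evaluated at logon (users added to a group must log off and back on before the new token applies), and on 2008 R2 UAC token filtering strips the Administrators group from non-elevated interactive sessions, which is why even a domain admin can be refused.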

    Read the article

  • Sharing two internet connections on my laptop running Windows XP

    - by ashwnacharya
    I have two internet connections: one is via our organization's corporate LAN, and the other is mobile broadband via a USB modem. Is there any way I can use both connections simultaneously? I want to use the corporate LAN for normal browsing and my email client, and the USB modem for establishing a VPN connection. Will I be able to maintain both connections simultaneously? Can I have parallel downloads, one over the corporate network and the other over mobile broadband? Will I be able to switch my browser between these two connections? My laptop runs Windows XP Service Pack 2.
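
    Windows will happily keep both interfaces up; which link a given packet uses is decided by the routing table, not by the application. A hedged sketch (all addresses are placeholders):

      :: send traffic for the VPN endpoint out through the USB modem's gateway
      route add 203.0.113.10 mask 255.255.255.255 192.168.2.1
      :: everything else follows the default route (corporate LAN)
      route print

    Per-application selection (e.g. pointing one browser at one link) is not something XP supports natively; the split happens per-destination via routes like the one above.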

    Read the article

  • Auto-detect proxy settings on network

    - by Ali Lown
    I am having problems trying to run web browser software on the local network through the proxy. When run from a profile drive that is on a network share, the browser is unable to auto-detect proxy settings; when run from the local C: drive, it autodetects the settings correctly. The error from the browser says it is unable to fetch the proxy configuration file. Is this some form of authentication preventing it from retrieving the settings when running from the network location? P.S. Would this be better off on Super User?
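
    One quick way to separate infrastructure problems from browser behaviour is to fetch the WPAD file by hand from both locations (the "wpad" hostname is the conventional default, an assumption here):

      # assumption: proxy auto-detection is WPAD-based on this network
      curl -v http://wpad/wpad.dat

    If that succeeds from the local drive but fails when run from the share, the problem is likely the logon/credential context the network-launched browser runs under rather than the proxy setup itself.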

    Read the article

  • Ubuntu VM Guest - Samba Service Not Accessible from VM Host via Hostname

    - by phalacee
    I have a Windows 7 workstation with an Ubuntu 10.10 VM running in VirtualBox 3.2.12 r68302. I recently updated Samba and winbind, and since the update I am unable to access the machine via its hostname (\\mystique) from the VM host. I can access it by the "host-only" IP (\\192.168.56.101) and the DHCP-assigned IP address (\\10.1.1.20), and I can connect to the web server on the machine via its hostname (http://mystique/). As stated, accessing this machine via its hostname worked fine prior to the update, but has since stopped working. I have added the hostname to smb.conf as the netbios name, to no avail. My smb.conf [global] section looks like this:

      [global]
      workgroup = NETWORK
      netbios name = Mystique
      server string = %h server (Samba, Ubuntu)
      dns proxy = no
      log file = /var/log/samba/log.%m
      max log size = 1000
      syslog = 0
      panic action = /usr/share/samba/panic-action %d
      encrypt passwords = true
      passdb backend = tdbsam
      obey pam restrictions = yes
      unix password sync = yes
      passwd program = /usr/bin/passwd %u
      passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
      pam password change = yes
      map to guest = bad user
      usershare allow guests = yes
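
    Resolving \\mystique by name is nmbd's job rather than smbd's, so a hedged first check is whether nmbd survived the update and is still registering the NetBIOS name (service names vary by release/packaging):

      testparm -s | grep -i netbios     # did smb.conf parse the name?
      nmblookup Mystique                # can NetBIOS resolve it at all?
      sudo service nmbd restart         # or: sudo /etc/init.d/samba restart

    The fact that http://mystique/ still works suggests DNS/mDNS is fine and only the NetBIOS path broke, which fits a stopped or misbehaving nmbd.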

    Read the article

  • Windows 7 won't read from NAS on LAN

    - by Alfy
    I've got a LinkStation NAS drive on a local network. Having just got a new laptop with Windows 7 Home Professional, I can no longer read anything off the drive. I've tried accessing the drive using \\192.168.1.55\share, using FTP programs such as WinSCP and FileZilla, and even using Firefox to hit ftp://192.168.1.55. The really annoying thing is that through all these methods I can see the files on the drive, which rules out any kind of connection issue. I can navigate the NAS file system, but as soon as I try to copy a file off the NAS, things just stop working. Accessing the drive through a Windows XP machine works fine. So far I've tried:

      - Disabling firewalls
      - Adding the LmCompatibilityLevel key to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
      - Using the 40-56 bit encryption instead of 128 bit

    Has anyone got any suggestions of what I can check or try? This is driving me crazy and I'm totally out of ideas.
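
    For completeness, a sketch of the registry change mentioned above as a single elevated command (value 1 permits the older LM/NTLM responses that aging NAS firmware often requires; the right value is a judgment call):

      :: run from an elevated prompt; reboot or log off afterwards
      reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v LmCompatibilityLevel /t REG_DWORD /d 1 /f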

    Read the article

  • Regarding Reinstalling PostgreSQL

    - by Vivalavista
    I was using PostgreSQL 8.4. I tried removing it through Synaptic Package Manager and then installing 9.1, but I still had version 8.4, so I deleted all the files associated with PostgreSQL. Now I am unable to install any version of PostgreSQL. When I try, I get this error:

      Setting up postgresql-9.1 (9.1.3-1~lucid) ...
      .: 12: Can't open /usr/share/postgresql-common/maintscripts-functions
      dpkg: error processing postgresql-9.1 (--configure):
       subprocess installed post-installation script returned error exit status 2
      dpkg: dependency problems prevent configuration of postgresql:
       postgresql depends on postgresql-9.1; however:
        Package postgresql-9.1 is not configured yet.
      dpkg: error processing postgresql (--configure):
       dependency problems - leaving unconfigured
      No apport report written because the error message indicates its a followup error from a previous failure.
      Errors were encountered while processing:
       postgresql-9.1
       postgresql
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    Please tell me how to remove PostgreSQL completely so I can install a fresh version.
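
    The missing maintscripts-functions file belongs to the postgresql-common package, which is why deleting files by hand left dpkg wedged. A hedged clean-slate sequence (destructive: it removes all cluster data and configuration):

      # destructive: wipes PostgreSQL data and config
      sudo apt-get purge postgresql postgresql-8.4 postgresql-9.1 \
           postgresql-common postgresql-client-common
      sudo rm -rf /var/lib/postgresql /etc/postgresql /etc/postgresql-common
      sudo apt-get update
      sudo apt-get install postgresql-9.1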

    Read the article

  • MySQL 5.1.49 freezing every two days

    - by maximus
    Hi all, our MySQL server is "freezing" every two days. By "freezing" I mean the following:

      - it doesn't respond to ping
      - we can't log in with SSH
      - we don't get any answer from MySQL
      - there is no entry in the error logs, neither from Linux nor from MySQL

    We have already changed to completely new hardware and have the same problem, so it's definitely not a hardware problem. We do not have any other software installed except a firewall (iptables rules). We can restart the server from another server using rsyslog (www.rsyslog.com) (software reset). Could someone give me some pointers on what I could do to figure out the problem? I have included every detail about our settings. Thank you in advance for your help. Max.

    Our system parameters and settings:

      - System memory: 12GB
      - Processor: Intel i7-920 quad-core
      - Operating system: Debian 5 (lenny) 64-bit
      - MySQL 5.1.49
      - Databases: (a) a small phpBB forum; (b) a 6GB database, 3 tables with about 15 million rows

    my.cnf:

      # The MySQL database server configuration file.
      #
      # You can copy this to one of:
      # - "/etc/mysql/my.cnf" to set global options,
      # - "~/.my.cnf" to set user-specific options.
      #
      # One can use all long options that the program supports.
      # Run program with --help to get a list of available options and with
      # --print-defaults to see which it would actually understand and use.
      #
      # For explanations see
      # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

      # This will be passed to all mysql clients
      # It has been reported that passwords should be enclosed with ticks/quotes
      # escpecially if they contain "#" chars...
      # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
      [client]
      port = 3306
      socket = /var/run/mysqld/mysqld.sock

      # Here is entries for some specific programs
      # The following values assume you have at least 32M ram

      # This was formally known as [safe_mysqld]. Both versions are currently parsed.
      [mysqld_safe]
      socket = /var/run/mysqld/mysqld.sock
      nice = 0

      [mysqld]
      #
      # * Basic Settings
      #
      user = mysql
      pid-file = /var/run/mysqld/mysqld.pid
      socket = /var/run/mysqld/mysqld.sock
      port = 3306
      basedir = /usr
      datadir = /var/lib/mysql
      tmpdir = /tmp
      language = /usr/share/mysql/english
      skip-external-locking
      #
      # Instead of skip-networking the default is now to listen only on
      # localhost which is more compatible and is not less secure.
      bind-address = our-ip-address
      #
      # * Fine Tuning
      #
      key_buffer = 16M
      max_allowed_packet = 16M
      thread_stack = 256K
      thread_cache_size = 32
      max_connections = 300
      table_cache = 2048
      #thread_concurrency = 4
      # Used for InnoDB tables recommended to 50%-80% available memory
      innodb_buffer_pool_size = 6G
      # 20MB sometimes larger
      innodb_additional_mem_pool_size = 20M
      # 8M-16M is good for most situations
      innodb_log_buffer_size = 8M
      # Disable XA support because we do not use it
      innodb-support-xa = 0
      # 1 is default wich is 100% secure but 2 offers better performance
      innodb_flush_log_at_trx_commit = 1
      innodb_flush_method = O_DIRECT
      #innodb_thread_concurency = 8
      # Recommended 64M - 512M depending on server size
      innodb_log_file_size = 512M
      # One file per table
      innodb_file_per_table
      #
      # * Query Cache Configuration
      #
      query_cache_limit = 1M
      query_cache_size = 16M
      #query_cache_type = 1
      #query_cache_min_res_unit= 2K
      #join_buffer_size = 1M
      #
      # * Logging and Replication
      #
      # Both location gets rotated by the cronjob.
      # Be aware that this log type is a performance killer.
      # As of 5.1 you can enable the log at runtime!
      #general_log_file = /var/log/mysql/mysql.log
      #general_log = 1
      #
      # Error logging goes to syslog. This is a Debian improvement :)
      #
      # Here you can see queries with especially long duration
      log_slow_queries = /var/log/mysql/mysql-slow.log
      long_query_time = 2
      log-queries-not-using-indexes
      #
      # The following can be used as easy to replay backup logs or for replication.
      #server-id = 1
      log_bin = /var/log/mysql/mysql-bin.log
      # WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian!
      expire_logs_days = 10
      max_binlog_size = 100M
      #binlog_do_db = include_database_name
      #binlog_ignore_db = include_database_name
      #
      # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
      # Read the manual for more InnoDB related options. There are many!
      # * InnoDB plugin
      # As of MySQL 5.1.38, the InnoDB plugin from Oracle is included in the MySQL source code.
      # It has many improvements and better performances than the built-in InnoDB storage engine.
      # Please read http://www.innodb.com/products/innodb_plugin/ for more information.
      # Uncommenting the two following lines to use the InnoDB plugin.
      ignore_builtin_innodb
      plugin-load=innodb=ha_innodb_plugin.so
      #
      # * Security Features
      #
      # Read the manual, too, if you want chroot!
      # chroot = /var/lib/mysql/
      #
      # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
      #
      # ssl-ca=/etc/mysql/cacert.pem
      # ssl-cert=/etc/mysql/server-cert.pem
      # ssl-key=/etc/mysql/server-key.pem

      [mysqldump]
      quick
      quote-names
      max_allowed_packet = 16M

      [mysql]
      #no-auto-rehash # faster start of mysql but no tab completition

      [isamchk]
      key_buffer = 16M

      #
      # * NDB Cluster
      #
      # See /usr/share/doc/mysql-server-*/README.Debian for more information.
      #
      # The following configuration is read by the NDB Data Nodes (ndbd processes)
      # not from the NDB Management Nodes (ndb_mgmd processes).
      #
      # [MYSQL_CLUSTER]
      # ndb-connectstring=127.0.0.1
      #
      # * IMPORTANT: Additional settings that can override those from this file!
      !includedir /etc/mysql/conf.d/

    UPDATE: After installing sysstat and configuring it to collect data every minute, I have data to share. I used sar to generate the output; the log file is too big to include here, so I uploaded it to box.net: http://www.box.net/shared/xc6rh7qqob

    SECOND UPDATE: We started a ping command in the background, and that solved the problem. The server has now been working for more than a week. We still don't know what the problem was.
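
    The "background ping fixes it" detail is a strong hint that the box itself was fine and the NIC or switch port was going quiet (power saving, offload bugs, or an expiring ARP/port-table entry). A hedged sketch of things to rule out on the interface (eth0 is an assumption):

      sudo ethtool eth0                      # link state and driver in use
      sudo ethtool -s eth0 wol d             # rule out wake-on-LAN oddities
      sudo ethtool -K eth0 tso off gso off   # rule out offload bugs

    None of these is a confirmed fix; they are simply the usual suspects when a host stops answering ping without logging anything.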

    Read the article

  • Is it possible to manually specify an alternative Procfile on Heroku?

    - by BillyBBone
    I have a repository that can be deployed in two modes: one is a front-end web application, while the other is a data-manipulating process that runs non-stop, 24x7. The application runs on Django and connects to a Postgres database. For architectural reasons that I won't go into, I'd like to deploy the app in front-end mode as one Heroku application, and deploy the same app (i.e. the same git repo) in data-agent mode as another Heroku application. Both applications will share the same Postgres connection string, and thus the data agent will feed the front-end app. Is it possible to maintain two separate Procfiles in one repo? This would cause the 3 appropriate dynos to start in front-end mode, and would spin up another process entirely in the other mode.
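
    Heroku reads exactly one Procfile per repo, but one file can declare both roles and each app can scale the role it doesn't need to zero. A sketch (process names, commands, and app names are illustrative):

      # Procfile (process names/commands are placeholders)
      web: gunicorn myproject.wsgi
      agent: python manage.py run_agent

      # scale each deployment to its role
      heroku ps:scale web=3 agent=0 --app myapp-frontend
      heroku ps:scale web=0 agent=1 --app myapp-agent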

    Read the article

  • Windows shutdown process termination sequence

    - by jpmartins
    I saw a weird situation today. I have a theory, but it would help to know more about the Windows shutdown process; if you have some knowledge about it, please share. A machine was shut down (at this moment I suspect unexpected maintenance), and on that machine there was a long-running process that was interrupted. Monitoring confirms that the process did not terminate normally. Looking at the logs for the long-running process, it seems it was just finishing, which seems highly improbable since it had been running for more than 6 hours (a bit more than the usual 5 hours). The process launches child processes and waits for results from them; I suspect poor error handling in the parent process, and that the shutdown terminated the child processes first.

    Read the article

  • Why won't vsftpd let me log in with a virtual user account?

    - by Ramon
    I would like to use vsftpd with virtual users and pam_pwdfile.so. I installed vsftpd and added two users (ramon and dragon) via htpasswd to the file /etc/vsftpd.passwd. /etc/pam.d/vsftpd is configured to use this file:

      auth    required pam_listfile.so item=user sense=deny file=/etc/ftpusers onerr=succeed
      auth    required pam_pwdfile.so pwdfile /etc/vsftpd.passwd
      account required pam_permit.so
      @include common-account
      @include common-session

    The user "ramon" is also present in /etc/passwd. Logging in to the FTP server as "ramon" works as expected, but a login as "dragon" does not: the result is always "Login failed: 530 Login incorrect." Since it's possible that I made a mistake, I tried the exact steps documented in /usr/share/doc/vsftpd/examples/VIRTUAL_USERS/README. Still no luck: I can log in as "ramon", but not as "dragon". Any ideas?
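
    One thing worth ruling out (an assumption, not a confirmed diagnosis): pam_pwdfile verifies classic crypt(3)-style hashes, while htpasswd writes Apache MD5 ("apr1") hashes by default, which it cannot check. Regenerating the entry with the crypt flag makes the hash readable to PAM:

      # -d = crypt() hashes (readable by pam_pwdfile); -b = password on the command line
      htpasswd -d -b /etc/vsftpd.passwd dragon 'secret'

    That would also explain the asymmetry if the two entries happened to be created with different htpasswd options.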

    Read the article

  • Why use multiple partitions on a RHEL server?

    - by Jakobud
    I'm about to reformat and reinstall CentOS onto an old server. The server runs on a modest 30-node small-business network and has a variety of responsibilities, including MySQL, a Samba share, DHCPd, and SVN/Trac. The old sysadmin had this server set up with almost a dozen different partitions for various things. I'm trying to understand what the advantages of multiple partitions are, as opposed to just one filesystem at /. Speed? Flexibility? Security? It seems like if you misjudge the necessary size for any given partition and it fills up too fast, a sysadmin has to go in and expand the partition, etc. It seems like it would be easier if everything were just one flat / filesystem, but I'm sure there are advantages I'm not aware of. The server is currently running a handful of HDDs raided to ~2TB (RAID 0).
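
    One concrete advantage worth illustrating: separate partitions can carry separate mount options, and a runaway log or upload can fill /var or /home without filling / and taking the whole system down. Hypothetical fstab lines:

      # hypothetical LVM volumes; the per-mount options are the point
      /dev/vg0/tmp   /tmp    ext3  noexec,nosuid,nodev  0 2
      /dev/vg0/var   /var    ext3  defaults             0 2
      /dev/vg0/home  /home   ext3  nosuid,nodev         0 2

    Using LVM underneath also blunts the "misjudged the size" objection, since logical volumes can be grown into free space later.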

    Read the article

  • Sending email using SMTP (Gmail) from Hudson CI

    - by jensendarren
    How can I set up Hudson CI so that I can send out emails from the server following a build failure? At the moment all I get is the following error:

      com.sun.mail.smtp.SMTPSendFailedException: 530 5.7.0 Must issue a STARTTLS command first

    One solution is to start Hudson as follows:

      java -Dmail.smtp.starttls.enable="true" -jar /usr/share/hudson/hudson.war

    However, I am already using the following to start Hudson:

      sudo /etc/init.d/hudson start

    I am thinking the solution is to somehow set the system property mail.smtp.starttls.enable in a property file somewhere, but I have no idea how to do that. What are my options? Thank you all in advance!
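
    On Debian-style packages the daemon's JVM arguments usually live in a defaults file rather than in the init script itself. A hedged sketch (the exact file and variable name depend on the Hudson package):

      # /etc/default/hudson  (file and variable names vary by package — an assumption)
      JAVA_ARGS="$JAVA_ARGS -Dmail.smtp.starttls.enable=true"

    After editing, restart with sudo /etc/init.d/hudson restart and re-test the email configuration from Hudson's system settings page.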

    Read the article

  • Building RPM containing passwords

    - by Kuf
    I need to be able to send an RPM to customers that will install the complete server, including Apache and MySQL; the customers will install it on a clean machine. After installation, the server should connect to our main DB, so I thought of including the password in the RPM somehow, encrypted if possible. The reason I'm asking is that I'm pretty sure it's not wise to save the password in the RPM scripts. I was hoping that someone else has had a similar problem and managed to solve it. If anyone knows a way to do this, or has a better idea, please share!
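
    A sketch of one workable compromise (illustrative only, not a recommendation for high-value secrets): ship the credential encrypted inside the package and have the %post scriptlet decrypt it with a key the installing admin supplies, so the plaintext never sits in the RPM payload or scripts:

      # at build time (file names and PKG_KEY are placeholders)
      openssl enc -aes-256-cbc -salt -in dbpass.txt -out dbpass.enc -pass env:PKG_KEY

      # in %post, with PKG_KEY exported by the installing admin
      openssl enc -d -aes-256-cbc -in /etc/myapp/dbpass.enc -out /etc/myapp/dbpass -pass env:PKG_KEY
      chmod 600 /etc/myapp/dbpass

    Anyone holding both the RPM and the key can recover the password, so this only raises the bar; it does not remove the trust problem.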

    Read the article

  • Custom Icon for NFS Volume Mount - Possible for OSX?

    - by James
    We are naming our various network volumes after planets. I renamed the Mercury.icns icon to .VolumeIcon.icns and copied it over to the mount-point folder on the NFS server. So far, remounting the NFS share does not seem to pick up this icon. Looking on the NFS server, there appear to be two VolumeIcon files. Can someone tell me what I am doing wrong? Permissions? Do I need a .DS_Store file there as well? It shouldn't be this hard! EDIT: Should have mentioned, the NFS server is Ubuntu 12.04.1, NOT an OS X server.
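
    A hedged detail that often trips this up: the Finder only honours .VolumeIcon.icns when the volume root also carries the "custom icon" attribute, which a plain Linux export can't set, so it has to be applied from the Mac side after mounting. With the developer tools installed:

      # mount point is illustrative; -a C sets the Finder "custom icon" flag
      SetFile -a C /Volumes/Mercury

    The duplicate VolumeIcon files seen on the server are likely the real file plus a Mac-created "._" AppleDouble companion, which is harmless.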

    Read the article

  • Sharing an Apache configuration between testing vs. production

    - by Kevin Reid
    I have a personal web site with a slightly nontrivial Apache configuration. I test changes on my personal machine before uploading them to the server. The path to the files on disk and the root URL of the site are of course different between the test and production conditions, and they occur in many places in the configuration (especially <Directory> blocks for special locations that have scripts, or no directory listing, or ...). What is the best way to share the common elements of the configuration, to make sure that my production environment matches my test environment as closely as possible? What I've thought of is to use SetEnv to store the paths for the current machine in environment variables, then Include a common configuration file with ${} everywhere there's something machine-specific. Any hazards of this method?
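
    One hazard with the SetEnv idea: SetEnv values are exported to CGI scripts at request time and are not available for ${VAR} interpolation when the configuration is parsed. The Define directive is built for exactly this (a sketch; paths are placeholders):

      # machine-local file, different on test and production
      Define SITE_ROOT /home/kevin/www

      # shared file, identical on both machines
      DocumentRoot ${SITE_ROOT}
      <Directory ${SITE_ROOT}/scripts>
          Options +ExecCGI
      </Directory>

    Note that Define requires Apache 2.4 (on 2.2 the nearest equivalents are -D startup flags or shell variables in envvars), so this hinges on which version both machines run.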

    Read the article

  • nginx 500 error instead of 404

    - by arby
    I have the following nginx configuration (at /etc/nginx/sites-available/default):

      server {
          listen 80; ## listen for ipv4; this line is default and implied
          listen [::]:80 default ipv6only=on; ## listen for ipv6

          root /usr/share/nginx/www;
          index index.php index.html index.htm;
          server_name _;

          location / {
              try_files $uri $uri/ /index.html;
          }

          error_page 404 /404.html;

          location ~ \.php$ {
              try_files $uri =404;
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_pass unix:/tmp/php5-fpm.sock;
              fastcgi_index index.php;
              include fastcgi_params;
          }

          location ~ /\.ht {
              deny all;
          }
      }

    Instead of a 404 error, I'm getting 500 server errors on broken URLs. How can I correct this?
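
    A hedged reading of the config: when a URL matches neither a file nor a directory, try_files falls back to /index.html; if that file doesn't exist under /usr/share/nginx/www (the index line suggests a PHP-driven site), the internal redirect loops and nginx surfaces it as a 500. Making the fallback an explicit 404 avoids that:

      location / {
          try_files $uri $uri/ =404;   # fall through to 404 instead of /index.html
      }

    The error log (/var/log/nginx/error.log) should confirm which file the fallback was hunting for.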

    Read the article

  • Using NDMP as an alternative to CIFS mount

    - by user138922
    I have a weird but interesting use case. I use CIFS to mount shares from a file server (NetApp, EMC, etc.) onto an application server (the Windows/Linux server where my application runs). My application needs to process each file from the shares that I mount via CIFS. It also needs access to the metadata of these files, such as name, size, ACLs, etc. I would like to see if I can achieve the same via NDMP. I have some very basic questions regarding this use case; it would be great if you could help me out. Is this even achievable? Can I transfer just the shares that interest me instead of an entire volume?

    Read the article

  • How to trigger cross-platform jobs between Windows and Unix

    - by andyb
    I have an application with components on Windows and Unix. I need to run overnight jobs that initiate tasks on both environments. These need to be sequenced, so I cannot simply use cron / Task Scheduler. For each job there will be a controlling script, on either Windows or Unix, but this script will need to initiate jobs on the other environment and detect when they are complete, along with a success/failure code. In the past I have achieved this using 'flag files' on a Samba share. This worked, but required polling behaviour on the receiving end, which I do not consider optimal. I would also prefer not to embed user credentials in scripts if possible.
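
    One pattern that removes both the polling and the embedded passwords (a sketch, assuming an SSH server on the Windows box and key-based authentication): let the Unix controller drive the remote side directly, so sequencing and exit codes come from the ssh calls themselves.

      # placeholders: host name, task name, and scripts are all illustrative
      ssh winbuild 'schtasks /Run /TN "NightlyExtract"' || exit 1
      ssh winbuild 'cscript //nologo C:\jobs\wait_extract.vbs' || exit 1
      ./load_warehouse.sh

    The point is that ssh returns the remote command's exit status, so each step gates the next without flag files or stored passwords (schtasks /Run returns immediately, hence the hypothetical wait script).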

    Read the article

  • Running Webapp on Mac in UTC (either changing MacBook timezone or tomcat timezone)

    - by Andy A
    To run my web app, I need to set my timezone to UTC on my MacBook. I can do this temporarily by opening a terminal and entering:

      sudo ln -sf /usr/share/zoneinfo/UTC /etc/localtime

    However, my timezone returns to normal when I restart my machine. Any advice?

    Edit: The response to this question by 'Celada' implies that I can just make my server UTC. I am using Apache Tomcat 7. Adding to Celada's response, how can I make it UTC?

    Update, 3rd April: Following Celada's response, I have tried adding SetEnv TZ UTC at the top of startup.sh. This didn't seem to make a difference. After some research, I tried adding export JAVA_OPTS="-Duser.timezone=UTC" to startup.sh, but this too had no effect. Am I adding the correct command to the correct file?
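
    For what it's worth, Tomcat's supported hook for JVM options is bin/setenv.sh, which catalina.sh sources on every start; edits to startup.sh are easy to get wrong and are lost on upgrade. A minimal sketch:

      # $CATALINA_HOME/bin/setenv.sh  (create it if absent)
      export CATALINA_OPTS="$CATALINA_OPTS -Duser.timezone=UTC"

    CATALINA_OPTS reaches only the server JVM; JAVA_OPTS would also be passed to the shutdown command, which is harmless here but worth knowing.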

    Read the article

  • Best way to backup and restore millions of files

    - by bongo
    Hi, I'm facing a rebuild of the volume on which I host the mail storage (Kerio mail server, which uses maildirs). I need to back up and restore, as quickly as possible, the 3.5+ million small files (about 600GB) in the store directory. It takes more than 12 hours via rsync to an NFS share, but I also have a 1TB FireWire 800 RAID 1 disk that I can use (from some preliminary tests it's faster). I'm working off an Intel Xserve. What is the fastest way to do it? rsync? Finder copy? tar?
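
    For millions of small files the per-file overhead dominates, so streaming a single archive usually beats file-by-file copies. A hedged sketch of the tar-pipe variant to the FireWire disk (paths are placeholders; -p preserves permissions):

      # paths are placeholders; destination directory must exist
      cd /var/kerio/store
      tar -cf - . | (cd /Volumes/fw_backup/store && tar -xpf -)

    rsync earns its keep on the second pass, when only deltas move, so one strategy is a tar seed copy followed by a short rsync while the mail server is briefly stopped.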

    Read the article

  • Active directory authentication for Ubuntu Linux login and cifs mounting home directories...

    - by Jamie
    I've configured my Ubuntu 10.04 Server LTS Beta 2, residing on a Windows network, to authenticate logins using Active Directory, then mount a Windows share to serve as the home directory. Here is what I did, starting from the initial installation of Ubuntu.

    Download and install Ubuntu Server 10.04 LTS Beta 2, then get updates:

      # sudo apt-get update && sudo apt-get upgrade

    Install an SSH server (sshd):

      # sudo apt-get install openssh-server

    Some would argue that you should "lock sshd down" by disabling root logins. I figure if you're smart enough to hack an SSH session for a root password, you're probably not going to be thwarted by the addition of PermitRootLogin no in the /etc/ssh/sshd_config file. If you're paranoid, or simply not convinced, then edit the file or give the following a spin:

      # (grep PermitRootLogin /etc/ssh/sshd_config && sudo sed -ri 's/(PermitRootLogin ).+/\1no/' /etc/ssh/sshd_config) || echo "PermitRootLogin not found. Add it manually."

    Install required packages:

      # sudo apt-get install winbind samba smbfs smbclient ntp krb5-user

    Do some basic networking housecleaning in preparation for the specific package configurations to come. Determine your Windows domain name, DNS server name, and the IP address of the Active Directory server (for Samba). For convenience I set environment variables for the Windows domain and DNS server. For me it was (my AD IP address was 192.168.20.11):

      # WINDOMAIN=mydomain.local && WINDNS=srv1.$WINDOMAIN

    If you want to figure out what your domain and DNS server is (I was a contractor and didn't know the network), check out this helpful reference.

    The authentication and file-sharing processes on the Windows and Linux boxes need their clocks to agree. Do this with an NTP service; on the server version of Ubuntu the NTP service comes installed and preconfigured. On the network I was joining, the DNS server was serving up NTP too:

      # sudo sed -ri "s/^(server[ \t]).+/\1$WINDNS/" /etc/ntp.conf

    Restart the NTP daemon:

      # sudo /etc/init.d/ntp restart

    We need to christen the Linux box on the new network; this is done by editing the hosts file (replace the DNS entry with the FQDN of the Windows DNS):

      # sudo sed -ri "s/^(127\.0\.0\.1[ \t]).*/\1$(hostname).$WINDOMAIN localhost $(hostname)/" /etc/hosts

    Kerberos configuration. The instructions that follow aren't to be taken literally: the values for MYDOMAIN.LOCAL and srv1.mydomain.local need to be replaced with what's appropriate for your network when you edit the files. Edit the (previously installed) /etc/krb5.conf file. Find the [libdefaults] section and change (or add) the key-value pair (and it is in UPPERCASE WHERE IT NEEDS TO BE):

      [libdefaults]
      default_realm = MYDOMAIN.LOCAL

    Add the following to the [realms] section of the file:

      MYDOMAIN.LOCAL = {
          kdc = srv1.mydomain.local
          admin_server = srv1.mydomain.local
          default_domain = MYDOMAIN.LOCAL
      }

    Add the following to the [domain_realm] section of the file:

      .mydomain.local = MYDOMAIN.LOCAL
      mydomain.local = MYDOMAIN.LOCAL

    Configure Samba. When it's all said and done, I don't know where Samba fits in ... I used cifs to mount the windows shares ... regardless, my system works and this is how I did it. Replace /etc/samba/smb.conf (remember, I was working from a clean distro of Ubuntu, so I wasn't worried about breaking anything):

      [global]
      security = ads
      realm = MYDOMAIN.LOCAL
      password server = 192.168.20.11
      workgroup = MYDOMAIN
      idmap uid = 10000-20000
      idmap gid = 10000-20000
      winbind enum users = yes
      winbind enum groups = yes
      template homedir = /home/%D/%U
      template shell = /bin/bash
      client use spnego = yes
      client ntlmv2 auth = yes
      encrypt passwords = yes
      winbind use default domain = yes
      restrict anonymous = 2

    Start and stop various services:

      # sudo /etc/init.d/winbind stop
      # sudo service smbd restart
      # sudo /etc/init.d/winbind start

    Set up the authentication. Edit /etc/nsswitch.conf; here are the contents of mine:

      passwd:     compat winbind
      group:      compat winbind
      shadow:     compat winbind
      hosts:      files dns
      networks:   files
      protocols:  db files
      services:   db files
      ethers:     db files
      rpc:        db files

    Start and stop various services again:

      # sudo /etc/init.d/winbind stop
      # sudo service smbd restart
      # sudo /etc/init.d/winbind start

    At this point I could log in; home directories didn't exist, but I could log in. Later I'll come back and add how I got the cifs automounting to work. Numerous resources were considered so I could figure this out. Here is a short list (a number of these links point to my own questions on the topic):

      - Samba Kerberos Active Directory WinBind
      - Mounting Linux user home directories on CIFS server
      - Authenticating OpenBSD against Active Directory
      - How to use Active Directory to authenticate linux users
      - Mounting windows shares with Active Directory permissions
      - Using Active Directory authentication with Samba on Ubuntu 9.10 server 64bit
      - How practical is to authenticate a Linux server against AD?
      - Auto-mounting a windows share on Linux AD login
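
    Since the post ends before the cifs piece, here is a hedged sketch of one common way to finish it, via pam_mount (package libpam-mount); the share path is hypothetical, and the winbind template homedir above determines the mount point:

      <!-- /etc/security/pam_mount.conf.xml — share path is a placeholder -->
      <volume user="*" fstype="cifs" server="srv1.mydomain.local"
              path="home$/%(USER)" mountpoint="/home/MYDOMAIN/%(USER)" />

    With pam_mount stacked in common-auth and common-session, the user's AD password is reused to mount the share at login.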

    Read the article

  • File Sharing: User-created folders are read-only to others on Mac 10.6 Server

    - by Anriëtte Combrink
    Hi there. We recently got a new Mac mini server with 10.6 Server on it. It has two 500GB volumes, one of which (Macintosh HD2, the one other than the boot disk) we are using to share our work files. I have added an account for each user in the Users pane of Server Preferences, and all our staff (users added to the system) are added to a new group called toolboxstaff. Now, when a user creates a new folder on this volume, folders are created with read-only access for everyone besides the owner. How do I set it so that when a user creates a folder, it is created with read-write access for the toolboxstaff group? Thanks in advance.
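
    A hedged sketch of the usual fix: put an inheritable ACL entry for the group on the share root, so every new folder and file picks it up regardless of the creator's umask (the volume path is illustrative):

      # volume path is illustrative; the inherit flags make new items pick up the ACE
      sudo chmod -R +a "group:toolboxstaff allow list,add_file,search,add_subdirectory,delete_child,readattr,writeattr,file_inherit,directory_inherit" "/Volumes/Macintosh HD2"

    Server Admin's File Sharing pane can set the same inherited ACE graphically; once it is in place, the POSIX mode bits on new folders stop mattering for group members.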

    Read the article

  • Two-state script monitor not auto-resolving in SCOM

    - by DeliriumTremens
    This script runs, and if it returns 'erro' an alert is generated in SCOM. The alert is set to resolve when the monitor returns to healthy state. I have tested the script with cscript and it returns the correct values in each state. I'm baffled as to why it generates an alert on 'erro' but will not auto-resolve on 'ok': Option Explicit On Error Resume Next Dim objFSO Dim TargetFile Dim objFile Dim oAPI, oBag Dim StateDataType Dim FileSize Set oAPI = CreateObject("MOM.ScriptAPI") Set oBag = oAPI.CreatePropertyBag() TargetFile = "\\server\share\file.zip" Set objFSO = CreateObject("scripting.filesystemobject") Set objFile = objFSO.GetFile(TargetFile) FileSize = objFile.Size / 1024 If FileSize < 140000 Then Call oBag.AddValue("State", "erro") Else Call oBag.AddValue("State", "ok") End If Call oAPI.AddItem(oBag) Call oAPI.Return(oBag) Unhealthy expression: Property[@Name='State'] Equals erro Health expression: Property[@Name='State'] Equals ok If anyone can shed some light onto what I might be missing, that would be great!

    Read the article
