Search Results

Search found 111524 results on 4461 pages for 'user mode linux'.


  • Allow multiple remote desktop connections from same user

    - by Shaharyar
    Hello everybody. We're just setting up a brand new Server 2008 R2 with Remote Desktop Services. Everything is installed fine so far; we have set it up with 5 CALs per user (not machine!) and they are activated and running. The problem / question here is: is it possible to log in with the same user multiple times? This worked perfectly fine in Windows Server 2003. We just want it to start a new session on the server with the same user. Has any of you got experience with that? Thanks!
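
    If it helps anyone landing here: multiple sessions per user are usually governed by the "Restrict each user to a single session" setting on the RD Session Host. A minimal sketch of flipping it from the command line, assuming the default registry location for that setting:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fSingleSessionPerUser /t REG_DWORD /d 0 /f

    The same toggle is exposed in the Remote Desktop Session Host Configuration console and via Group Policy; a reboot should not normally be needed, only a new logon.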

    Read the article

  • Windows 7 forgets my default settings

    - by j-t-s
    Hi all, I recently bought a new computer and Windows 7 Home Premium. I only have one small problem though. I have the option "Show window contents while dragging" enabled, but every time I restart the computer it reverts to disabled. The only thing I could think of is the system requirements etc., but this is not the case, as my computer more than meets the full requirements. Can somebody help me please? Thank you
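
    For what it's worth, that checkbox maps to a per-user registry value, so one hedged workaround (assuming nothing like a performance-options policy keeps resetting it) is to re-apply it at logon:

        reg add "HKCU\Control Panel\Desktop" /v DragFullWindows /t REG_SZ /d 1 /f

    If the value is being flipped back at every boot, a group policy or a "tuning" utility that adjusts visual effects is the usual suspect.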

    Read the article

  • GRE Tunnel over IPsec with Loopback

    - by Alek
    I'm having a really hard time trying to establish a VPN connection using a GRE over IPsec tunnel. The problem is that it involves some sort of "loopback" connection which I don't understand -- let alone am able to configure -- and the only help I could find is related to configuring Cisco routers. My network is composed of a router and a single host running Debian Linux. My task is to create a GRE tunnel over an IPsec infrastructure, which is particularly intended to route multicast traffic between my network, which I am allowed to configure, and a remote network, for which I only have a form containing some setup information (IP addresses and phase information for IPsec). For now it suffices to establish communication between this single host and the remote network, but in the future it will be desirable for the traffic to be routed to other machines on my network. As I said, this GRE tunnel involves a "loopback" connection which I have no idea how to configure. From my previous understanding, a loopback connection is simply a local pseudo-device used mostly for testing purposes, but in this context it might be something more specific that I do not have knowledge of. I have managed to properly establish the IPsec communication using racoon and ipsec-tools, and I believe I'm familiar with the creation of tunnels and addition of addresses to interfaces using ip, so the focus is on the GRE step. The worst part is that the remote peers do not respond to ping requests, and debugging the general setup is very difficult due to the encrypted nature of the traffic. There are two pairs of IP addresses involved: one pair for the GRE tunnel peer-to-peer connection and one pair for the "loopback" part. There is also an IP range involved, which is supposed to be the final IP addresses for the hosts inside the VPN. My question is: how (or if) can this setup be done? Do I need some special software or another daemon, or does the Linux kernel handle every aspect of the GRE/IPsec tunneling? Please let me know if any extra information could be useful. Any help is greatly appreciated.
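
    In Cisco-style GRE-over-IPsec designs the "loopback" addresses are usually nothing more exotic than extra /32 addresses used as the outer GRE endpoints, with IPsec protecting the traffic between those two addresses. On Linux the kernel handles GRE natively; a rough sketch with made-up addresses (the endpoint and inner tunnel IPs would come from the form you were given):

        # assign the local "loopback" endpoint address (placeholder 192.0.2.1) to a dummy interface
        ip link add dummy0 type dummy
        ip addr add 192.0.2.1/32 dev dummy0
        ip link set dummy0 up

        # GRE tunnel between the two endpoint addresses, with the inner peer-to-peer pair on top
        ip tunnel add gre1 mode gre local 192.0.2.1 remote 198.51.100.1 ttl 255
        ip addr add 10.0.0.1/30 dev gre1
        ip link set gre1 up
        ip route add 10.20.0.0/16 dev gre1   # remote VPN range via the tunnel

    The IPsec policy then only needs to match GRE (protocol 47) traffic between the two endpoint addresses; the multicast traffic rides inside the GRE tunnel.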

    Read the article

  • I used disk copy to clone my drive, now my windows 7 profile won't load correctly

    - by RzK
    I used EaseUS Disk Copy after Acronis, Clonezilla, and Windows image restore all failed me. Basically it copies all sectors; I set it to skip bad sectors (40). The source drive works, it just gave me a couple of errors and stopped booting at one point. The new drive is an identical copy, minus the 40 bad sectors. The drive is set to C and the active partition, and I rebuilt the boot order. I've run sfc /scannow and chkdsk /r; chkdsk found about 20 KB of bad sectors if I remember right. Now the issue: when I log into my profile (which was saved correctly), I get a blank light blue wallpaper (non-license), explorer.exe is not running, and there are only 4 processes running in Task Manager, including Task Manager itself. I would try a repair install, but Ctrl-E will not open anything; nothing will open once I force-start explorer.exe, almost as if all services are down. What should I do? A fresh install is almost not a possibility; I will try to fix this issue. sfc /scannow /offbootdir=c:\ /offwindir=c:\windows returns "Windows Resource Protection could not perform the requested operation"

    Read the article

  • Make exe or bat require admin privileges UAC

    - by petebob796
    I am trying to create an install CD that installs multiple Windows updates and hotfixes in one go, with Autorun.inf launching a .bat (or .exe) that runs each update in turn. Currently, if I run this .bat, each update brings up a UAC prompt individually, which can be annoying. However, if I run the .bat as administrator, it can launch and install each update with just one prompt. Is there a way to force the .bat (or .exe) to require admin privileges no matter who runs it?
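
    One common trick (a sketch, not the only way) is to have the batch file detect whether it is already elevated and, if not, relaunch itself through PowerShell with the RunAs verb, so the whole run gets a single UAC prompt:

        @echo off
        rem "net session" only succeeds in an elevated shell, so use it as an admin check
        net session >nul 2>&1
        if %errorlevel% neq 0 (
            powershell -Command "Start-Process -FilePath '%~f0' -Verb RunAs"
            exit /b
        )
        rem ... run each update here, e.g. wusa.exe some-update.msu /quiet /norestart ...

    For a compiled .exe the cleaner route is an embedded manifest with requestedExecutionLevel set to requireAdministrator.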

    Read the article

  • win2008 r2 IIS7.5 - setting up a custom user for an application pool, and trust issues

    - by Ken Egozi
    Scenario: a blank Win2008 R2 install. The goal was to have a couple of sites running with isolated app pools and dedicated users.
    - Created a new folder for a new website - c:\web\siteA\wwwroot, with the app (ASP.NET) deployed there in the /bin folder
    - Created a user named "appuser" and added it to the IIS_USERS group
    - Gave the website folder read and execute permissions for IIS_USERS and the appuser
    - Created the IIS site and set the app-pool identity to the appuser
    Now I'm getting a YSOD telling me that the trust level is too low - SecurityException: That assembly does not allow partially trusted callers.
    - Added <trust level="Full" /> to the web.config; did not help
    - Changing the app-pool user to Administrator makes the site run
    - Setting "anonymous user identity" to either IUSR or the app pool identity makes no difference
    Any idea? Is there a "step by step" howto guide for setting up users for isolated app pools on IIS 7.5?
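
    There doesn't seem to be a single official step-by-step for this, but as a sketch of the "isolated pool + dedicated user" part (pool name, user and path taken from the question; password and the .NET temp-folder path are assumptions that depend on your framework version/bitness):

        %windir%\system32\inetsrv\appcmd set apppool "SiteAPool" /processModel.identityType:SpecificUser /processModel.userName:appuser /processModel.password:P@ssw0rd
        icacls c:\web\siteA\wwwroot /grant appuser:(OI)(CI)RX
        icacls "%windir%\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files" /grant appuser:(OI)(CI)M

    The partially-trusted-callers error itself usually means the app really is running below Full trust despite the <trust> element, for example because a higher-level (machine) config locks that section; checking whether the machine-level web.config sets allowOverride="false" on <trust> is a reasonable next step.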

    Read the article

  • NFS inherit permissions from shared directory - Mac OS client

    - by devius
    Short question: Is there a way to have files on a NFS share on the client inherit the permissions of the shared directory?
    Scenario:
    - Ubuntu 12.04 server
    - Mac 10.7.4 client
    - shared directory has 775 permissions
    - created files on client have 644 permissions
    I tried setting ACLs with the setfacl command, as explained here, and it appears they are set on the server. getfacl returns this:

        # file: Documents/
        # owner: someguy
        # group: somegroup
        # flags: -s-
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:group:somegroup:rwx
        default:mask::rwx
        default:other::r-x

    However, when I create a new file on the Mac OS client it still has 644 permissions and not the 664 I would expect. Files created on the server have the expected permissions. Files created with another Ubuntu client also have the expected permissions.

    Read the article

  • after installing monit when i do monit status myproc i get "error connecting to the monit daemon"

    - by Jason
    After installing monit, when I do "monit status myproc" I get "error connecting to the monit daemon". I read somewhere that the status command won't work when monit is running in daemon mode without its HTTP support - the command 'monit status' in that case tries to get the status from the daemon via http/tcp. To start the HTTP interface you need to add the 'set httpd ...' statement to the configuration. Is that still correct? That post was from 2005.
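
    That is still the case: the monit CLI talks to the running daemon over its built-in HTTP interface. A minimal sketch of the statement to add to monitrc (port and allow list are just the conventional defaults):

        set httpd port 2812 and
            use address localhost
            allow localhost

    After editing, "monit reload" (or restarting monit) should make "monit status" work.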

    Read the article

  • AWS EC2 can't execute user-data script

    - by Bloodnut
    I'm pretty new to AWS and EC2, but I want to launch instances with a user-data script that runs after boot, started from another instance. I have installed the EC2 tools and ran the command as explained in various examples like http://www.turnkeylinux.org/blog/ec2-userdata and Eric Hammond's tutorials. However, when I actually use the command:

        ec2-run-instances --key my-key --user-data-file myscript my-ami

    it only launches the new instance but doesn't execute the script. myscript contains:

        #!/bin/bash
        echo "hello" > ~/output.txt

    I'm running Ubuntu Server 12.04 AMIs. The target AMIs are duplicates of the initiating instance. If I run curl http://169.254.169.254/latest/user-data, the imported script is there.
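
    A few things worth checking, as a sketch (paths are the standard cloud-init ones on Ubuntu 12.04): user-data scripts are executed by cloud-init, only on the first boot of an instance, and only when the payload starts with #! on the very first line. On the launched instance you can confirm what cloud-init received and whether it tried to run it:

        # what cloud-init stored for this instance
        sudo cat /var/lib/cloud/instance/user-data.txt
        # whether the user-data script ran, and any errors from it
        grep -iE "user-data|scripts.user|Traceback" /var/log/cloud-init.log

    Because the target AMIs are snapshots of an already-booted instance, it's also worth making sure cloud-init is still installed and enabled in them; if it is absent, the metadata is reachable (as your curl shows) but nothing ever executes it.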

    Read the article

  • How to disable text overwrite mode in Netbeans (CentOS)?

    - by Kevin Lee
    Every time I type some text, it overwrites what I typed before. I assume the editor is set to overwrite mode; I want to insert text, not overwrite it, but I can't disable it because my Insert key is combined with my Delete key, so every time I press Insert to disable overwrite mode it just deletes what I typed. So how do I disable this? I'm using CentOS, and the problem seems limited to NetBeans, because when I type here it is in insert mode, but in NetBeans it just overwrites the code. Help!

    Read the article

  • calling a different python interpreter from bash command line

    - by Dennis Daniels
    I have Python 2.7 installed:

        [user@localhost google_appengine]$ python
        Python 2.7 (r27:82500, Sep 16 2010, 18:03:06)
        [GCC 4.5.1 20100907 (Red Hat 4.5.1-3)] on linux2
        Type "help", "copyright", "credits" or "license" for more information.

    I want to use the Python 2.5.2 that is in this directory:

        [user@localhost Downloads]$ ls | grep "Python-2*"
        Python-2.5.2
        Python-2.5.2.tgz

    to run a Python script from the Khan Academy platform against a Google App Engine application:

        sudo python sample_data.py -a ~/workspace/GAE/google_appengine/appcfg.py upload

    Currently, when running the last script, Python 2.7 complains a lot (Google App Engine mostly targets Python 2.5.2, with 2.6 almost supported). I would like to do something like:

        sudo python env set ~/Downloads/Python-2.5.2 sample_data.py -a ~/workspace/GAE/google_appengine/appcfg.py upload

    Is this possible? If yes, please point the way. If not, please suggest a way to call Python 2.5.2 WITHOUT having to uninstall Python 2.7. Many, many thanks. Dennis
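
    Since what sits in ~/Downloads is the unpacked 2.5.2 source tree, one approach (a sketch, assuming it hasn't been built yet) is to build it in place and invoke that interpreter explicitly, leaving the system Python 2.7 untouched:

        cd ~/Downloads/Python-2.5.2
        ./configure && make                 # produces ./python inside the source tree
        sudo ./python ~/path/to/sample_data.py -a ~/workspace/GAE/google_appengine/appcfg.py upload

    "make altinstall" would install it on the PATH as python2.5 without replacing python, if you'd rather call it by name.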

    Read the article

  • sendmail on Ubuntu won't send from www-data user

    - by bumperbox
    If I call the mail() function in PHP from the webserver (running as www-data) I get an error sending email. If I call the same script from the command line logged in as root, it works. If I switch user to www-data and run it from the command line, I get this error message:

        WARNING: RunAsUser for MSP ignored, check group ids (egid=33, want=107)
        can not chdir(/var/spool/mqueue-client/): Permission denied
        Program mode requires special privileges, e.g., root or TrustedUser.
        FAILED
        WARNING: RunAsUser for MSP ignored, check group ids (egid=33, want=107)
        can not chdir(/var/spool/mqueue-client/): Permission denied
        Program mode requires special privileges, e.g., root or TrustedUser.
        FAILED
        Test Complete$ WARNING: RunAsUser for MSP ignored, check group ids (egid=33, want=107)

    I am guessing I need to do something in the sendmail configuration. I have googled for some solutions but have ended up more confused. Can someone let me know what configuration I need to change so I can send from the www-data user?
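
    The message is the sendmail mail-submission program (MSP) complaining that it runs with www-data's group (gid 33) instead of the group that owns /var/spool/mqueue-client (gid 107, normally smmsp), so it cannot enter the client queue directory. A hedged sketch of checking and repairing that on Ubuntu (the group name is an assumption; confirm it with the first command):

        getent group 107                 # confirm which group sendmail expects (usually smmsp)
        ls -ld /var/spool/mqueue-client  # confirm ownership/permissions on the client queue
        sudo sendmailconfig              # regenerates the config and resets queue permissions

    If the directory permissions look right, the other usual culprit is a missing setgid bit on the sendmail submission binary, which should be setgid smmsp.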

    Read the article

  • How to give wife emergency access to logins, passwords, etc.?

    - by Torben Gundtofte-Bruun
    I'm the digital guru in my household. My wife is good with email and forum websites, but she trusts me with all our important digital stuff -- such as online banking and other things that require passwords, but also family photos and the plethora of other digital things in a modern home. We discuss relevant actions, but it's always me who executes them. If I should get "hit by a bus", my wife would be thoroughly stranded -- she would have no idea what digital stuff is where on our computer, how to access it, what online accounts we have, or what their login credentials are. It would also leave my many public appearances (personal websites, email accounts, social networks, etc.) unresolved. To complicate things, I'm not one of those people who use "password" as their password everywhere; I use a mix of SuperGenPass and LastPass, and also two-factor authentication whenever possible. I don't have much hope that she would find her way through a written explanation of all that in a stressful situation. I could just tell her that she should ask my tech-savvy twin brother and then entrust him with my LastPass master passphrase. I feel that would have a high chance of success, but it's inelegant and leaves my wife without control of the information. How can I ensure that my wife has access to my digital remains?

    Read the article

  • MySQL: creating a user that can connect from multiple hosts

    - by DrStalker
    I'm using MySQL and I need to create an account that can connect from either localhost or from another server, 10.1.1.1. So I am doing:

        CREATE USER 'bob'@'localhost' IDENTIFIED BY 'password123';
        CREATE USER 'bob'@'10.1.1.1' IDENTIFIED BY 'password123';
        GRANT SELECT, INSERT, UPDATE, DELETE ON MyDatabase.* TO 'bob'@'localhost', 'bob'@'10.1.1.1';

    This works fine, but is there any more elegant way to create a user account that is linked to multiple IPs, or does it need to be done this way? My main worry is that in the future permissions will be updated for one 'bob' account and not the other.
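
    For what it's worth, MySQL treats each user@host pair as a fully separate account, so the common way to avoid maintaining duplicates is a single account whose host part is a wildcard. A sketch ('%' is broader than the two exact hosts, and on default installs the anonymous ''@'localhost' account can shadow it for local connections, so weigh that trade-off):

        CREATE USER 'bob'@'%' IDENTIFIED BY 'password123';
        GRANT SELECT, INSERT, UPDATE, DELETE ON MyDatabase.* TO 'bob'@'%';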

    Read the article

  • Ideas for campus Internet Login mechanism?

    - by miCRoSCoPiCeaRthLinG
    Hello, I work at this university and I'm seeking an effective solution for an internet login mechanism. We have a leased-link at our campus, which is shared by both staff & students. All systems (desktops + laptops + handhelds) connect to the internal network via wifi and can then get onto the net. However, a local govt. regulation requires us to keep track of individual internet usage and hence we need a solution (pref. free / opensource) that'll enable us to implement some sort of an authentication mechanism once a user hooks onto the network. One requirement is that the software should be able to authenticate either against LDAP or some other custom user database (MySQL based) or both. Can anyone suggest any such software or mechanism? Most of our servers are Linux based... so something that runs off such a platform will be good. Thanks, m^e

    Read the article

  • All traffic is passed through OpenVPN although not requested

    - by BFH
    I have a bash script on a Ubuntu box which searches for the fastest openvpn server, connects, and binds one program to the tun0 interface. Unfortunately, all traffic is being passed through the VPN. Does anybody know what's going on? The relevant line follows:

        openvpn --daemon --config $cfile --auth-user-pass ipvanish.pass --status openvpn-status.log

    There don't seem to be any entries in iptables when I enter sudo iptables --list. The config files look like this:

        client
        dev tun
        proto tcp
        remote nyc-a04.ipvanish.com 443
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        persist-remote-ip
        ca ca.ipvanish.com.crt
        tls-remote nyc-a04.ipvanish.com
        auth-user-pass
        comp-lzo
        verb 3
        auth SHA256
        cipher AES-256-CBC
        keysize 256
        tls-cipher DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA

    There is nothing in there that would direct everything through tun0, so maybe it's a new vagary of Ubuntu? I don't remember this happening in the past.
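
    Even though the client file shown has no redirect-gateway line, OpenVPN servers routinely push that option to clients, which would explain the behaviour. A hedged sketch of refusing pushed routes in the client config (route-nopull keeps the tunnel up but ignores routes the server pushes; you then add only what you want):

        # additions to the client config (sketch)
        route-nopull
        # optionally re-add specific networks through the tunnel:
        # route 203.0.113.0 255.255.255.0 vpn_gateway

    Checking the connection log (verb 3 already records the PUSH reply) should confirm whether redirect-gateway is being pushed.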

    Read the article

  • Can not su to normal user

    - by Summer Nguyen
    I have a CentOS 5.8 box with gitolite installed. It worked fine until yesterday, when gitolite stopped working (fatal: the remote end hung up unexpectedly). I logged into the box using the root account and then tried to su to the git user, but I can't. I tested again by creating a new user, but I can't su to that user either. Any idea? Thank you very much. P.S.: I installed postfix the day before, but I'm not sure if postfix caused the problem.
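
    A few low-risk things to look at first, as a sketch (standard CentOS paths; the exact error su prints would narrow this down a lot):

        getent passwd git              # does the account resolve, and what shell is set?
        ls -l /bin/bash /etc/nologin   # shell still executable? /etc/nologin blocking non-root logins?
        tail -n 50 /var/log/secure     # PAM normally logs the reason su failed here
        su - git -s /bin/bash          # try forcing a known-good shell

    Since postfix went on the day before, a PAM or shell change from that install is plausible but unproven; /var/log/secure should say either way.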

    Read the article

  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes.
    Background: the server runs a web application on Linux. Consider Apache jailed. Otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails.
    Here's my personal PRO/CON list (regarding removal) so far:
    PRO: I had been reading some suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host if an attacker attains unprivileged user permissions.
    CON: I can't live without Perl/Python, and a trojan/whatever could be written in a scripting language like that anyway, so why bother about removing gcc et al. at all. There is also a need to build new Linux kernels as well as some security tools from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere due to lack of another 64-bit hardware system.
    OK, so here are my questions for you:
    (a) Is my PRO/CON assessment correct?
    (b) Do you know of other PROs / CONs to removing all compiler tools? Do they weigh in more?
    (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with?
    (d) Is it OK to just move those binaries to a root-only accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time? (See the rough sketch after this list.)
    Thank you!
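
    On (d), a rough sketch of the containment idea, assuming a common /usr/bin layout (exact binary names vary, and package upgrades may silently restore modes, so treat this as a speed bump rather than removal):

        # make the toolchain unusable for non-root users instead of uninstalling it
        chmod 700 /usr/bin/gcc /usr/bin/cc /usr/bin/make /usr/bin/ld /usr/bin/as

    Whether that (or scp-ing the tools in on demand) buys much is exactly the PRO/CON trade-off above: an attacker with an unprivileged shell can still bring statically linked binaries or use the scripting interpreters that stay installed.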

    Read the article

  • SSSD Authentication

    - by user24089
    I just built a test server running OpenSuSE 12.1 and am trying to learn how to configure sssd, but am not sure where to begin looking for why my config does not let me authenticate.

        server:/etc/sssd # cat sssd.conf
        [sssd]
        config_file_version = 2
        reconnection_retries = 3
        sbus_timeout = 30
        services = nss,pam
        domains = test.local

        [nss]
        filter_groups = root
        filter_users = root
        reconnection_retries = 3

        [pam]
        reconnection_retries = 3

        # Section created by YaST
        [domain/mose.cc]
        access_provider = ldap
        ldap_uri = ldap://server.test.local
        ldap_search_base = dc=test,dc=local
        ldap_schema = rfc2307bis
        id_provider = ldap
        ldap_user_uuid = entryuuid
        ldap_group_uuid = entryuuid
        ldap_id_use_start_tls = True
        enumerate = False
        cache_credentials = True
        chpass_provider = krb5
        auth_provider = krb5
        krb5_realm = TEST.LOCAL
        krb5_kdcip = server.test.local

        server:/etc # cat ldap.conf
        base dc=test,dc=local
        bind_policy soft
        pam_lookup_policy yes
        pam_password exop
        nss_initgroups_ignoreusers root,ldap
        nss_schema rfc2307bis
        nss_map_attribute uniqueMember member
        ssl start_tls
        uri ldap://server.test.local
        ldap_version 3
        pam_filter objectClass=posixAccount

        server:/etc # cat nsswitch.conf
        passwd:     compat sss
        group:      files sss
        hosts:      files dns
        networks:   files dns
        services:   files
        protocols:  files
        rpc:        files
        ethers:     files
        netmasks:   files
        netgroup:   files
        publickey:  files
        bootparams: files
        automount:  files ldap
        aliases:    files
        shadow:     compat

        server:/etc # cat krb5.conf
        [libdefaults]
        default_realm = TEST.LOCAL
        clockskew = 300

        [realms]
        TEST.LOCAL = {
            kdc = server.test.local
            admin_server = server.test.local
            database_module = ldap
            default_domain = test.local
        }

        [logging]
        kdc = FILE:/var/log/krb5/krb5kdc.log
        admin_server = FILE:/var/log/krb5/kadmind.log
        default = SYSLOG:NOTICE:DAEMON

        [dbmodules]
        ldap = {
            db_library = kldap
            ldap_kerberos_container_dn = cn=krbContainer,dc=test,dc=local
            ldap_kdc_dn = cn=Administrator,dc=test,dc=local
            ldap_kadmind_dn = cn=Administrator,dc=test,dc=local
            ldap_service_password_file = /etc/openldap/ldap-pw
            ldap_servers = ldaps://server.test.local
        }

        [domain_realm]
        .test.local = TEST.LOCAL

        [appdefaults]
        pam = {
            ticket_lifetime = 1d
            renew_lifetime = 1d
            forwardable = true
            proxiable = false
            minimum_uid = 1
            clockskew = 300
            external = sshd
            use_shmem = sshd
        }

    If I log onto the server as root I can su to an ldap user; however, if I try to log in on the local console or via ssh remotely, I am unable to authenticate. getent doesn't show the ldap entries for users. I'm not sure if I need to look at LDAP, nsswitch, or what:

        server:~ # ssh localhost -l test
        Password:
        Password:
        Password:
        Permission denied (publickey,keyboard-interactive).

        server:~ # su test
        test@server:/etc> id
        uid=1000(test) gid=100(users) groups=100(users)

        server:~ # tail /var/log/messages
        Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): system info: [Client not found in Kerberos database]
        Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): authentication failure; logname=LOGIN uid=0 euid=0 tty=/dev/ttyS1 ruser= rhost= user=test
        Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): received for user test: 4 (System error)
        Nov 24 09:36:44 server login[14508]: FAILED LOGIN SESSION FROM /dev/ttyS1 FOR test, System error

        server:~ # vi /etc/pam.d/common-auth
        auth required pam_env.so
        auth sufficient pam_unix2.so
        auth required pam_sss.so use_first_pass

        server:~ # vi /etc/pam.d/sshd
        auth     requisite pam_nologin.so
        auth     include   common-auth
        account  requisite pam_nologin.so
        account  include   common-account
        password include   common-password
        session  required  pam_loginuid.so
        session  include   common-session
        session  optional  pam_lastlog.so silent noupdate showfailed
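
    One detail that stands out in the pasted sssd.conf: the [sssd] section says "domains = test.local", but the only domain section is named [domain/mose.cc]. SSSD only activates domain sections whose names match the domains list, so with this file the test.local domain has no configuration at all. A sketch of the minimal fix, assuming test.local is the intended name (rename the section header, then restart and clear the cache):

        # in /etc/sssd/sssd.conf: rename the section header
        #   [domain/mose.cc]  ->  [domain/test.local]
        rcsssd stop                # openSUSE init script; systemctl works on newer releases
        rm -f /var/lib/sss/db/*    # drop cached entries (assumes nothing cached is worth keeping)
        rcsssd start
        getent passwd test         # should now resolve through the sss backend

    The "Client not found in Kerberos database" line in /var/log/messages is a separate hint: once the domain loads, the user still needs a matching principal (test@TEST.LOCAL) in the KDC for krb5 authentication to succeed.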

    Read the article

  • Setting user calendar permissions on Exchange 2007

    - by blizz
    We have Exchange 2007 with about 100 users. I would like to change everyone's free/busy permissions to grant Reviewer status to a specific AD group. I have tried the PFDAVAdmin tool, but when I commit any changes they do not affect the users. If I grant myself Reviewer permissions on another user's calendar using the tool, I still cannot view that user's free/busy details, and I also don't show up in the list of people with permissions in that user's Outlook calendar options. It seems like PFDAVAdmin appears to do something but doesn't actually change anything. Is there any other way for me to accomplish what I need to do? Or is there something I may not be doing right with PFDAVAdmin? FYI, I have followed the directions from this link: http://exchangeshare.wordpress.com/2008/05/27/faq-give-calendar-read-permission-on-all-mailboxes-pfdavadmin/

    Read the article

  • Redis 2.0.3 would not let go of deleted appendonly.aof file after BGREWRITEAOF

    - by Alexander Gladysh
    Ubuntu 10.04.2, Redis 2.0.3 (more details at the end of the question). My AOF file for Redis is getting too large, to the point where it will soon threaten to take up all the free disk space on my small-HDD VPS box:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda              32G   24G  6.7G  78% /

        $ ls -la
        total 3866688
        drwxr-xr-x  2 redis redis       4096 2011-03-02 00:11 .
        drwxr-xr-x 29 root  root        4096 2011-01-24 15:58 ..
        -rw-r-----  1 redis redis 3923246988 2011-03-02 00:14 appendonly.aof
        -rw-rw----  1 redis redis   32356467 2011-03-02 00:11 dump.rdb

    When I run BGREWRITEAOF, the AOF file shrinks, but disk space is not freed:

        $ ls -la
        total 95440
        drwxr-xr-x  2 redis redis     4096 2011-03-02 00:17 .
        drwxr-xr-x 29 root  root      4096 2011-01-24 15:58 ..
        -rw-rw----  1 redis redis 65137639 2011-03-02 00:17 appendonly.aof
        -rw-rw----  1 redis redis 32476167 2011-03-02 00:17 dump.rdb

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda              32G   24G  6.7G  78% /

    Sure enough, Redis is still holding the deleted file:

        $ sudo lsof -p6916
        COMMAND   PID   USER  FD  TYPE DEVICE   SIZE/OFF   NODE NAME
        ...
        redis-ser 6916  redis  7r  REG  202,0 3923957317 918129 /var/lib/redis/appendonly.aof (deleted)
        ...
        redis-ser 6916  redis 10w  REG  202,0   66952615 917507 /var/lib/redis/appendonly.aof
        ...

    How can I work around this issue? I can restart Redis this time, but I would really like to avoid doing this on a regular basis. Note that I can not upgrade to 2.2 (upgrade to 2.0.4 is feasible though). More information on my system:

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 10.04.2 LTS
        Release:        10.04
        Codename:       lucid

        $ uname -a
        Linux my.box 2.6.32.16-linode28 #1 SMP Sun Jul 25 21:32:42 UTC 2010 i686 GNU/Linux

        $ redis-cli info
        redis_version:2.0.3
        redis_git_sha1:00000000
        redis_git_dirty:0
        arch_bits:32
        multiplexing_api:epoll
        process_id:6916
        uptime_in_seconds:632728
        uptime_in_days:7
        connected_clients:2
        connected_slaves:0
        blocked_clients:0
        used_memory:65714632
        used_memory_human:62.67M
        changes_since_last_save:8398
        bgsave_in_progress:0
        last_save_time:1299014574
        bgrewriteaof_in_progress:0
        total_connections_received:17
        total_commands_processed:55748609
        expired_keys:0
        hash_max_zipmap_entries:64
        hash_max_zipmap_value:512
        pubsub_channels:0
        pubsub_patterns:0
        vm_enabled:0
        role:master
        db0:keys=1,expires=0
        db1:keys=18,expires=0

    Read the article

  • ext4 filesystem corruption -- maybe hardware error?

    - by pts
    I'm getting these errors in dmesg about half an hour after I turn on the computer:

        [ 1355.677957] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318420: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251700 offset=0(0), inode=1802725748, rec_len=179136, name_len=32
        [ 1355.677973] Aborting journal on device sda2-8.
        [ 1355.678101] EXT4-fs (sda2): Remounting filesystem read-only
        [ 1355.690144] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318416: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251699 offset=0(0), inode=2194783952, rec_len=53280, name_len=152
        [ 1356.864720] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1312795: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251176 offset=1460(13748), inode=1432317541, rec_len=208208, name_len=119

    /dev/sda is an SSD, and it's using the noop scheduler. /etc/fstab entry:

        UUID=acb4eefa-48ff-4ee1-bb5f-2dccce7d011f / ext4 errors=remount-ro,noatime,discard,user_xattr 0 1

    System information:

        $ cat /proc/mounts | grep /dev/sd
        /dev/sda1 /boot ext2 rw,noatime,errors=continue 0 0

        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.3 LTS"

        $ uname -a
        Linux leetpad 2.6.35-30-generic-pae #61~lucid1-Ubuntu SMP Thu Oct 13 21:14:29 UTC 2011 i686 GNU/Linux

    I've run memtest for 7 hours; it didn't find any memory errors. Any obvious ideas what can go wrong in this case? The most reasonable thing I can imagine is that the SSD is silently dropping some write requests, which eventually leads to an EXT4 filesystem inconsistency (but no disk I/O errors). How can this happen? Is there a relevant configuration option I should ensure is set correctly? What tools should I use to diagnose the hardware failures? Would it be possible to diagnose the SSD failure without overwriting data?
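
    On the diagnosis question, a non-destructive sketch (all of these are read-only as written; smartmontools and e2fsprogs are assumed to be installed):

        sudo smartctl -a /dev/sda            # SMART health, error log, media wear indicators
        sudo badblocks -sv -b 4096 /dev/sda  # read-only surface scan (no -w/-n, so data is untouched)
        sudo fsck.ext4 -fn /dev/sda2         # from a live CD / with sda2 unmounted; -n reports only

    If SMART looks clean, the discard mount option is also worth suspecting: some early SSD firmware combined with kernel TRIM support of that era caused corruption for some people, and temporarily removing discard from fstab is a low-risk experiment.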

    Read the article

  • User not found for cn=config in OpenLDAP?

    - by Nick
    We're running OpenLDAP on Ubuntu 10.04. I'm able to access and use the front end with cn=admin,dc=ourcompany,dc=com and my password. But I'm unable to change the server's configuration (like loglevel) stored in cn=config, because I don't seem to have a valid user/password for the backend. Some examples:

        # ldapsearch
        SASL/DIGEST-MD5 authentication started
        Please enter your password:
        ldap_sasl_interactive_bind_s: Invalid credentials (49)
                additional info: SASL(-13): user not found: no secret in database

    or

        # ldapadd -x -D "cn=admin,cn=config" -W -f "my.ldif"
        Enter LDAP Password:
        ldap_bind: Invalid credentials (49)

    How do I create a user for the cn=config backend?
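
    On Ubuntu 10.04 the cn=config database normally has no password of its own; it is administered as root over the ldapi:/// socket using the SASL EXTERNAL mechanism. A sketch of both approaches (the {SSHA} value is a placeholder; generate a real one with slappasswd):

        # option 1: just use SASL EXTERNAL as root, no password needed
        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config dn

        # option 2: give cn=config a password-based admin by adding olcRootPW
        # contents of rootpw.ldif (no leading whitespace in the real file):
        #   dn: olcDatabase={0}config,cn=config
        #   changetype: modify
        #   add: olcRootPW
        #   olcRootPW: {SSHA}xxxxxxxxxxxxxxxxxxxxxxxx
        sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f rootpw.ldif

    If option 2 complains that {0}config has no olcRootDN, the same ldapmodify can add one (e.g. cn=admin,cn=config) before the olcRootPW attribute.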

    Read the article

  • Windows AD, bulk user creation, homedrv creation via commandline

    - by Neil
    I am bulk-creating AD users from the command line (dsadd) and, while doing so, setting the home directory and home drive to a DFS location. I have observed that when I create a user with all these settings via the GUI (dsa.msc), the home folder gets created on the DFS share with all the permissions set correctly. But when using dsadd, the folder is not created. How can I replicate this GUI behaviour from the command line when creating the user? I don't really want to rely on logon scripts to set it up. Do I have to use mkdir and cacls and something else to give the user ownership? Or maybe I am missing something easy. Any help much appreciated!
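
    As far as I know the folder creation really is a dsa.msc convenience, so the command-line equivalent is to pre-create and permission the folder yourself right after dsadd. A sketch with a placeholder path and user (icacls is the modern replacement for cacls on 2008-era servers):

        mkdir \\dfsroot\home\jsmith
        icacls \\dfsroot\home\jsmith /grant jsmith:(OI)(CI)M
        icacls \\dfsroot\home\jsmith /setowner jsmith

    In a bulk script the same three lines run once per user, fed from the same list that drives dsadd.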

    Read the article

  • Load balancing with rsync

    - by David
    I have two servers with public IPs:

        SERVER A - 10.10.10.11
        SERVER B - 10.10.10.12

    Both run CentOS 6 with nginx and php-fpm installed, and the exact same website stored at /var/www/html. The domain is myxdomain.com, with DNS hosted at CloudFlare (since CloudFlare supports round robin), pointing the domain to A records for 10.10.10.11 and 10.10.10.12. I know that round-robin DNS does not cover failover, but that does not matter. What I need is: how do I keep the content of /var/www/html on server A and server B exactly the same? Let's say:
    1) A user uploads their file to server A; the file should be synced to server B as well.
    2) A user uploads their file to server B; the file should be synced to server A as well.
    Would rsync be a good choice here? Any example of a suitable command line and cron schedule? Thanks
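
    A minimal sketch of the cron/rsync approach (key-based root SSH between the two boxes is assumed; note that plain two-way rsync has no conflict handling and does not propagate deletions, which is why tools like unison or lsyncd, or shared storage such as NFS/GlusterFS, are often preferred for upload directories):

        # crontab on server A: push to B every minute; mirror the job on B pointing back at A
        * * * * * rsync -azu -e ssh /var/www/html/ root@10.10.10.12:/var/www/html/

    The -u flag skips files that are newer on the receiving side, which is the minimum needed to keep the two one-way jobs from clobbering fresh uploads.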

    Read the article
