Search Results

Search found 10747 results on 430 pages for 'password'.


  • Nginx Server Block Port 8081 Path to Root Folder

    - by Pamela
    I'm trying to password protect all of port 8081 on my Nginx server. The only thing this port is used for is phpMyAdmin. When I navigate to https://www.example.com:8081, I successfully get the default Nginx welcome page. However, when I try navigating to the phpMyAdmin directory, https://www.example.com:8081/phpmyadmin, I get a "404 Not Found" page. Permissions on my htpasswd file are set to 644. Here is the code for my server block:

        server {
            listen 8081;
            server_name example.com www.example.com;
            root /usr/share/phpmyadmin;
            auth_basic "Restricted Area";
            auth_basic_user_file htpasswd;
        }

    I have also tried commenting out the root directive entirely (#root /usr/share/phpmyadmin;), but it doesn't make any difference. Is my problem confined to using an incorrect root path? If so, how can I find the root path for phpMyAdmin? If it makes any difference, I'm using Ubuntu 14.04.1 LTS with Nginx 1.4.6 and ISPConfig 3.0.5.4p3.
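
    One thing worth noting: with root /usr/share/phpmyadmin, a request for /phpmyadmin maps to /usr/share/phpmyadmin/phpmyadmin on disk, which doesn't exist -- hence the 404. The sketch below serves phpMyAdmin at the root of port 8081 instead, with basic auth in front and an absolute path to the htpasswd file (a bare "htpasswd" is resolved relative to Nginx's configuration prefix). The PHP-FPM socket path is an assumption; adjust it to the local setup.

        server {
            listen 8081;
            server_name example.com www.example.com;
            root /usr/share/phpmyadmin;
            index index.php index.html;

            auth_basic "Restricted Area";
            auth_basic_user_file /etc/nginx/htpasswd;   # absolute path, readable by the nginx worker user

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;   # assumed socket; check /etc/php5/fpm/pool.d/
            }
        }

    With this layout, phpMyAdmin would answer at https://www.example.com:8081/ itself rather than under /phpmyadmin.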


  • Samba between Ubuntu server 10.10 and Windows Vista, Windows 7

    - by chepukha
    Hi all, I have a Linux box running Ubuntu Server 10.10. I have installed Samba on it and want to share files with my laptops, which run Windows Vista Home and Windows 7 Home. I have been struggling with the setup for almost a month but couldn't get it right. If I try to access the shared folder from Windows Vista, I get the message "Windows cannot access \\server_ip_address. Error code: 0x80070035. The network path was not found." If I access it from Windows 7, then after entering the password to log in I can see the list of shared folders on the Linux box. But if I click on a shared folder, I get the same error message as above. Tailing /var/log/samba/log.windows7-pc, I get the following message:

        [2011/03/16 00:17:41.427238, 0] smbd/service.c:988(make_connection_snum)
          canonicalize_connect_path failed for service sharemedia, path /root/sharemedia

    Here are my settings in smb.conf:

        [global]
        share modes = yes
        netbios name = Samba
        workgroup = WORKGROUP
        wins support = yes
        encrypt passwords = true

        [sharemedia]
        comment = Testing sharing using Samba
        path = /root/sharemedia/
        public = yes
        valid users = samba_usr_name
        ; make sure all files are sensible permissions
        create mask = 0660
        force create mask = 0660
        directory mask = 2770
        force directory mask = 2770
        directory security mask = 0000
        ; Normal share parameters
        read only = no
        browseable = yes
        writable = yes
        guest ok = no
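
    That canonicalize_connect_path failure usually means smbd cannot resolve or enter the share path. /root is normally mode 700, so a non-root Samba user can't traverse into /root/sharemedia at all. A sketch of one way around it, moving the share to a world-traversable location (the path and the service command are assumptions for Ubuntu 10.10 -- adjust as needed):

        sudo mkdir -p /srv/sharemedia
        sudo chown samba_usr_name /srv/sharemedia
        sudo chmod 2770 /srv/sharemedia

        # then point the share at the new path in smb.conf:
        #   [sharemedia]
        #   path = /srv/sharemedia

        sudo service smbd restart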


  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up OpenVPN, but now I want to integrate a username/password authentication method. Even though I haven't added auth-nocache to the server config, whenever I try to connect, the client side returns the following message:

        ERROR: could not read Auth username from stdin

    My server.conf file contains basic stuff; everything works up until I try to implement this form of authentication:

        mode server
        dev tun
        proto tcp
        port 1194
        keepalive 10 120
        plugin /usr/lib/openvpn/openvpn-auth-pam.so login
        client-cert-not-required
        username-as-common-name
        auth-user-pass-verify /etc/openvpn/auth.pl via-env
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        user nobody
        group nogroup
        server 10.8.0.0 255.255.255.0
        persist-key
        persist-tun
        #persist-local-ip
        status openvpn-status.log
        verb 3
        client-to-client
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.8.0.1"
        log-append /var/log/openvpn
        comp-lzo

    I searched all over the net for a solution, and all the answers seem to be related to the auth-nocache parameter, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication. A failed authentication should result in exit 1, while a successful one should result in exit 0. For testing, the auth.pl script returns exit 0 no matter what the input is, but it seems the file is not executed before the error is raised. auth.pl file contents:

        #!/usr/bin/perl
        my $user = $ENV{username};
        my $passwd = $ENV{password};
        printf("$user : $passwd\n");
        exit 0;

    Any ideas?
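
    For what it's worth, that error is produced on the client before anything reaches the server: the client config asks for credentials (auth-user-pass) but OpenVPN has no terminal to prompt on, which typically happens when it is started as a daemon or service. A hedged client-side sketch that sidesteps the prompt by reading credentials from a file (the remote address and file path are placeholders):

        client
        dev tun
        proto tcp
        remote vpn.example.com 1194
        # username on line 1, password on line 2; chmod 600 the file
        auth-user-pass /etc/openvpn/credentials
        ca ca.crt
        nobind
        persist-key
        persist-tun

    Running the client interactively from a terminal (so it can actually prompt on stdin) would also tell you whether the daemon context is the problem.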


  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but we have recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly:

        scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/

    However, our previous routine iterates through directories on the source disk to determine which files to copy, then runs them individually through gpg encryption. Is there any way to do this using only scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the target system. The old job would look something like this:

        # Change to source dir
        cd /mnt/disk1

        # Create variable to store directories named by date YYYYMMDD
        j="20000101/"

        # Iterate through directories in the current dir
        # to get the most recent folder name
        for i in $(ls -d */); do
            if [ "$j" \< "$i" ]; then
                j=${i%/*}
            fi
        done

        # Encrypt individual files from $j to target directory
        cd ./${j%%}/bsource/
        for k in $(ls -p | grep -v /$); do
            sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$k.gpg" "$/mnt/disk1/$j/bsource/$k"
        done

    Can anyone suggest how to do this via scp from the target system? Thanks in advance.
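
    scp by itself cannot list a remote directory, so the "find the newest YYYYMMDD directory" step needs another mechanism. If the key-restricted account also permits sftp (scp-only lockdowns sometimes don't; that's an assumption to verify), a batch-mode sftp listing can stand in for ls. A rough sketch, with host and paths as placeholders:

        #!/bin/bash
        # list the remote date-named directories and keep the lexically greatest
        # (== most recent); assumes OpenSSH sftp is allowed for this key
        latest=$(printf 'cd /mnt/disk1\nls -1\n' | sftp -b - [email protected] 2>/dev/null \
                 | grep -E '^[0-9]{8}' | sort | tail -n 1)

        # pull just that directory, then gpg-encrypt locally as the old job did
        scp -r -p -B "[email protected]:/mnt/disk1/$latest/bsource" /mnt/disk2/staging/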


  • Apache + Codeigniter + New Server + Unexpected Errors

    - by ngl5000
    Alright, here is the situation: I used to have my CodeIgniter site at Bluehost, where I did not have root access. I have since moved the site to Rackspace. I have not changed any of the PHP code, yet there has been some unexpected behavior.

    Unexpected behavior 1: http://mysite.com/robots.txt resolves to the robots file on both the old and new setups. But http://mysite.com/robots.txt/ (trailing slash) resolved to my CodeIgniter 404 error page on the old Bluehost setup, while the Rackspace config resolves to:

        Not Found
        The requested URL /robots.txt/ was not found on this server.

    This instance leads me to believe there could be a problem with my mod_rewrite rules, or lack thereof. The first setup produces the error correctly through PHP, while in the second it seems the server handles the error itself.

    Unexpected behavior 2 is even more troubling: http://mysite.com/search/term/9 x 1-1%2F2 white/ on the new site results in:

        Bad Request
        Your browser sent a request that this server could not understand.

    On the old site it results in the actual page being loaded and the search term being unencoded. I have to assume this has something to do with the fact that on the new server I went from a root-level .htaccess file to httpd.conf plus the default and default-ssl virtual host files. Here they are.

    Default file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www

            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                RewriteEngine On
                RewriteBase /

                # force no www. (also does the IP thing)
                RewriteCond %{HTTPS} !=on
                RewriteCond %{HTTP_HOST} !^mysite\.com [NC]
                RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L]

                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

                # index.php: remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Default-ssl file (the long stock mod_ssl comment blocks are left out here for brevity):

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www

            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                RewriteEngine On
                RewriteBase /

                RewriteCond %{SERVER_PORT} !^443
                RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L]

                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

                # index.php: remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            SSLEngine on

            # Use our self-signed certificate by default
            SSLCertificateFile /etc/apache2/ssl/certs/www.mysite.com.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.mysite.com.key

            #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt
            #SSLCACertificatePath /etc/ssl/certs/
            #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt
            #SSLCARevocationPath /etc/apache2/ssl.crl/
            #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl
            #SSLVerifyClient require
            #SSLVerifyDepth 10
            #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire

            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>

            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
        </VirtualHost>
        </IfModule>

    httpd.conf file: just a lot of stuff from HTML5 Boilerplate; I will post it if need be.

    Old .htaccess file:

        <IfModule mod_rewrite.c>
            # index.php: remove any index.php parts
            RewriteCond %{THE_REQUEST} /index\.(php|html)
            RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)/$ /$1 [r=301,L]

            # codeigniter direct
            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)$ /index.php/$1 [L]
        </IfModule>

    Any help would be hugely appreciated!!
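
    Two differences between the old and new rule sets stand out, offered as hedged observations rather than a confirmed fix. First, the old .htaccess had a trailing-slash removal rule (RewriteRule ^(.*)/$ /$1) that the new vhosts lack, which would explain why /robots.txt/ now falls through to Apache's own 404 instead of reaching CodeIgniter. Second, the old rules rewrote to /index.php/$1 while the new ones rewrite to bare index.php, and the failing search URL contains an encoded slash (%2F), which Apache rejects unless AllowEncodedSlashes is enabled. A sketch of porting that over (AllowEncodedSlashes goes at the vhost level, the rewrite rules inside the <Directory /var/www> blocks; untested):

        # allow %2F in request paths (likely behind the "Bad Request" on the search URL)
        AllowEncodedSlashes On

        # strip trailing slashes, as the old .htaccess did
        RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
        RewriteRule ^(.*)/$ /$1 [R=301,L]

        # route through CodeIgniter using the path-info form the old rules used
        RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
        RewriteRule ^(.*)$ /index.php/$1 [L]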


  • Steps after installing vCenter Server?

    - by goober
    I'm working with two new ESX servers that I'm configuring, and a new Server 2008 R2 machine that I'm using for vCenter. I took the following steps:

    1. Installed the hypervisor on the 2 ESX machines.
    2. Checked their setup/connectivity (appears to be fine; can ping, etc.).
    3. Installed vCenter Server on the Win2k8R2 box. This included the install of a SQL Express database (we're a small shop). FYI, I changed some of the ports (443 -> 8443, 80 -> 8080, etc.).
    4. Installed vCenter Web Client Server on the Win2k8R2 box.

    Problems:

    1. The vSphere Client on my desktop fails to connect. Part of this is that it asks me for a username and password, but I don't recall specifying one when I set up the install. I receive the error "vSphere Client could not connect to [machinename]. An unknown connection error occurred. (The request failed because of a connection failure. (Unable to connect to the remote server))". I have tried local machine admin credentials, including the format machinename\localuseracct, and my domain credentials, which are an admin for that box. I have also checked that the service is running. I also tried to connect via a vSphere Client installed locally on the server; it translates "localhost" to the correct name but gives the same error.
    2. I cannot register the vCenter Server from the vCenter Web Client Server. I'm not sure if this is necessary, as they're both on the same machine, but it seems like the logical next step. I receive a "failed to connect" error in this case as well.

    FYI, both vCenter Server and the vCenter Web Client Server are installed on the same Win2k8R2 server. What am I missing here? What is the best way to test in this case?
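
    One detail that may matter: since the HTTPS port was moved from 443 to 8443, any client that assumes the default port will fail exactly like this. The vSphere Client accepts an explicit port in its server field; the host:port form below is my assumption of the usual syntax, and worth trying before anything else:

        machinename:8443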


  • Remote desktop solution where the desktop sharing party contacts the computer it wants to share with

    - by Kent
    I'm in a situation where I act as a sort of technical support to my family and less technically experienced friends. I'm looking for a remote desktop solution with a "zero-install, double click an icon" setup where the client computer contacts me so that I may interact with their desktop. The last part is important, as the people in need of my help don't know how to configure their router or even the firewall software on their own computer. They are able to click an Accept button when asked if a program should be able to make outgoing connections. They have many different kinds of routers, as well as software firewalls, and I'd rather not deal with the problem of how to connect to them on top of the actual problem they are having. It must be:

    1. Free of charge for non-commercial use.
    2. Possible to use in a mode where the computer wanting to share its desktop makes the connection to my computer. My computer has a DNS name we can use.
    3. Compatible with both Windows XP and Windows 7.
    4. Independent of a third-party server or infrastructure.

    Explanations of the above:

    1. I don't want to spend money on it when I help them for free. If it's free as in freedom, all the better!
    2. I guess this boils down to being callable like "showdesktopto.exe opscomputer.com", where opscomputer.com is my computer's DNS name. If that is possible, then I can create a shortcut they can use to connect to me when they need help. It's nice if it's possible to specify a password or key file which I can use to authenticate myself, but it's not required.
    3. They use the OS their machine comes installed with. That means Windows XP or 7.
    4. I want something which will work in the long run. Using a third-party service which might not be available when I need it disqualifies such solutions.


  • OS X 10.6 bizarre login bug: making the "Others..." alternative appear. Why does this happen?

    - by bjornl
    I am studying at NUS in Singapore, and they have a Mac-equipped computer lab here at school. All users (students) have our own personal accounts that we use to log in to the computers. Sometimes when you approach a computer to log in, the only alternative shown is "thinkmac", which is the school's administrator account, I presume. Other computers show the "thinkmac" alternative as well as "Others...", where you can input your own login credentials. One day I sat down at a computer and there was only the "thinkmac" alternative. I was about to get up and find another one when the guy sitting next to me said: "Just click 'thinkmac' -- the computer will ask for your password -- then hit Escape to get back to the login screen. Repeat until 'Others...' appears." So: if you click any user account, hit ESC to get taken back to the login screen, and repeat 5-10 times, eventually the "Others..." alternative will appear. Why is this? Is there an internal counter that keeps track of how many times you have clicked any given user account, and after a certain threshold it displays "Others..."? What is the logical reasoning behind this?


  • mrepo and grouplist/groupinstall: mrepo not working as expected with groups

    - by user52874
    All, I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo:

        EXAMPLES
        Here is an example of a repository with a groups file. Note that the
        groups file should be in the same directory as the rpm packages
        (i.e. /path/to/rpms/comps.xml).

            createrepo -g comps.xml /path/to/rpms

    So here's what I'm doing:

        wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml
        cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml
        createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/

    Lots of output, no apparent errors or warnings. BUT, from a client:

        yum grouplist
        Loaded plugins: refresh-packagekit
        Setting up Group Process
        Error: No group data available for configured repositories

    Here's /etc/mrepo.conf:

        ### Configuration file for mrepo

        ### The [main] section allows to override mrepo's default settings
        ### The mrepo-example.conf gives an overview of all the possible settings
        [main]
        srcdir = /var/mrepo
        wwwdir = /var/www/mrepo
        confdir = /etc/mrepo.conf.d
        arch = x86_64
        mailto = root@localhost
        smtp-server = localhost
        pxelinux = /usr/lib/syslinux/pxelinux.0
        tftpdir = /tftpboot
        #rhnlogin = username:password

        ### Any other section is considered a definition for a distribution
        ### You can put distribution sections in /etc/mrepo.conf.d
        ### Examples can be found in the documentation.

    Here's /etc/mrepo.conf.d/sl6.mrepo:

        ### Scientific Linux 6
        [SL6]
        name = Scientific Linux 6
        release = 6
        arch = x86_64
        metadata = repomd repoview
        os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/
        updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/
        security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/
        fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
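
    A hedged observation: clients fetch from the tree mrepo publishes under wwwdir (/var/www/mrepo here), while the createrepo above was run against srcdir (/var/mrepo). If mrepo regenerates the published repodata itself, group data added only to the source tree may never reach what clients see -- that's my assumption about mrepo's layout, worth verifying. Two things to try:

        # regenerate against the tree that is actually served (path assumed):
        createrepo -g comps-sl6-x86_64.xml /var/www/mrepo/SL6-x86_64/os/

        # and on the client, force fresh metadata before re-testing:
        yum clean all
        yum grouplist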


  • Recommendation for Document Management Solution

    - by BillN
    We've just been informed by our software vendor that the custom document management system they'd written is no longer in development and will not be supported in the future. So we are looking at new document management systems. Requirements:

    1. Multiple input vectors: we receive documents via e-mail, fax, scanning, and from the originating application.
    2. Ability to redact or obscure data. Customers may fax an order with CC data; we want to attach the image of the order form to the order record, but the CC data needs to be protected. Same with tax IDs. Certain users should be able to see the redacted data, but access should be logged.
    3. Version control on documents. We'd like Product Development and Marketing to be able to track various versions of documents like packaging designs, but ensure that users have the latest approved version.
    4. AD integration; my users don't need another password.
    5. Ability to integrate with other apps. Our current system offers function keys in the order-entry system that will spawn the viewer application and open the correct document.
    6. Mass import facility: we have half a terabyte of existing documents in the old system that we would like to import.
    7. Retention policy. I'd like a way to have the system comply with the corporate retention policy, so that when a document of a certain type reaches a certain age, it gets deleted, or at least marked for manual deletion.

    We are a Windows Server and HP-UX shop. Does anybody have any experience with document management systems that they would like to share? Thanks.


  • Switch Windows 8 from a hybrid MBR/GPT => GPT only on Macbook Pro Retina

    - by Sid
    I used Disk Utility plus the Boot Camp wizard to set up my hard drive for Windows 8 (final MSDN). Somewhere in that process, the Apple tools turned my GPT disk into a hybrid MBR/GPT. All 4 of my primary MBR partitions are used up, so when I try turning on BitLocker in Windows 8, it complains about not finding a system drive. I know that on Windows 8 the BitLocker setup tries to create the 200(?) MB system partition if it's missing. However, with all 4 partitions filled, I suspect it can't create the system drive = it can't find it = it throws back an error like "BitLocker Setup could not find a target system drive. You may need to manually prepare your drive for BitLocker". I've already tried disabling hibernation, the swap file, etc. Now I'm thinking that if I were to get rid of the MBR scheme altogether, perhaps I could be alright within the GPT world without MBR's 4-primary-partition limit. So, how can I get rid of the MBR tables in the hybrid scheme in a manner that still leaves Mac OS and Windows 8 in working condition?

    Details: hardware is the MacBook Pro Retina. The primary MBR partitions are consumed as follows:

    1. EFI partition
    2. HFS+ partition (encrypted, therefore "Apple_CoreStorage")
    3. HFS+ partition (recovery partition, contains the unencrypted Mac bootloader)
    4. NTFS partition (Windows 8 all-in-one partition)

    diskutil list output:

        sid-mbpr:~ sid$ diskutil list
        /dev/disk0
           #:                     TYPE NAME          SIZE        IDENTIFIER
           0:    GUID_partition_scheme               *251.0 GB   disk0
           1:                      EFI               209.7 MB    disk0s1
           2:        Apple_CoreStorage               160.0 GB    disk0s2
           3:               Apple_Boot Recovery HD   650.0 MB    disk0s3
           4:     Microsoft Basic Data Win8          90.1 GB     disk0s4

    GPT vs MBR addresses:

        sid-mbpr:~ sid$ sudo gptsync /dev/rdisk0
        Password:
        Current GPT partition table:
        #      Start LBA      End LBA  Type
        1             40       409639  EFI System (FAT)
        2         409640    312909639  Unknown
        3      312909640    314179175  Mac OS X Boot
        4      314179584    490233855  Basic Data

        Current MBR partition table:
        # A    Start LBA      End LBA  Type
        1              1       409639  ee  EFI Protective
        2         409640    312909639  ac  Apple RAID
        3      312909640    314179175  ab  Mac OS X Boot
        4 *    314179584    490233855  07  NTFS/HPFS

        Status: GPT partition of type 'Unknown' found, will not touch this disk.**

    **: Ignore this message; the gptsync tool is old and doesn't understand the UUID for "Apple_CoreStorage"/FileVault 2 partitions. Since the LBA addresses are alright, it's safe to ignore.
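
    If it comes to converting the hybrid MBR back to a plain protective MBR, gdisk (GPT fdisk) has a built-in operation for exactly that. A sketch, assuming gdisk is installed, and with the strong caveat that Boot Camp-style Windows installs traditionally boot in BIOS/CSM mode via the hybrid MBR, so Windows may stop booting after this unless it boots via EFI -- back everything up first:

        sudo gdisk /dev/disk0
        # inside gdisk:
        #   x    enter the experts' menu
        #   n    create a fresh protective MBR (replaces the hybrid entries)
        #   w    write the table and exit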


  • Joining an Ubuntu 14.04 machine to active directory with realm and sssd

    - by tubaguy50035
    I've tried following this guide to set up realmd and sssd with Active Directory: http://funwithlinux.net/2014/04/join-ubuntu-14-04-to-active-directory-domain-using-realmd/

    When I run the command

        realm --verbose join domain.company.com --user-principal=c-u14-dev1/[email protected] --unattended

    everything seems to connect. My sssd.conf looks like the following:

        [nss]
        filter_groups = root
        filter_users = root
        reconnection_retries = 3

        [pam]
        reconnection_retries = 3

        [sssd]
        domains = DOMAIN.COMPANY.COM
        config_file_version = 2
        services = nss, pam

        [domain/DOMAIN.COMPANY.COM]
        ad_domain = DOMAIN.COMPANY.COM
        krb5_realm = DOMAIN.COMPANY.COM
        realmd_tags = manages-system joined-with-adcli
        cache_credentials = True
        id_provider = ad
        krb5_store_password_if_offline = True
        default_shell = /bin/bash
        ldap_id_mapping = True
        use_fully_qualified_names = True
        fallback_homedir = /home/%d/%u
        access_provider = ad

    My /etc/pam.d/common-auth looks like this:

        auth    [success=3 default=ignore]  pam_krb5.so minimum_uid=1000
        auth    [success=2 default=ignore]  pam_unix.so nullok_secure try_first_pass
        auth    [success=1 default=ignore]  pam_sss.so use_first_pass
        # here's the fallback if no module succeeds
        auth    requisite                   pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth    required                    pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth    optional                    pam_cap.so

    However, when I try to SSH into the machine with my Active Directory user, I see the following in auth.log:

        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: Invalid user nwalke from myip
        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: input_userauth_request: invalid user nwalke [preauth]
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_krb5(sshd:auth): authentication failure; logname=nwalke uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): check pass; user unknown
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname user=nwalke
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): received for user nwalke: 10 (User not known to the underlying authentication module)
        Aug 21 10:36:12 c-u14-dev1 sshd[11285]: Failed password for invalid user nwalke from myip port 34455 ssh2

    What do I need to do to allow Active Directory users the ability to log in?
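
    Two hedged leads from the log itself: "Invalid user nwalke" means sshd could not even resolve the account through NSS, and with use_fully_qualified_names = True the bare name nwalke will never resolve -- only the qualified form will. Something like this is worth checking (the nsswitch lines are the stock Ubuntu entries with sss appended; verify against your file):

        # /etc/nsswitch.conf
        passwd: compat sss
        group:  compat sss

        # confirm resolution, then log in with the qualified name:
        getent passwd [email protected]
        ssh [email protected]@c-u14-dev1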


  • How could this happen to my FTP server?

    - by srisar
    Hi, it's me again. I just don't get why I'm getting this message. When I try to connect to my FTP server through my WAN address (112.135.26.115), it says:

        530 user access denied

    But when I give the same data to http://ftptest.net, the result is as follows:

        Status:  Resolving address of 112.135.26.115
        Status:  Connecting to 112.135.26.115
        Status:  Connected, waiting for welcome message
        Reply:   220-FileZilla Server version 0.9.34 beta
        Reply:   220-written by Tim Kosse ([email protected])
        Reply:   220 Please visit http://sourceforge.net/projects/filezilla/
        Status:  CLNT http://ftptest.net on behalf of 112.135.26.115
        Reply:   200 Don't care
        Status:  USER saravana
        Reply:   331 Password required for saravana
        Status:  PASS *********
        Reply:   230 Logged on
        Status:  SYST
        Reply:   215 UNIX emulated by FileZilla
        Status:  FEAT
        Reply:   211-Features:
        Reply:   MDTM
        Reply:   REST STREAM
        Reply:   SIZE
        Reply:   MLST type*;size*;modify*;
        Reply:   MLSD
        Reply:   AUTH SSL
        Reply:   AUTH TLS
        Reply:   UTF8
        Reply:   CLNT
        Reply:   MFMT
        Reply:   211 End
        Status:  PWD
        Reply:   257 "/" is current directory.
        Status:  Current path is /
        Status:  TYPE I
        Reply:   200 Type set to I
        Status:  PASV
        Reply:   227 Entering Passive Mode (112,135,26,115,43,9)
        Status:  MLSD
        Reply:   150 Connection accepted
        Listing: type=dir;modify=20100322113235; it_is_working!_Andrejs_Cainikovs_from_serverfault.com
        Listing: type=file;modify=20100322110559;size=5; New Text Document.txt
        Reply:   226 Transfer OK
        Status:  Success

    Can anyone say why this happens to me? Please -- I've been trying the whole day!


  • HaProxy - Http and SSL pass through config

    - by Bill
    I've currently got an HAProxy LB solution in place and everything is working fine. However, we are having an issue with a very few clients who cannot get to our site via HTTPS (SSL): they can browse our site over HTTP, but as soon as they click an absolute HTTPS link they are taken to our home page instead. Wondering if anyone can look at our config below and see if there's something awry. I believe we are on HAProxy 1.2.17.

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 6144
            #debug
            #quiet
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000
            stats auth # admin password
            stats uri /monitor

        listen webfarm
            # bind :80,:443
            bind :443
            mode tcp
            balance source
            #cookie SERVERID insert indirect
            #option httpclose
            #option forwardfor
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen webfarmhttp :80
            mode http
            balance source
            # option httpclose
            option forwardfor
            # option httpchk HEAD /check.cfm HTTP/1.0
            option httpchk /check.cfm
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen monitor :8443
            mode http
            balance roundrobin
            #cookie SERVERID insert indirect
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            server webB 111.10.10.2


  • What to do before connecting Ubuntu Server to the internet for the first time?

    - by CodeMonkey
    I just finished installing Ubuntu Server 12.10 on an Asus Eee PC 1000H (to be used as a home server/sandbox) from USB. I installed this software during installation:

    1. OpenSSH server
    2. LAMP server
    3. Samba file server
    4. Virtual machine host

    I won't use 2, 3 or 4 for a while, though. Can/should I turn these off somehow? I have turned home directory encryption on. Security updates are installed automatically. I have chosen a strong password for the single user. I have never plugged in the internet cable so far. Before doing so, I'd like to ask: what can/should I do or install to increase security before connecting to the internet? Firewall? Fail2ban? Users/passwords? Encryption? Enable/disable functionality? etc. I'm sorry if you get this question a lot. I've searched around quite a while, but it still feels like I might be overlooking something important.
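
    For what it's worth, a minimal first pass on a stock 12.10 install might look like the following -- the package and service names are the usual Ubuntu ones, but treat this as a sketch rather than a checklist:

        # default-deny firewall, keeping SSH reachable
        sudo ufw default deny incoming
        sudo ufw allow 22/tcp
        sudo ufw enable

        # fail2ban watches sshd out of the box
        sudo apt-get install fail2ban

        # stop the stacks you won't use yet
        sudo service apache2 stop
        sudo service mysql stop
        sudo service smbd stop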


  • Samba server NETBIOS name not resolving, WINS support not working

    - by Eric
    When I try to connect to my CentOS 6.2 x86_64 server's Samba shares using the address \\REPO (NetBIOS name REPO), it times out and shows an error; if I connect directly via IP, it works fine. Furthermore, my server does not work correctly as a WINS server despite my Samba settings being correct for it (see below for details). If I stop the iptables service, things work properly. I'm using this page as a reference for which ports to use: http://www.samba.org/samba/docs/server_security.html Specifically:

        UDP/137 - used by nmbd
        UDP/138 - used by nmbd
        TCP/139 - used by smbd
        TCP/445 - used by smbd

    I really, really want to keep the secure iptables design below and just fix this particular problem.

    smb.conf:

        [global]
        netbios name = REPO
        workgroup = AWESOME
        security = user
        encrypt passwords = yes

        # Use the native linux password database
        #passdb backend = tdbsam

        # Be a WINS server
        wins support = yes

        # Make this server a master browser
        local master = yes
        preferred master = yes
        os level = 65

        # Disable print support
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes

        # Restrict who can access the shares
        hosts allow = 127.0.0. 10.1.1.

        [public]
        path = /mnt/repo/public
        create mode = 0640
        directory mode = 0750
        writable = yes
        valid users = mangs repoman

    iptables configure script:

        # Remove all existing rules
        iptables -F

        # Set default chain policies
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP

        # Allow incoming SSH
        iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT

        # Allow incoming HTTP
        #iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        #iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

        # Allow incoming Samba
        iptables -A INPUT -i eth0 -p udp --dport 137 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 137 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p udp --dport 138 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 138 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --dport 445 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 445 -m state --state ESTABLISHED -j ACCEPT

        # Make these rules permanent
        service iptables save
        service iptables restart
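
    A hedged observation: NetBIOS name lookups and WINS registrations on UDP/137 are often broadcast traffic, and connection tracking does not treat replies to broadcasts as ESTABLISHED, so the OUTPUT rules above can silently drop nmbd's answers. The kernel ships a conntrack helper for exactly this case; something like the following may be all that's missing (module name per EL6 kernels, I believe -- verify with modinfo first):

        modprobe nf_conntrack_netbios_ns

        # let the helper's RELATED replies out alongside ESTABLISHED traffic
        iptables -A OUTPUT -o eth0 -p udp --sport 137 -m state --state ESTABLISHED,RELATED -j ACCEPT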


  • Windows 7 - Windows XP - sharing - why isn't it working?

    - by durumdara
    Hi! This seems to be a "hardware" rather than a "software"/"programming" question, but I need to use this share in my programs, so it is close to programming. We had an XP-based wireless network. The server is XP Professional, the clients are XP Home (notebooks). This was working well with folder sharing (with user rights, not simple sharing). Then we replaced one of the notebooks with a Win7/x64 notebook. At first it could reach the server, and the other client too. Later I went to other sites and connected to other servers and networks. When I returned to this network, I found that I could no longer connect to this server. I see none of its resources, and when I try to double-click this computer, I get a login window where I can write anything -- I can never log in... The interesting part: another XP Home machine can see the server and can log in as guest or with another user. The server can see the XP Home notebook. The Win7 machine can see the notebook's shared folders, and XP Home can see the Win7 shared folders. The server can see the Win7 folders, BUT the Win7 machine cannot see the server's folders. It cannot see the resources either... The Win7 machine is in a "work network" group; the group name is not MSHOME. I tried everything on the server: I tried to remove the MS client, restore it with simple sharing, set a guest password, etc., but I have lost the ability to access this server from Win7. Does anyone have any idea what I need to look at, or what I need to set, to access these resources and use them in my programs? Thanks for every info or link. dd


  • SQL Server Management Studio not scripting all objects

    - by Ian Boyd
    I've been attempting to script a database using SQL Server 2005 Management Studio. I cannot get it to script some objects. It scripts others, but skips some. I can provide detailed screenshots of:

    1. the options being selected, including all tables,
    2. the folder where the script files will go,
    3. the folder being empty before scripting,
    4. the scripting process saying "Success" when scripting a table,
    5. the destination folder no longer empty, with a hundred or so script files,
    6. the script of some tables not being in the folder.

    Earlier, SSMS would not script some views either. Is this a known thing, that the Generate Scripts task does not generate scripts?

    Update: Known issue on Microsoft Connect, but Microsoft couldn't repro the steps, so they closed the ticket. Fails on SQL Server 2005, also fails on SQL Server 2008.

    Update two: some basic questions.

    1. What version of SQL Server?

        Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
        Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86)
        Microsoft SQL Server 2008 - 10.0.2531.0 (Intel X86)
        Microsoft SQL Server 2005 Management Studio: 9.00.4035.00
        Microsoft SQL Server 2008 Management Studio: 10.0.1600.22

    2. What O/S are you running on?

        Windows Server 2000
        Windows Server 2003
        Windows Server 2008

    3. How are you logging in to SQL Server?

        sa/password
        Trusted authentication

    4. Have you verified your account has full access to all objects? Yes, I have access to all objects.

    5. Can you use the objects that fail to script? (e.g. select top(10) * from nonScriptingTable) Yes, all objects work fine. SQL Server Enterprise Manager can script the objects fine.

    Update three: they fail no matter what version of SQL Server you script against. It wasn't a problem in Enterprise Manager:

        Client Tools   SQL Server 2000   SQL Server 2005   SQL Server 2008
        ============   ===============   ===============   ===============
        2000           Yes               n/a               n/a
        2005           No                No                No
        2008           No                No                No

    Update four: no errors found in the database using:

        DBCC CHECKDB
        go
        DBCC CHECKCONSTRAINTS
        go
        DBCC CHECKFILEGROUP
        go
        DBCC CHECKIDENT
        go
        DBCC CHECKCATALOG
        go
        EXECUTE sp_msforeachtable 'DBCC CHECKTABLE (''?'')'

    Honk if you hate SSMS.


  • Windows RDP cannot connect to x64 server from XP SP3+ [closed]

    - by Tom
    Hi all, I have a strange problem that I can't seem to find the answer to anywhere online. The issue has to do with using Windows RDP to connect to our servers.

    Here is what works:

    1. XP/Vista client (any SPs) connecting to a 32-bit Server 2003 machine
    2. XP (SP2 and lower) client connecting to a 64-bit Server 2003 machine

    Here is what does not work:

    1. XP SP3+/Vista client connecting to a 64-bit Server 2003 machine

    It appears that the issue is that XP SP3 and Vista clients cannot connect to x64 Server 2003 boxes. After entering the username/password, we get an error message saying the below, and the connection drops:

        To log on to this remote computer, you must have Terminal Server User Access
        permissions on this computer. By default, members of the Remote Desktop Users
        group have these permissions. If you are not a member of the Remote Desktop
        Users group or another group that has these permissions, or if the Remote
        Desktop Users group does not have these permissions, you must be granted
        these permissions manually.

    The issue is that the user is a member of the Administrators group, which has permission. Also, logging in using the same username, but from an XP SP2 machine, has no problems at all. I hope I explained this well enough, and any help/insight that can be given would be greatly appreciated. Thanks, Tom


  • SQL Server installation error 0x84B40000

    - by Kurtevich
    I have a problem installing SQL Server 2008 R2. A long time ago I had it installed, then uninstalled it. It was left in "Add/Remove Programs", but I didn't pay attention to that. I had 2005 installed, and now there is a need to install 2008. I removed 2005 and started installing 2008, but it said there was not enough space on C:. That's when I found out that "Add/Remove Programs" showed the old install occupying more than 4 gigabytes, even though I had uninstalled it. So I clicked "Remove"; it showed all those many screens and validations, and reported that removal completed, but the size of the Program Files folder was still more than 4 GB. I removed everything from "Add/Remove Programs" that had "SQL Server" in its name, but the main "SQL Server 2008" item was still there, still 4 GB, and uninstalling did nothing. Because the SQL Server installer did not show any existing instances, and I didn't see any running services related to SQL Server (well, almost any -- more details at the end), I thought this folder contained just leftover data, and I deleted it manually. Then I agreed to the removal of the item in "Add/Remove Programs", and everything looked clean. Now every time I try to install SQL Server (even in the minimum configuration), I receive the following error:

        SQL Server Setup has encountered the following error:
        The specified credentials that were provided for the SQL Server service are
        not valid. To continue, provide a valid account and password for the SQL
        Server service.
        Error code 0x84B40000.

    What is this service mentioned here? This error looks like I'm trying to add features to an existing server and it can't log in. But setup didn't ask me for any credentials, except one username that couldn't be changed. Here are the services shown that may be related, both disabled and pointing to non-existing executables:

        SQL Active Directory Helper Service
        SQL Full-text Filter Daemon Launcher (MSSQLSERVER)

    I understand that this must be because of my manual deletion, but is there a way to clean it up now?
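
    As a possible cleanup route: orphaned service registrations can be removed with sc delete from an elevated command prompt. The display names above map to internal service names; the two below are my guesses for a default instance -- confirm them with the query first:

        sc query state= all | findstr /i "SQL"

        sc delete MSSQLFDLauncher
        sc delete MSSQLServerADHelper100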


  • How to enable synergy 24800 (or some other port) through firewalld

    - by ndasusers
    After upgrading to Fedora 18, Synergy, the keyboard-sharing system, was blocked by default. The culprit was firewalld, which happily ignored my previous settings made in the Fedora GUI, backed by iptables.

        ~]$ ps aux | grep firewall
        root   3222  0.0  1.2  22364 12336 ?      Ss  18:17  0:00 /usr/bin/python /usr/sbin/firewalld --nofork
        david  3783  0.0  0.0   4788   808 pts/0  S+  20:08  0:00 grep --color=auto firewall

    OK, so how to get around this? I did sudo killall firewalld for several weeks, but that got annoying every time I rebooted. It was time to look for some clues. There were several one-liners, but they did not work for me; they kept spitting out the help text. For example:

        ~]$ sudo firewall-cmd --zone=internal --add --port=24800/tcp
        [sudo] password for auser:
        option --add not a unique prefix

    Also, posts that claimed this command worked also stated it was temporary, unable to survive a reboot. I ended up adding a file to the config directory to be loaded in on boot. Would anyone be able to have a look at that and see if I missed something? Though Synergy works, when I run the list command, I get no result:

        ~]$ sudo firewall-cmd --zone=internal --list-services
        ipp-client mdns dhcpv6-client ssh samba-client
        ~]$ sudo firewall-cmd --zone=internal --list-ports
        ~]$
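
    The "not a unique prefix" error is just argument parsing: the flag is --add-port, a single option, rather than --add plus --port. A hedged pair of commands (whether Fedora 18's firewalld build already supports --permanent is something I'd verify):

        # runtime only -- lost on reboot:
        sudo firewall-cmd --zone=internal --add-port=24800/tcp

        # persistent form, on builds that support it:
        sudo firewall-cmd --permanent --zone=internal --add-port=24800/tcp
        sudo firewall-cmd --reload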


  • SSRS 2005 inaccessible after install

    - by Gabriel Guimarães
    Hi, I've just installed SQL Server 2005. The database engine is OK, but I can't access Reporting Services at all. When I go to http://localhost/reports I get a prompt for a username and password, and it fails with 401.1. When I tried disabling Kerberos on the virtual directories, nothing changed. I've tried changing the authentication to anonymous and get: "Internet Explorer cannot display the webpage." When I access it from another machine, I get the prompt only once and then the same error: "Internet Explorer cannot display the webpage." I can't access this with IE or SSMS 2005. If I try to access it with Management Studio, I get this error:

        TITLE: Microsoft SQL Server Management Studio
        The underlying connection was closed: An unexpected error occurred on a
        receive. (Microsoft.SqlServer.Management.UI.RSClient)

        ADDITIONAL INFORMATION:
        Unable to read data from the transport connection: An existing connection
        was forcibly closed by the remote host. (System)
        An existing connection was forcibly closed by the remote host (System)

        BUTTONS: OK

    By the way, the server info: it's a Win 2003 R2 Standard with IIS 6. Can't seem to understand this. Does anyone have a hint?
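
    One thing worth ruling out, offered as a guess: a 401.1 on a locally-browsed site that uses integrated authentication is the classic symptom of the Windows loopback check (described in Microsoft KB 896861). The registry workaround below is well known, but it loosens a security check, so weigh that before applying it:

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f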


  • SFTP only works occasionally

    - by 82din
    I suddenly get this error using SFTP:

        Status:   Connecting to example.com...
        Response: fzSftp started
        Command:  open "[email protected]" 22
        Command:  Pass: *********
        Status:   Connected to example.com
        Status:   Retrieving directory listing...
        Command:  pwd
        Response: Current directory is: "/root"
        Command:  ls
        Status:   Listing directory /root
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing

    I tried using FileZilla, Cyberduck, and the shell (Terminal) -- same result. However, it worked fine today (for just a few seconds) in passive mode. I guess something changed in my network, so I have tried both active and passive mode:

        Connecting to probe.filezilla-project.org
        Response: 220 FZ router and firewall tester ready
        USER FileZilla
        Response: 331 Give any password.
        PASS 3.6.0.2
        Response: 230 logged on.
        Checking for correct external IP address
        Retrieving external IP address from http://checkip.dyndns.org:8245/
        Checking for correct external IP address
        IP <external IP> big-bf-ccc-f
        Response: 200 OK
        PREP 49565
        Response: 200 Using port 49565, data token 380352881
        PORT 186,15,222,5,193,157
        Response: 200 PORT command successful
        LIST
        Response: 150 opening data connection
        Response: 503 Failure of data connection.
        Server sent unexpected reply.
        Connection closed

    Because I'm working behind a router, I get my external IP from http://checkip.dyndns.org:8245/ I also tested different ranges of ports.



  • Mac OS X printing to CUPS - More intuitive authentication failure?

    - by Moduspwnens
    We have a network-wide CUPS server that offers authenticated printer access to all our campus users. We've been pretty disappointed with the way Mac clients handle bad printing authentication, though. In any other authentication dialog, when a user types in a bad username or password, the window shakes briefly, allowing the user to re-enter. With printers, this isn't the case. It'll happily accept (and even save to the keychain, if specified) bad credentials. The authentication dialog is dismissed, and the user then has to deal with the print jobs showing up as "On hold (authentication required)". To get their job printed, they need to select it in the printer's queue, click "Resume", then re-enter appropriate credentials. Is there a way to get failed printing authentication to work more intuitively for Mac OS X clients? We're trying to support a BYOD environment, but our end users have been really confused by this. It's made even worse by the way it pre-populates the user's full login name (e.g. "Smith, John"), which tends to make them think to use their local machine passwords.
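
    Not a confirmed fix, but one server-side knob that influences this flow: CUPS lets you declare per-queue authentication requirements with auth-info-required, which is what prompts OS X clients for credentials up front instead of quietly holding the job. Whether it also restores the shake-on-failure behavior is something to test; the queue name here is a placeholder:

        lpadmin -p campus_printer -o auth-info-required=username,password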

