Search Results

Search found 46416 results on 1857 pages for 'access log'.


  • localhost name error with linux machines

    - by coderex
    Hi, CASE 1: I have an Ubuntu machine with the name midhun.local. I can access it at http://midhun.local/svn, but it can't be reached from other machines (both Windows and Linux) through this hostname; it does work via http://192.168.1.192/svn. CASE 2: I have another machine (Windows) with the hostname myname, serving on port 555. In this case I can access https://myname:555/svn from other Windows machines with the same URL, but if I try to access it from a Linux machine the same URL does not work; https://192.1.168.111:555/svn works instead. How can I solve this? I need to reach both servers by the same name from any machine. How is this possible on a LAN? Thanks in advance!!
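
    A hedged note on the likely cause: .local names are usually resolved via mDNS (Avahi on Linux, Bonjour on Windows), so a machine without an mDNS responder can't resolve them, and plain Windows computer names are resolved via NetBIOS, which Linux doesn't use by default. If the addresses are static, a minimal cross-platform workaround is a hosts-file entry on each client (IPs taken from the question above):

        # /etc/hosts on Linux clients
        # (C:\Windows\System32\drivers\etc\hosts on Windows clients)
        192.168.1.192    midhun.local
        192.1.168.111    myname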

    Read the article

  • Rsync plugin to many local wordpress installs via script or cli

    - by Nick Abbey
    I am maintaining a large number of WordPress installs on a production server, and we are looking to deploy InfiniteWP for managing these installs. I am looking for a way to script the distribution of the plugin folder to all of these installs. On server wp-prod, all sites are stored in /srv/<sitename>/site/. The plugin needs to be copied from ~/iws-plugin to /srv/<sitename>/site/wp-content/plugins/. Here's some pseudo code to explain what I need to do (see the bash sketch below):

        array dirs = <all folders in /srv>
        for each d in dirs
          if exists "/srv/d/site/wp-content/plugins"
            rsync -avzh --log-file=~/d.log ~/plugin_base_folder /srv/d/site/wp-content/plugins/
          else
            touch d.log
            echo 'plugin folder for "d" not found' >> ~/d.log
          end
        end

    I just don't know how to make it happen from the CLI or via bash. I can (and will) tinker with a bash or ruby script on my test server, but I'm thinking the command-line-fu here on SF is strong enough to handle this issue much more quickly than I can hack together a solution. Thanks!
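
    A minimal bash sketch of the loop above, assuming the paths from the question (the plugin in ~/iws-plugin, one log per site in $HOME); adjust names as needed:

        #!/bin/bash
        # Distribute the plugin folder to every WordPress install under /srv.
        for d in /srv/*/; do
          site=$(basename "$d")
          plugins="${d}site/wp-content/plugins"
          if [ -d "$plugins" ]; then
            rsync -avzh --log-file="$HOME/$site.log" "$HOME/iws-plugin" "$plugins/"
          else
            echo "plugin folder for \"$site\" not found" >> "$HOME/$site.log"
          fi
        done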

    Read the article

  • LDAP + LTSP 12.04

    - by us3r
    On Ubuntu 12.04 I have some kind of problem with LTSP and LDAP. Sometimes I can log in to the server, but sometimes I can't (the window freezes on LDM) from a thin client. Everything is OK when I log in to the server as if it were a local machine, but there is a problem on the thin client. pam_mkhomedir.so creates the home dir, but I can't log in, because nothing happens; LDM just freezes. This problem doesn't exist for "local" users (Unix accounts) or for the first LDAP user who logs in. It's important to mention that I can see nothing special in the log. Does anybody have a problem with LTSP + LDAP on Ubuntu 12.04? There wasn't any problem on previous versions. PS: sorry for my English skills ;) EDIT: When LDM freezes, the logs show:

        May 17 11:59:52 bar sshd[6066]: Accepted password for student2 from 192.168.100.22 port 44000 ssh2
        May 17 11:59:52 bar sshd[6066]: pam_unix(sshd:session): session opened for user student2 by (uid=0)
        May 17 12:00:03 bar sshd[6315]: subsystem request for sftp by user student2

    And nothing else for this user.

    Read the article

  • Tail the filename, not the file

    - by Craig Walker
    In UNIX (OS X BSD to be precise), I have a "tail -f" command running on a log file. From time to time I want to delete this log file so I can more easily review it in my text editor. I delete the file, and then my program recreates it after new activity. However, my tail command (and anything else that was watching the old log file) doesn't update; it's still watching the old, deleted log file. I think I understand why this is (file names are simply pointers to blocks of file data). I'd like to know how I can work around this. Ideally, my tail command (and anything else I point at the file) would read the data from the new file once the file name has been deleted and recreated. How would I do this?
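
    A hedged pointer: both BSD and GNU tail have a follow-by-name mode that notices when the path has been replaced and reopens it, which sounds like exactly this case:

        # -F follows the file *name* rather than the open descriptor,
        # re-opening the path when the log is deleted and recreated.
        # (On GNU tail this is equivalent to --follow=name --retry.)
        tail -F /var/log/myapp.log

    /var/log/myapp.log is a placeholder path.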

    Read the article

  • Installing nvidia drivers causes computer to boot to command prompt.

    - by levesque
    Hi, I have an Asus U30Jc laptop, which comes with the Optimus graphics-card switching technology that is now supported under 2.6.35, so I decided to give it a try. First I made sure the discrete graphics card was activated, and then I installed the drivers proposed by the Ubuntu software repository (nvidia-current). However, after rebooting all I got was a command prompt. My graphics card is an Nvidia 310M. This is on Ubuntu 10.10 64-bit. What can I do to diagnose/identify the source of this problem? UPDATE: The messages in my syslog tell me to check the Xorg log:

        Oct 11 12:42:59 u30jc-test gdm-binary[1095]: WARNING: GdmDisplay: display lasted 0.053144 seconds
        Oct 11 12:42:59 u30jc-test gdm-simple-slave[1450]: WARNING: Unable to load file '/etc/gdm/custom.conf': No such file or directory
        Oct 11 12:42:59 u30jc-test gdm-binary[1095]: WARNING: GdmDisplay: display lasted 0.038176 seconds
        Oct 11 12:42:59 u30jc-test gdm-binary[1095]: WARNING: GdmLocalDisplayFactory: maximum number of X display failures reached: check X server log for errors

    Which I did. I found this message in my /var/log/Xorg.0.log:

        [   113.540] Fatal server error:
        [   113.540] no screens found

    What does that mean?

    Read the article

  • Cisco ASA 5505 (8.05): asymmetrical group-policy filter on an L2L IPSec tunnel

    - by gravyface
    I'm trying to find a way to set up a bi-directional L2L IPSec tunnel, but with differing group-policy filter ACLs for each side. I have the following filter ACL set up, applied, and working on my tunnel-group:

        access-list ACME_FILTER extended permit tcp host 10.0.0.254 host 192.168.0.20 eq 22
        access-list ACME_FILTER extended permit icmp host 10.0.0.254 host 192.168.0.20

    According to the docs, VPN filters are bi-directional; you always specify the remote host first (10.0.0.254), followed by the local host and (optionally) port number. However, I do not want the remote host to be able to access my local host's TCP port 22 (SSH), because there's no requirement to do so -- there's only a requirement for my host to access the remote host's SFTP server, not vice versa. But since these filter ACLs are bidirectional, line 1 also permits the remote host to access my host's SSH server. The documentation I'm reading doesn't make it clear whether this is possible; help/clarification much appreciated.
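
    A hedged sketch, not verified on 8.0(5): because the remote host always comes first in a vpn-filter ACE, a port written after the remote host constrains the remote endpoint's port. Moving eq 22 to the remote side should therefore match only sessions that terminate on the remote host's port 22 (your host reaching its SFTP server), without exposing your local port 22:

        access-list ACME_FILTER extended permit tcp host 10.0.0.254 eq 22 host 192.168.0.20
        access-list ACME_FILTER extended permit icmp host 10.0.0.254 host 192.168.0.20

    Treat this as an assumption to test, not a confirmed behavior of the ASA's filter matching.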

    Read the article

  • Using mod_rewrite to shut down a website.

    - by moolagain
    Hi, I am trying to shut down a website to everyone except my IP address. I almost have it working: I cannot access www.mysite.com, but I can still access every folder that has another .htaccess file in it. I have a .htaccess file in /www with the following code:

        #Use this when website is down
        RewriteEngine on
        #this allows access through my ip
        RewriteCond %{REMOTE_ADDR} !^(66\.777\.888\.99)$
        RewriteRule !down.php$ /down.php [L]

    Some folders in my site have .htaccess files in them. If such a file contains the line:

        RewriteEngine on

    I can still access the folder. For example, if the second .htaccess file is in /www/about, then I can still access mysite.com/about (but the .css file included on that page actually loads down.php). If I delete "RewriteEngine on", I get redirected to down.php. Any ideas? I think mod_rewrite gets confused by multiple .htaccess files. Thanks!
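
    A hedged explanation of the symptom: when a per-directory .htaccess turns the rewrite engine on, mod_rewrite starts that directory with an empty rule set, so the parent directory's rules are not applied unless inheritance is requested explicitly. A minimal sketch for the subdirectory files:

        # e.g. /www/about/.htaccess
        RewriteEngine on
        # Re-apply the parent directory's rewrite rules (mod_rewrite
        # does not inherit them by default once the engine is enabled here).
        RewriteOptions Inherit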

    Read the article

  • Drupal + Lighttpd: enabling clean urls (rewriting)

    - by Patrick
    I'm emulating Ubuntu on my Mac, and I use it as a server. I've installed lighttpd + Drupal, and the following configuration section requires a domain name in order to make clean URLs work. Since I'm using a local server I don't have a domain name, and I was wondering how to make this work given that the IP of the local machine usually changes. Thanks

        $HTTP["host"] =~ "(^|\.)mywebsite\.com" {
            server.document-root = "/var/www/sites/mywebsite"
            server.errorlog = "/var/log/lighttpd/mywebsite/error.log"
            server.name = "mywebsite.com"
            accesslog.filename = "/var/log/lighttpd/mywebsite/access.log"
            include_shell "./drupal-lua-conf.sh mywebsite.com"
            url.access-deny += ( "~", ".inc", ".engine", ".install", ".info",
                                 ".module", ".sh", "sql", ".theme", ".tpl.php",
                                 ".xtmpl", "Entries", "Repository", "Root" )
            # "Fix" for Drupal SA-2006-006, requires lighttpd 1.4.13 or above
            # Only serve .php files of the drupal base directory
            $HTTP["url"] =~ "^/.*/.*\.php$" {
                fastcgi.server = ()
                url.access-deny = ("")
            }
            magnet.attract-physical-path-to = ("/etc/lighttpd/drupal-lua-scripts/p-.lua")
        }
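
    A hedged workaround: the $HTTP["host"] conditional matches the name the browser sends, not the server's address, so the changing IP only matters for name resolution. One sketch is to pin the name on each client machine (and use 127.0.0.1 on the server itself):

        # /etc/hosts on the Mac (or any client); replace the address
        # with the VM's current one when the DHCP lease changes.
        192.168.56.101    mywebsite.com www.mywebsite.com

    192.168.56.101 is a placeholder address; giving the VM a static IP or a host-only network would make the entry stable.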

    Read the article

  • Force failover a Cisco ASA

    - by user974896
    I have two ASAs in a LAN failover primary/secondary configuration. Neither of them has "failover active" or "no failover active" in its configuration. Would it be proper to fail over in either of these ways?

    Method 1: Log into the console of the primary unit and issue "failover lan state secondary", then log into the console of the original secondary unit and issue "failover lan state primary". To fail back, simply reverse the process.

    Method 2: Log into the console of the primary unit and issue "no failover active", then log into the console of the original secondary unit and issue "failover active". To fail back, issue "failover active" on the original primary (now secondary) unit, and "no failover active" on the now-primary unit.

    I do not like the second method because it adds configuration directives that were not in place before. Will the first method work?
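
    A hedged note: on the ASA, failover active / no failover active are privileged EXEC commands rather than configuration statements, so (assuming a standard active/standby pair) the second method should not leave anything new in the running config. A sketch of the usual forced failover:

        ! On the current active unit: hand the active role to the peer
        no failover active
        ! ...or equivalently, on the standby unit: take the active role
        failover active
        ! Fail back later by issuing "failover active" on the original unit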

    Read the article

  • Logging another person off in Windows 7 using Task Manager

    - by BBlake
    Under WinXP, I could use Task Manager's Users tab to log off my wife's account, which she always leaves logged in, so I don't have to log in to her account and log it out. It's an older machine, so I used that trick to free up every resource that might potentially slow down the game I'm playing at the time. I recently upgraded the machine to Win7, and when I try the same trick I get an access-denied popup. My logged-in account does have admin rights, so is it as simple as running Task Manager "as an Administrator" in order to allow this? If so, how can I pull up Task Manager (other than the standard CTRL-ALT-DELETE) so that it starts with admin rights, letting me log her account off in this manner?

    Read the article

  • Suggestions for Troubleshooting Windows 7 lockups

    - by Craig L
    I've got a Dell Latitude E6500 that was working fine under Vista x64. I got one of the new Seagate 500GB hybrid SSD/HD 2.5" drives and thought... hmm... let's try Win 7 x64 on it. Bottom line: it works great for hours and then it will hard lock. I don't mean BSOD (or whatever the Win7 equivalent is). I mean my screen displays a static image (if there is a clock displayed, it will be frozen at the time the lockup occurred) and the mouse and keyboard do not work. Control-Alt-Delete will not work. I have to hold down the power button to reboot. The event log records NOTHING at the time the lockup occurs. Obviously something is happening to the system to cause the lockup, but a default Windows 7 x64 install doesn't log it. How can I log the things Windows doesn't normally record in Event Viewer?

    Read the article

  • Can't update scala on Gentoo

    - by xhochy
    As I wanted to test Scala 2.9.2 on my Gentoo system, I tried updating the package but ended up with this error. I can't figure out where the problem may be:

        Calculating dependencies ...... done!
        >>> Verifying ebuild manifests
        >>> Emerging (1 of 1) dev-lang/scala-2.9.2
        >>> Failed to emerge dev-lang/scala-2.9.2, Log file:
        >>>  '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'
        >>> Jobs: 0 of 1 complete, 1 failed      Load avg: 0.23, 0.16, 0.20
         * Package:    dev-lang/scala-2.9.2
         * Repository: gentoo
         * Maintainer: [email protected]
         * USE:        amd64 elibc_glibc kernel_linux multilib userland_GNU
         * FEATURES:   sandbox
        !!! ERROR: Couldn't find suitable VM. Possible invalid dependency string.
        Due to jdk-with-com-sun requiring a target of 1.7 but the virtual machines
        constrained by virtual/jdk-1.6 and/or this package requiring virtual(s) jdk-with-com-sun
         * Unable to determine VM for building from dependencies:
        NV_DEPEND: virtual/jdk:1.6
                   java-virtuals/jdk-with-com-sun
                   !binary? ( dev-java/ant-contrib:0 )
                   app-arch/xz-utils
                   >=dev-java/java-config-2.1.9-r1
                   source? ( app-arch/zip )
                   >=dev-java/ant-core-1.7.0
                   dev-java/ant-nodeps
                   >=dev-java/javatoolkit-0.3.0-r2
                   >=dev-lang/python-2.4
         * ERROR: dev-lang/scala-2.9.2 failed (setup phase):
         *   Failed to determine VM for building.
         *
         * Call stack:
         *   ebuild.sh, line 93: Called pkg_setup
         *   scala-2.9.2.ebuild, line 43: Called java-pkg-2_pkg_setup
         *   java-pkg-2.eclass, line 53: Called java-pkg_init
         *   java-utils-2.eclass, line 2187: Called java-pkg_switch-vm
         *   java-utils-2.eclass, line 2674: Called die
         * The specific snippet of code:
         *   die "Failed to determine VM for building."
         *
         * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`,
         * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`.
        !!! When you file a bug report, please include the following information:
        GENTOO_VM=  CLASSPATH="" JAVA_HOME=""
        JAVACFLAGS="" COMPILER=""
        and of course, the output of emerge --info
         * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'.
         * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'.
         * Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2'
         * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources'

    (The "Messages for package dev-lang/scala-2.9.2" section of the output repeats the same error and call stack.) The following eix output may help:

        % eix java-virtuals/jdk-with-com-sun
        [I] java-virtuals/jdk-with-com-sun
             Available versions:  20111111 {{ELIBC="FreeBSD"}}
             Installed versions:  20111111(16:08:51 18/04/12)(ELIBC="-FreeBSD")
             Homepage:            http://www.gentoo.org
             Description:         Virtual ebuilds that require internal com.sun classes from a JDK

    Both virtual JDKs, 1.6 and 1.7, are installed:

        % eix virtual/jdk
        [I] virtual/jdk
             Available versions:  (1.4) ~1.4.2-r1[1] (1.5) 1.5.0 ~1.5.0-r3[1] (1.6) 1.6.0 1.6.0-r1 (1.7) (~)1.7.0
             Installed versions:  1.6.0-r1(1.6)(23:22:48 10/11/12) 1.7.0(1.7)(23:21:09 10/11/12)
             Description:         Virtual for JDK
        [1] "java-overlay" /var/lib/layman/java-overlay
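
    A hedged suggestion (an assumption to verify, not a confirmed fix): the setup phase wants a VM that can target 1.7 while the build VM appears pinned at 1.6, so pointing Gentoo's system VM at the installed 1.7 JDK may let the ebuild proceed. A sketch with java-config; the VM name below is a placeholder, use whatever your system lists:

        # List the VMs Portage knows about and note the 1.7 entry's name
        java-config --list-available-vms
        # Switch the system VM to the 1.7 JDK (name is an example only)
        java-config --set-system-vm icedtea-bin-7
        env-update && source /etc/profile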

    Read the article

  • Ubuntu 12.04 crash analysis - strange binary data on all open files at the moment of crash

    - by lanbo
    A couple of hours ago we got a system crash on Ubuntu 12.04. We checked all the log files and there is nothing suspicious to blame. The last activity logged was some dovecot activity. There are no kernel panic messages. Nothing. It is a new server (new hardware) we are testing before production, and because the hardware is new, I suspect the problem may be due to a faulty component. We already ran memtester with no problems detected. I'd be happy to hear about other hardware-testing tools (the machine has an SSD). Anyway, the thing I wanted to ask you is different. The strange thing is that every file open at the moment of the crash had the following sequence of symbols written into it: "@^@^@^@^@^@^@...". For example, in the syslog log file we got:

        Apr 16 15:53:56 odyssey dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<info>, method=PLAIN, rip=46.29.255.73, lip=5.9.58.177
        ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^ [these continue for about 1000 chars...]
        ^@^@^@^@Apr 16 15:55:12 odyssey kernel: imklog 5.8.6, log source = /proc/kmsg started.

    We got these symbols in all open files, including syslog, mail.log, kern.log, and also some logs written by PHP scripts run from cron under user accounts (not root). So, any idea why all open files got these characters written into them during the crash? This is pretty bad, since the crash corrupted many files (we don't even know which other ones may be affected). We suspect that all files open (perhaps in write mode) at the moment of the crash got these symbols inserted. Why is that? BTW [in case it helps], the system automatically rebooted after the crash, but Apache did not start. There were no traces in /var/apache2/*log of why Apache did not start. After running "service apache2 start" it started with no problems. We also rebooted the machine manually, and Apache started on reboot as well. But it did not start after the crash, and no errors were reported. Thanks guys!

    Read the article

  • Schedule of Password Expiration to a specific time

    - by elcool
    Is there a way in Windows Server 2003 or 2008, with Active Directory, to specify in a policy that when a user's password expires, it expires at a certain time of day, say 4:00 am? The issue came up because the expiration can occur in the middle of the working day, say 9:00 am. When a user is already logged into Windows on the network and using different applications, those start behaving wrongly because of authentication; the user has to log out and log back in for Windows to ask for the new password. If Windows asked for the new password when they logged in early in the morning, they wouldn't have to log back out during the working day. One of the AD admins said: "Have them check whether their password will expire before starting the day"... but really, who does that? And I don't have access to an AD to check these types of policies. So, is this possible?

    Read the article

  • Is this a dual monitor reset bug?

    - by Tentresh
    My two displays are:

        Intel GMA X4500 laptop (1280x800 native resolution of the built-in display)
        External display (1920x1080)

    A few minutes after I log in to my dual-monitor setup, it gets reset to mirrored screens. If I restore the settings via the Displays application, everything is fine. On each reset, the following messages are written into /var/log/Xorg.0.log:

        [   60.852] (II) PM Event received: Capability Changed
        [   60.852] I830PMEvent: Capability change
        [  132.920] (II) intel(0): EDID vendor "SEC", prod id 12869
        [  132.920] (II) intel(0): Printing DDC gathered Modelines:
        [  132.920] (II) intel(0): Modeline "1280x800"x0.0   68.94  1280 1296 1344 1408  800 801 804 816 -hsync -vsync (49.0 kHz)
        [  134.228] (II) intel(0): Allocated new frame buffer 1280x800 stride 5120, tiled

    Whereas right after startup or a manual resolution reset, /var/log/Xorg.0.log reports the expected frame buffer allocation:

        [ 1562.382] (II) intel(0): EDID vendor "SEC", prod id 12869
        [ 1562.382] (II) intel(0): Printing DDC gathered Modelines:
        [ 1562.382] (II) intel(0): Modeline "1280x800"x0.0   68.94  1280 1296 1344 1408  800 801 804 816 -hsync -vsync (49.0 kHz)
        [ 1576.740] (II) intel(0): Allocated new frame buffer 3200x1080 stride 12800, tiled

    Is Ubuntu 12.04 not compatible with my video card? Can this be solved within Ubuntu? I like its interface, but manually fiddling with the resolution on every login is not bearable.
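
    A hedged stopgap while the underlying reset is tracked down: the side-by-side layout can be re-applied from a terminal with xrandr instead of the Displays dialog. The output names below are placeholders; list the real ones with xrandr -q:

        # Re-apply the extended layout after the spurious mirror reset.
        # LVDS1 / VGA1 are assumed names; check "xrandr -q" for yours.
        xrandr --output LVDS1 --mode 1280x800 \
               --output VGA1  --mode 1920x1080 --right-of LVDS1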

    Read the article

  • Book Review – Getting Started With OAuth 2.0

    - by Lori Lalonde
    Getting Started With OAuth 2.0, by Ryan Boyd, provides an introduction to the latest version of the OAuth protocol. The author starts off by exploring the origins of OAuth, along with its importance and why developers should care about it. The bulk of this book is a discussion of the various authorization flows that developers will need to consider when developing applications that incorporate OAuth to manage user access and authorization. The author explains in detail which flow is appropriate to use based on the application being developed, as well as how to implement each type with step-by-step examples. Note that the examples in the book focus on the Google and Facebook APIs; personally, I would have liked to see some examples with the Twitter API as well. In addition, the author discusses security considerations, error handling (what is returned if an access request fails), and access tokens (when access tokens are refreshed, and how access can be revoked). This book provides a good starting point for developers looking to understand what OAuth is and how they can leverage it within their own applications. The book wraps up with a list of tools and libraries that are available to further assist the developer in exploring the APIs that support the OAuth specification. I highly recommend this book as a must-read for developers at all levels who have not yet been exposed to OAuth. The eBook format of this book was provided free through O'Reilly's Blogger Review program. This book can be purchased from the O'Reilly book store at http://shop.oreilly.com/product/0636920021810.do

    Read the article

  • Accessing C$ over LAN on Win2008R2 - cannot by hostname but can by IP and FQDN

    - by Idgoo
    Having an issue with one of our Win2k8 R2 file servers. Trying to access C$ or the admin share gives an error (see error details at the bottom), yet we are able to connect using the server's IP and FQDN:

        can access \\172.16.x.x\c$ with domain creds
        can access \\server.domain.local\c$ with domain creds
        cannot access \\servername\c$ with the same domain creds

    The server pings fine by hostname, IP, and FQDN, and the primary DNS suffix is also correct. The DNS, PTR, and WINS records are all correct for the server. I have checked that I am not trying to connect with cached credentials in the Windows vault, and the server is appending primary and connection-specific DNS suffixes to the hostname. Any ideas what might be causing this issue? Error details:

        c$ is not accessible. You might not have permission to use this network resource.
        Contact the administrator of this server to find out if you have access permissions.

    Read the article

  • Benefits of Server-side Coding

    There are numerous advantages to server-side scripting languages over client-side languages when it comes to creating web sites that are more compelling than a standard static site. Server-side scripts are executed on the web server while it assembles the data to return to a client. These scripts allow developers to modify the content being sent to the user before the response is returned, as well as to store information about the user. In addition, server-side scripts run in a controllable environment. The same cannot be said for client-side languages, because the developer cannot control the user's environment the way they can control a web server: some users may turn off client scripts, some may be allowed only limited access on their system, and others may have full control of the environment. I have been developing web applications for over 9 years, and I have used server-side languages for most of the applications I have built. Here is a list of common things I have developed with server-side scripts.

    List of Common Generic Functionality:

        Send Email
        FTP Files
        Security/Access Control
        Encryption
        URL rewriting
        Data Access
        Data Creation
        I/O Access

    The one important feature server-side languages will help me with on my website is Data Access, because my component will be backed by a SQL Server database. I believe that form validation is one instance where server-side scripts and JavaScript might be used interchangeably, because it does not matter how or where the data is validated as long as the data that gets inserted is valid. However, my personal experience would sway me in deciding which type of language to use for form validation, because each has advantages and disadvantages depending on the situation.

    Read the article

  • syslog ip ranges to specific files using `rsyslog`

    - by Mike Pennington
    I have many Cisco / JunOS routers and switches that send logs to my Debian server, which runs rsyslogd. How can I configure rsyslogd to send these router / switch logs to a specific file, based on their source IP address? I do not want to pollute the general system logs with these entries. For instance:

        all routers in Chicago (source IP block 172.17.25.0/24) should log only to /var/log/net/chicago
        all routers in Dallas (source IP block 172.17.27.0/24) should log only to /var/log/net/dallas

    Finally, these logs should be rotated daily, kept for up to 30 days, and compressed. NOTE: I am answering my own question
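
    A sketch of one way to do this with rsyslog's property-based filters, assuming the network input (e.g. imudp) is already enabled; the & ~ line discards matched messages so they stay out of the general logs:

        # /etc/rsyslog.d/10-network-devices.conf
        if $fromhost-ip startswith '172.17.25.' then /var/log/net/chicago
        & ~
        if $fromhost-ip startswith '172.17.27.' then /var/log/net/dallas
        & ~

    Rotation can then be left to logrotate rather than rsyslog itself:

        # /etc/logrotate.d/network-devices
        /var/log/net/chicago /var/log/net/dallas {
            daily
            rotate 30
            compress
            missingok
            notifempty
        }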

    Read the article

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows prompts about losing some property data in the transfer. I have never seen this before when transferring to Samba boxes I have built myself (vs this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc.). Or maybe I've just never encountered this before; I'm really not sure. I tried adding ea support = yes and store dos attributes = yes to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0), as Samba requires. Any ideas would be greatly appreciated. Thank you! Samba config:

        [global]
            workgroup = WORKGROUP
            server string = %h server
            include = /etc/samba/dhcp.conf
            dns proxy = no
            log level = 2
            syslog = 2
            log file = /var/log/samba/log.%m
            max log size = 1000
            syslog only = yes
            panic action = /usr/share/samba/panic-action %d
            encrypt passwords = true
            passdb backend = tdbsam
            obey pam restrictions = yes
            unix password sync = no
            passwd program = /usr/bin/passwd %u
            passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
            pam password change = yes
            socket options = TCP_NODELAY IPTOS_LOWDELAY
            guest account = nobody
            load printers = no
            disable spoolss = yes
            printing = bsd
            printcap name = /dev/null
            unix extensions = yes
            wide links = no
            create mask = 0777
            directory mask = 0777
            use sendfile = no
            null passwords = no
            local master = yes
            time server = yes
            wins support = yes
            ea support = yes
            store dos attributes = yes

    Note: I found this related question, but it explains the loss due to the user trying to transfer from NTFS to FAT32.
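
    A hedged guess at the cause: the properties Windows warns about are often stored in NTFS alternate data streams (document summary information, Zone.Identifier, etc.), and ea support alone does not make Samba store streams; that takes a streams-capable VFS module. A sketch to try, assuming Samba 3.5's streams_xattr module and the existing user_xattr mount:

        [global]
            ea support = yes
            # Store NTFS alternate data streams in extended attributes;
            # assumption: the copy dialog is warning about stream loss.
            vfs objects = streams_xattr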

    Read the article

  • yum list installed including version of all installed packages CentOS 5.4

    - by Andy
    I have a list of packages installed with yum on CentOS 5.4:

        [root@server ~]# yum list installed
        ...
        Installed Packages
        GConf2.x86_64              2.14.0-9.el5         installed
        ImageMagick.x86_64         6.2.8.0-4.el5_1.1    installed
        MAKEDEV.x86_64             3.23-1.2             installed
        MySQL-python.x86_64        1.2.1-1              installed

    I would like to download these rpms locally using yumdownloader --resolve MySQL-python-1.2.1-1.x86_64, etc. However, the package formatting is different (MySQL-python.x86_64 1.2.1-1 vs MySQL-python-1.2.1-1.x86_64), so I am unable to download them using the above command. I don't want to have to parse the output of yum list installed, and I also don't want to use the contents of /var/log/yum.log*, as I'll have to account for erased packages and version discrepancies. However, /var/log/yum.log* does have the formatting I require:

        May 25 14:58:15 Installed: groff-1.18.1.1-11.1.x86_64
        May 25 14:58:15 Installed: bzip2-1.0.3-4.el5_2.x86_64

    Any suggestions?
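
    A hedged approach: skip yum's display formatting entirely and ask rpm directly for the name-version-release.arch form that yumdownloader expects:

        # Print every installed package as name-version-release.arch
        # and feed the list to yumdownloader.
        rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' \
            | xargs yumdownloader --resolve

    This assumes all installed packages are still available in the configured repositories; any that are not will simply fail to download.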

    Read the article

  • Nginx & Apache Cannot get try_files to work with permalinks

    - by tcherokee
    I have been working on this for the past two weeks now, and for some reason I cannot seem to get nginx's try_files to work with my WordPress permalinks. I am hoping someone will be able to tell me where I am going wrong, and also hopefully tell me if I made any major errors with my configurations as well (I am an nginx newbie... but learning :) ). Here are my configuration files.

    nginx.conf:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            # Defines the cache log format, cache log location
            # and the main access log location.
            log_format cache '***$time_local '
                             '$upstream_cache_status '
                             'Cache-Control: $upstream_http_cache_control '
                             'Expires: $upstream_http_expires '
                             '$host '
                             '"$request" ($status) '
                             '"$http_user_agent" ';
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    mydomain.com.conf:

        server {
            listen 123.456.78.901:80; # IP goes here.
            server_name www.mydomain.com mydomain.com;
            #root /var/www/mydomain.com/prod;
            index index.php;

            ## mydomain.com -> www.mydomain.com (301 - Permanent)
            if ($host !~* ^(www|dev)) {
                rewrite ^/(.*)$ $scheme://www.$host/$1 permanent;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            # All media (including uploaded) is under wp-content/ so
            # instead of caching the response from apache, we're just
            # going to use nginx to serve directly from there.
            location ~* ^/(wp-content|wp-includes)/(.*)\.(jpg|png|gif|jpeg|css|js|m$ {
                root /var/www/mydomain.com/prod;
            }

            # Don't cache these pages.
            location ~* ^/(wp-admin|wp-login.php) {
                proxy_pass http://backend;
            }

            location / {
                if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
                    set $do_not_cache 1;
                }
                proxy_cache_key "$scheme://$host$request_uri $do_not_cache";
                proxy_cache main;
                proxy_pass http://backend;
                proxy_cache_valid 30m; # 200, 301 and 302 will be cached.
                # Fallback to stale cache on certain errors.
                # 503 is deliberately missing, if we're down for maintenance
                # we want the page to display.
                #try_files $uri $uri/ /index.php?q=$uri$args;
                #try_files $uri =404;
                proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404;
            }

            # Cache purge URL - works in tandem with WP plugin.
            # location ~ /purge(/.*) {
            #     proxy_cache_purge main "$scheme://$host$1";
            # }

            # No access to .htaccess files.
            location ~ /\.ht {
                deny all;
            }
        } # End server

    gzip.conf:

        # Gzip Configuration.
        gzip on;
        gzip_disable msie6;
        gzip_static on;
        gzip_comp_level 4;
        gzip_proxied any;
        gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    proxy.conf:

        # Set proxy headers for the passthrough
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 0;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        add_header X-Cache-Status $upstream_cache_status;

    backend.conf:

        upstream backend {
            # Defines backends.
            # Extracting here makes it easier to load balance
            # in the future. Needs to be specific IP as Plesk
            # doesn't have Apache listening on localhost.
            ip_hash;
            server 127.0.0.1:8001; # IP goes here.
        }

    cache.conf:

        # Proxy cache and temp configuration.
        proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=main:10m max_size=1g inactive=30m;
        proxy_temp_path /var/www/nginx_temp;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_redirect off;
        # Cache different return codes for different lengths of time
        # We cached normal pages for 10 minutes
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;

    The two commented-out try_files lines in location / of the mydomain config file are the ones I tried. The error I found in the error log is below:

        ...rewrite or internal redirection cycle while internally redirecting to "/index.php"

    Thanks in advance
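
    A hedged sketch of a common fix for this kind of setup: with the server-level root commented out and proxy_pass in the same location, try_files falls back to /index.php, which nginx cannot find on disk and which re-enters the same location, producing the "rewrite or internal redirection cycle" error. Uncommenting the root and falling through to a named location that proxies to Apache avoids the loop (directive placement here is an assumption against the configs above):

        server {
            root /var/www/mydomain.com/prod;  # give try_files something to stat

            location / {
                # Serve static files directly; hand everything else to
                # Apache, which resolves the WordPress permalinks.
                try_files $uri $uri/ @backend;
            }

            location @backend {
                proxy_pass http://backend;
                proxy_cache main;
                proxy_cache_valid 30m;
            }
        }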

    Read the article

  • After closing the ssh terminal, the thin server is down

    - by Keating Wang
    I have a Rails project running on the thin server (1.3.1) on an Ubuntu server. I ssh to the server and start thin with the command 'thin start -C config/thin.yml', with the following thin.yml:

        port: 3000
        log: log/thin.log
        timeout: 30
        chdir: /home/byht/56platform/dev/tracker
        environment: production
        servers: 1
        daemonize: true

    After thin starts successfully, I visit the project and it works well. Then I close the terminal; I can still visit the pages that have already been visited, but when I visit pages that were not visited before closing the ssh terminal, a "500" error appears on the page. I didn't find any error messages in the log file. I have tried starting thin with nohup and sudo, but they didn't help. If I sign in to the Ubuntu server locally, the problem disappears, but I need to sign in over ssh to start thin when I'm at home.

    Read the article

  • Proper policy for user setup

    - by Dave Long
    I am still fairly new to Linux hosting and am currently working on some policies for our production Ubuntu servers. The servers are public-facing web servers with ssh access from the public network, and database servers with ssh access from the internal private network. We are a small hosting company, so in the past with Windows servers we used one user account and one password that each of us used internally. Anyone outside of the company who needed to access a server for FTP or anything else had their own user account. Is that okay to do in the Linux world, or would most people recommend using individual accounts for each person who needs to access the server?

    Read the article

  • Turnkey with LightSwitch

    - by Laila
    Microsoft has long wanted to find a replacement for Microsoft Access. The best attempt yet, due out in or before September, is Visual Studio LightSwitch, with which it is said to be as 'easy as flipping a switch' to use Silverlight to create simple form-driven business applications. It is easy to get confused by the various initiatives from Microsoft. No, this isn't WebMatrix. There is no 'Razor', for this isn't meant for cute little ecommerce sites, but is designed to build simple database applications of the card-box type. It is more clearly a .NET-based solution to the problem that every business seems to suffer from: the plethora of Access-based and Excel-based 'private' and departmental database applications. These are a nightmare for any IT department, since they are often 'stealth' applications built by the business in the teeth of opposition from the IT department zealots. As they are undocumented, it is scarily easy to bring a whole department into disarray by decommissioning a PC tucked under a desk somewhere. With LightSwitch, it is easy to rewrite such applications in a standard, maintainable way, using a SQL Server database deployed somewhere reasonably safe such as Azure. Even SharePoint or Windows Communication Foundation can be used as data sources. Oracle's ApEx has taken off remarkably well, and has shaken the perception that, for the business user, Oracle must remain a mystic force accessible only to the priests and acolytes. Microsoft, by comparison, had only Access, which was first released in 1992, the year of the Madonna conical bustier. It looks just as dated. Microsoft badly needed an entirely new solution to the same business requirement that led to Access's and FoxPro's long-time popularity, but with the same allure as ApEx. LightSwitch is sound in its ideas, and comfortingly conventional in its architecture. By giving easy access to SQL Server databases, and providing a 'thumb and blanket' migration path to Access-heads, LightSwitch seems likely to offer a simple way of pulling more Microsoft users into the .NET community. If Microsoft puts its weight behind it, then it will give some glimmer of hope to the many Silverlight developers that Microsoft is capable of seeing through its .NET revolution.

    Read the article
