Search Results

Search found 27143 results on 1086 pages for 'include path'.

Page 840/1086 | < Previous Page | 836 837 838 839 840 841 842 843 844 845 846 847  | Next Page >

  • Rename Active Directory domain following Windows 2000 -> 2008 migration.

    - by ewwhite
    I'm working with a site that needs an internal DNS domain rename. It currently has a DNS name of domain.abc.com and NT name of ABC. I'm trying to get to a DNS name of abctrading.com and NT name of ABCTRADING. Split DNS would be used. The site originally ran from a single Windows 2000 domain controller hosting AD, file, print, DHCP and DNS services. There was no Exchange system in the environment. The 50 client PCs are all Windows XP with a handful of users using roaming profiles. All users are in a single OU and there are no group policy/GPOs. I'm a Linux engineer, but have been trying to guide another group of consultants to reach a more suitable setup. With the help of this group, we were able to move the single Windows 2000 system to a set of Windows 2008 R2 servers separated into domain controller and file/print systems (virtualized). We are also trying to add an Exchange 2010 system to this mix. The Windows 2000 server was demoted and is no longer in the picture. This is the tricky part, as client wants the domain renamed and the consultants aren't quite sure how to get through it without another 32-40 hours of testing/implementation. THey say that there's considerable risk to do the rename without a completely isolated test environment. However, this rename has to be done before installing Exchange. So we're stuck at this point. I'd like to know what's involved in renaming the domain at this point. We're on Windows Server 2008. The AD is healthy now. Coming from a Linux background, it seems as though there should be a reasonable path to this. Also, since the original domain appears to be a child/subdomain, would that be a problem here. I'd appreciate any guidance.

    Read the article

  • split virtualization design based on environment or server role?

    - by Dan
    I'm setting up the server environment for a new software development group, which will include 4 test environments. These are web applications, so each environment will have an application server and a database server. I'm planning on buying two physical servers (e.g. 6-core CPU each with 12GB or so of RAM), and I'm thinking virtualization is appropriate here. With that in mind, I've thought of a couple ways that I could organize the virtualization strategy: - Separated by server role: Server 1 has all the application servers, each in their own guest VM. Server 2 has all the databases. OR - Separated by environment: Server 1 has a VM for two of the environments, with the VM containing both the app server and the database server. Server 2 would also contain two test environments, with the same style (app server and database in same VM). The advantages I see with all the app servers on one server and all the databases on another server is that I could probably be more efficient with the database server (one instance running multiple databases). But the other option seems easier to manage (archives/restorations would be contained in a single VM). Any recommendations? TIA.

    Read the article

  • PostgreSQL pg_hba.conf with "password" auth wouldn't work with PHP pg_connect?

    - by tftd
    I've recently experimented with the settings in pg_hba.conf. I read the PostgreSQL documentation and I though that the "password" auth method is what I want. There are many people that have access to the server PostgreSQL is working on so I don't want the "trust" method. So I changed it. But then PHP stopped working with the database. The message I get is "Warning: pg_connect(): Unable to connect to PostgreSQL server: FATAL: password authentication failed for user "myuser" in /my/path/to/connection/class.php on line 35". It is kind of strange because I can connect via phppgadmin without any problems and also I can connect from my home computer with psql - again without any problems. This is my pg_hba.conf: # TYPE DATABASE USER CIDR-ADDRESS METHOD # "local" is for Unix domain socket connections only local all all password # IPv4 local connections: host all all 127.0.0.1/32 password # IPv6 local connections: host all all ::1/128 password The connection string I'm using with pg_conenct is: $connect_string = "host=localhost port=5432 dbname=mydbname user=auser password=apassword"; $dbConnection = pg_connect($connection_string); Does anybody know why is this happening ? Did I misconfigured something ?

    Read the article

  • Reconfiguring PHP with OpenSSL Extension on CentOS

    - by Evan
    Hi Guys - Long time browser, first time poster! I have a CentOS Dedicated server running just fine. I'm trying to reconfigure PHP to include the OpenSSL extensions so I can use some of the Youtube API's. I installed OpenSSL with yum, so it's in place on the server. I'm just now having trouble getting PHP to use it as an extension. I got the latest PHP tarball, untarred, set my configure string (./configure) using the proper parameter for openssl (--with-openssl=/usr) and it checked out just fine. I ran Make, then Make Install. I am getting hung up here. After it makes the PEAR config file it seems to quit. I guess I'm not sure, but it seems like there is a LOT more that should be happening. Here is a screenshot: http://www.evanfell.com/screencaps/6iamks.png Restarting apache shows no change to the PHP running on the server. Is there are PEAR issue killing the Install process? Or is there an other issue? Thanks In Advance. Happy to clarify and provide more info.

    Read the article

  • Windows Photo Viewer can't open this picture because you don't have the correct permissions to access the file location

    - by Software Monkey
    My system in Windows 7 and fully up to date with all patches and options (except for Microsoft Silverlight, which I refuse to install). I get this error whenever I try to open an image using Windows Photo Viewer, such as when previewing from Explorer or when opening an image attachment to an email. I have already verified correct permissions to the file and all folders in the path. The strange thing is that every other program I have seems to open the images fine, including "Slideshow" from Windows Explorer. Even more strange, in WPV there is an "Open" menu that lists the other programs for images including GIMP and MS Paint and they open the very file that WPV is complaining about just fine. That should eliminate permissions as being the problem, especially since (logically at least) they are read/write while WPV is read-only. I have even edited and saved the images that WPV does not open. I am out of ideas, and searching for answer on the Web has resulted only in the same tired repitition of some flavor of "take ownership and reset permissions for the entire drive", which I have already done. And which is counter-indicated by the fact that only Windows Photo Viewer seems to have a problem. The one thing which is slightly unusual is that for normal files they are all on a second HDD mounted into C:, however for email attachments the temporary folder is C:\Temp\, which is directly on that drive.

    Read the article

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories to use for my clients. I need to control access to each repository individually - not just push access, but clone as well. I've got an .htaccess set which requires authentication globally: AuthUserFile /path/to/hgweb.passwd AuthGroupFile /dev/null AuthName "Chris Lawlor Client Mercurial Repositories" AuthType Basic <Limit GET POST PUT> Require valid-user </Limit> <FilesMatch "\.(htaccess|passwd|config|bak)$"> Order Allow,Deny Deny from all </FilesMatch> Then in each repository, I've got a .hg/hgrc file requiring a valid user [web] allow_push = <comma seperated user list> This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that though, seems a little convoluted. I can already specify a list of authorized users for each repository in that repos' hgrc file with the allow_push setting. If only there were an allow_clone setting as well... All the documentation I've found for hgwebdir.cgi is incomplete. I've read: http://mercurial.selenic.com/wiki/HgWebDirStepByStep http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi http://hgbook.red-bean.com/read/collaborating-with-other-people.html And others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question than a mercurial question. Unless I can find a better approach, I'll be going with a seperate .htaccess and .passwd file for each repo. This is a virtual host on Webfaction if it matters - set up roughly like this http://docs.webfaction.com/software/mercurial.html

    Read the article

  • Windows cannot open directory with too long name created by Linux

    - by Tim
    Hello! My laptop has two OSes: Windows 7 and Ubuntu 10.10. A partition of Windows 7 of format NTFS is mounted in Ubuntu. In Ubuntu, I created a directory under somehow deep path and with a long name for itself, specifically, the name for that directory is "a set of size-measurable subsets ie sigma algebra". Now in Windows, I cannot open the directory, which I guess is because of the name is too long, nor can I rename it. I was wondering if there is some way to access that directory under Windows? Better without changing the directory if possible, but will have to if necessary. Thanks and regards! Update: This is the output using "DIR /X" in cmd.exe, which does not shorten the directory name: F:\science\math\Foundations of mathematics\set theory\whether element of a set i s also a set\when element is set\when element sets are subsets of a universal se t\closed under some set operations\sigma algebra of sets>DIR /X Volume in drive F is Data Volume Serial Number is 0492-DD90 Directory of F:\science\math\Foundations of mathematics\set theory\whether elem ent of a set is also a set\when element is set\when element sets are subsets of a universal set\closed under some set operations\sigma algebra of sets 03/14/2011 10:43 AM <DIR> . 03/14/2011 10:43 AM <DIR> .. 03/08/2011 10:09 AM <DIR> a set of size-measurable sub sets ie sigma algebra 02/12/2011 04:08 AM <DIR> example 02/17/2011 12:30 PM <DIR> general 03/13/2011 02:28 PM <DIR> mapping from sigma algebra t o R or C i.e. measure 02/12/2011 04:10 AM <DIR> msbl mapping from general ms bl space to Borel msbl R or C 02/12/2011 04:10 AM 4,928 new file~ 03/14/2011 10:42 AM <DIR> temp 03/02/2011 10:58 AM <DIR> with Cartesian product of se ts 1 File(s) 4,928 bytes 9 Dir(s) 39,509,340,160 bytes free

    Read the article

  • Error related to pkg-config when installing frei0r as part of another package

    - by Anentropic
    I am trying to build https://github.com/mltframework/shotcut on OS X Lion (using their script in scripts/build_shotcut.sh) and after numerous hurdles I'm stuck on this error: ./configure: line 16062: syntax error near unexpected token `OPENCV,' ./configure: line 16062: `PKG_CHECK_MODULES(OPENCV, opencv >= 1.0.0, HAVE_OPENCV=true, true)' ERROR: Unable to configure frei0r From what I already googled this means that the PKG_CHECK_MODULES macro hasn't been defined, which probably means there's something wrong with my pkg-config, which I installed via Homebrew. Sounds like the pkg.m4 file isn't found. When I brew install pkg-config I get the following warning: Warning: m4 macros were installed to "share/aclocal". Homebrew does not append "/usr/local/share/aclocal" to "/usr/share/aclocal/dirlist". If an autoconf script you use requires these m4 macros, you'll need to add this path manually. Well I've appended that line to the dirlist file and it doesn't fix the problem above. Can anyone suggest a way forward here? I have briefly tried building my own pkg-config from source but (bizarrely) when I tried to ./configure I got the following error: checking for pkg-config... no ./configure: line 13540: --exists: command not found configure: error: pkg-config and glib-2.0 not found, please set GLIB_CFLAGS and GLIB_LIBS to the correct values if building pkg-config needs pkg-config it seems like a weird catch 22 situation... I think this is probably an unnecessary sidetrack anyway.

    Read the article

  • Extremely high mysqld CPU usage with no active queries

    - by RadarNyan
    I have a VPS running Ubuntu 12.04 LTS with LEMP stack, followed the guide from Linode Library (since I'm using a Linode) to setup, and everything worked fine until now. I don't know what's wrong, but my CPU usage just goes up since a week ago. Today things getting really bad - I got 74% CPU usage so I went check and found that mysqld taking too much CPU usage (somewhere around 30% ~ 80%) So I did some Google Search, tried disable InnoDB, restart mysql, reset ntp / system clock (Isn't this bug supposed to happen more than a year ago?!) and reboot my VPS, nothing helped. Even with mysql processlist empty, I still get mysqld CPU usage very high. I don't know what I missed and have totally no idea, any advice would be appreciated. Thanks in advance. Update: I got these from running "strace mysqld" write(2, "InnoDB: Unable to lock ./ibdata1"..., 44) = 44 write(2, "InnoDB: Check that you do not al"..., 115) = 115 select(0, NULL, NULL, NULL, {1, 0}^[[A^[[A) = 0 (Timeout) fcntl64(3, F_SETLK64, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}, 0xbfa496f8) = -1 EAGAIN (Resource temporarily unavailable) hum... I did tried to disable InnoDB and it didn't fix this problem. Any idea? Update2: # ps -e | grep mysqld 13099 ? 00:00:20 mysqld then use "strace -p 13099", the following lines appears repeatedly: fcntl64(12, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(12, F_SETFL, O_RDWR|O_NONBLOCK) = 0 accept(12, {sa_family=AF_FILE, NULL}, [2]) = 14 fcntl64(12, F_SETFL, O_RDWR) = 0 getsockname(14, {sa_family=AF_FILE, path="/var/run/mysqld/mysqld.sock"}, [30]) = 0 fcntl64(14, F_SETFL, O_RDONLY) = 0 fcntl64(14, F_GETFL) = 0x2 (flags O_RDWR) setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0", 8) = 0 setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0", 8) = 0 fcntl64(14, F_SETFL, O_RDWR|O_NONBLOCK) = 0 setsockopt(14, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported) futex(0xb786a584, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xb786a580, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 futex(0xb7869998, FUTEX_WAKE_PRIVATE, 1) = 1 poll([{fd=10, events=POLLIN}, {fd=12, events=POLLIN}], 2, -1) = 1 ([{fd=12, revents=POLLIN}]) er... now I totally don't get it x_x help

    Read the article

  • Load Balancing Rails on Apache 2.x

    - by revgum
    My situation is that I need to proxy traffic to the root of my web server to port 81 for IIS, and then any traffic to a sub-directory needs to be directed to the rails app. my-server.com/ - needs to proxy to port 81 my-server.com/myapp - needs to point to the rails app This seems to be working alright for the rails application but the images, javascripts, and stylesheets are not actually working (proxied). I've tried to fiddle with the proxypass lines but it still doesn't work for me..can anyone help? Here's my complete VirtualHost portion of the config; LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_http_module modules/mod_proxy_http.so ProxyRequests off <Proxy balancer://myapp_cluster> BalancerMember http://127.0.0.1:3001 BalancerMember http://127.0.0.1:3002 </Proxy> <VirtualHost *:80> DocumentRoot "c:\ruby\apps\myapp\public" <Directory /myapp > Options FollowSymLinks AllowOverride None </Directory> ProxyPass /myapp/images ! ProxyPass /myapp/stylesheets ! ProxyPass /myapp/javascripts ! ProxyPass /myapp/ balancer://myapp_cluster/ ProxyPassReverse /myapp/ balancer://myapp_cluster/ ProxyPreserveHost on ProxyPass / http://localhost:81/ ErrorLog "c:\ruby\apps\myapp\log\error.log" # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog "c:\ruby\apps\myapp\log\access.log" combined </VirtualHost>

    Read the article

  • Squid3 not caching simple request and response

    - by Nick Spacek
    Hi folks, I've pared down my squid.conf to try to figure this out: http_port 80 accel defaultsite=host.to.cache cache_peer ip.to.cache parent 80 0 no-query originserver acl our_sites dstdomain host.to.cache http_access allow our_sites refresh_pattern . 1 20% 4320 Requests are being proxied correctly, so that's a start. Here's a request: GET http://host.to.cache/path?some_param=true Accept: */* Accept-Charset: ISO-8859-1,utf-8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en Connection: keep-alive Host: host.to.cache User-Agent: myuseragent And the response: Connection: keep-alive Content-Length: 585 Content-Type: application/xml Date: Thu, 06 Jan 2011 18:33:11 GMT Via: 1.0 localhost (squid/3.0.STABLE19) X-Cache: MISS from localhost X-Cache-Lookup: MISS from localhost:80 The response has no caching-related headers, but I thought that refresh_pattern would set a default behavior for responses without caching-related headers. For my test, I wanted to cache everything for one minute at minimum. Am I missing something obvious? I did take a peek at this question: Squid isn't caching ...and ran through the page here: http://www.mnot.net/cache_docs/ briefly, but didn't see anything relevant (not to say that there isn't, I could have missed something). Thanks for any help.

    Read the article

  • Configuring dnsmasq to handle mx records on pfsense 2.0.1

    - by Bob B.
    I know from dnsmasq's man page that it is capable of handling mx records, but I can't seem to find anything in pfsense's web GUI or anywhere online that talks about how to include mx records. I'm running pfsense 2.0.1 on a turnkey hardware appliance. I have root shell access. I would prefer not to move away from using DNS Forwarder/dnsmasq if I can help it. I've searched for a dnsmasq.conf file, but none exists. pfsense handles everything through a centralized xml config file. That file merely designates the dnsmasq section using the tag, then drops immediate into listings for each host override you define. My understanding of pfsense's implementation: In the GUI, you can only define an override using the host, domain, IP and description. In the XML that translates to: <hosts> <host>foo</host> <domain>foo.com</domain> <ip>127.0.0.1</ip> <descr/> </hosts> The above example results in foo.foo.com resolving to 127.0.0.1, for instance. But that's it. No ability to select a record type with which to define things like MX. Anyone had any luck with this? Thank you for any insights you might have.

    Read the article

  • Can't write to samba share

    - by Tiddo
    I try to setup a samba file server, but whatever I do I can't get write access to work (reading works fine). This is my current situation: I have a local fileserver with 3 harddisks mounted at /mnt/share/disk<nr>. 2 of these use the ext4 filesystem, the third one is ntfs. This file server runs Fedora 18 32-bit. The root folders of these harddisks are owned by superman:superman, and testparm outputs the following: [global] workgroup = WORKGROUP netbios name = FILE_SERVER server string = Samba Server Version %v interfaces = lo, eth0, 192.168.123.191/8 log file = /var/log/samba/log.%m max log size = 50 unix extensions = No load printers = No idmap config * : backend = tdb hosts allow = 192.168.123. cups options = raw wide links = Yes [share] comment = Home Directories path = /home/share/ write list = superman, @users force user = superman read only = No create mask = 0777 directory mask = 0777 inherit permissions = Yes guest ok = Yes I've tried a lot to get this to work: the disk are chmodded to 777, I've tried turning off selinux, I've added the samba_share_t label to the disks and as can be seen in the above output I tried to make the smb config as permissive as I could, but still I cannot write to the share (tried from Windows 7 and another Fedora installation). What can I try to be able to write to the shares? EDIT: The replies I got so far are mostly concerned with the smb.conf. I have however tried a lot of different setup, ready made configs, and solutions to similar problems for the smb.conf file, so I suspect that the real problem is somewhere else.

    Read the article

  • Exim4 Smart Host Relay

    - by ColinM
    I am running Exim 4.71. I want to: Route all email from A.com through mail.A.com Route all email from [B-E].com through mail.B.com Send all other email directly. Here is the configuration I have that doesn't work like I hoped: domainlist a_domains = a.com domainlist b_domains = b.com : c.com : d.com : e.com begin routers smart_route_a: driver = manualroute domains = +a_domains transport = remote_smtp route_list = +a_domains mail.a.com no_more smart_route_b: driver = manualroute domains = +b_domains transport = remote_smtp route_list = +b_domains mail.mollenhour.com no_more dnslookup: driver = dnslookup domains = ! +local_domains transport = remote_smtp ignore_target_hosts = 0.0.0.0 : 127.0.0.0/8 no_more When I send an email e.g. with PHP's mail() or Zend_Mail_Transport_Smtp setting both From: and Return-Path: as [email protected], the smart_route_a router is not used, the dnslookup is used instead. Disabling dnslookup results in no mail being sent. From the logs it appears that email sent to [email protected] uses smart_route_a, but the same email sent from [email protected] to [email protected] is sent using dnslookup. How do I make email from [email protected] be relayed via mail.a.com?

    Read the article

  • Network latency and speed of light

    - by James
    This was kinda of covered by the following Is minimum latency fixed by the speed of light? , but i would like to add the follow up a bit. The scenario is as follows; we have two opposing sites one on the West Coast of the US and one in Ireland. The customer is in central Europe, and has requested a latency test. Ireland gives responses of ~65-70ms. However the West Coast guys claim to be faster with a response of 60ms. Now a quick check says that light in fiber would take about 42ms to make the trip to the States and 8.5ms to Ireland. So obviously this is a single hop and does not include routers, switches, firewalls, protocol overhead etc. Would I be right to call BS on their figures? As a final note I tested a ping to Google IP address that was allegedly on the west coast from a site that covered a similar distance and was amazed to get a response time of 20ms. Suggesting ICMP packets that travel twice the speed of light. So A) what am I missing B) Am I right to suspect shenanigans? UPDATE: Guys thanks so far for your help and I have been reading various previous questions on this. About 5 years I had an issue where the hop from the UK to Ireland added 10ms of latency no matter what we did. In the end I moved the servers; So imagine my surprise when I have guys that claim they are 5ms faster with a transatlantic trip. So again should I call BS? Oh and assume both sites are normal mortals that don't have access to Google magical routing, warp dives or flux capacitors. :)

    Read the article

  • Trying to run a codeigniter app on custom php

    - by hamstar
    I have a CodeIgniter app that I deployed to a server with php 5.2 and my dev box has 5.3, and some stuff doesn't work anymore. I didn't want to upgrade php and risk the other app on the server having issues. Anyway I compiled a custom PHP and added the following to a single .conf file in /etc/httpd/conf.d/zcid.conf with all the other conf files. <VirtualHost *:80> DocumentRoot /var/www/cid/app ServerName sub.example.co.nz </VirtualHost> <Directory "/var/www/cid/app"> authtype Basic authname "oh dear how did this get here i am no good with computer" authuserfile /path/to/auth require valid-user RewriteEngine on RewriteCond $1 !^(index\.php|robots\.txt|createEvent\.php|/cgi-bin) RewriteRule ^(.*)$ /index.php/$1 [L] AddHandler custom-php .php Action custom-php /cgi-bin/php53.cgi </Directory> In /var/www/cid/app I have the cgi-bin folder and the php53.cgi that I copied from /usr/local/php53/bin/php-cgi But now when I navigate to the subdomain it says: The requested URL /cgi-bin/php53.cgi/index.php/ was not found on this server. And if I try to browse to /cgi-bin it says (what it is supposed to?): You don't have permission to access /cgi-bin/ on this server. Quite confused now. Anyone know what to do here? Thanks :)

    Read the article

  • Set proper rights for sshfs mountpoint so it can be shared with samba

    - by CS01
    I have a domain hoster that provides access via SSH. My platforms are: Gentoo 2.6.36-r5 Windows (XP/Vista/7) I work on my Windows, I use Gentoo to do all the magic Windows can't do. Therefore I use sshfs to mount the remote public directory for my domain to /mnt/mydomain.com. Authentication is done via keys, so lazy me don't have to type in my password every now and then. Since I do my coding on Windows, and I don't want to upload/download the changed files all the time, I want to access this /mnt/mydomain.com via a samba share. So I shared /mnt in samba, all mounts except mydomain.com is listed on my Windows Explorer. My theories are: sshfs does not set the mountpoint uid/gid to something that samba expects samba does not know that it has to include the uid/gid that /mnt/mydomain.com has been set. All above is wrong, and I don't know. Here are configs and output from console, need anything else just let me know. Also no errors or warnings that I take notice of being relevant to this issue, but I might be wrong. gentoo ~ # ls -lah /mnt total 20K drwxr-xr-x 9 root root 4.0K Mar 26 16:15 . drwxr-xr-x 18 root root 4.0K Mar 26 2011 .. -rw-r--r-- 1 root root 0 Feb 1 16:12 .keep drwxr-xr-x 1 root root 0 Mar 18 12:09 buffer drwxr-s--x 1 68591 68591 4.0K Feb 16 15:43 mydomain.com drwx------ 2 root root 4.0K Feb 1 16:12 cdrom drwx------ 2 root root 4.0K Feb 1 16:12 floppy drwxr-xr-x 1 root root 0 Sep 1 2009 services drwxr-xr-x 1 root root 0 Feb 10 15:08 www /etc/samba/smb.conf [mnt] comment = Mount points writable = yes writeable = yes browseable = yes browsable = yes path = /mnt /etc/fstab sshfs#[email protected]:/home/to/pub/dir/ /mnt/mydomain.com/ fuse comment=sshfs,noauto,users,exec,uid=0,gid=0,allow_other,reconnect,follow_symlinks,transform_symlinks,idmap=none,SSHOPT=HostBasedAuthentication 0 0 For an easier read: [email protected] /home/to/pub/dir/ /mnt/mydomain.com/ options: comment=sshfs noauto users exec uid=0 gid=0 allow_other reconnect follow_symlinks transform_symlinks idmap=none SSHOPT=HostBasedAuthentication Help!

    Read the article

  • NGINX Configuration Error using Codex Example: Is This a Typo in Codex?

    - by jw60660
    I installed NGINX using this tutorial: C3M Digital NGINX Tuturial but after reading this article on security issues with "cut and paste" configuration tutorials: Neal Poole's article regarding security and NGINX configuration I decided to follow Poole's suggestion to use the configuration suggested in the WordPress codex: Codex on NGINX Configuration I used the Codex configuration for a multisite installation using W3 Total Cache. When attempting to start NGINX I get an error saying that the /etc/nginx/nginx.conf test failed. The error message was: "Restarting nginx: nginx: [emerg] unknown directive "//" in /etc/nginx/sites-enabled/teambrazil.com:18" When I looked at my site specific configuration at that path I noticed the rewrite rule in the server block was: rewrite ^ $scheme://teambrazil.conf$request_uri redirect; That line in the Codex example was: rewrite ^ $scheme://mysite.conf$request_uri redirect; That looked like a mistake to me, and I changed my line to: rewrite ^ $scheme://teambrazil.com$request_uri redirect; I then attempted to restart NGINX but got the same error message. My question is: is that a mistake, and is there anything more I have to do aside from restarting NGINX after making this change. As suggested by both tutorials I set up the directories: /etc/nginx/sites-enabled and /etc/nginx/sites-available and created the appropriate symbolic links using: touch /etc/nginx/sites-available/teambrazil.com ln -s /etc/nginx/sites-available/teambrazil.com /etc/nginx/sites-enabled/teambrazil.com Is there something else I need to consider after making this correction? Or was it not an error in the first place? I'm pretty stuck here. BTW, I am using Debian squeeze as an OS on Amerinoc's VPS. I'm just getting familiar with VPS administration and am pretty much a noob. Thanks very much, would appreciate any input.

    Read the article

  • Does this exist: a standardized way of documenting a file-system structure

    - by eegg
    At work, I'm in charge of maintaining the organization of a whole lot of varied data on a standard file-system. Part of this is coming up with sensible classification (by similarity, need, read/write access, etc), but the bigger part is actually documenting it: what documents/files/media should go where, what should not be in this directory, "for something slightly different, see ../../other-dir", etc. At the moment, I've documented this using a plaintext file filing.txt in every directory I want to document. If someone is unsure what's meant to be in any directory, they read that file. This works alright, but it seems odd that I have this primitive custom solution to a problem that any maintainer of a non-trivial directory structure must experience. Every company I've known of, for example, has some kind of shared file-system where agreed terminology for categorization is important. In my experience, people just have to learn what's what by trial-and-error and experimentation. So allow me to propose a better solution, and hopefully you can tell me if it exists. Any directory on any filesystem can have a hidden plaintext file named .filing. Its contents are descriptive human language. It uses some markup like Markdown, with little more than bold, italic, and (relative) hyperlinks to other directories. Now a suitably-enabled file browser will check for a file named .filing whenever it displays a directory. If it exists, its contents are parsed and displayed in an unobtrusive pane near the directory-path widget. Any links therein can be clicked, and the user will be taken to the target directory of that link. I think that the effort of implementing such a standard would pay back many times over in usability gains. We would have, say, plugins for Nautilus, Konqueror, etc.. It could be used to display directory information in the standard file lists served by webservers. And so on. So, question: does such a thing exist? If not, why not? Do people think it's a worthwhile idea?

    Read the article

  • Python module: Trouble Installing Bitarray 0.8.0 on Mac OSX 10.7.4

    - by Gabriele
    I'm new here! I have trouble installing bitarray (vers 0.8.0) on my Mac OSX 10.7.4. Thanks! ('gcc' does not seem to be the problem) Last login: Sun Sep 9 22:24:25 on ttys000 host-001:~ gabriele$ gcc -version i686-apple-darwin11-llvm-gcc-4.2: no input files host-001:~ gabriele$ Last login: Sun Sep 9 22:18:41 on ttys000 host-001:~ gabriele$ cd /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/bitarray-0.8.0/ host-001:bitarray-0.8.0 gabriele$ python2.7 setup.py installrunning install running bdist_egg running egg_info creating bitarray.egg-info writing bitarray.egg-info/PKG-INFO writing top-level names to bitarray.egg-info/top_level.txt writing dependency_links to bitarray.egg-info/dependency_links.txt writing manifest file 'bitarray.egg-info/SOURCES.txt' reading manifest file 'bitarray.egg-info/SOURCES.txt' writing manifest file 'bitarray.egg-info/SOURCES.txt' installing library code to build/bdist.macosx-10.6-intel/egg running install_lib running build_py creating build creating build/lib.macosx-10.6-intel-2.7 creating build/lib.macosx-10.6-intel-2.7/bitarray copying bitarray/__init__.py -> build/lib.macosx-10.6-intel-2.7/bitarray copying bitarray/test_bitarray.py -> build/lib.macosx-10.6-intel-2.7/bitarray running build_ext building 'bitarray._bitarray' extension creating build/temp.macosx-10.6-intel-2.7 creating build/temp.macosx-10.6-intel-2.7/bitarray gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -g -O2 -DNDEBUG -g -O3 -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c bitarray/_bitarray.c -o build/temp.macosx-10.6-intel-2.7/bitarray/_bitarray.o unable to execute gcc-4.2: No such file or directory error: command 'gcc-4.2' failed with exit status 1 host-001:bitarray-0.8.0 gabriele$

    Read the article

  • EngineX ignores Auth Basic?

    - by Miko
    I have configured nginx to password protect a directory using auth_basic. The password prompt comes up and the login works fine. However... if I refuse to type in my credentials, and instead hit escape multiple times in a row, the page will eventually load w/o CSS and images. In other words, continuously telling the login prompt to go away will at some point allow the page to load anyway. Is this an issue with nginx, or my configuration? Here is my virtual host: 31 server { 32 server_name sub.domain.com; 33 root /www/sub.domain.com/; 34 35 location / { 36 index index.php index.html; 37 root /www/sub.domain.com; 38 auth_basic "Restricted"; 39 auth_basic_user_file /www/auth/sub.domain.com; 40 error_page 404 = /www/404.php; 41 } 42 43 location ~ \.php$ { 44 include /usr/local/nginx/conf/fastcgi_params; 45 } 46 } My server runs CentOS + nginx + php-fpm + xcache + mysql

    Read the article

  • My Windows 7 has suddenly stopped displaying Unicode symbols

    - by Felix Dombek
    For some strange reason, my computer suddenly doesn't show certain unicode characters anymore! I have no idea what happened. Affected applications include Windows Explorer (should be Japanese characters), Google Chrome (should be a heart), and Winamp (should be stars): Russian, German etc. characters are displayed normally. Chrome also displays Japanese script on websites, but not in the GUI. How can I fix it? Update: I have tried to use System Restore to fix it. I needed to go back in time quite a while because the most recent restore points didn't solve it so I used one from the middle of November. After that restore, Unicode symbols were displayed again. Then I updated my system with Windows Update again because those were removed during the restore. After that, the error occurred again! I then did a restore to a point before my new updates, but the error persists, and the old restore point (which I used before) is gone and there are currently no other snapshots of the system. Any suggestions on what to do now? Update 2: I could find a workaround: Control Panel ? Region and Language ? Administration ? Change Language for Unicode-incompatible programs to Japanese (Japan). All mentioned programs display their symbols correctly again. However, I don't consider this a fix because these programs are not usually Unicode-incompatible, and it also leads to some (non-serious) artifacts in some programs. I still welcome an answer that tells me what went wrong here and how to fix the issue.

    Read the article

  • Administrator view all mapped drives

    - by kskid19
    In my understanding of security, an administrator should be able to view all connections to and from a computer - just as they can view all processes/owner, network connections/owning process. However, Windows 8 seems to have disabled this. As administrator running an elevated in Win Vista+ when you run net use you get back all drives mapped, listed as unavailable. In Windows 8, the same command run from an elevated prompt returns "There are no entries in the list". The behavior is identical for powershell Get-WmiObject Win32_LogonSessionMappedDisk. A workaround for persistent mappings is to run Get-ChildItem Registry::HKU*\Network*. This does not include temporary mappings (in my particular example it was created through explorer on an administrator account and I did not select "Reconnect at sign-in") Is there a direct/simple way for Administrator to view connections of any user (short of a script that runs under each user context)? I have read Some Programs Cannot Access Network Locations When UAC Is Enabled but I do not think it particularly applies. ServerFault has an answer, but it still does not address non-persistent drives How can I tell what network drives users have mapped?

    Read the article

  • Why is it bad to map network drives in Windows?

    - by Beeblebrox
    There has been some spirited discussion within our IT department about mapping network drives. In particular, it has been said that mapping network drives is A Bad Thing and that adding DFS paths or network shares to your (Windows Explorer/Libraries) Favourites is a far better solution. Why is this the case? Personally I find the convenience of z:\folder to be better than \\server\path\folder', particularly with cmd line and scripting (of course I'm not talking about hard-coded links, naturally!). I have tried searching for pros and cons of mapped network drives, but I haven't seen anything other than 'should the network go down, the drive will be unavailable'. But this is a limitation of any network-accessed storage... I have also been told that mapped network drives poll the network when the network resource is unavailable, however I haven't found more information on this. Wouldn't this still be an issue with other network access mechanisms (that is, mapped Favourites) whenever Windows tries to enumerate the file system (for example, when a file/folder picker dialog is opened)? -- Do network drives poll the network any more than a Windows Explorer library/favourite?

    Read the article

  • Ruby on Rails cannot find Initializer?

    - by Ryan M.
    Hello, I am trying to deploy an app to a fresh Ubuntu 10 installation using Passenger 2.2.15, Rails 2.3.5, Ruby 1.8.7, and Apache 2.2.14. However, even with a default rails app (sudo rails defaultapp), I am receiving the following error: "no such file to load -- initializer". I'm not sure which files you might need copies of in order to diagnose this problem, so I'll copy a few here and hope that it will help. Thanks for any help you can provide. -RM /etc/apache2/sites-available/default <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/appname/public <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> /etc/apache2/mods-available/passenger.conf <IfModule passenger_module> PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-2.2.15 PassengerRuby /usr/bin/ruby1.8 </IfModule> /etc/apache2/mods-available/passenger.load LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-2.2.15/ext/apache2/mod_passenger.so

    Read the article

< Previous Page | 836 837 838 839 840 841 842 843 844 845 846 847  | Next Page >