Search Results

Search found 6833 results on 274 pages for 'pure managed'.


  • Combining AD permissions with FTP

    - by user64204
    We're using Windows Server 2008 with Active Directory controlling access to a network share. We've set up FTP so that people can access that share from outside (we used to use a PPTP VPN, but for various reasons we need to switch to FTP). So far, here is what we've managed to implement on the FTP side:
    - The network share is used as the FTP root (defined as a UNC path), and that is working fine.
    - AD authentication is working fine (wrong password and you stay out, good password and you're in; password management in AD is correctly synced with the FTP).
    - AD permissions are failing: the AD permissions on the content of the FTP root are ignored. A user has either read or write access, but it applies to the whole FTP root, which obviously isn't suitable, since that FTP root is our network share and its files/folders have different AD permissions depending on people's groups. Whether we set the permissions through the share OR the FTP management interface, AD permissions are never enforced.
    Q1: Is that normal?
    Q2: If so, what solutions exist to combine AD permissions with FTP on MS Server 2008?
    Q3: If not, where should I look to fix the configuration?
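    A quick way to confirm what the NTFS/AD ACLs on the share actually say (as opposed to the FTP-level authorization rules, which in IIS only grant coarse per-user or per-role read/write access) is to dump the effective ACL on a test folder. A minimal sketch, where the UNC path below is a placeholder for your real share:

        rem list the NTFS ACL on a folder inside the FTP root (hypothetical path)
        icacls "\\fileserver\share\somefolder"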

    Read the article

  • ldap-authentication without sambaSamAccount on linux smb/cifs server (e.g. samba)

    - by umlaeute
    I'm currently running Samba 3.5.6 on a Debian/wheezy host to act as the file server for our department's Win32 clients. Authentication is done via OpenLDAP, where each user DN has an objectClass: sambaSamAccount that holds the SMB credentials and an objectClass: shadowAccount/posixAccount for "ordinary" authentication (e.g. PAM, Apache, ...).

    Now we would like to dump our department's user DB and instead authenticate against the user DB of our upstream organisation. These user accounts are managed in a Novell eDirectory, which I can already use for authentication via PAM (e.g. for SSH logins, on another host). Our upstream organisation provides SMB/CIFS-based access (via some Novell service) to some directories, which I can access from my Linux client via smbclient.

    What I currently can't manage is to use the upstream LDAP (the eDirectory) to authenticate against our institution's Samba. I configured my Samba server to authenticate against the upstream LDAP server:

        passdb backend = ldapsam:ldaps://ldap.example.com

    but when I try to authenticate a user, I get:

        $ smbclient -U USER \\\\SMBSERVER\\test
        Enter USER's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.6.6]
        tree connect failed: NT_STATUS_ACCESS_DENIED

    The log files show:

        [2012/10/02 09:53:47.692987, 0] passdb/secrets.c:350(fetch_ldap_pw)
          fetch_ldap_pw: neither ldap secret retrieved!
        [2012/10/02 09:53:47.693131, 0] lib/smbldap.c:1180(smbldap_connect_system)
          ldap_connect_system: Failed to retrieve password from secrets.tdb

    I see two problems:
    - I don't have any administrator password for the upstream LDAP (and most likely they won't give me one). I only want to authenticate my users; write access is not needed at all. Can I get away with that?
    - The upstream LDAP does not have any Samba-related attributes in the DB. I was under the impression that those attributes are required for Samba to authenticate, since SMB/CIFS uses some trivial hashing which is not compatible with the usual posixAccount hashes.

    Is there a way for my department's Samba server to authenticate against such an LDAP server?
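    Regarding the secrets.tdb error specifically: Samba's ldapsam backend expects the password for its LDAP bind DN to have been stored locally beforehand. A minimal sketch, assuming a (hypothetical) bind DN that the upstream organisation would have to provide:

        # in smb.conf, tell Samba which DN to bind as (assumed value):
        #   ldap admin dn = cn=samba-proxy,o=example
        # then store that DN's password in secrets.tdb:
        smbpasswd -w 'bind-password'

    This only addresses the bind error, though; without sambaSamAccount-style NT hashes in the directory, ldapsam still has nothing to verify user passwords against.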

    Read the article

  • How to setup Joomla CMS as a backend for iPhone app

    - by srik
    I would like my iPhone app to get dynamic content off the net. This content should be managed using a CMS. I have gone ahead and installed Joomla on my server and will be using the Joomla web interface to create and manage content. I would now like the iPhone app to log in to my server and fetch the content. I do not want complete web pages for my iPhone app; instead, I want the content in XML, JSON, or some other serialized format, so that I can use the data in a custom layout native to the app. So I am looking for two things in particular:
    1. How to set up HTTP-based authentication for my iPhone app to access data from my server.
    2. How to access the content in a serialized format (XML, JSON, etc.).
    Are there plugins/extensions/components I can use to achieve this? Any advice on how this can be achieved would be helpful. I am completely new to setting up/using a CMS.
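    As a rough illustration of what the app-side request could look like once such an endpoint exists (the URL, credentials, and the availability of a JSON view are all assumptions here; core Joomla components don't necessarily expose one without an extension):

        # fetch an article as JSON over HTTP Basic auth (hypothetical endpoint)
        curl -u appuser:secret \
          "http://example.com/index.php?option=com_content&view=article&id=42&format=json"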

    Read the article

  • Mac - KeyRemap4MacBook - Custom XML

    - by DjRikyx
    I hope some of you know this powerful pref pane for remapping the keyboard. Since I'm using a PC keyboard, I wanted to make screenshots a bit easier to take. I managed to get working:
    - Command+Shift+3 to Stamp (the PrtScr key; full-screen screenshot)
    - Command+Shift+4 to Control+Stamp (selection-cursor screenshot)
    Now I want to remap Shift+Stamp to Command+Shift+4+Space to get a window screenshot. I tried, but nothing worked. Here is my current XML; I only need to add the last remap!

        <?xml version="1.0"?>
        <root>
          <item>
            <name>Custom PC Style Screenshot</name>
            <appendix>Command+Shift_L+4+Space to Shift+F13</appendix>
            <appendix>Command+Shift_L+4 to Control+F13</appendix>
            <appendix>Command+Shift_L+3 to F13</appendix>
            <identifier>private.custom_pc_style_screenshot</identifier>
            <autogen></autogen>
            <autogen>--KeyToKey-- KeyCode::F13, VK_CONTROL, KeyCode::KEY_4, ModifierFlag::COMMAND_L | ModifierFlag::SHIFT_L</autogen>
            <autogen>--KeyToKey-- KeyCode::F13, KeyCode::KEY_3, ModifierFlag::COMMAND_L | ModifierFlag::SHIFT_L</autogen>
          </item>
        </root>

    Hope some of you can help me out! Thank you

    Read the article

  • Western Digital HDD now freezes my BIOS when loading the video card!

    - by Vercas
    After I had successively (and successfully) installed 3 Windows XPs on the same partition (don't ask why...), I restarted my computer and the BIOS just froze when it loaded the video card. Together with my uncle, we tracked down the problem and found out it's the HDD's fault. We tried booting without the HDD and it worked! (No other HDD attached (I have only one), but with a Ubuntu Live CD in.) We tried the HDD with a different data bus (it was from an identical computer), but that one didn't let my BIOS recognize the HDD. We also put the HDD in another computer as the second HDD and it DID recognize it, but Windows XP kept saying both that it could not install a driver and that it installed successfully. Happily, I managed to back up some of my most important files on that other computer. The following is a list of the tests that we ran:
    - With the HDD, original data bus, original computer: BIOS freezes
    - WITHOUT the HDD, original data bus, original computer: everything works just fine!
    - With the HDD, ANOTHER data bus, original computer: cannot see the HDD
    - With the HDD, original data bus, ANOTHER computer: it worked!
    - With the HDD, ANOTHER data bus, ANOTHER computer: it worked!
    During the tests we had only two data buses and two computers (each data bus from its own computer). The strange thing is that the second data bus does not let the BIOS see the HDD in my computer but works just fine in the other computer. I beg you to help me! I have my most important data on that HDD and I really cannot afford to buy another decent IDE HDD now!

    Read the article

  • File system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important and some are less, so some I wish to put on RAID-6, while for others RAID-5 is sufficient. It is difficult to predict, at the moment the arrays are created, how much space of each type to declare.

    What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100 GB partitions and, as my needs grow, assemble those partitions into md devices using linux-raid. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more space of, e.g., the RAID-6 type, I'd take a 100 GB partition from each hard drive, assemble them into another RAID-6 md device, and use it as physical storage for the volume group dedicated to RAID-6 data. Then I could grow the file system on that logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would reside completely independent file systems, which I'd later merge with multiple mount --bind into a single directory structure reflecting the logical structure of the data rather than that of the storage.

    But now that I have heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop wondering whether it can help me. If so, what do you think would be the best setup?
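    For reference, the md + LVM workflow described above would look roughly like this (a sketch only; device names, volume names and sizes are assumptions):

        # one 100 GB partition taken from each of the 5 drives (hypothetical names)
        mdadm --create /dev/md0 --level=6 --raid-devices=5 \
            /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
        # add the new array as another physical volume in the RAID-6 volume group
        pvcreate /dev/md0
        vgextend vg_raid6 /dev/md0       # or vgcreate vg_raid6 /dev/md0 the first time
        # grow the logical volume and the file system on it
        lvextend -l +100%FREE /dev/vg_raid6/data
        resize2fs /dev/vg_raid6/data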

    Read the article

  • How to detect/list rogue computers connected to a WIFI network without access to the Wifi Router interface? [migrated]

    - by JJarava
    This is what I believe to be an interesting challenge :) A relative (who lives a bit too far away to go there in person) is complaining that her WiFi/Internet network performance has gone down abysmally lately. She'd like to know whether some of the neighbors are using her WiFi network to access the Internet, but she's not too technically savvy. I know that the best way to prevent issues would be to change the router password, but it's a bit of a PITA having to re-configure all the WiFi devices... and if the uninvited guest broke the password once, they can do it again...

    Her WiFi router/Internet connection is provided by the telco and remotely managed, so she can log on to her telco account's page and remotely change the router's WiFi password, but she doesn't have access to the router status page/config/etc. unless she opts out of the telco's remote support and maintenance service.

    So, with these restrictions, how could she check whether there are guests on the WiFi, in the most point-and-click way? In this case I'd probably use nmap to look for other devices on the network, but I'm not sure that's the easiest way to do it. I'm not a WiFi expert, so I don't know if there are any WiFi-scanning utils that can tell us who's talking to the router... Lastly, she's a Windows user, and I guess that'll influence the choice of tools available. Any suggestions are more than welcome. Regards!
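    For the nmap route mentioned above, a simple host-discovery scan of the home subnet is usually enough to list every device currently on the LAN (a sketch; the subnet is an assumption, and running it with admin rights on the same LAN also shows MAC addresses):

        # ping-scan the typical home subnet and list responding hosts
        nmap -sn 192.168.1.0/24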

    Read the article

  • Apache suddenly very slow on http and faster on https

    - by hsnm
    Background: I have Apache 2 running on Ubuntu. Usage is low, and it is mostly accessed at a web-service URL by mobile apps. It was working fine until I installed SSL certificates; I now have both HTTP and HTTPS. When I access the server over HTTPS, I get a fairly quick response (though probably not as fast as before). When I use HTTP, it's very slow.

    What I tried:
    - From this post: I curl localhost from the host itself and it takes some time, meaning there is no routing issue. The server runs on an Amazon EC2 instance and is managed by me only.
    - I see that Apache, once running, creates the maximum number of processes it is allowed to, which was not the case before. I lowered MaxClients to 20 and I think I'm getting faster responses, but it still takes over a minute, and I always have MaxClients Apache processes running.
    - dmesg returns many lines like: [ 1953.655703] TCP: Possible SYN flooding on port 80. Sending cookies.
    - When I run netstat I get many entries in SYN_RECV. Possibly a DDoS attack?
    - From EC2's monitoring diagrams I see a pattern of high "Maximum Network In (Bytes)" since 2 days ago. By the way, the server is still being tested; the actual traffic is very low and not consistent.
    - I tried to go with this solution to limit incoming connections using iptables; still no luck, but I'm trying.

    Question: What could be the problem? Is this a DDoS attack?
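    One common iptables approach for throttling a suspected SYN flood on port 80 uses the recent match to cap new connections per source IP (a sketch; the thresholds are assumptions to be tuned, on the premise that legitimate clients open far fewer new connections than this):

        # track new connections to port 80 per source IP
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
        # drop sources opening more than 20 new connections within 2 seconds
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
            -m recent --update --seconds 2 --hitcount 20 -j DROP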

    Read the article

  • Installing ionCube on Windows 2008 64bit

    - by JoJo
    I've been trying to install ionCube onto my server but have not been having a lot of luck! My server is: Windows 2008 64-bit, PHP 5.3.14, Thread Safe disabled, running as FastCGI. In my php.ini I have:

        zend_extension="C:\Program Files (x86)\PHP\ext\ioncube_loader_win_5.2.dll"

    The path is correct. The DLL is from the x86 non-TS VC9 version of ionCube, and PHP is using the MSVC9 (Visual C++ 2008) compiler, though I have also tried the x86 non-TS VC6 version of ionCube. I'm not getting any error, but I'm also not getting ionCube when using phpinfo():

        This program makes use of the Zend Scripting Language Engine:
        Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies

    Apart from the main application pools, I have also set all the applications in the application pool in IIS7 to use 32-bit mode. I don't know whether FastCGI is running in 64- or 32-bit mode, nor how to switch it, nor whether it would make a difference. I know it can be a problem installing ionCube onto 64-bit Windows, but I have also come across threads where other people have [somehow] managed to get it working; even though I seem to be doing the same as them, I still can't get it working.
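    A quick command-line check makes it obvious whether the loader is being picked up at all. One assumption worth flagging: the ini above points at ioncube_loader_win_5.2.dll, while ionCube ships separate loader builds per PHP branch, so a PHP 5.3 install would normally need the 5.3 loader DLL:

        rem if the loader is active, the version banner mentions it
        php -v
        rem expected output (sketch): PHP 5.3.14 ... with the ionCube PHP Loader ...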

    Read the article

  • How does voltage fluctuation affect a laptop

    - by user45535
    I own an Acer Aspire 3680. In the past two years, I have sent my laptop in for service twice due to the same kind of problem. Last year, around the same time, I gave my laptop to the service center. They repaired it, and I managed to ask them why this problem happens and how I can prevent it. They said that something had gone wrong in the laptop due to voltage fluctuations, though they were hesitant to say which part they had changed or repaired. Recently the same problem happened again; I gave it to a service center to get it repaired, and they also said it was caused by voltage fluctuation.

    My questions: which parts of a laptop can be affected by voltage fluctuation, and how can I prevent this from happening?

    Symptoms before I sent the laptop in for service: when I plug in the adapter for charging, the green light blinks and the laptop doesn't turn on. The charging indicator doesn't light up while charging, either.

    Read the article

  • iptables : how to allow incoming ftp traffic?

    - by logansama
    Hi, still fighting my way through the jungle that is called iptables. I have managed to allow FTP access from outside our LAN; both of these would work. NOTE: eth0 is the LAN interface and eth1 is the WAN interface.

        iptables -t filter -A FORWARD -i eth0 -p tcp --dport 20:21 -j ACCEPT

    or

        iptables -A FORWARD -i eth0 -o eth1 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT

    But when I connect to an external FTP server, I manage to log in and all is fine until it wants to list the directory contents. Then nothing happens, as the data is blocked, because I do not have a rule set up to allow it (my last rule on the FORWARD chain blocks all traffic). I have tried a gazillion rules (many of which I did not understand) to try to allow the FTP traffic back through my server. One such rule, for example, was:

        iptables -A FORWARD -i eth1 -o eth0 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT

    But I cannot get the listing to work; it just times out after a while. Would anyone know how to build a rule that allows FTP listings / allows such traffic back? Or have a link to sources I could work through? Thank you
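    FTP directory listings travel over a separate data connection whose port is negotiated inside the control channel, so static port rules rarely cover it. The usual fix is connection tracking with the FTP helper (a sketch; it assumes a conntrack-capable kernel and keeps the question's eth0/eth1 convention):

        # load the FTP connection-tracking helper so the kernel can follow data channels
        modprobe nf_conntrack_ftp
        # allow forwarded traffic that belongs to, or is related to, an existing connection
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT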

    Read the article

  • Assigning IPs to OpenVZ containers

    - by Vojtech
    I have recently bought myself a physical server and I am trying to create containers which would have their own IPs. The physical machine has both IPv4 and IPv6 addresses. I have another IPv4 address and some other IPv6 addresses available which I would like to assign to the container. I managed to assign the addresses as follows:

        # vzctl set 101 --ipadd 144.76.195.252 --save

    I can ping the container from the physical machine, but not from the outside world. This also applies to the IPv6 address I assigned. This is the ifconfig of the physical machine:

        eth0      Link encap:Ethernet  HWaddr d4:3d:7e:ec:e0:04
                  inet addr:144.76.195.232  Bcast:144.76.195.255  Mask:255.255.255.224
                  inet6 addr: 2a01:4f8:200:71e7::2/64 Scope:Global
                  inet6 addr: fe80::d63d:7eff:feec:e004/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:217895 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:16779 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:322481419 (307.5 MiB)  TX bytes:1672628 (1.5 MiB)

        venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet6 addr: fe80::1/128 Scope:Link
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
                  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:12 errors:0 dropped:3 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1108 (1.0 KiB)  TX bytes:1108 (1.0 KiB)

    This is the ifconfig of the OpenVZ container:

        # ifconfig
        venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:127.0.0.2  P-t-P:127.0.0.2  Bcast:0.0.0.0  Mask:255.255.255.255
                  inet6 addr: 2a01:4f8:200:71e7::3/64 Scope:Global
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
                  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1108 (1.0 KiB)  TX bytes:1108 (1.0 KiB)

        venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:144.76.195.252  P-t-P:144.76.195.252  Bcast:144.76.195.252  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

    What do I need to do to make the container accessible from the outside world? What could I have forgotten? Thanks.
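    With OpenVZ's venet devices, the hardware node answers ARP for the container's IP and routes its traffic, so the first things to check on the node are IP forwarding and proxy ARP (a sketch; whether the provider actually routes the extra IP to this server is a separate assumption):

        # on the hardware node: forward packets and answer ARP for the container's IP
        sysctl -w net.ipv4.ip_forward=1
        sysctl -w net.ipv4.conf.eth0.proxy_arp=1
        # verify the route that vzctl added for the container address
        ip route show | grep 144.76.195.252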

    Read the article

  • I have a perl script that is supposed to run indefinitely. It's being killed... how do I determine who or what kills it?

    - by John O
    I run the perl script in screen (I can log in and check debug output). Nothing in the logic of the script should be capable of killing it quite this dead. I'm one of only two people with access to the server, and the other guy swears that it isn't him (and we both have quite a bit of money riding on it continuing to run without a hitch). I have no reason to believe that some hacker has managed to get a shell or anything like that. I have very little reason to suspect the admins of the host operation (bandwidth/cpu-wise, this script is pretty lightweight). Screen continues to run, but at the end of the output of the perl script I see "Killed" and it has dropped back to a prompt. How do I go about testing what is whacking the damn thing? I've checked crontab, nothing in there that would kill random/non-random processes. Nothing in any of the log files gives any hint. It will run from 2 to 8 hours, it would seem (and on my mac at home, it will run well over 24 hours without a problem). The server is running Ubuntu version something or other, I can look that up if it matters.
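    One likely culprit for a silent "Killed" after hours of runtime is the kernel's OOM killer, which logs its victims. A quick check (a sketch; the log paths are the usual Ubuntu defaults):

        # look for out-of-memory kills in the kernel ring buffer and the logs
        dmesg | grep -i -E 'killed process|out of memory'
        grep -i -E 'killed process|oom' /var/log/syslog /var/log/kern.log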

    Read the article

  • How to recover deleted files on ext3 fs

    - by Mike
    I have a drive which was using the ext3 filesystem. I am told that about 10 GB of data was deleted off the drive (probably via rm). The drive is currently mounted read-only to preserve all data. Does anyone know of a method to restore some or all of the data? If it helps, the OS was Fedora. I've also been told that the data is mostly ASCII Fortran source code and Matlab files.

    Conclusion: I finally managed to get the data back, and by the simplest means ever! After weeks of trying and failing to bring back much of any data, I brought someone in today to take a look at it and offer suggestions; he simply cd'd to the directory and everything was there! It was never lost in the first place!!! Needless to say, I feel really dumb right now, but I learned quite a lot from this whole fiasco. At any rate, while I was looking through data-forensics solutions, I found that Autopsy, or more specifically the SleuthKit, was the most helpful, so I will accept that as the final answer. I would also like to note, for anyone that comes across this later on, that the most up-voted (currently) answer by sekenre was also helpful and I learned a lot, but ultimately it did not help with the type (very many, and some being very large) of files I was dealing with. So thanks to all of you who provided suggestions, and I wish you all the best!
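    For reference, recovering deleted entries from an ext3 volume with the SleuthKit mentioned above typically goes through fls and icat (a sketch; the device name and inode number are placeholders):

        # list deleted file entries, recursively, with their inode numbers
        fls -rd /dev/sdb1
        # dump the contents of one recoverable inode to a file
        icat /dev/sdb1 1234 > recovered_file

    Note that ext3 zeroes block pointers on deletion, so this often recovers names but not contents; file-carving tools are the usual fallback at that point.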

    Read the article

  • Fixing damaged partition table

    - by dr4cul4
    This is a continuation of Recover Extended Partition, but this time I have a different problem, related to the partition table itself. I managed to restore the partition that I needed and backed up the files that were crucial to me (at least those I had space to store somewhere).

    OK, now to the problem. My partition table is corrupted. Booting RIP Linux, I can mount the partition in TrueCrypt (and the other recovered ones), but that's basically it. When I launch GParted I have an unallocated drive. GParted device info:

        Device Information
        Model: ATA ST2000DL003-9VT1
        Size: 1.82TiB
        Path: /dev/sda
        Partition table: unrecognized
        Heads: 255
        Sectors/track: 63
        Cylinders: 243201
        Total Sectors: 3907029168
        Sector size: 512

    When I check information on the unallocated space I get:

        File system: unallocated
        Size: 1.82TiB
        First sector: 0
        Last sector: 3907029167
        Total sectors: 3907029168
        Warning: Can't have a partition outside the disk!

    Now the output of TestDisk (Analyze):

        TestDisk 6.13, Data Recovery Utility, November 2011
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org

        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        Current partition structure:
             Partition                  Start        End    Size in sectors
        > 1 P Linux                    13132 242 39 16353 233  8   51744768
          2 E extended LBA             16807 223  1 243201 254 63 3637021626
        No partition is bootable
          5 L Linux                    16807 223 57 20430  39 25   58191872
          X extended                   20430  70  1 243201  78 13 3578816632
        Invalid NTFS or EXFAT boot
          6 L HPFS - NTFS              20430  71 58 243201  78 13 3578816512
          6 LNext

    Now fdisk:

        # fdisk -l /dev/sda

        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00039cd0

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1       210980864   262725631    25872384   83  Linux
        /dev/sda2       270018504  3907040129  1818510813    f  W95 Ext'd (LBA)
        /dev/sda5       270018560   328210431    29095936   83  Linux
        /dev/sda6       328212480  3907028991  1789408256    7  HPFS/NTFS/exFAT

    Now I would like to fix this so the partitions are arranged correctly, but I have no idea which tool is capable of doing that (I tried a few; some offered fixes, but it was too risky at the moment, as I am still backing up data).
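    Before letting any tool rewrite the table, it is worth snapshotting what is there now, so every experiment stays reversible (a sketch using sfdisk, which only reads here):

        # dump the current partition table to a text file
        sfdisk -d /dev/sda > sda-table.backup
        # if a repair goes wrong, the dump can be written back:
        # sfdisk /dev/sda < sda-table.backup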

    Read the article

  • Low-traffic WordPress website on Apache keeps crashing server

    - by OC2PS
    I have recently moved my low-to-moderate-traffic website (1,000 unique visitors, 5,000 pageviews on a busy day) from shared hosting to a CentOS 6 64-bit VPS with Apache and cPanel, running on 4 quad-core processors (likely oversold) and 3 GB memory (Xen). We've had problems from the beginning: the server keeps crashing. It seems PHP keeps expanding until it consumes all the memory and crashes the server.

    Some folks have suggested that I abandon Apache/cPanel/PHP/MySQL and go with nginx/Varnish/PHP-FPM/SQLite, but that's just not possible for me, as I am not very tech-savvy and need a simple GUI like cPanel to manage the mundane tasks (I can't afford to hire a system administrator or get fully managed hosting). I have come across several posts discussing optimization of Apache for WordPress, but these all lead to articles that are pretty dated, such as this roughly four-year-old one from Jan 2009: http://thethemefoundry.com/blog/optimize-apache-wordpress/

    The article is pretty detailed and seems helpful, but I stumble even on the first step. My httpd.conf has only 2 LoadModule commands:

        LoadModule fastinclude_module modules/mod_fastinclude.so
        LoadModule bwlimited_module modules/mod_bwlimited.so

    So I go totally bust right there. Further, my httpd.conf says:

        Direct modifications to the Apache configuration file may be lost upon
        subsequent regeneration of the configuration file. To have modifications
        retained, all modifications must be checked into the configuration system
        by running: /usr/local/cpanel/bin/apache_conf_distiller

    I am having trouble finding where to change the modules in WHM. Please can someone help me with updated guidelines on how to optimize Apache for WordPress? Many thanks!

    P.S. The WordPress installation also has WP Super Cache installed.
    P.P.S. I also have phpBB, OpenCart, and Menalto Gallery installed.
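    On a 3 GB VPS, the usual first step is capping how many PHP-laden Apache processes can exist at once, so memory use has a hard ceiling. A sketch of conservative Apache 2.2 prefork values (the numbers are assumptions to be tuned against your average Apache process size; under cPanel they would go through WHM's Apache configuration editor rather than straight into httpd.conf):

        <IfModule prefork.c>
            StartServers           2
            MinSpareServers        2
            MaxSpareServers        5
            MaxClients            15
            MaxRequestsPerChild 1000
        </IfModule>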

    Read the article

  • MongoDB and GridFS. What are the best storage options in the range of 1 TB?

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files. I am pondering the different options for hosting the database, but since I am inexperienced at deployment and this is my first time with MongoDB, I need your experience.

    Criteria:
    - I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer and do not like to mess with server configuration. Hence, I would like a fully managed hosting solution, but I would like to know about any other option if you think it is worth it.
    - It should be able to scale, cloud style: pay as you go.
    - The lower the price, the better.

    So far I know of these services:
    - https://mongohq.com/pricing
    - https://mongomachine.com/pricing
    - https://mongolab.com/about/pricing/
    - http://cloudcontrol.com/add-ons/mongodb/

    They seem OK for common needs, that is, without file storage. But I am going to use GridFS, so size matters, and these services seem to scale quite poorly in price:
    - MongoHQ: the largest plan's max storage is 20 GB. That seems like very little storage for GridFS.
    - MongoMachine: flat price, $2.5 per GB. I didn't find the limit. Seems like a good price compared to the others.
    - MongoLab: 3.984 GB max, which I don't think I will hit, so perfect. $8 per GB, quite costly.
    - CloudControl: the largest plan is 20 GB. The custom service starts at 250€ plus some unspecified charge per GB.

    What is your experience with these services? Any downtimes? Other possibilities?

    Read the article

  • Automatically update SVN repository on another server

    - by Mikey C
    We have 2 Ubuntu web servers, one of which is our staging server (Staging) and the other is our live server (Live). Staging has our Subversion repository, as well as the latest version of our sites on it. Because the SVN server is running on Staging, I've added post-commit hook scripts so that the staging server automatically has the latest code. Easy. However, I'd like one of the repositories on Live to also stay updated. This is a repository of images, PDFs and suchlike. When a team member commits to this, I'd like it to automatically update on the live servers so it can be used in mailings, content managed pages etc. I'd add something to the post-commit to SSH across and update, but for security, we can only SSH from one server to another as user 'commandLine', whereas the 'www-data' user runs the post-commit. I'd rather not run a cron on Live to update every 5 minutes, but I can't see another way of doing it without altering all our user permissions. Any ideas?
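    One way to square this with the SSH restriction is to keep the hook itself trivial and delegate the remote update to the commandLine user via sudo (a sketch; the repository path, host name, log file, and the required sudoers entry are all assumptions):

        #!/bin/sh
        # post-commit hook on Staging (sketch): refresh the live checkout after each commit
        REPOS="$1"
        REV="$2"
        # requires a sudoers rule letting www-data run this one command as commandLine
        sudo -u commandLine ssh live-server "svn update /var/www/assets" \
            >> /var/log/svn-live-sync.log 2>&1 &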

    Read the article

  • Can you see something wrong in my .htaccess?

    - by AlexV
    OK, after much searching, trial and error, I've managed to create an .htaccess that does what I wanted (see explanations and questions after the code block):

        <IfModule mod_rewrite.c>
        RewriteEngine On

        #1 If the requested file is not url-mapper.php (to avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!url-mapper\.php)$

        #2 If the requested URI does not end with an extension OR if the URI ends with .php*
        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]

        #3 If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/seo-urls\/(excluded1|excluded2)(/.*)?$

        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/url-mapper.php?uri=%{REQUEST_URI} [L,QSA]
        </IfModule>

    This is what the .htaccess should do:

    #1 checks that the requested file is not url-mapper.php (to avoid infinite redirect loops). This file will always be at the root of the domain.

    #2 the .htaccess must only catch URLs that don't end with an extension (www.foo.com -- catch | www.foo.com/catch-me -- catch | www.foo.com/dont-catch.me -- don't catch) and URLs ending with .php* files (.php, .php4, .php5, .php123...).

    #3 some directories (and their children) can be excluded from the .htaccess (in this case /seo-urls/excluded1 and /seo-urls/excluded2).

    Finally, the .htaccess feeds the mapper a hidden GET parameter named uri containing the requested URI. Even though I tested it and everything works, I want to know if what I'm doing is correct (and if it's the "best" way to do it). I've learned a lot with this "project", but I still consider myself a beginner at .htaccess and regular expressions, so I want to triple-check here before putting it into production...

    Read the article

  • Scaling a LAMP website hosted on EC2

    - by Gublooo
    Hello, I'm very new to all this. I've recently managed to launch my website on EC2. As a next step, I want to learn how to scale the website. I have a general idea but wanted some input from the experts about how to go about it. My website is based on LAMP but also has a Red5 server, which allows users to record messages and is also used for playing them back. Currently this is the architecture I'm planning for initial scaling, deploying four small EC2 instances:
    - Instance 1: runs the MySQL database.
    - Instance 2: runs the Red5 server.
    - Instances 3 and 4: used to deploy the website, with Apache running on them. They will communicate with the MySQL server on Instance 1 and the Red5 server on Instance 2 using the internal IP addresses. As and when required, I will launch another instance of the same kind.
    - EBS: I will have an EBS volume of, say, 50 GB, where all the MySQL data will be stored. Red5 will also use this EBS volume to store the video messages.
    - Load balancer: use the load balancer provided by Amazon to balance Instances 3 and 4.

    This is what I have in mind. I could be way off, so please bear with me. Also, I have not taken into account scaling the MySQL server, as I currently have no idea how that would be done or whether it is necessary initially. I am aware that Amazon provides auto scaling and MySQL scaling as well, but I don't want to get into that right now. Your feedback is appreciated. Thanks

    Read the article

  • FreeBSD jail IMAP/MTA config recommendations

    - by kobame
    I've got access to my "own" FreeBSD jail. The jail has only a basic, unconfigured system, but I have full access to FreeBSD ports, and (jail) root too. Now I need to set up my jail as an IMAP/MTA host.

    The question: which packages are EASIEST to configure and later administer (the simplest possible setup, with the minimum needed configuration), given that:
    - I have no preferences yet (I don't know any of them)
    - my (one) domain is managed by the ISP, so I don't need DNS
    - I need only IMAP for a few users (up to 20 mailboxes)
    - I need a secure transport layer (IMAPS/993)
    - password auth only; no LDAP, no Kerberos, no databases, nothing fancy
    - I need an easy-setup, easy-admin MTA with the simplest possible password SMTP auth (again, no LDAP, nor a DB) and a secure transport layer; virus scanning and some anti-spam protection would be nice

    So, what ports should I install for the MTA and IMAP?
    - MTA: Sendmail, Postfix, Exim?
    - antivirus: ClamAV?
    - antispam: ???
    - IMAP(S): Dovecot, Courier?

    The main criteria are easy setup and easy administration. When I googled, I found only complicated setups for thousands of users with LDAP, databases and so on: too big a caliber for my small (easy?) needs. Any pointer to an easy howto is very welcome.
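    For scale, a Postfix + Dovecot combination is a common answer to this sort of requirement list, and installing from ports inside the jail would look roughly like this (a sketch only; picking these two is an assumption, not the question's verdict, and port origins reflect the ports tree of that era):

        # build and install an MTA and an IMAP server from the ports tree
        cd /usr/ports/mail/postfix && make install clean
        cd /usr/ports/mail/dovecot && make install clean
        # optional virus scanning
        cd /usr/ports/security/clamav && make install clean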

    Read the article

  • S3sync not working

    - by user57833
    Hello, I managed to get s3sync to upload my test folder to Amazon S3 and can see it in the AWS Management Console. Downloading the data back to a test folder results in the following error message:

        root@mybucketname:/var/s3sync# ./week_download.sh
        s3Prefix backups/weekly
        localPrefix /var/s3sync/testdown/weekly
        s3TreeRecurse mybucketname backups/weekly
        Creating new connection
        Trying command list_bucket mybucketname prefix backups/weekly max-keys 200 delimiter / with 100 retries left
        Response code: 200
        prefix found: /
        s3TreeRecurse mybucketname backups/weekly /
        Trying command list_bucket mybucketname prefix backups/weekly/ max-keys 200 delimiter / with 100 retries left
        Response code: 200
        S3 item backups/weekly/
        s3 node object init. Name: Path:backups/weekly Size:0 Tag:d41d8cd98f00b204e9800998ecf8427e Date:Fri Oct 29 14:21:53 UTC 2010
        local node object init. Name: Path:/var/s3sync/testdown/weekly/ Size: Tag: Date:
        source:
        dest:
        Update node
        s3sync.rb:638:in `initialize': No such file or directory - /var/s3sync/testdown/weekly/.s3syncTemp (Errno::ENOENT)
            from s3sync.rb:638:in `open'
            from s3sync.rb:638:in `updateFrom'
            from s3sync.rb:393:in `main'
            from s3sync.rb:735

    I am using the following download script:

        #!/bin/bash
        # script to download from S3 to a local directory
        cd /var/s3sync/
        export AWS_ACCESS_KEY_ID=nothing to see here
        export AWS_SECRET_ACCESS_KEY=nothing to see here
        export SSL_CERT_DIR=/var/s3sync/certs
        ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown
        # copy and modify the line above for each additional folder to be synced

    Any ideas? Does the download script need to download into the original source folder (i.e. the testup folder)? I was hoping that, in the event of a complete failure where the original folders no longer exist, it would just download everything for me. Note: I changed my bucket names to "mybucketname" so that they are not public!
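    The Errno::ENOENT points at the local side: s3sync is trying to create its temp file under /var/s3sync/testdown/weekly/, and the trace suggests that directory does not exist yet. A minimal thing to try (an assumption based on the error, since --make-dirs in this version may only create subdirectories below an existing localPrefix):

        # pre-create the local destination before running the download script
        mkdir -p /var/s3sync/testdown/weekly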

    Read the article

  • What can inexperienced admin expect after server setup completed seemingly fine? [closed]

    - by Miloshio
    An inexperienced person seems to have done everything fine so far. This is his very first time being the only one in charge of a LAMP server. He has installed the OS, networking, Apache, PHP, MySQL, Proftpd, and MTA & MDA software; configured VirtualHosts properly (facts, because he calls himself admin); and done user management and various configuration settings with respect to security recommendations. And everything is fine for now... For now.

    If you were directing a horror movie about the server admin described above, what would you make up for the boogieman that shows up and starts to pursue him? Leaving aside hardware-disaster cases, for which one cannot do anything remotely, what are the most common causes of significant server (or server-related) failure when the server is managed by an inexperienced admin? I have in mind something that newbie admins very often miss and that leads to the later intervention of someone with experience. Maybe some uncontrolled CPU-eating leftover process, a memory-related glitch, a widely-used feature that messes up something unexpected, or anything like that?

    For now, the newbie admin only monitors disk space, RAM usage, and the number of running processes. He would appreciate any tips regarding what is probably going to happen to his server over time.

    Read the article

  • Getting the DWL-G122 to work on Ubuntu

    - by User1
    I have a USB WiFi adapter, a D-Link DWL-G122, and I'm running Ubuntu 10.04. My laptop has a built-in wireless card that is connecting fine to the router. I plug in the USB adapter and it never really connects. Here are some details:

        $ iwconfig
        wlan1   IEEE 802.11bg  ESSID:"\x0B\xE1..."
                Mode:Managed  Frequency:2.457 GHz  Access Point: Not-Associated
                Tx-Power=19 dBm
                Retry long limit:7  RTS thr:off  Fragment thr:off
                Power Management:on

        $ lshw -c network
        *-network:1
            description: Wireless interface
            physical id: 3
            logical name: wlan1
            serial: 00:13:46:8b:xx:xx
            capabilities: ethernet physical wireless
            configuration: broadcast=yes multicast=yes wireless=IEEE 802.11bg

        $ dmesg
        [ 1096.814176] wlan1: direct probe to AP xxx (try 1)
        [ 1096.820960] wlan1: direct probe responded
        [ 1096.820969] wlan1: authenticate with AP xxx (try 1)
        [ 1096.823790] wlan1: authenticated
        [ 1096.823869] wlan1: associate with AP xxx (try 1)
        [ 1096.827667] wlan1: RX AssocResp from xxx (capab=0x411 status=0 aid=1)
        [ 1096.827674] wlan1: associated
        [ 1142.590912] wlan1: deauthenticating from xxx by local choice (reason=3)

        $ lsmod | grep rt2
        rt2500usb              19643  0
        rt2x00usb              11260  1 rt2500usb
        rt2x00lib              32133  2 rt2500usb,rt2x00usb
        mac80211              238896  3 ath5k,rt2x00usb,rt2x00lib
        cfg80211              148725  4 ath5k,ath,rt2x00lib,mac80211
        led_class               3764  3 ath5k,rt2x00lib,sdhci

    It looks like the driver loads, but the adapter doesn't stay connected. The behavior is identical even if I blacklist the other WiFi card's driver (ath5k). It's almost like it is using the wrong password or something. Does anyone know what is happening? Is anyone using this adapter successfully on Ubuntu?

    Read the article
