Search Results

Search found 21331 results on 854 pages for 'require once'.

Page 373 of 854

  • Unable to get ejabberd prebind to work

    - by cdecker
    I'm trying to get the prebinding of BOSH sessions to work. I want to be able to authenticate a user in my CMS and then log him in when he accesses the chat. For this I found https://github.com/smokeclouds/http_prebind; it all works fine and I was able to compile it with the following steps:

        rake configure
        sed -i 's/AUTH_USER/a_user/g' src/http_prebind.erl
        sed -i 's/AUTH_PASSWORD/a_password/g' src/http_prebind.erl
        sed -i 's/EJABBERD_DOMAIN/jabber.my.tld/g' src/http_prebind.erl
        rake build
        rake install

    I then added the HTTP request bindings to the configuration:

        {5280, ejabberd_http, [
            {request_handlers, [
                {["http-prebind"], http_prebind}
            ]},
            %%captcha,
            http_bind,
            http_poll,
            http_prebind,
            web_admin
        ]}
        ]}.

    As far as I understand it, I should now be able to simply request a new session like this:

        curl -u a_user:a_password http://jabber.my.tld:5280/http-prebind/some_user

    But no matter what, I always get Unauthorized as the response. Any idea about this one?
    PS: I also tried Mod-Http-Pre-Bind, but as it does not require a password I would prefer to use http_prebind.
    PPS: Does the user with username AUTH_USER and password AUTH_PASSWORD actually have to exist? I'm currently using an admin account.

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but it is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }

        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;
            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }
            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }
            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;
            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but POST delivers this error to the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access to the page via GETs, but require authorization when using hg push to the same URL, which sends a POST request.

    Read the article

  • Restrict SSH user to connection from one machine

    - by Jonathan
    During set-up of a home server (running Kubuntu 10.04), I created an admin user for performing administrative tasks that may require an unmounted home. This user has a home directory on the root partition of the box. The machine has an internet-facing SSH server, and I have restricted the set of users that can connect via SSH, but I would like to restrict it further by making admin accessible only from my laptop (or perhaps only from the local 192.168.1.0/24 range). I currently have only an AllowGroups ssh-users, with myself and admin as members of the ssh-users group. What I want is something that works the way you might expect this setup to work (but it doesn't):

        $ groups jonathan
        ... ssh-users
        $ groups admin
        ... ssh-restricted-users
        $ cat /etc/ssh/sshd_config
        ...
        AllowGroups ssh-users [email protected].*
        ...

    Is there a way to do this? I have also tried the following, but it did not work (admin could still log in remotely):

        AllowUsers [email protected].* *
        AllowGroups ssh-users

    with admin a member of ssh-users. I would also be fine with only allowing admin to log in with a key, and disallowing password logins, but I could find no general setting for sshd; there is a setting that requires root logins to use a key, but not one for general users.
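
    A minimal sshd_config sketch of the Match-block approach (assuming OpenSSH 4.4 or later so Match blocks are available; the user name and address range are taken from above, and this only governs password logins, not key-based logins):

        # allow password logins for admin only from the local LAN
        Match User admin Address 192.168.1.0/24
            PasswordAuthentication yes

        # everywhere else, admin must use a key
        # (for a keyword set in several matching Match blocks, the first match wins)
        Match User admin
            PasswordAuthentication no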

    Read the article

  • Search behavior of Windows 7 start menu

    - by Kevin Ivarsen
    I'm coming to Windows 7 from XP, and there are aspects of the start menu search that I like. However, there are some behaviors that seem either inconsistent or surprising to me. For example:
    - If I type "Pa" into the search bar, Paint is the first result (under the "Programs" heading), and it is selected for me. I can just hit Enter to start the program.
    - If I have a standalone exe "testing" on my desktop and I type "test", the program comes up as the first item (under the "Files" heading), but it is not selected for me. I have to hit down-down-down-enter to open it from the keyboard. The same appears to be true for shortcuts and folders.
    What classifies something as a "Program" versus a "File"? Is there any way to configure the start menu so that the first search result is always selected? As a heavy keyboard user, it seems insane for the behavior to be inconsistent and to require so many keypresses to select the top result. Also, are there resources that document the details, limitations, and tricks of the start menu search? (For example, a "Proc Exp" search will match "Process Explorer", but not "ProcessExplorer".)
    EDIT: I've found that instead of hitting down-down-down to select the first item (when no Programs are in the list), you can just hit Tab. This helps a bit, but the inconsistent behavior still makes this search feature more awkward and frustrating than necessary.

    Read the article

  • 10GE network: Is it still deadly expensive? Any options?

    - by BarsMonster
    Hi! I am building a home cluster where I'm going to have about 16 nodes which can live with 1G ports, but I really want to have 10GE on the file server & central node. It's all local, so no need for cables longer than 3-5m. And of course I want to spend as little money as possible (not going to spend more than the whole cluster costs) :-) What are my options?
    1) The legacy solution is to take some 24-48 port 1GE switch, and connect to the file/central nodes via 4-8 aggregated links. This will work I guess, cost is very acceptable, but I am not sure if it's OK to use that many aggregated links. And of course it would be hard to double the bandwidth when needed... :-D
    2) A switch with several 10GE uplink 'ports'. As far as I see, they all require modules which cost about $1000, so I will need 4 10G modules and 2 10GE cards... Smells like way more than $5000+...
    3) Connect the file & central node via 2 10G cards directly, and put 4 quad-port 1GE NICs on the fileserver. I am saving on 2 10G modules and a switch, the fileserver will have to do packet routing, but it's still gonna have a lot of CPU left :-)
    4) Any other options? Infiniband?
    5) Do MyriNet adaptors work fine? I guess there are no cheaper options?
    6) Hmm... Scrap the fileserver, put it all on the central node and provide a dedicated 1GE port for each of the nodes... This is sad...

    Read the article

  • Preventing h/w RAID cards from dropping slow JBOD disks

    - by Kevin
    I'm considering buying a used SAS h/w RAID card for externally attaching HDDs to an HP ProLiant I'm setting up. However, I only require RAID functionality on some of the drives. Theoretically it should be simple to JBOD the other drives, but some of them are inexpensive SATA disks and probably cannot have TLER disabled. I'd like to know, prior to actually ordering a RAID card, whether typically RAID cards would still enforce dropping of disks that do not respond within a few seconds, even if the disk is in a JBOD, and whether there is any way to disable this. Ideally it would be nice to be able to select certain SAS ports that will be pass-through, bypassing the RAID engine entirely and just acting as an HBA for those ports. I know I could buy a separate SAS HBA but that seems like a waste of $ and is also impractical as it's a 1U server so space is extremely limited. My question then is whether the functionality I'm looking for (pass-through on certain ports or at least JBOD drives not getting themselves dropped due to slow response) is typical of proper h/w RAID cards such as PERC 5/E etc. I've browsed through the latter's manual but unfortunately, as with most user manuals, it states the obvious and doesn't state the unobvious. Thanks for any info, Kevin

    Read the article

  • Apache directory authorization bug (clicking cancel gives access to partial content)

    - by s4uadmin
    I've got a minor problem (as the site is not high priority) but still a very interesting one. I have an Apache root domain in which other sites live ("/var/www/"), and I have foo.example.com forwarding to "/var/www/foo-example" (a WordPress site). The problem is that when you go to foo.example.com you are prompted to enter credentials. If you hit cancel, it gives you the access denied page. But when you go to the server's direct IP (this gives you the default index page) and hit cancel when prompted for credentials, it just keeps giving you the login screen, and after pressing cancel a few more times it gives (a perhaps cached) bare HTML part of the page. How do I prevent this from happening? Perhaps this is a bug... Even if I were to block access to the root directory, going to ip/foo-example would still do this. And I want to keep all the directories within the www directory, or at least all in the same one. Thanks
    PS: here is my configuration:

        <VirtualHost *:80>
            DocumentRoot /var/www/wp-xxxxxxx/
            ServerName beta.xxxxxxxxx.nl
            <Directory "/var/www/wp-xxxxxxxxx/">
                Options +Indexes
                AuthName "xxxxxxxx Beta Site"
                AuthType Basic
                require valid-user
                Satisfy all
                AuthBasicProvider file
                AuthUserFile /var/www/wp-xxxxxxx/.htxxxxxxxxx
                order deny,allow
                allow from all
            </Directory>
            ServerAdmin [email protected]
            ServerAlias beta.xxxxxxx.nl
        </VirtualHost>

    Read the article

  • Is there a limit when setting a php_admin_value in php-fpm?

    - by PeeHaa
    I am trying to set a large value in the configuration of a pool in php-fpm, but at some point it just doesn't start anymore.

        php_admin_value[disable_functions] = dl,exec,passthru,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source,pcntl_exec,include,include_once,require,require_once,posix_mkfifo,posix_getlogin,posix_ttyname,getenv,get_current_use,proc_get_status,get_cfg_va,disk_free_space,disk_total_space,diskfreespace,getcwd,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_set,mail,proc_nice,proc_terminate,proc_close,pfsockopen,fsockopen,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,fopen,tmpfile,bzopen,gzopen,chgrp,chmod,chown,copy,file_put_contents,lchgrp,lchown,link,mkdi,move_uploaded_file,rename,rmdi,symlink,tempnam,touch,unlink,iptcembed,ftp_get,ftp_nb_get,file_exists,file_get_contents,file,fileatime,filectime,filegroup,fileinode,filemtime,fileowne,fileperms,filesize,filetype,glob,is_di,is_executable,is_file,is_link,is_readable,is_uploaded_file,is_writable,is_writeable,linkinfo,lstat,parse_ini_file,pathinfo,readfile,readlink,realpath,stat,gzfile,create_function

    When trying to restart php-fpm it fails with the following message:

        Stopping php-fpm: [ OK ]
        Starting php-fpm: [20-Oct-2013 22:31:52] ERROR: [/etc/php-fpm.d/codepad.conf:235] value is NULL for a ZEND_INI_PARSER_ENTRY
        [20-Oct-2013 22:31:52] ERROR: Unable to include /etc/php-fpm.d/codepad.conf from /etc/php-fpm.conf at line 235
        [20-Oct-2013 22:31:52] ERROR: failed to load configuration file '/etc/php-fpm.conf'
        [20-Oct-2013 22:31:52] ERROR: FPM initialization failed
        [FAILED]

    When I remove the last disabled function (create_function) it starts again. I also tried with other functions, but that gives the same error, so it's not related to the create_function function itself. The string is currently just over 1 KB in size, so it looks like I have hit a limit here. Is my assumption correct? Is there a way to overcome this limit? I also tried adding another php_admin_value[disable_functions] underneath it (hoping it would be appended), but that didn't work (it just used the first one).

    Read the article

  • ACDSee alternatives for batch editing images

    - by Oxwivi
    I am looking for free, preferably open, alternatives to ACDSee for batch editing work. While I can do much of the work well in ACDSee, it's not entirely satisfactory despite having to pay for it. I need at least the following batch editing functions:
    - resize using either height or width and maintain aspect ratio
    - auto contrast
    - text overlays
    - and occasionally, cropping
    Oh, and I make extensive use of renaming features as well. A couple of issues with ACDSee are: I always need to highlight the Exposure section or auto contrast will not be done despite it being saved in the preset; and I can't define and move around the cropping box, forcing me to manually crop tons of images. I'm not an advanced, or "power photo-editor"; I only require the basic stuff I described to be automated. My personal feature wish list (I'm pretty sure something so niche doesn't exist) would be text overlays based on the image names (images are named as image-1_1, image-1_2 or image-2_c1_1, image-2_c1_2, and the text overlay would be Image-1 and Image-2 C1 and Image-2 C2). I tried digiKam, but damn that thing is huge. It runs very slowly on my Pentium 4 and 1.5 GB RAM. On top of being a program with over 1 GB of files, the KDE library it uses is always slow regardless of whether it runs on Windows or Linux.

    Read the article

  • Using URL rewrite module for http to https redirect

    - by johnnyb10
    Following ruslany's suggestion on the URL Rewrite Tips page here, I'm trying to use URL Rewrite to redirect http:// requests for my site to https://. I've written and tested the rule using a test site I set up, and so now the final piece is to create a second site (http) to redirect to my https site. (I need to use a second site because I don't want to uncheck the "Require SSL encryption" checkbox on my existing site.) I'm an IIS newbie so my question is: how do I do this? Should I create a site with the same name and host header, only it will be bound to http? Will IIS let me create a site with the same name? I don't want to screw anything up with my existing site (which is a SharePoint site, currently used by external users). That site currently has http and https bound to it. So my assumption is that, using IIS (not SharePoint), I will create a new site (http only) with the same name and host header as my existing site, and add the URL Rewrite rule to the http site. And then I guess I should remove the http binding from my existing site? Does that seem correct? Any advice, gotchas, etc., would be appreciated. Thanks.
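
    For reference, the redirect rule described on that page generally takes a form like the following web.config sketch (shown only as an illustration of the technique; the exact rule contents here are an assumption, not the tested rule from the question):

        <rule name="HTTP to HTTPS redirect" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <!-- only rewrite requests that arrived over plain HTTP -->
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
        </rule>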

    Read the article

  • Deploying ASP.NET MVC to Windows Server 2003

    - by pete the pagan-gerbil
    Hi, I have a problem with an MVC 2 website on Windows Server 2003 running IIS 6. It is externally hosted, but we have a 2003 server internally for testing. The internal server runs the website fine; the external server gives a 403 ("website declined to show this page") error when navigating to the root of the site, and a 404 if I try to navigate directly to a page resource. I have tried the wildcard ISAPI mapping and extension mapping, and a couple of other common checks (I forget exactly which now, most of them were already set correctly), but so far no joy. All the settings can be replicated on our internal server and the pages return properly. IIS logs just show exactly what the browser shows - 404 errors and 403s. I've read about a different level of trust required for an MVC application compared to a WebForms application - how can I check permissions and trust levels on the external and internal servers (assuming I am able to check that), and if that would cause these errors, what are the minimum levels that MVC requires? Failing that, what else might be causing this error that I could try out?

    Read the article

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: Amazon EC2 instance - m1.medium, Ubuntu 12.04, Apache 2.2.22 - running a Drupal site using a MySQL DB server.

    RAM info:

        ~$ free -gt
                     total   used   free   shared   buffers   cached
        Mem:             3      1      2        0         0        0
        -/+ buffers/cache:      0      2
        Swap:            0      0      0
        Total:           3      1      2

    Hard drive info:

        Filesystem      Size  Used  Avail  Use%  Mounted on
        /dev/xvda1      7.9G  4.7G   2.9G   62%  /
        udev            1.9G  8.0K   1.9G    1%  /dev
        tmpfs           751M  180K   750M    1%  /run
        none            5.0M     0   5.0M    0%  /run/lock
        none            1.9G     0   1.9G    0%  /run/shm
        /dev/xvdb       394G  199M   374G    1%  /mnt

    The problem: about two days ago the site started failing because the MySQL server was shut down by Apache with the following message:

        kernel: [2963685.664359] [31716]   106 31716   226946   22748   0   0   0 mysqld
        kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child
        kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB
        kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal
        kernel: [2963686.169294] init: mysql main process ended, respawning

    That states that the VM was occupying 0.9 GB, but my RAM has 2 GB free, so 1 GB was still left free. I understand that in Linux applications can allocate more memory than is physically available. I don't know if this is the problem; it's the first time it has happened. Obviously, the MySQL server tries to restart, but apparently there's no memory for it and it won't restart. Here is its error log:

        Plugin 'FEDERATED' is disabled.
        The InnoDB memory heap is disabled
        Mutexes and rw_locks use GCC atomic builtins
        Compressed tables use zlib 1.2.3.4
        Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        Completed initialization of buffer pool
        Fatal error: cannot allocate memory for the buffer pool
        Plugin 'InnoDB' init function returned error.
        Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        Unknown/unsupported storage engine: InnoDB
        [ERROR] Aborting
        [Note] /usr/sbin/mysqld: Shutdown complete

    I simply restarted the MySQL service. About two hours later it happened again, and I restarted it. Then it happened again 9 hours later. So then I thought of the MaxClients parameter of apache.conf and went to check it out. It was set at 150, and I decided to drop it down to 60, as so:

        <IfModule mpm_prefork_module>
            ...
            MaxClients 60
        </IfModule>
        <IfModule mpm_worker_module>
            ...
            MaxClients 60
        </IfModule>
        <IfModule mpm_event_module>
            ...
            MaxClients 60
        </IfModule>

    Once I did that, I had the apache2 service restart and it all went smoothly for 3/4 of a day. But at night the MySQL service shut down once again, though this time it wasn't killed by the Apache2 service. Instead it invoked the OOM-Killer with the following message:

        kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
        kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0
        kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal

    Now I'm out of ideas. Some articles state that the ideal thing to do is to change the kernel behaviour with the following (included in the file /etc/sysctl.conf):

        vm.overcommit_memory = 2
        vm.overcommit_ratio = 80

    So no overcommits will take place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator; I have basic knowledge. Thanks a bunch in advance.
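
    A quick sketch of how those two settings would be applied (the values are the ones quoted above; whether they are the right fix for this instance is exactly what is being asked):

        # append the overcommit settings to /etc/sysctl.conf and reload kernel parameters
        echo "vm.overcommit_memory = 2" | sudo tee -a /etc/sysctl.conf
        echo "vm.overcommit_ratio = 80" | sudo tee -a /etc/sysctl.conf
        sudo sysctl -p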

    Read the article

  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (i.e. School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain:
    1. Host the .crx on a server and click "Specify a Custom App" in the management console.
    2. Create a Domain App by uploading a zip to the Chrome Web Store.
    3. Upload the extension from my developer account to the Chrome Web Store and publish to a single "trusted tester," or make it unlisted.
    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them! Option (2), having the school create a domain app, seems a bit inefficient because it requires all schools to upload their own zip. So essentially I would have to email a zip file to the school and have them publish it. All updates to the extension will also require a similar process, so this doesn't seem ideal. I doubt that option (3) would work. If I published to the admin as a "trusted tester", I don't think that the other people in the domain would be able to access it. If it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity. Has anyone had success with hosting the extension and using the Specify a Custom App feature? Any other suggestions for getting a custom extension pushed out by the management console? Thanks so much!
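
    On the MIME-type point for option (1), a one-line server-side check that is sometimes relevant (shown here as an Apache sketch; the web server actually used in the question is not specified):

        # serve .crx packages with the Chrome extension MIME type
        AddType application/x-chrome-extension .crx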

    Read the article

  • Supermicro IPMI on MBD-X8DAH+-F-O motherboard. Keyboard and mouse do not work after booting Windows Server 2008 R2

    - by LDelgado
    Hello everyone, I built a server with the mentioned motherboard and installed Windows Server 2008 R2 Enterprise on it. IPMI is integrated on the motherboard with its own dedicated NIC, and I've got that NIC configured with its own IP address. I can remote into it using IPMI, and I can remotely control the server settings before booting the OS (BIOS, RAID configuration, etc.). When the OS boots, I lose the mouse and keyboard; I cannot use the keyboard or mouse when installing the OS either. So the keyboard and mouse only work when no OS is loaded. Once the OS loads I lose them - that is my problem. I've been doing some research and trying a few things, but I have not been successful in fixing this issue. I may be wrong, but based on the things I've found online, it seems that the problem could be caused by the way the OS handles USB. The server is headless: there is no keyboard, mouse, or monitor plugged into it. When I boot up the OS and remote into it, I cannot see a mouse or keyboard listed in Device Manager. Based on what I've read, it seems that the OS should detect a mouse and a keyboard when connecting remotely via IPMI. The following are the solutions I've tried; nothing has worked so far:
    - I've updated the firmware of the IPMI component to the latest firmware - 1.33.
    - I made sure that the mouse mode was set to Absolute (Windows OS).
    - I've loaded the factory defaults several times.
    - I've enabled Port64h/60h Emulation under the USB settings in the BIOS.
    - I've disabled USB legacy support in the BIOS.
    - I made sure the firewall wasn't blocking IPMI (disabled the firewall).
    And that's about it. I've found threads in some forums from people having the same issue as me, but they were not running the same OS; they were either running Linux or FreeBSD. Most of them fixed their problem by selecting the right mouse mode (Linux in their case). There was one other who solved the problem by disabling USB Mass Storage mode. He stated: "When I set it to disable USB Mass Storage when no image is loaded, the ukbd came alive, and I'm typing this on the IPMI Console." Source: http://freebsd.1045724.n5.nabble.com/IPMI-Console-No-luck-once-OS-is-booted-td3967868.html I suspect the solution described in the previous paragraph is somehow related to my problem. I've found several threads on the internet describing the same problem, but none of them were with Windows Server 2008 R2. Again, I may be wrong, but it seems like that could be the issue; I just don't know how to go about applying a solution in Windows Server 2008 R2. In any case, I could use your expertise. Maybe I am missing something, or maybe I'm on the right track. Your help is much appreciated. Thank you in advance,

    Read the article

  • Running $ORIGIN-linked binaries from setuid scripts on Linux

    - by drscroogemcduck
    I'm using suidperl to run some programs that require root permissions. However, the runtime linker won't expand library paths which contain $ORIGIN entries, so the programs I want to run (jstack from Java) won't run. More info here:

        There is one exception to the advice to make heavy use of $ORIGIN. The runtime linker will not expand tokens like $ORIGIN for secure (setuid) applications. This should not be a problem in the vast majority of cases.

    My program looks something like this:

        #!/usr/bin/perl
        $ENV{PATH} = "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/java/jdk1.6.0_12/bin:/root/bin";
        $ENV{JAVA_HOME} = "/usr/java/jdk1.6.0_12";
        open(FILE, '/var/run/kil.pid');
        $pid = <FILE>;
        close(FILE);
        chomp($pid);
        if ($pid =~ /^(\d+)/) {
            $pid = $1;
        } else {
            die 'nopid';
        }
        system( "/usr/java/jdk1.6.0_12/bin/jstack", "$pid");

    Is there any way to fork off a child process in a way so that the linker will work correctly?
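
    One possible angle (an assumption, not something stated in the question): the linker's secure-execution mode is triggered when real and effective IDs differ, so making them match before the exec may let $ORIGIN expand again in the child. A sketch of that idea in the same Perl script:

        # sketch: promote the real uid to the effective uid (both root under a
        # setuid-root suidperl script) so the exec'd jstack is not flagged for
        # secure-execution by the runtime linker
        $< = $>;
        system("/usr/java/jdk1.6.0_12/bin/jstack", $pid);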

    Read the article

  • Attaching 3.5" desktop drive to MacBook SATA

    - by Kyle Cronin
    I have a mid-2007 MacBook that, according to the Apple Store, has suffered some liquid damage and requires a new logic board to operate correctly, a ~$750 repair I've been told (it would normally be around ~$300 were it not for the "liquid damage"). The unit itself works fine - the only problem I've been having is that the system does not recognize the battery and will not charge it. Curiously, the system can still be powered by the battery and even recognizes when the power cord is detached by dimming the backlight, but I digress. Now that this laptop will likely become a desktop, I'm wondering if it might be possible to attach a desktop drive. I recently purchased a 2TB SATA drive and I'm wondering if it's possible to somehow attach it where the current internal drive connects. Obviously the drive itself will not fit inside the device, but as the unit will spend the rest of its days on my desk, that's not really much of an issue. My main questions are:
    - Is this possible? If so, how would I connect the drive? Would a SATA extender cable work?
    - Is the SATA port on my MacBook capable of powering a desktop drive? Or should I just get a SATA male-to-female cable and see if I can power the drive through other means (a cheap power supply, for example)?
    The disk I'm referring to is the Hitachi Deskstar HD32000. Though I couldn't find that exact model on Hitachi's support site, these are the power requirements for a similar drive, the 7K2000 (2TB, 7200RPM, SATA II):

        Power Requirement          +5 VDC (+/-5%), +12 VDC (+/-10%)
        Startup current (A, max.)  1.2 (+5V), 2.0 (+12V)
        Idle (W)                   7.5

    From what I've read, 2.5" drives require 5V, meaning that my MacBook obviously is capable of producing it. The specs seem to suggest that this drive is capable of accepting it instead of the typical 12V - is this an accurate interpretation of the power requirements? Or does it need both 12V and 5V?

    Read the article

  • Setting up a Squid proxy server that in turn connects through another proxy server [closed]

    - by AnkurVj
    My institute uses the Squid proxy server, and the authentication mechanism requires a username and password to be entered. This means that I can log in on only one machine at a time, and Internet access for me is restricted to that machine. I sometimes require Internet access on multiple machines simultaneously. What previously worked for me was the following:
    - On one of my own machines, A, I set up a Squid proxy server that allowed all local machines without any username and password.
    - I configured the rest of the machines to use this machine A as the proxy server.
    - On machine A I logged into the institute proxy server using my browser.
    This way, I could access the Internet from all my machines, by effectively channeling my requests through the server A. Recently, I lost that machine and configuration, and now I have tried to set it up again in the same manner. However, I can't seem to remember exactly how I made it work. I keep getting Connection Refused (111) on the other machines. My guess is that my Squid server isn't able to forward requests from other machines to the actual Squid server. I could use any help in debugging this problem. I don't want to use alternatives such as SSH tunneling. This solution has worked for me in the past, I just don't remember how to set it up the same way again.
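
    For what it's worth, the "chained proxy" part of such a setup usually comes down to a cache_peer line on machine A. The following squid.conf fragment is only a sketch, with a made-up parent host name, port, and credentials, and a subnet that would need adjusting:

        # listen locally and allow the LAN
        http_port 3128
        acl localnet src 192.168.1.0/24
        http_access allow localnet
        http_access deny all

        # send every request to the institute's authenticating parent proxy
        cache_peer proxy.institute.example parent 8080 0 no-query default login=MYUSER:MYPASSWORD
        never_direct allow all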

    Read the article

  • Mechanism behind user forwarding in ScriptAliasMatch

    - by jolivier
    I am following this tutorial to set up gitolite, and at some point the following ScriptAliasMatch is used:

        ScriptAliasMatch \
            "(?x)^/(.*/(HEAD | \
                        info/refs | \
                        objects/(info/[^/]+ | \
                                 [0-9a-f]{2}/[0-9a-f]{38} | \
                                 pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
                        git-(upload|receive)-pack))$" \
            /var/www/bin/gitolite-suexec-wrapper.sh/$1

    And the target script starts with:

        USER=$1

    So I am guessing this is used to forward the user name from Apache to the suexec script (which indeed requires it). But I cannot see how this is done. The ScriptAliasMatch documentation makes me think that the /$1 will be replaced by the first matching group of the regexp before it. For me it captures from (?x)^/(.* to ))$, so there is nothing about a user here. My underlying problem is that USER is empty in my script, so I get no authorizations in gitolite. I give my username to Apache via basic authentication:

        <Location />
            # Crowd auth
            AuthType Basic
            AuthName "Git repositories"
            ...
            Require valid-user
        </Location>

    defined just under the previous ScriptAliasMatch. So I am really wondering how this is supposed to work and what part of the mechanism I missed, so that I don't retrieve the user in my script.

    Read the article

  • UPS power requirements for server

    - by captainentropy
    Greetings! So, I just placed an order for a new server. The company recommended that I get a 3000W UPS. (!) As best I could, I calculated the following wattage consumption based on benchmarked data or datasheets provided by the manufacturers of each component:

                             number   watts   total watts
        MoBo                      1     240           240
        CPUs (E5540)              2      80           160
        RAID cards (3ware)        2      18            36
        RAM (6x4GB)               6       3            18
        DVD drive                 1       7             7
        floppy                    1       2             2
        RE4 drives                8       7            56
        WD20 drives               8       6            48
        Intel X25 SSD             2    0.15           0.3
        total = 567

    So that is for the PSU requirements only. The PSUs in the machine are a 720W for the master node and 800W each for two subsystems. That's a total of 2320W that can be delivered by these PSUs. But that is 4X the amount being consumed, at most, by the components. I didn't count case fans or the eSATA card (3W maybe?) or what the PSUs themselves require, but assuming I double or triple my calculations I'm not even remotely close to the 3000W UPS I was suggested to get. They run at least $1100. I could get a 2000W for about $750 or a 1500W for $450 and still be well over my estimated power need. I don't think I need a whole lot of run time in the case of a power outage, maybe 20 minutes max - enough time to shut down if the power doesn't come back on within 5-10 minutes. Any thoughts? Am I off on my calculations? Did I overlook something major? If so, what are your suggestions for a UPS? Thanks!

    Read the article

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to back up using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI and replicate to an external drive, but none that I've found thus far can do both at once. That is, the devices can do iSCSI, but then the replication feature doesn't work. The idea here is to back up to an appliance located in a secure office far away from the server room. Offsite backups to an external hard drive could be managed from the appliance. The benefits of such a setup would be:
    1) very unlikely that fire or random theft would affect both the server-room backup and the "remote" backup appliance
    2) offsite backups could be managed by multiple trusted people without granting access to the server room
    3) Windows Imaging provides poor man's deduplication, so each backup volume can contain a decent backup history.
    I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low- to medium-cost device. Alternative solutions welcome. NOTE: I'm backing up very few but very large files, so file replication is not a good option.

    Read the article

  • RHEL configuration: limit direct root login except through the system console

    - by zhaojing
    I have to configure the system to limit direct root access except through the system console. That is, telnet, FTP, and SSH root logins are all prohibited; root can only log in through the console. I understand that this will require me to configure the file /etc/securetty: I have to comment out all the tty entries and keep only "console" in /etc/securetty. But from Google, I found many people saying that configuring /etc/securetty will not limit SSH logins, and from my own experiment I found that is true (configuring /etc/securetty won't limit SSH login). Then I added one line in /etc/pam.d/system-auth:

        auth required pam_securetty

    With that, it seems root SSH login can be prohibited. But I can't find the reason: what is the difference between configuring pam_securetty and /etc/securetty? Can anyone help me with this? Could configuring only /etc/securetty work, or do I have to configure pam_securetty at the same time? Thanks a lot!
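
    As a related data point (not part of the original question): sshd has its own switch for this, independent of PAM and /etc/securetty, so a minimal sketch for the SSH side would be:

        # /etc/ssh/sshd_config - refuse direct root logins over SSH
        PermitRootLogin no

    followed by a restart of the sshd service.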

    Read the article

  • Best all-in-one Linux-based proxy, firewall, DHCP and WINS server

    - by BeStRaFe
    I help to run a LAN in Sydney. We need a proxy/gateway solution to allow those pesky games that require internet access to work. I have been doing this with an ISA server and it has worked quite well. However, now I wish to port this over to run on the same hardware as our cacti/nagios box, under a VMware VM. ISA server is horridly bad due to the massive RAM and I/O requirements for something that is basically port blocking and handing out IPs. The needs are as follows:
    1. DHCP
    2. WINS (otherwise network devices fight over who is the WINS master)
    3. Filtering based on PORT for outbound traffic
    4. Ability to whitelist IPs/MACs for internet access
    5. Web interface
    I had been thinking of using pfSense, however there is no option for a WINS server and I can't be bothered working my way around BSD.

    Read the article

  • Puppet variables best practice, generalise or specialise?

    - by Andrei Serdeliuc
    I'm trying to figure out which things should be in git within the puppet manifest and which should be in env vars like FACTER_my_var and use that in the manifest instead. Scenario: you are deploying 3 php apps and you've already built all the layers up to the app in other manifests (base system, php extensions, users, etc), and all that's left is installing the correct app (from an apt repo) and creating a vhost. I'm tempted to have something along the lines of:

        apache::vhost { $::project_hostname:
            priority    => '10',
            port        => '80',
            docroot     => $::project_document_root,
            logroot     => "/var/log/apache2/${$::project_name}",
            serveradmin => '[email protected]',
            require     => Package[httpd],
            ssl         => false,
            override    => 'all',
            setenv      => ["APP_KERNEL dev"]
        }

    This would run on each server, and the FACTER_project_* vars would be set on a per server basis. An obvious restriction of this would be that you can't run more than one app with this specific example. Or would you rather have project_x.pp, project_y.pp which have hardcoded paths and names?
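
    To make the FACTER_* half of that concrete, a rough sketch of how environment facts surface in the manifest (the fact names come from the snippet above; the values and the site.pp path are invented):

        # facts injected via the environment become $::project_* inside the manifest
        FACTER_project_name=app_x \
        FACTER_project_hostname=appx.example.com \
        FACTER_project_document_root=/var/www/app_x \
        puppet apply manifests/site.pp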

    Read the article

  • Puppet: How to override / redefine outside a child class (use case and example detailed)

    - by alex8657
    The use case I try to illustrate is declaring some item (e.g. the mysqld service) with a default configuration that could be included on every node (class stripdown in the example, for basenode), while still being able to override this same item in some specific class (e.g. mysql::server), to be included by specific nodes (e.g. myserver.local). I illustrated this use case with the example below, where I want to disable the mysql service on all nodes, but activate it on a specific node. But of course, Puppet parsing fails because Service["mysql"] is included twice. And of course, class mysql::server bears no relation that would make it a child of class stripdown. Is there a way to override the Service["mysql"], or mark it as the main one, or whatever? I was thinking about virtual items and the realize function, but that only permits applying an item multiple times, not redefining or overriding it.

        # In stripdown.pp :
        class stripdown {
            service {"mysql":
                enable => "false",
                ensure => "stopped"
            }
        }

        # In mysql.pp :
        class mysql::server {
            service { mysqld:
                enable     => true,
                ensure     => running,
                hasrestart => true,
                hasstatus  => true,
                path       => "/etc/init.d/mysql",
                require    => Package["mysql-server"],
            }
        }

        # Then nodes in nodes.pp :
        node basenode {
            include stripdown
        }
        node myserver.local inherits basenode {
            include mysql::server   # BOOM, fails here because of Service["mysql"] redefinition
        }
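
    One pattern that does exist for this, shown only as a sketch using the class names above (whether the implied inheritance relationship is acceptable here is a separate question), is class inheritance with a resource override:

        # mysql::server inherits the class that declared the resource,
        # then overrides the attributes of that same Service["mysql"]
        class mysql::server inherits stripdown {
            Service["mysql"] {
                enable => true,
                ensure => running,
            }
        }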

    Read the article
