Search Results

Search found 10931 results on 438 pages for 'struts config'.


  • How to re-do the hard disks in a WD World Book Edition II?

    - by jfmessier
    I recently purchased a WD World Book II, a 2 TB unit. I call it the "White Box". It has two 1 TB drives that came in a RAID 1 config, giving me only about 1 TB. I could not delete the RAID array, so I took the drives out and put them in a Linux box, where I deleted the partitions entirely. Now I cannot even get the existing RAID array back on the WD White Box. The drives are fine, but I cannot get them to work in the WD White Box. My goal is to get back to a real 2 TB of storage space. If I cannot get those drives working in the White Box again, I can re-use them elsewhere, but that would waste the unit's firmware and network connection. After the fact, I read that the network performance is rather poor anyway. Thanks :-)
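    A heavily hedged sketch of one first step on the Linux box: wipe any leftover RAID metadata so the enclosure has a chance to re-initialize the drives itself. The device names are assumptions - verify them with lsblk before running anything destructive:

        # clear any Linux md-RAID superblocks, then zero the partition table area
        mdadm --zero-superblock /dev/sdb /dev/sdc
        dd if=/dev/zero of=/dev/sdb bs=1M count=10
        dd if=/dev/zero of=/dev/sdc bs=1M count=10

    Whether the WD firmware will then rebuild its own layout (and offer a spanning mode for the full 2 TB) depends on the unit, so treat this as an experiment, not a guaranteed fix.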

    Read the article

  • Linux - Create ftp account with read/write access to only 1 folder

    - by Gublooo
    Hey guys.... I have never worked on Linux and don't plan to either - the only command I probably know is "ls" :) I am hosting my website on Eapps and use their cPanel to set up everything, so I have never worked with Linux directly. Now I have a one-time case where I need to give a contractor access to fix the CSS issues on my website. He basically needs FTP (read/write) access to certain folders. At a high level, this is my code structure:

        /home/webadmin/example.com/html/
            images/  css/  js/  login.php  facebook.php
        /home/webadmin/example.com/application/
            library/  views/  models/  controllers/  config/  bootstrap.php
        /home/webadmin/example.com/cgi-bin/

    I want the new user to have access to only these folders:

        /home/webadmin/example.com/html/js
        /home/webadmin/example.com/html/css
        /home/webadmin/example.com/application/views

    He should not be able to view even the contents of the other folders, including files like bootstrap.php or login.php. If any sysadmins can help me set this account up, I will really appreciate it. Thanks
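    A sketch of one common approach, assuming vsftpd; the user name "contractor" and all paths are assumptions. The idea is a chrooted FTP-only user whose home contains nothing but bind mounts of the three allowed folders:

        # create a jailed FTP-only user
        useradd -m -d /home/contractor -s /sbin/nologin contractor
        passwd contractor
        # expose only the three folders inside the jail via bind mounts
        mkdir -p /home/contractor/css /home/contractor/js /home/contractor/views
        mount --bind /home/webadmin/example.com/html/css /home/contractor/css
        mount --bind /home/webadmin/example.com/html/js /home/contractor/js
        mount --bind /home/webadmin/example.com/application/views /home/contractor/views
        # /etc/vsftpd/vsftpd.conf - keep local users jailed to their home
        #   local_enable=YES
        #   write_enable=YES
        #   chroot_local_user=YES

    Add the bind mounts to /etc/fstab if they need to survive a reboot.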

    Read the article

  • Solaris cc segfaults when compiling rsync on intel platform

    - by PP
    I am trying to compile rsync-3.0.7 on Solaris 5.10 on an Intel chipset. When running ./configure I see the following (obviously erroneous) lines:

        checking size of int... 0
        checking size of long... 0
        checking size of long long... 0
        checking size of short... 0
        checking size of int16_t... 0
        checking size of uint16_t... 0

    In config.log I see the following lines:

        configure.sh:5448: /tool/sunstudio12.1/bin/cc -xc99=all -o conftest -g -DHAVE_CONFIG_H conftest.c >&5
        "conftest.c", line 123: warning: statement not reached
        cc: Fatal error in cc : Segmentation Fault
        configure.sh:5448: $? = 1
        configure.sh: program exited with status 1

    Segmentation fault? What could be causing a simple test script to segfault during compilation?
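    A hedged workaround sketch, not a root-cause fix: the crash is inside Sun Studio's cc itself, so switching compilers (or dropping the -xc99=all flag from the failing command) at least isolates whether the toolchain is at fault:

        # build with GCC instead of the crashing cc
        CC=gcc ./configure
        make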

    Read the article

  • Limit HTTP verbs on Apache2

    - by user72295
    I am trying to limit the use of certain HTTP verbs on my site. I entered the following into my VirtualHost config file, within the Directory element:

        <Limit GET POST HEAD>
            Allow from all
        </Limit>
        <Limit PUT DELETE OPTIONS>
            Deny from all
        </Limit>

    This seemed to work, but with unexpected results. I ran the following telnet/HTTP commands before and after this change:

        open server 80
        OPTIONS server/abs_path HTTP/1.1
        User-Agent: Telnet/1.0
        Host: server

    Before the change I received a successful response with the Allow header. After the change, however, I was expecting a 405 'Method Not Allowed' response, but instead I received a 403 'Access Forbidden' response. What do I need to change in Apache to return the 405 HTTP response? Many thanks
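    A sketch of why, plus one workaround (untested here): Deny inside a Limit block is an access-control decision, and Apache reports failed access control as 403 by design. To answer 405 instead, short-circuit the unwanted methods with mod_rewrite before access control runs:

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} !^(GET|POST|HEAD)$
        RewriteRule .* - [R=405,L]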

    Read the article

  • Routing to various node.js servers on same machine

    - by Dtang
    I'd like to set up multiple node.js servers on the same machine (but listening on different ports) for different projects, so I can pull any one down to edit code without affecting the others. However, I want to be able to access these web apps from a browser without typing in the port number, mapping different URL paths to different ports: e.g. 45.23.12.01/app to 45.23.12.01:8001. I've considered using node-http-proxy for this, but it doesn't yet support SSL. My hunch is that nginx might be the most suitable. I've never set up nginx before - what configuration do I need? The examples of config files I've seen only deal with subdomains, which I don't have. Alternatively, is there a better (stable, hassle-free) way of hosting multiple apps under the same IP address?
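    A minimal nginx sketch of path-based proxying; the app paths and ports are assumptions:

        server {
            listen 80;

            location /app/ {
                proxy_pass http://127.0.0.1:8001/;   # trailing slash strips the /app/ prefix
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }

            location /other/ {
                proxy_pass http://127.0.0.1:8002/;
            }
        }

    nginx also terminates SSL readily (listen 443 ssl plus certificate directives), which covers the gap that ruled out node-http-proxy.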

    Read the article

  • Bulk "open with" change on OS X 10.6

    - by railmeat
    I use Microsoft Remote Desktop to connect from my Mac to the Windows machines I work on. I have a directory with about 50 different .rdp files (the config files for each machine). They got changed to open with the Remote Desktop in Windows XP on Parallels. Is there a command I can use to change all of those back to opening with the Remote Desktop on my Mac? I can use the right-click "Always Open With" command, or the equivalent in the "Get Info" panel, but I would like to find a way to make that change in bulk. The "Change All..." button in the "Get Info" panel does not have any effect.
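    A hedged sketch using the third-party duti tool, which sets the Launch Services handler for a file extension in one shot; the bundle identifier below is an assumption, so verify it first:

        # print the real bundle id of the Mac client (app name assumed)
        osascript -e 'id of app "Remote Desktop Connection"'
        # bind .rdp to that app for all roles (bundle id assumed)
        duti -s com.microsoft.rdc rdp all

    Note this changes the system-wide default rather than 50 per-file bindings, which is what "Change All..." was supposed to do anyway.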

    Read the article

  • Stunnel too many clients

    - by davidsmalley
    I'm trying to hook up stunnel and haproxy to forward https connections through to some backend servers. I've got haproxy set up right, and I seem to have stunnel set up right. Trouble is that when I hit the setup with a load test, after a while I start to see these log entries:

        2010.05.05 11:24:43 LOG7[3498:3086792368]: https accepted FD=512 from 10.195.158.225:52579
        2010.05.05 11:24:43 LOG4[3498:3086792368]: Connection rejected: too many clients (=500)

    I guess I've hit a limit somewhere, but I wasn't sure how to fix it; there doesn't seem to be a config file option for stunnel to change this. Does anyone know how to configure stunnel for a potentially large number of connections?
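    A hedged sketch: stunnel derives its client ceiling from the process's file descriptor limit (each client consumes a pair of descriptors), so raising that limit before stunnel starts raises the "=500" cap with it:

        # in the init script or shell that launches stunnel
        ulimit -n 4096
        stunnel /etc/stunnel/stunnel.conf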

    Read the article

  • Postfix throttling for outgoing messages

    - by Sergey Kovalev
    I need Postfix to send outgoing messages (from local PHP) at a certain rate - say, one message every 120 seconds. Any messages exceeding this rate should be queued (delayed) and delivered later. Policyd is not what I'm looking for: I don't need to limit the overall number of messages sent, I need a pause (120 s) between any two messages being sent. I tried this config, but it's not working:

        initial_destination_concurrency = 1
        default_destination_concurrency_limit = 1
        default_destination_rate_delay = 120
        default_destination_recipient_limit = 1
        default_process_limit = 1

    Any suggestions?
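    A sketch of a likely fix, untested here: default_destination_rate_delay paces deliveries per destination domain, so messages to different domains are not paced against each other. Routing everything through one dedicated transport makes the delay global:

        # /etc/postfix/master.cf - a single-process smtp clone
        slow  unix  -  -  n  -  1  smtp

        # /etc/postfix/main.cf
        default_transport = slow
        slow_destination_rate_delay = 120
        slow_destination_concurrency_limit = 1
        slow_destination_recipient_limit = 1

    Per-transport overrides like slow_destination_rate_delay require Postfix 2.5 or later.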

    Read the article

  • Nginx IP whitelist

    - by Will
    Is it possible to create an IP whitelist for my nginx proxy server without adding allow or deny lines in the config file? Is it possible to get nginx to query a separate database to check whether the user is allowed to access the website? Ideally nginx would link to an external database, or at minimum a list of allowed IPs on the same server, so I can easily update the list without restarting nginx every time. In the future I would like to link nginx to my website: a user will log in, their IP will be linked to their account, and they will be able to update it when it changes, to regain access. So keep in mind it would be easier to do this if I have an external list of IPs in some kind of database. Any help is appreciated
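    A minimal sketch of the low-tech half of this: keep the allow list in a separate file that your web app rewrites, then reload nginx afterwards - a reload re-reads config without dropping live connections, so no restart is needed. The path is an assumption:

        # /etc/nginx/whitelist.conf, regenerated by the app
        allow 203.0.113.10;
        allow 198.51.100.0/24;
        deny all;

        # inside the relevant server/location block
        include /etc/nginx/whitelist.conf;

    After the app updates the file, run nginx -s reload.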

    Read the article

  • shared web hosting architecture in a university setting

    - by gaspol
    We're in the process of creating a shared web hosting infrastructure for our university, so departments within the university can host their sites on it. We're thinking of setting up multiple load-balanced web servers attached to shared storage (for web content and Apache config files), with database servers behind those web servers. Does anyone have any other suggestions about this, or recommendations for an alternative setup? Would having cPanel/WHM/Plesk be a good idea to automate account creation and maintenance?

    Read the article

  • PCRE limits exceeded, but triggering rules are SQL related

    - by Wolfe
    Entries like the following are spamming my Apache error log:

        [Mon Oct 15 17:12:13 2012] [error] [client xx.xx.xx.xx] ModSecurity: Rule 1d4ad30 [id "300014"] [file "/usr/local/apache/conf/modsec2.user.conf"] [line "349"] - Execution error - PCRE limits exceeded (-8): (null). [hostname "domain.com"] [uri "/admin.php"] [unique_id "UHx8LEUQwYEAAGutKkUAAAEQ"]

    It's only the admin side, and only these two lines in the config. Line 349:

        #Generic SQL sigs
        SecRule ARGS "(or.+1[[:space:]]*=[[:space:]]1|(or 1=1|'.+)--')" "id:300014,rev:1,severity:2,msg:'Generic SQL injection protection'"

    And line 356:

        SecRule ARGS "(insert[[:space:]]+into.+values|select.*from.+[a-z|A-Z|0-9]|select.+from|bulk[[:space:]]+insert|union.+select|convert.+\(.*from)"

    Is there a way to fix this problem? Can someone explain what is going on, or whether these rules are even valid, to cause this error? I know it's supposedly recursion protection, but these rules protect against SQL injection, so I'm confused.
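    A hedged sketch of the usual fix: "PCRE limits exceeded" means the regex engine hit ModSecurity's match/recursion ceilings while evaluating those signatures against large admin-page inputs, not that the rules are invalid. Raising the ceilings in the ModSecurity config typically silences it; the values below are common but arbitrary:

        SecPcreMatchLimit 150000
        SecPcreMatchLimitRecursion 150000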

    Read the article

  • Configuring Mail Relay

    - by ServerChecker
    I'm running Ubuntu Server 9.10 with Postfix and Webmin. I have created virtual hosts for 3 domains, following this serverfault.com answer. But the mail isn't relaying out to the world. I have the 3 domains tied into my DNS in Webmin; inside DNS I also clicked Mail Server and followed the instructions, using an article I found on the web. The domains and the web servers work just fine, and I also have FTP working just fine. So the remaining problem I have is mail: it can't send mail out to a Gmail account for some reason. Note I'm just trying to do the "easy version" of the Postfix config, and if your answer is in Webmin-ease, that would help me. However, I can edit a text file if you suggest one.
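    A hedged checklist sketch for outbound delivery on a directly-sending box; these are the main.cf basics to verify, not a drop-in config:

        inet_interfaces = all
        mydestination = $myhostname, localhost.$mydomain, localhost
        relayhost =          # empty: deliver straight to each recipient's MX

    Also check /var/log/mail.log for the actual bounce or timeout, and whether your provider blocks outbound port 25 - a very common reason Gmail never sees the mail.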

    Read the article

  • SVN: how to change hostname?

    - by elon
    I'd like to set up an SVN repo on my local machine, but we already have Apache running under localhost. When I use the installer from the Subversion site with the Apache option, it installs a second Apache, and when I type "localhost" in the browser I see this new Apache (not the old one). The question is how to run this new Apache under another host name. The installer asks about it, so I set a different name, but it still answers under localhost (nothing happens). I'd like to have access to SVN via a URL like "svnrepo", not "localhost". What can I do about it? Which lines of config should be changed (and/or what else should be changed)? Another way I'm thinking of to solve this problem is to integrate the svn-apache module with my existing Apache, but I don't really know how to do that either (my Apache is 2.2.6).
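    A sketch of the second idea - serving SVN from the existing Apache 2.2 under a name-based vhost, avoiding the second Apache (and the port-80 collision) entirely. Hostnames and paths are assumptions, and "svnrepo" must resolve on each client, e.g. via an /etc/hosts entry:

        # /etc/hosts on each client machine
        127.0.0.1  svnrepo

        # httpd.conf - load the DAV/Subversion modules if not already loaded
        LoadModule dav_module     modules/mod_dav.so
        LoadModule dav_svn_module modules/mod_dav_svn.so

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName svnrepo
            <Location /repo>
                DAV svn
                SVNPath /var/svn/repo
            </Location>
        </VirtualHost>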

    Read the article

  • How can Nagios handle non-threshold based plugins?

    - by FliesLikeABrick
    I am writing a Nagios plugin to monitor trends in the utilization of a certain storage resource (e.g. gradual increases are fine, but an instantaneous/sudden increase or decrease in resource usage may indicate a problem). For what it's worth, it reviews the last N entries in an RRD file generated by a custom Cacti data source/templates. What is the "right" way to handle the Nagios notification config/implementation for this? The problem is that the plugin would exit as warning/critical for one polling period, but in the next it would be fine (or 3 polling periods later, if I look at 3 polling periods' worth of data). I guess the question is: should I just write it in such a way that it will alert for X polling periods, or should I find a way to write it such that manual intervention is required for it to clear (such as logging into the monitoring server or hitting a URL to run a script that submits a passive result)? Your input is appreciated, and if you have any tips for how to implement the latter, I'm open to them (I can think of a few ways to possibly implement it).
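    A hedged sketch of the first option, which needs no custom state handling: let Nagios itself require the condition to persist (and keep re-notifying) via standard service options. Names here are hypothetical:

        define service {
            use                    generic-service
            host_name              storage01
            service_description    Storage usage trend
            check_command          check_storage_trend
            max_check_attempts     3     ; stays SOFT (no notification) until 3 bad polls
            notification_interval  30    ; re-notify periodically while still bad
        }

    The latch-until-a-human-clears behaviour generally needs the plugin to persist its own flag file (or a passive check reset by hand), since Nagios recovers a service as soon as the active check returns OK.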

    Read the article

  • Issue using Postfix as authenticated SMTP client relay to Exchange 2010

    - by Gk
    Hi, I'm using Postfix to relay mail to Exchange 2010. Here is my config:

        relayhost = [smtp.exchange.2010]
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/relay_passwd
        smtp_sasl_security_options =
        #smtp_sasl_mechanism_filter = ntlm

    (/etc/postfix/relay_passwd contains the login information of some accounts on Exchange.) With this configuration I can relay email to Exchange. The problem is that a message sent from Postfix has the header:

        X-MS-Exchange-Organization-AuthAs: Anonymous

    and is treated like an unauthenticated message on the Exchange system (i.e. when sending to a distribution group that requires senders to be authenticated, I receive the error: #550 5.7.1 RESOLVER.RST.AuthRequired; authentication required ##rfc822;[email protected]). Using Outlook with the same account as in Postfix, I can send without problems. The difference I noticed between the two cases is that Outlook authenticates with the NTLM mechanism, while Postfix uses LOGIN. Any idea?
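    A hedged sketch of the obvious experiment - force the NTLM mechanism (the line is already present but commented out) and keep anonymous auth off; this assumes the Cyrus SASL NTLM plugin is installed on the Postfix box:

        smtp_sasl_security_options = noanonymous
        smtp_sasl_mechanism_filter = ntlm

    If Exchange still stamps AuthAs: Anonymous, the receive connector may simply not grant authenticated submission to that account, which would be an Exchange-side permission rather than a Postfix setting.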

    Read the article

  • Apache server as reverse proxy is removing xmlns info from html tag

    - by Johnco
    I have a Java application running in Tomcat, in front of which I have an Apache HTTP server as a reverse proxy. However, the proxy is removing all xmlns data from the html tag, which breaks all of the Facebook FBML, which is never parsed. My current config is as follows:

        ProxyRequests off
        ProxyHTMLDocType XHTML
        ProxyPassReverseCookiePath /cas /
        <Location />
            ProxyPass http://localhost:8080/cas
            ProxyPassReverse http://localhost:8080/cas
        </Location>
        ProxyHTMLURLMap /cas /
        SetOutputFilter proxy-html
        <Proxy *>
            Order deny,allow
            Allow from all
            Satisfy all
        </Proxy>

    Thanks in advance.
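    A heavily hedged sketch: the stripping is done by mod_proxy_html - the proxy-html output filter re-emits the whole document and discards markup it does not know, custom xmlns attributes included. If the only goal of the filter here is the /cas-to-/ path fixup, one workaround is to drop the body-rewriting filter and let ProxyPassReverse plus ProxyPassReverseCookiePath handle the headers, leaving the HTML untouched:

        # comment out the body-rewriting filter
        # ProxyHTMLURLMap /cas /
        # SetOutputFilter proxy-html

    Any absolute /cas links emitted inside the HTML itself would then need fixing in the application instead.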

    Read the article

  • IIS7 FastCGI downloads quit at 4128760 bytes on slow connections

    - by eingko
    I'm using FastCGI via IIS7 to host a PHP application. For whatever reason, downloads that are streamed via PHP (i.e. a script that outputs a file/bytes as a response) complete perfectly fine on high-speed connections, but on anything slower (even DSL) they quit at EXACTLY 4128760 bytes (~3.9 MB) - which makes me think it's a configuration issue... We only started having this problem when we switched from Apache to IIS - this also points to a configuration problem, I think. But if it's a configuration issue, why would it only affect slower connections? Does anyone know where (or how) I could possibly change a setting like this? I've tried changing the idleTimeout, executionTimeout, and activityTimeout values in my web.config, but this hasn't helped at all. Any help or direction would be greatly appreciated. Thanks in advance.
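    A hedged sketch of one suspect: the IIS handler pipeline buffers responses up to a ResponseBufferLimit that defaults to 4194304 bytes, suspiciously close to the cutoff (fast clients drain the stream before the buffer fills, which would explain the speed dependence). Disabling buffering for the PHP handler is worth a try; the handler name below is an assumption, so check yours in IIS Manager first:

        %windir%\system32\inetsrv\appcmd.exe set config /section:handlers "/[name='PHP_via_FastCGI'].ResponseBufferLimit:0"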

    Read the article

  • Dynamically loading Chef recipes from URLs

    - by andy
    I'm deploying a web app on AWS. I intend to use Chef to build AMIs which I'll then put into production. I want to have Chef monitor a URL stored in SimpleDB; the URL would point to a tarball in S3. There would be different URLs, one for a config tarball, one for a code tarball. When I update the URL in SimpleDB, I want Chef to spot this and pull in and apply the configs/deploy the code. Is this possible? Has anything like this been done before, or would I need to roll my own code? I think Chef can monitor URLs, but what would be the best way of getting it to load that URL from SimpleDB?

    Read the article

  • Puppet apache module causing 'Error 400 on SERVER: Invalid parameter identifier'

    - by Andy Shinn
    I am receiving the following error when trying to use the latest puppetlabs-apache module from github (https://github.com/puppetlabs/puppetlabs-apache):

        Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter identifier at /etc/puppet/environments/apache_update/modules/apache/manifests/mod.pp:40 on node zordon.mydomain.com
        Warning: Not using cache on failed catalog
        Error: Could not retrieve catalog; skipping run

    My node config looks like:

        node 'zordon.mydomain.com' {
            include template::common
            include template::puppetagent
            include template::lamp
            User::Create <| |>
            sudo::conf { 'joe':
                priority => 60,
                content  => 'joe ALL=(ALL) NOPASSWD: ALL',
                require  => User::Create['joe'],
            }
        }

    The template::lamp class is what uses the apache module:

        class template::lamp {
            include myfirewall
            Firewall <| |>
            class { 'apache': }
            class { 'apache::mod::php': }
            class { 'apache::mod::ssl': }
            class { 'mysql::server': }
        }

    (ServerFault markup garbled the realize statements; the User::Create and Firewall lines just realize a user and 2 firewall rules.) I have verified that the /var/lib/puppet/lib/puppet/type/a2mod.rb type has the identifier parameter, and it is the same MD5 as on the server. I am using Puppet 3.0.1 on both agent and master. Any idea what may cause this?
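    A heavily hedged troubleshooting sketch: in Puppet 3.0 the master compiles the catalog, so custom types such as a2mod are loaded by the master process, and a freshly updated module in an environment path can leave the master serving a stale type definition even when the agent's copy matches. Worth trying before digging deeper:

        # on the master: restart so type definitions are re-read
        service puppetmaster restart
        # on the agent: re-sync plugins and re-run
        puppet agent --test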

    Read the article

  • How do you enable multi-core virtualization in Windows 8 Pro?

    - by Greg B
    I've just got a new Dell Vostro 470 with a quad-core (8-thread) i7 3770, and I'm trying to run virtual machines on it, which works fine except that I can't assign multiple cores to a VM. I've checked the BIOS, which states Intel Virtualization Technology [Enabled], but both Hyper-V and VirtualBox will only allow me to assign a single core. If I run the Intel Processor Identification Utility on the host OS, it tells me that Intel Virtualization Technology isn't supported by the processor, but according to the Intel website, it is. So what's going on? Have Dell clipped the i7's wings? Is there some config in Windows I need to change?
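    Once the VT-x detection issue is sorted out, a sketch of assigning vCPUs from an elevated PowerShell prompt (Hyper-V module; the VM name is hypothetical):

        Set-VMProcessor -VMName "DevVM" -Count 4
        Get-VMProcessor -VMName "DevVM" | Format-List Count

    If the cmdlet also refuses more than one vCPU, the hypervisor genuinely is not seeing VT-x, which points back at firmware - a BIOS update, or toggling the virtualization setting off and on with a full power-off in between, is a common hedge here.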

    Read the article

  • Kerberos: Running an app with a parameter using krenew

    - by Mihai Todor
    I need to run an application with krenew, but the application also needs to receive a parameter via the command line, and I need to send its output to a file. From the documentation, it looks like this should do the trick:

        krenew -t -- sh -c 'compute-job > /afs/local/data/output'

    but, unfortunately, when I run the command below:

        krenew -s -- sh -c './my_app config.xml > results/test.txt &'

    the application just dies after a while, and I can see from the output of ps aux that krenew is not running along with my_app. I am not sure what the parameter -t does, and as far as I can see, if I run krenew -s ./my_app, it works properly. I hope someone can clarify this.
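    A hedged sketch of the likely culprit: the trailing & makes sh put my_app in the background and exit immediately, so krenew sees its command finish and stops renewing, after which the ticket (and the job) dies. Let krenew do the backgrounding instead:

        krenew -b -t -- sh -c './my_app config.xml > results/test.txt'

    (-b detaches krenew itself into the background; -t runs aklog to obtain a fresh AFS token after each renewal, which matters when the output lives in /afs.)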

    Read the article

  • Too much free space on FreeNAS - ZFS

    - by Guillaume
    I have a FreeNAS server with 3 x 2 TB disks in raidz1. I would expect to have about 4 TB of space available, but when I run zpool list I get:

        [root@freenas] ~# zpool list
        NAME          SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
        main_volume   5.44T  3.95T  1.49T  72%  ONLINE  /mnt

    I was expecting a size of 4 TB. Also, used space as reported by zpool list does not match what's reported by du:

        [root@freenas] ~# du -sh /mnt/main_volume/
        2.6T    /mnt/main_volume/

    There are quite a few things that I don't yet completely understand about ZFS, but at the moment I am mostly worried that I misconfigured my system and that I don't have any storage redundancy. How can I make sure I did not make a horrible mistake? For the sake of completeness, here is the output of zpool status:

        [root@freenas] ~# zpool status
          pool: main_volume
         state: ONLINE
         scrub: none requested
        config:

            NAME                                            STATE   READ WRITE CKSUM
            main_volume                                     ONLINE     0     0     0
              raidz1                                        ONLINE     0     0     0
                gptid/d8584e45-5b8a-11d9-b9ea-5404a6630115  ONLINE     0     0     0
                gptid/d8f7df30-5b8a-11d9-b9ea-5404a6630115  ONLINE     0     0     0
                gptid/d9877cc3-5b8a-11d9-b9ea-5404a6630115  ONLINE     0     0     0

        errors: No known data errors
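    A short explanatory sketch, based on documented zpool/zfs behaviour: zpool list reports raw pool capacity including parity - 3 x ~1.81 TiB (a disk sold as 2 TB) ≈ 5.44 TiB - while the usable, after-parity view comes from zfs list, which should show roughly two disks' worth:

        zfs list main_volume

    The raidz1 line in the zpool status output confirms the pool does have single-disk redundancy. The du figure differs again because du sums per-file sizes, while the pool's used space also covers snapshots, reservations, and metadata.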

    Read the article

  • Any way to know what files were in a broken ZFS pool?

    - by Erik Tjernlund
    I have a large ZFS pool of 4 combined drives. Now the filesystem cannot be mounted:

        pool: tank
        state: UNAVAIL
        status: One or more devices could not be opened. There are insufficient
                replicas for the pool to continue functioning.
        action: Attach the missing device and online it using 'zpool online'.
           see: http://www.sun.com/msg/ZFS-8000-3C
          scan: none requested
        config:

            NAME        STATE    READ WRITE CKSUM
            tank        UNAVAIL     0     0     0  insufficient replicas
              c10t0d0   ONLINE      0     0     0
              c8t0d0    UNAVAIL     0     0     0  cannot open
              c8t1d0    ONLINE      0     0     0
              c10t1d0   ONLINE      0     0     0

        errors: No known data errors

    Probably a broken drive (c8t0d0). I'm not overly concerned by the loss of the data, but I'd love to know exactly which files were in that pool. Is there any way to get a listing of what files were there?
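    A heavily hedged sketch: the four drives form a stripe (no raidz/mirror line in the status output), so with one top-level vdev gone the pool cannot import, and even the on-disk debugger usually cannot walk the dataset tree; still, zdb against the unimportable pool is the tool to try:

        zdb -e tank        # examine the pool's labels and cached config
        zdb -e -d tank     # attempt to list datasets and objects

    If that fails, the honest answer is that a complete file listing likely is not recoverable without the missing device.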

    Read the article
