Search Results

Search found 21028 results on 842 pages for 'single player'.


  • Comparing char for validation in C++

    - by Corey Starbird
    /* PROGRAM: Ch6_14.cpp
       Written by Corey Starbird
       This program calculates the balance owed to a hospital for a patient.
       Last modified: 10/28/13 */
    #include <iostream>
    #include <fstream>
    #include <iomanip>
    #include <string>
    using namespace std;

    // Prototypes for In-patient and Out-patient functions.
    double stayTotal (int, double, double, double); // For In-patients
    double stayTotal (double, double);              // For Out-patients

    int main()
    {
        char patientType;   // In-patient (I or i) or Out-patient (O or o)
        double rate,        // Daily rate for the In-patient stay
               servCharge,  // Service charge for the stay
               medCharge,   // Medication charge for the stay
               inTotal,     // Total for the In-patient stay
               outTotal;    // Total for the Out-patient stay
        int days;           // Number of days for the In-patient stay

        // Find out if they were an In-patient or an Out-patient
        cout << "Welcome, please enter (I) for an In-patient or (O) for an Out-patient:" << endl;
        cin >> patientType;
        while (patientType != 'I' || 'i' || 'O' || 'o')
        {
            cout << "Invalid entry. Please enter either (I) for an In-patient or (O) for an Out-patient:" << endl;
            cin >> patientType;
        }
        cout << "FIN";
        return 0;
    }

    Hey, brand new to C++ here. I am working on a project and I'm having trouble figuring out why my validation for patientType isn't working properly. I first had double quotes, but realized that would denote strings, so I changed them to single quotes. My program compiles and runs now, but the while loop runs no matter what I enter: I, i, O, o, or anything else. I don't know why the while loop isn't checking the condition and moving on to the cout, seeing that I did enter one of the characters in the condition. Probably a simple mistake, but I thank you in advance.
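    The condition is the culprit: in C++, patientType != 'I' || 'i' || 'O' || 'o' parses as (patientType != 'I') || 'i' || 'O' || 'o', and a non-zero character literal such as 'i' is always "true", so the whole expression never becomes false. Each character has to be compared against patientType explicitly. A minimal sketch of the corrected loop:

        // Loop only while the input is none of the four accepted characters.
        while (patientType != 'I' && patientType != 'i' &&
               patientType != 'O' && patientType != 'o')
        {
            cout << "Invalid entry. Please enter either (I) for an In-patient or (O) for an Out-patient:" << endl;
            cin >> patientType;
        }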


  • How to pass multiple PHP variables to a jQuery function?

    - by jcpeden
    I'm working on a WordPress plugin. I need to pass plugin directories (which can change depending on an individual's installation) to a jQuery function. What is the best way of doing this? The version of the plugin that I came to work on had all the JavaScript included in the PHP file, so the functions were parsed along with the rest of the content before being rendered in a browser. I'm looking at AJAX, but I think it might be more complicated than I need. I can get away with just two variables in this case (directories, nothing set by the user). As I've read it's good practice, I'm trying to keep the JS and PHP separate. When the plugin initializes, it calls the JS file:

        // WordPress calls the .js when the plugin loads
        wp_enqueue_script( 'wp-backitup-funtions', plugin_dir_url( __FILE__ ) . 'js/wp-backitup.js', array( 'jquery' ) );

    Then I'm in the .js file and need to figure out how to generate the following variables:

        dir = '<?php echo content_url() ."/plugins"; ?>';
        dir = '<?php echo content_url() ."/themes"; ?>';
        dir = '<?php echo content_url() ."/uploads"; ?>';

    And run the following requests:

        xmlhttp.open("POST","<?php echo plugins_url() .'/wp-backitup/includes/wp-backitup-restore.php'; ?>",true);
        xmlhttp.open("POST","<?php echo plugins_url() .'/wp-backitup/includes/wp-backitup-start.php'; ?>",true);
        xmlhttp.open("POST","<?php echo plugins_url() .'/wp-backitup/wp-backitup-directory.php'; ?>",true);
        xmlhttp.open("POST","<?php echo plugins_url() .'/wp-backitup/wp-backitup-db.php'; ?>",true);
        window.location = "<?php echo plugins_url() .'/wp-backitup/backitup-project.zip'; ?>";
        xmlhttp.open("POST","<?php echo plugins_url() .'/wp-backitup/wp-backitup-delete.php'; ?>",true);

    Content URL and Plugins URL differ only by /plugins/, so if I was hard pressed, I would only really need to make a single PHP request and then bring this into the JS.
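    WordPress ships a helper built for exactly this: wp_localize_script() attaches a PHP-generated object to a script that has already been enqueued, so no PHP needs to run inside the .js file at all. A sketch against the enqueue call above (the object name wpBackitup and its keys are illustrative, not part of the plugin):

        // PHP, immediately after the wp_enqueue_script() call:
        wp_localize_script( 'wp-backitup-funtions', 'wpBackitup', array(
            'contentUrl' => content_url(),   // base for /plugins, /themes, /uploads
            'pluginsUrl' => plugins_url(),
        ) );

        // js/wp-backitup.js then reads the values from the injected global:
        var dir = wpBackitup.contentUrl + '/plugins';
        xmlhttp.open("POST", wpBackitup.pluginsUrl + "/wp-backitup/includes/wp-backitup-restore.php", true);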


  • How to iterate a list inside a list in java?

    - by user2142786
    Hi, I have two value-object classes.

        package org.array;

        import java.util.List;

        public class Father {
            private String name;
            private int age;
            private List<Children> Childrens;

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public int getAge() { return age; }
            public void setAge(int age) { this.age = age; }
            public List<Children> getChildrens() { return Childrens; }
            public void setChildrens(List<Children> childrens) { Childrens = childrens; }
        }

    The second is for children:

        package org.array;

        public class Children {
            private String name;
            private int age;

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public int getAge() { return age; }
            public void setAge(int age) { this.age = age; }
        }

    I want to print their values. I nested a list inside a list; here I am putting only a single value inside the objects, while in reality I have many values. So I am nesting the list of children inside the father list. How can I print or get the values of both child and father? Here is my logic:

        package org.array;

        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;

        public class ArrayDemo {
            public static void main(String[] args) {
                List<Father> fatherList = new ArrayList<Father>();
                Father father = new Father();
                father.setName("john");
                father.setAge(25);
                fatherList.add(father);
                List<Children> childrens = new ArrayList<Children>();
                Children children = new Children();
                children.setName("david");
                children.setAge(2);
                childrens.add(children);
                father.setChildrens(childrens);
                fatherList.add(father);
                Iterator<Father> iterator = fatherList.iterator();
                while (iterator.hasNext()) {
                    System.out.println(iterator.toString());
                }
            }
        }
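    Two things go wrong in that loop: iterator.next() is never called (so iterator.toString() prints the Iterator object itself, and hasNext() stays true forever), and nothing ever descends into each father's child list. A sketch of the nested iteration using the getters defined above:

        for (Father f : fatherList) {
            System.out.println("Father: " + f.getName() + ", age " + f.getAge());
            for (Children c : f.getChildrens()) {
                System.out.println("  Child: " + c.getName() + ", age " + c.getAge());
            }
        }

    (Note that main also adds the same father twice; fatherList.add(father) only needs to happen once.)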


  • Alignment issue of div tag

    - by Quasar the space thing
    I am trying to create a web page where, on click of a button, I can add div tags. My idea was to create two div tags within a single div, so that the overall presentation is uniform and similar to a table having two columns and multiple rows, where the first column contains only labels and the second column contains textboxes. Here is the JS file:

        var counter = 0;

        function create_div(type) {
            var dynDiv = document.createElement("div");
            dynDiv.id = "divid_" + counter;
            dynDiv.class = "main";
            document.body.appendChild(dynDiv);
            question();
            if (type == 'ADDTEXTBOX') {
                ADDTEXTBOX();
            }
            counter = counter + 1;
        }

        function question() {
            var question_div = document.createElement("div");
            question_div.class = "question";
            question_div.id = "question_div_" + counter;
            var Question = prompt("Enter The Question here:", "");
            var node = document.createTextNode(Question);
            question_div.appendChild(node);
            var element = document.getElementById("divid_" + counter);
            element.appendChild(question_div);
        }

        function ADDTEXTBOX() {
            var answer_div = document.createElement("div");
            answer_div.class = "answer";
            answer_div.id = "answer_div_" + counter;
            var answer_tag = document.createElement("input");
            answer_tag.id = "answer_tag_" + counter;
            answer_tag.setAttribute("type", "text");
            answer_tag.setAttribute("name", "textbox");
            answer_div.appendChild(answer_tag);
            var element = document.getElementById("divid_" + counter);
            element.appendChild(answer_div);
        }

    Here is the CSS file:

        .question {
            width: 40%;
            height: auto;
            float: left;
            display: inline-block;
            text-align: justify;
            word-wrap: break-word;
        }

        .answer {
            padding-left: 10%;
            width: 40%;
            height: auto;
            float: left;
            overflow: auto;
            word-wrap: break-word;
        }

        .main {
            width: auto;
            background-color: gray;
            height: auto;
            overflow: auto;
            word-wrap: break-word;
        }

    My problem is that the code runs without errors, but the two divisions do not line up: after the first div prints on the screen, the second division starts on another line. How can I make both divs sit on the same line? Thank you.

    PS: should I stick with the current idea of using divs, or should I try some other approach, like tables?
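    The likely reason the CSS never takes effect: class is not a DOM property, so dynDiv.class = "main" just sets an unrelated expando attribute, the .question/.answer float rules are never applied, and each div stays at its default display:block on its own line. The standard property is className. A sketch of the three assignments:

        dynDiv.className = "main";            // instead of dynDiv.class = "main"
        question_div.className = "question";  // instead of question_div.class = ...
        answer_div.className = "answer";      // instead of answer_div.class = ...

    With the classes actually applied, the two floated 40%-wide columns should share a line, since 40% + 40% + the 10% padding-left on .answer stays under 100%.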


  • Does this type of function or technique have a name?

    - by DHR
    Hi there, I'm slightly new to programming; it's more of a hobby. I am wondering if the following logic or technique has a specific name or term. My current project has 7 check boxes, one for each day of the week. I needed an easy way to save which boxes were checked. The following method saves the checked boxes to a single number: each checkbox gets a value that is double the last checkbox's. When I want to find out which boxes are checked, I work backwards and see how many times I can divide the total value by each checkbox value.

        private int SetSelectedDays()
        {
            int selectedDays = 0;
            selectedDays += (dayMon.Checked) ? 1 : 0;
            selectedDays += (dayTue.Checked) ? 2 : 0;
            selectedDays += (dayWed.Checked) ? 4 : 0;
            selectedDays += (dayThu.Checked) ? 8 : 0;
            selectedDays += (dayFri.Checked) ? 16 : 0;
            selectedDays += (daySat.Checked) ? 32 : 0;
            selectedDays += (daySun.Checked) ? 64 : 0;
            return selectedDays;
        }

        private void SelectedDays(int n)
        {
            if ((n / 64 >= 1) & !(n / 64 >= 2)) { n -= 64; daySun.Checked = true; }
            if ((n / 32 >= 1) & !(n / 32 >= 2)) { n -= 32; daySat.Checked = true; }
            if ((n / 16 >= 1) & !(n / 16 >= 2)) { n -= 16; dayFri.Checked = true; }
            if ((n / 8 >= 1) & !(n / 8 >= 2)) { n -= 8; dayThu.Checked = true; }
            if ((n / 4 >= 1) & !(n / 4 >= 2)) { n -= 4; dayWed.Checked = true; }
            if ((n / 2 >= 1) & !(n / 2 >= 2)) { n -= 2; dayTue.Checked = true; }
            if ((n / 1 >= 1) & !(n / 1 >= 2)) { n -= 1; dayMon.Checked = true; }
            if (n > 0)
            {
                //log event
            }
        }

    The method works well for what I need it for; however, if you see another way of doing this, or a better way of writing it, I would be interested in your suggestions.
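    This technique is usually called a bit field (also bitmask or bit flags): the doubling values 1, 2, 4, 8, ... are powers of two, so each checkbox owns exactly one bit of the integer. The conventional way to read and write the bits is with the bitwise operators, which removes the divide-and-subtract bookkeeping entirely. A sketch with the same control names:

        private int SetSelectedDays()
        {
            int selectedDays = 0;
            if (dayMon.Checked) selectedDays |= 1;   // bit 0
            if (dayTue.Checked) selectedDays |= 2;   // bit 1
            if (dayWed.Checked) selectedDays |= 4;   // bit 2
            if (dayThu.Checked) selectedDays |= 8;   // bit 3
            if (dayFri.Checked) selectedDays |= 16;  // bit 4
            if (daySat.Checked) selectedDays |= 32;  // bit 5
            if (daySun.Checked) selectedDays |= 64;  // bit 6
            return selectedDays;
        }

        private void SelectedDays(int n)
        {
            dayMon.Checked = (n & 1) != 0;   // test bit 0
            dayTue.Checked = (n & 2) != 0;
            dayWed.Checked = (n & 4) != 0;
            dayThu.Checked = (n & 8) != 0;
            dayFri.Checked = (n & 16) != 0;
            daySat.Checked = (n & 32) != 0;
            daySun.Checked = (n & 64) != 0;
        }

    In C#, the idiomatic wrapper for this pattern is an enum marked with the [Flags] attribute.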


  • CUDA memory transfer issue

    - by Vaibhav Sundriyal
    I am trying to execute a code which first transfers data from CPU to GPU memory and vice versa. In spite of increasing the volume of data, the data transfer time remains the same, as if no data transfer is actually taking place. I am posting the code.

        #include <stdio.h>    /* Core input/output operations */
        #include <stdlib.h>   /* Conversions, random numbers, memory allocation, etc. */
        #include <math.h>     /* Common mathematical functions */
        #include <time.h>     /* Converting between various date/time formats */
        #include <cuda.h>     /* CUDA related stuff */
        #include <sys/time.h>

        __global__ void device_volume(float *x_d, float *y_d)
        {
            int index = blockIdx.x * blockDim.x + threadIdx.x;
        }

        int main(void)
        {
            float *x_h, *y_h, *x_d, *y_d, *z_h, *z_d;
            long long size = 9999999;
            long long nbytes = size * sizeof(float);
            timeval t1, t2;
            double et;

            x_h = (float*)malloc(nbytes);
            y_h = (float*)malloc(nbytes);
            z_h = (float*)malloc(nbytes);
            cudaMalloc((void **)&x_d, size * sizeof(float));
            cudaMalloc((void **)&y_d, size * sizeof(float));
            cudaMalloc((void **)&z_d, size * sizeof(float));

            gettimeofday(&t1, NULL);
            cudaMemcpy(x_d, x_h, nbytes, cudaMemcpyHostToDevice);
            cudaMemcpy(y_d, y_h, nbytes, cudaMemcpyHostToDevice);
            cudaMemcpy(z_d, z_h, nbytes, cudaMemcpyHostToDevice);
            gettimeofday(&t2, NULL);
            et = (t2.tv_sec - t1.tv_sec) * 1000.0;      // sec to ms
            et += (t2.tv_usec - t1.tv_usec) / 1000.0;   // us to ms
            printf("\n %lld\t\t%f\t\t", nbytes, et);
            et = 0.0;
            //printf("%f %d\n",seconds,CLOCKS_PER_SEC);

            // launch a kernel with a single thread to greet from the device
            //device_volume<<<1,1>>>(x_d,y_d);

            gettimeofday(&t1, NULL);
            cudaMemcpy(x_h, x_d, nbytes, cudaMemcpyDeviceToHost);
            cudaMemcpy(y_h, y_d, nbytes, cudaMemcpyDeviceToHost);
            cudaMemcpy(z_h, z_d, nbytes, cudaMemcpyDeviceToHost);
            gettimeofday(&t2, NULL);
            et = (t2.tv_sec - t1.tv_sec) * 1000.0;      // sec to ms
            et += (t2.tv_usec - t1.tv_usec) / 1000.0;   // us to ms
            printf("%f\n", et);

            cudaFree(x_d);
            cudaFree(y_d);
            cudaFree(z_d);
            return 0;
        }

    Can anybody help me with this issue? Thanks
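    Two things are worth ruling out before trusting those numbers. First, the very first CUDA call in a process pays the one-off cost of creating the GPU context, which can mask how the copy time scales. Second, CUDA's own event timers bracket a transfer more reliably than host-side gettimeofday. A sketch (not a confirmed fix for this case) that forces initialization first and then times one copy with events:

        cudaFree(0);                    /* trigger lazy context creation before timing */

        cudaEvent_t start, stop;
        float ms = 0.0f;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        cudaMemcpy(x_d, x_h, nbytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);     /* wait until the copy has actually finished */
        cudaEventElapsedTime(&ms, start, stop);
        printf("H2D: %lld bytes in %f ms\n", nbytes, ms);

    If the time still does not scale with nbytes after that, checking every cudaMalloc/cudaMemcpy return code against cudaSuccess would be the next step, since a failed allocation silently turns the copies into no-ops.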


  • Indexing on only part of a field in MongoDB

    - by Rob Hoare
    Is there a way to create an index on only part of a field in MongoDB, for example on the first 10 characters? I couldn't find it documented (or asked about on here). The MySQL equivalent would be CREATE INDEX part_of_name ON customer (name(10));.

    Reason: I have a collection with a single field that varies in length from a few characters up to over 1000 characters, average 50 characters. As there are a hundred million or so documents, it's going to be hard to fit the full index in memory (testing with 8% of the data, the index is already 400MB, according to stats). Indexing just the first part of the field would reduce the index size by about 75%. In most cases the search term is quite short; it's not a full-text search.

    A work-around would be to add a second field of 10 (lowercased) characters for each item, index that, then add logic to filter the results if the search term is over ten characters (and that extra field is probably needed anyway for case-insensitive searches, unless anybody has a better way). It seems like an ugly way to do it, though.

    [added later] I tried adding the second field, containing the first 12 characters from the main field, lowercased. It wasn't a big success. Previously, the average object size was 50 bytes, but I forgot that includes the _id and other overheads, so my main field length (there was only one) averaged nearer to 30 bytes than 50. Then, the second field index contains the _id and other overheads. The net result (for my 8% sample) is that the index on the main field is 415MB and on the 12-byte field is 330MB - only a 20% saving in space, not worthwhile. I could duplicate the entire field (to work around the case-insensitive search problem), but realistically it looks like I should reconsider whether MongoDB is the right tool for the job (or just buy more memory and use twice as much disk space).

    [added even later] This is a typical document, with the source field, and the short lowercased field:

        {
            "_id" : ObjectId("505d0e89f56588f20f000041"),
            "q" : "Continental Airlines",
            "f" : "continental "
        }

    Indexes:

        db.test.ensureIndex({q:1});
        db.test.ensureIndex({f:1});

    The "f" index, working on a shorter field, is 80% of the size of the "q" index. I didn't mean to imply I included the _id in the index, just that the index needs to use it somewhere to show where it points, so it's an overhead that probably helps explain why a shorter key makes so little difference. Access to the index will be essentially random; no part of it is more likely to be accessed than any other. The total index size for the full file will likely be 5GB, so it's not extreme for that one index. Adding some other fields for other search cases, and their associated indexes, and copies of data for lower case, does start to add up, which is why I started looking into a more concise index.
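    For what the described work-around looks like in practice, a sketch in the mongo shell using the q/f fields from the sample document (the prefix must be built exactly the way f was, i.e. first 12 characters, lowercased):

        var term = "Continental Airlines";
        var prefix = term.toLowerCase().substring(0, 12);
        // The index on f narrows the candidates; the exact, case-insensitive
        // comparison then runs only on that small set.
        db.test.find({ f: prefix }).forEach(function (doc) {
            if (doc.q.toLowerCase() === term.toLowerCase()) printjson(doc);
        });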


  • More efficient SQL than using "A UNION (B in A)"?

    - by machinatus
    Edit 1 (clarification): Thank you for the answers so far! The response is gratifying. I want to clarify the question a little, because based on the answers I think I did not describe one aspect of the problem correctly (and I'm sure that's my fault, as I was having a difficult time defining it even for myself). Here's the rub:

    The result set should contain ONLY the records with tstamp BETWEEN '2010-01-03' AND '2010-01-09', AND the one record where the tstamp IS NULL for each order_num in the first set (there will always be one with null tstamp for each order_num). The answers given so far appear to include all records for a certain order_num if there are any with tstamp BETWEEN '2010-01-03' AND '2010-01-09'. For example, if there were another record with order_num = 2 and tstamp = 2010-01-12 00:00:00, it should not be included in the result.

    Original question: Consider an orders table containing id (unique), order_num, tstamp (a timestamp), and item_id (the single item included in an order). tstamp is null, unless the order has been modified, in which case there is another record with identical order_num and tstamp then contains the timestamp of when the change occurred. Example...

        id  order_num  tstamp               item_id
        __  _________  ___________________  _______
        0   1                               100
        1   2                               101
        2   2          2010-01-05 12:34:56  102
        3   3                               113
        4   4                               124
        5   5                               135
        6   5          2010-01-07 01:23:45  136
        7   5          2010-01-07 02:46:00  137
        8   6                               100
        9   6          2010-01-13 08:33:55  105

    What is the most efficient SQL statement to retrieve all of the orders (based on order_num) which have been modified one or more times during a certain date range? In other words, for each order we need all of the records with the same order_num (including the one with NULL tstamp), for each order_num WHERE at least one of the order_num's has tstamp NOT NULL AND tstamp BETWEEN '2010-01-03' AND '2010-01-09'. It's the "WHERE at least one of the order_num's has tstamp NOT NULL" that I'm having difficulty with. The result set should look like this:

        id  order_num  tstamp               item_id
        __  _________  ___________________  _______
        1   2                               101
        2   2          2010-01-05 12:34:56  102
        5   5                               135
        6   5          2010-01-07 01:23:45  136
        7   5          2010-01-07 02:46:00  137

    The SQL that I came up with is this, which is essentially "A UNION (B in A)", but it executes slowly and I hope there is a more efficient solution:

        SELECT history_orders.order_id, history_orders.tstamp, history_orders.item_id
        FROM (SELECT orders.order_id, orders.tstamp, orders.item_id
              FROM orders
              WHERE orders.tstamp BETWEEN '2010-01-03' AND '2010-01-09') AS history_orders
        UNION
        SELECT current_orders.order_id, current_orders.tstamp, current_orders.item_id
        FROM (SELECT orders.order_id, orders.tstamp, orders.item_id
              FROM orders
              WHERE orders.tstamp IS NULL) AS current_orders
        WHERE current_orders.order_id IN
              (SELECT orders.order_id
               FROM orders
               WHERE orders.tstamp BETWEEN '2010-01-03' AND '2010-01-09');
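    One rewrite that avoids the UNION and the repeated subquery is a single pass joined against just the modified order keys, keeping only rows in the date window plus the NULL-tstamp "current" row, which matches the clarified requirement. A sketch using the same columns as the query above (untested; assumes order_id is the shared order key, as in the original SQL):

        SELECT o.order_id, o.tstamp, o.item_id
        FROM orders o
        JOIN (SELECT DISTINCT order_id
              FROM orders
              WHERE tstamp BETWEEN '2010-01-03' AND '2010-01-09') changed
          ON changed.order_id = o.order_id
        WHERE o.tstamp IS NULL
           OR o.tstamp BETWEEN '2010-01-03' AND '2010-01-09';

    With an index on (order_id, tstamp), both the derived table and the join probes can be satisfied from the index, which is typically where the UNION version loses time.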


  • Apache 2.2 and FastCGI stops responding, warnings, crashes

    - by Brett
    I've seen this question posted a few times using a Google search, with no real answers. I have a multi-threaded FastCGI application running with Apache 2.2 on FreeBSD 7.2. There are a few issues with it, and I am unable to really figure out the source of the problem even after poking through a bunch of the mod_fastcgi source code.

    My FastCGI application gets anywhere from 2 to 15 or so hits per second, and mostly services a back-end API (the majority of web server usage is for this, and not actually serving content). Everything seems to work OK under normal conditions, but recently this problem has been becoming worse. It starts out with the FastCGI process manager apparently trying to close unneeded processes, sending them a SIGTERM signal. I catch the signal, clean up some stuff, and exit (by calling exit()) with status code 0. This process seems to result in three log messages in my httpd error log:

        [Tue Jun 01 14:03:31 2010] [warn] FastCGI: (dynamic) server "/home/program/wwwroot/domains/www.mydomain.com/cgi-bin/program.cgi" (pid 98182) termination signaled
        [Tue Jun 01 14:03:31 2010] [warn] FastCGI: (dynamic) server "/home/program/wwwroot/domains/www.mydomain.com/cgi-bin/program.cgi" (pid 98182) terminated by calling exit with status '0'
        [Tue Jun 01 14:03:31 2010] [warn] FastCGI: (dynamic) server "/home/program/wwwroot/domains/www.mydomain.com/cgi-bin/program.cgi" restarted (pid 98294)

    I am not sure why it says it is restarting the process, but in any case no core dump is ever generated, so I do believe it is the FastCGI process manager doing its thing. This makes sense because it begins to happen after the initial load increase from restarting Apache. Since it's down for a few seconds, it gets hit with a couple of hundred requests over the first few seconds it's running again (sometimes even hitting the upper limit of MaxClients in Apache), and this seems to be the process manager doing the work of spawning more processes to handle the increased load.

    So this all seems fine, but here is where things get weird. There are really two problems that I see.

    First, my multithreaded FastCGI process spawns 25 worker threads, and all seem to be used according to my internal log files (multiple processes are clearly using multiple threads to do work). However, it seems that 3 or 4 FastCGI processes are not enough to handle the 5 to 15 hits per second, even though the requests take about .02s or so to process internally. In order to be at all responsive, it seems I need 50 or more FastCGI processes, leading me to believe that FastCGI does not realize that my program is multithreaded. I've read the documentation at http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html and do not see any option pertaining to multithreaded-ness, and my internal code is more or less set up just like the examples provided by the FastCGI library.

    The second problem I am having is that once process termination has happened a bunch of times as above (and seemingly at random), I begin getting a lot of these messages in my error log:

        [Tue Jun 01 14:06:22 2010] [warn] (32)Broken pipe: FastCGI: write() to PM failed (ignore if a restart or shutdown is pending)

    The messages occur for about half the hits I get to the server, and they completely kill the responsiveness of my application - it seems FastCGI will look for a working "pipe" until it finds one, and fails to realize that whatever application it is trying to contact is dead. It does still work, it's just incredibly unresponsive - sometimes taking up to 40 or so seconds to process a request. I recompiled mod_fastcgi with some extra debugging around the point of the error message, and it appears that the error happens when it tries to write() to the application. The call to write() fails with a -1 return code and sets errno to EPIPE.

    I am noticing that the issue happens mostly when either a crash occurs in one of the FastCGI processes, or a bunch of them are seemingly terminated by the process manager. I haven't had any core dumps though, except for one, where the backtrace outputted by gdb is just a single call to free() at address 0x0000000000000000 with nothing else in the stack trace, so I don't really know what to make of that. I'm thinking it happens sometime after the SIGTERM signal is caught, maybe some global variable not being cleaned up properly or something.
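    On the first problem: mod_fastcgi's dynamic process manager only tracks processes and, as far as its documentation shows, has no awareness of threads inside an application, so a multithreaded server is scheduled as if it handled one request at a time. A hedged workaround (path as in the logs above; the numbers need tuning) is to take the application out of dynamic management entirely by declaring it static, so the process manager neither culls nor respawns it:

        # httpd.conf - a static FastCGI server is started once, with a fixed pool,
        # and is never killed off by the dynamic process manager.
        FastCgiServer /home/program/wwwroot/domains/www.mydomain.com/cgi-bin/program.cgi \
            -processes 4 -idle-timeout 60

    That would also stop, by construction, the "termination signaled" / "restarted" churn that precedes the broken-pipe storms.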


  • High apache load but little traffic logged

    - by nrambeck
    I recently installed Varnish to sit in front of Apache on a dedicated server running a single site. It appears to be working well, but the load on Apache is still very high. What doesn't make sense is that the Apache access log shows almost no traffic getting past Varnish. When I tail the Apache log I see about 1-3 hits per second come through. Here is what the load on Apache looks like:

        USER    PID   %CPU %MEM VSZ    RSS    TTY STAT START TIME   COMMAND
        apache  13834 8.1  1.0  107716 34164  ?   S    08:24 0:02   /usr/sbin/httpd
        apache  13835 8.1  1.0  107716 33856  ?   S    08:24 0:02   /usr/sbin/httpd
        apache  11483 7.9  0.9  105916 30788  ?   S    08:23 0:06   /usr/sbin/httpd
        apache  12255 7.5  1.0  107476 33312  ?   S    08:24 0:04   /usr/sbin/httpd
        apache  9340  7.2  1.1  107732 34916  ?   R    08:23 0:09   /usr/sbin/httpd
        apache  12029 6.8  0.9  106908 30416  ?   S    08:24 0:04   /usr/sbin/httpd
        apache  11577 6.7  1.0  107192 34180  ?   S    08:24 0:05   /usr/sbin/httpd
        apache  11486 6.6  1.0  106176 33112  ?   S    08:23 0:05   /usr/sbin/httpd
        apache  11796 6.4  1.0  106936 31916  ?   S    08:24 0:04   /usr/sbin/httpd
        apache  13815 6.3  1.0  107988 34464  ?   S    08:24 0:02   /usr/sbin/httpd
        apache  18089 6.3  1.3  107444 43212  ?   S    08:11 0:52   /usr/sbin/httpd
        apache  11797 5.9  1.0  107716 34580  ?   S    08:24 0:04   /usr/sbin/httpd
        apache  7655  5.9  0.0  0      0      ?   Z    08:22 0:09   [httpd] <defunct>
        mysql   8033  5.9  6.2  318240 199512 ?   Sl   May14 352:34 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/va
        apache  11488 5.8  0.9  106924 31632  ?   S    08:23 0:04   /usr/sbin/httpd
        apache  9375  5.7  1.1  106956 35552  ?   S    08:23 0:07   /usr/sbin/httpd
        apache  3551  5.6  1.1  106956 36140  ?   S    08:20 0:14   /usr/sbin/httpd
        apache  7657  5.6  1.0  106968 32472  ?   S    08:22 0:09   /usr/sbin/httpd
        apache  11433 5.6  1.0  107716 34396  ?   S    08:23 0:04   /usr/sbin/httpd
        apache  5505  5.5  1.1  106944 34924  ?   S    08:21 0:12   /usr/sbin/httpd
        apache  7172  5.3  1.1  106972 35368  ?   S    08:22 0:09   /usr/sbin/httpd
        apache  10088 5.2  0.9  106160 31240  ?   S    08:23 0:04   /usr/sbin/httpd
        apache  7656  5.1  1.0  106436 34388  ?   S    08:22 0:08   /usr/sbin/httpd
        apache  3468  5.0  1.1  107716 35968  ?   S    08:20 0:13   /usr/sbin/httpd
        apache  14242 4.8  1.0  107728 33032  ?   S    08:25 0:00   /usr/sbin/httpd
        apache  3578  4.8  1.1  107988 35964  ?   S    08:20 0:12   /usr/sbin/httpd
        apache  28192 4.8  1.2  106944 38060  ?   S    08:17 0:23   /usr/sbin/httpd
        apache  3277  4.6  1.1  106956 35688  ?   S    08:20 0:13   /usr/sbin/httpd
        apache  15434 3.7  0.7  106908 24684  ?   S    08:25 0:00   /usr/sbin/httpd

    There is a default Apache log and then one other VirtualHost log set up. I'm concerned that Apache is handling some kind of traffic that is not being logged. Is that possible? And is there anything I can do to capture that traffic?
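    One way to see what those busy workers are actually doing, regardless of what reaches the access logs, is Apache's mod_status scoreboard, which shows the client and request each worker is currently serving. A sketch for Apache 2.x of that era (restrict access before enabling):

        # httpd.conf
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    Traffic landing on the default (main) server rather than the named VirtualHost would also explain hits missing from the site's log, so a CustomLog defined at the server level, outside any vhost, is worth checking too.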


  • How do I manipulate Handler Mappings cleanly in IIS7 using the Microsoft.Web.Administration namespace?

    - by Kev
    I asked this over on Stack Overflow, but maybe it's something an experienced IIS 7 administrator might know more about, so I'm asking here as well. When manipulating handler mappings using the Microsoft.Web.Administration namespace, is there a way to remove the <remove name="handler name"> tag added at the site level?

    For example, I have a site which inherits all the handler mappings from the global handler mappings configuration. In applicationHost.config the <location> tag initially looks like this:

        <location path="60030 - testsite-60030.com">
          <system.webServer>
            <security>
              <authentication>
                <anonymousAuthentication userName="" />
              </authentication>
            </security>
          </system.webServer>
        </location>

    To remove a handler I use code similar to this:

        string siteName = "60030 - testsite-60030.com";
        string handlerToRemove = "ASPClassic";

        using(ServerManager serverManager = new ServerManager())
        {
            Configuration siteConfig = serverManager.GetApplicationHostConfiguration();
            ConfigurationSection handlersSection =
                siteConfig.GetSection("system.webServer/handlers", siteName);
            ConfigurationElementCollection handlersCollection = handlersSection.GetCollection();
            // requires using System.Linq;
            ConfigurationElement handlerElement = handlersCollection
                .Where(h => h["name"].Equals(handlerToRemove)).Single();
            handlersCollection.Remove(handlerElement);
            serverManager.CommitChanges();
        }

    The equivalent APPCMD instruction would be:

        appcmd set config "60030 - autotest-60030.com" -section:system.webServer/handlers /-[name='ASPClassic'] /commit:apphost

    This results in the site's <location> tag looking like:

        <location path="60030 - testsite-60030.com">
          <system.webServer>
            <security>
              <authentication>
                <anonymousAuthentication userName="" />
              </authentication>
            </security>
            <handlers>
              <remove name="ASPClassic" />
            </handlers>
          </system.webServer>
        </location>

    So far so good. However, if I re-add the ASPClassic handler this results in:

        <location path="60030 - testsite-60030.com">
          <system.webServer>
            <security>
              <authentication>
                <anonymousAuthentication userName="" />
              </authentication>
            </security>
            <handlers>
              <!-- Why doesn't <remove> get removed instead of tacking on an <add> directive? -->
              <remove name="ASPClassic" />
              <add name="ASPClassic" path="*.asp" verb="GET,HEAD,POST" modules="IsapiModule" scriptProcessor="%windir%\system32\inetsrv\asp.dll" resourceType="File" />
            </handlers>
          </system.webServer>
        </location>

    This happens both when using the Microsoft.Web.Administration namespace from C# and when using the following APPCMD command:

        appcmd set config "60030 - autotest-60030.com" -section:system.webServer/handlers /+[name='ASPClassic',path='*.asp',verb='GET,HEAD,POST',modules='IsapiModule',scriptProcessor='%windir%\system32\inetsrv\asp.dll',resourceType='File'] /commit:apphost

    This can result in a lot of cruft over time for each website that's had a handler removed and then re-added programmatically. Is there a way to just remove the <remove name="ASPClassic" /> tag using Microsoft.Web.Administration code or APPCMD?
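    One approach that drops the site-level override entirely, instead of editing its collection element by element, is to revert the section for that location; this deletes both the <remove> and <add> children and restores plain inheritance. A sketch (RevertToParent is the Microsoft.Web.Administration counterpart of appcmd's clear config; note it clears the whole <handlers> override for the site, not a single handler, so it only fits when that is acceptable):

        using(ServerManager serverManager = new ServerManager())
        {
            Configuration siteConfig = serverManager.GetApplicationHostConfiguration();
            ConfigurationSection handlersSection =
                siteConfig.GetSection("system.webServer/handlers", siteName);
            handlersSection.RevertToParent();   // remove the site-level <handlers> block
            serverManager.CommitChanges();
        }

    The APPCMD equivalent would be:

        appcmd clear config "60030 - testsite-60030.com" -section:system.webServer/handlers /commit:apphost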


  • What free space thresholds/limits are advisable for 640 GB and 2 TB hard disk drives with ZEVO ZFS on OS X?

    - by Graham Perrin
    Assuming that free space advice for ZEVO will not differ from advice for other modern implementations of ZFS …

    Question

    Please, what percentages or amounts of free space are advisable for hard disk drives of the following sizes?

    640 GB
    2 TB

    Thoughts

    A standard answer for modern implementations of ZFS might be "no more than 96 percent full". However, if I apply that to (say) a single-disk 640 GB dataset where some of the files most commonly used (by VirtualBox) are larger than 15 GB each, then I guess that blocks for those files will become suboptimally spread across the platters with around 26 GB free. I read that in most cases, fragmentation and defragmentation should not be a concern with ZFS. Still, I like the mental picture of most fragments of a large .vdi in reasonably close proximity to each other. (Do features of ZFS make that wish for proximity too old-fashioned?)

    Side note: there might arise the question of how to optimise performance after a threshold is 'broken'. If it arises, I'll keep it separate.

    Background

    On a 640 GB StoreJet Transcend (product ID 0x2329) in the past I probably went beyond an advisable threshold. Currently the largest file is around 17 GB, and I doubt that any .vdi or other file on this disk will grow beyond 40 GB. (Ignore the purple masses in the screenshot; those are bundles of 8 MB band files.)

    Without HFS Plus: the thresholds of twenty, ten and five percent that I associate with the Mobile Time Machine file system need not apply.

    I currently use ZEVO Community Edition 1.1.1 with Mountain Lion, OS X 10.8.2, but I'd like answers to be not too version-specific.

    References, chronological order

    ZFS Block Allocation (Jeff Bonwick's Blog) (2006-11-04)

    Space Maps (Jeff Bonwick's Blog) (2007-09-13)

    Doubling Exchange Performance (Bizarre ! Vous avez dit Bizarre ?) (2010-03-11):

        … So to solve this problem, what went in 2010/Q1 software release is multifold. The most important thing is: we increased the threshold at which we switched from 'first fit' (go fast) to 'best fit' (pack tight) from 70% full to 96% full. With TB drives, each slab is at least 5GB and 4% is still 200MB plenty of space and no need to do anything radical before that. This gave us the biggest bang. Second, instead of trying to reuse the same primary slabs until it failed an allocation we decided to stop giving the primary slab this preferential threatment as soon as the biggest allocation that could be satisfied by a slab was down to 128K (metaslab_df_alloc_threshold). At that point we were ready to switch to another slab that had more free space. We also decided to reduce the SMO bonus. Before, a slab that was 50% empty was preferred over slabs that had never been used. In order to foster more write aggregation, we reduced the threshold to 33% empty. This means that a random write workload now spread to more slabs where each one will have larger amount of free space leading to more write aggregation. Finally we also saw that slab loading was contributing to lower performance and implemented a slab prefetch mechanism to reduce down time associated with that operation. The conjunction of all these changes lead to 50% improved OLTP and 70% reduced variability from run to run …

    OLTP Improvements in Sun Storage 7000 2010.Q1 (Performance Profiles) (2010-03-11)

    Alasdair on Everything » ZFS runs really slowly when free disk usage goes above 80% (2010-07-18), where commentary includes:

        … OpenSolaris has changed this in onnv revision 11146 …

    [CFT] Improved ZFS metaslab code (faster write speed) (2010-08-22)


  • nginx: How can I set proxy_* directives only for matching URIs?

    - by Artem Russakovskii
    I've been at this for hours and I can't figure out a clean solution. Basically, I have an nginx proxy setup which works really well, but I'd like to handle a few URLs more manually. Specifically, there are 2-3 locations for which I'd like to set proxy_ignore_headers to Set-Cookie to force nginx to cache them (nginx doesn't cache responses with Set-Cookie, as per http://wiki.nginx.org/HttpProxyModule#proxy_ignore_headers).

    So for these locations, all I'd like to do is set:

        proxy_ignore_headers Set-Cookie;

    I've tried everything I could think of short of setting up and duplicating every config value, but nothing works. I tried:

    Nesting location directives, hoping the inner location which matches on my files would just set this value and inherit the rest, but that wasn't the case - it seemed to ignore anything set in the outer location, most notably proxy_pass, and I ended up with a 404.

    Specifying the proxy_cache_valid directive in an if block that matches on $request_uri, but nginx complains that it's not allowed ("proxy_cache_valid" directive is not allowed here).

    Specifying a variable equal to "Set-Cookie" in an if block, and then trying to set proxy_cache_valid to that variable later, but nginx isn't allowing variables for this case and throws up.

    It should be so simple - modifying/appending a single directive for some requests - and yet I haven't been able to make nginx do that. What am I missing here? Is there at least a way to wrap common directives in a reusable block and have multiple location blocks refer to it, after adding their own unique bits? Thank you.

    Just for reference, the main location / block is included below, together with my failed proxy_ignore_headers directive for a specific URI.

        location / {
            # Setup var defaults
            set $no_cache "";

            # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
            if ($request_method !~ ^(GET|HEAD)$) {
                set $no_cache "1";
            }

            if ($http_user_agent ~* '(iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry|webos|s8000|bada)') {
                set $mobile_request '1';
                set $no_cache "1";
            }

            # feed crawlers, don't want these to get stuck with a cached version,
            # especially if it caches a 302 back to themselves (infinite loop)
            if ($http_user_agent ~* '(FeedBurner|FeedValidator|MediafedMetrics)') {
                set $no_cache "1";
            }

            # Drop no cache cookie if need be
            # (for some reason, add_header fails if included in prior if-block)
            if ($no_cache = "1") {
                add_header Set-Cookie "_mcnc=1; Max-Age=17; Path=/";
                add_header X-Microcachable "0";
            }

            # Bypass cache if no-cache cookie is set, these are absolutely critical
            # for Wordpress installations that don't use JS comments
            if ($http_cookie ~* "(_mcnc|comment_author_|wordpress_(?!test_cookie)|wp-postpass_)") {
                set $no_cache "1";
            }

            if ($request_uri ~* wpsf-(img|js)\.php) {
                proxy_ignore_headers Set-Cookie;
            }

            # Bypass cache if flag is set
            proxy_no_cache $no_cache;
            proxy_cache_bypass $no_cache;

            # under no circumstances should there ever be a retry of a POST request,
            # or any other request for that matter
            proxy_next_upstream off;
            proxy_read_timeout 86400s;

            # Point nginx to the real app/web server
            proxy_pass http://localhost;

            # Set cache zone
            proxy_cache microcache;

            # Set cache key to include identifying components
            proxy_cache_key $scheme$host$request_method$request_uri$mobile_request;

            # Only cache valid HTTP 200 responses for this long
            proxy_cache_valid 200 15s;
            #proxy_cache_min_uses 3;

            # Serve from cache if currently refreshing
            proxy_cache_use_stale updating timeout;

            # Send appropriate headers through
            proxy_set_header Host $host; # no need for this
            proxy_set_header X-Real-IP $remote_addr; # no need for this
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Set files larger than 1M to stream rather than cache
            proxy_max_temp_file_size 1M;

            access_log /var/log/nginx/androidpolice-microcache.log custom;
        }
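    For the record, proxy_ignore_headers is only accepted at http, server and location level, never inside an if block, and a nested location does not pick up proxy_pass from its parent, which explains both failed attempts above. The usual clean pattern is to factor the shared body into a file and include it from each location, adding the one extra directive where needed. A sketch (proxy-common.conf is an illustrative name holding everything from the location / body above):

        location / {
            include /etc/nginx/proxy-common.conf;
        }

        # Regex locations win over the plain "/" prefix match, so these
        # URIs get the shared config plus the Set-Cookie override.
        location ~* wpsf-(img|js)\.php {
            include /etc/nginx/proxy-common.conf;
            proxy_ignore_headers Set-Cookie;
        }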


  • Permissions restoring from Time Machine - Finder copy vs "cp" copy

    - by Ben Challenor
    Note: this question was starting to sprawl, so I rewrote it.

    I have a folder that I'm trying to restore from a Time Machine backup. Using cp -R works fine, but certain folders cannot be restored with either the Time Machine UI or Finder. Other users have reported similar errors and the cp -R workaround was suggested (e.g. Restoring from Time Machine - Permissions Error). But I wanted to understand:

    Why cp -R works when the Finder and the Time Machine UI do not.

    Whether I could prevent the errors by changing file permissions before the backup.

    There do indeed seem to be some permissions that Finder works with and some that it does not. I've narrowed the errors down to folders with the user ben (that's me) and the group wheel. Here's a simplified reproduction. I have four folders with the owner/group combinations I've seen so far:

        ben ~/Desktop/test $ ls -lea
        total 16
        drwxr-xr-x   7 ben   staff   238 27 Nov 14:31 .
        drwx------+ 17 ben   staff   578 27 Nov 14:29 ..
         0: group:everyone deny delete
        -rw-r--r--@  1 ben   staff  6148 27 Nov 14:31 .DS_Store
        drwxr-xr-x   3 ben   staff   102 27 Nov 14:30 ben-staff
        drwxr-xr-x   3 ben   wheel   102 27 Nov 14:30 ben-wheel
        drwxr-xr-x   3 root  admin   102 27 Nov 14:31 root-admin
        drwxr-xr-x   3 root  wheel   102 27 Nov 14:31 root-wheel

    Each contains a single file called file with the same owner/group:

        ben ~/Desktop/test $ cd ben-staff
        ben ~/Desktop/test/ben-staff $ ls -lea
        total 0
        drwxr-xr-x  3 ben  staff  102 27 Nov 14:30 .
        drwxr-xr-x  7 ben  staff  238 27 Nov 14:31 ..
        -rw-r--r--  1 ben  staff    0 27 Nov 14:30 file

    In the backup, they look like this:

        ben /Volumes/Deimos/Backups.backupdb/Ben’s MacBook Air/Latest/Macintosh HD/Users/ben/Desktop/test $ ls -leA
        total 16
        -rw-r--r--@ 1 ben  staff  6148 27 Nov 14:34 .DS_Store
         0: group:everyone deny write,delete,append,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 ben  staff   102 27 Nov 14:51 ben-staff
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 ben  wheel   102 27 Nov 14:51 ben-wheel
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 root admin   102 27 Nov 14:52 root-admin
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 root wheel   102 27 Nov 14:52 root-wheel
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown

    Of these, ben-staff can be restored with Finder without errors. root-wheel and root-admin ask for my password and then restore without errors. But ben-wheel does not prompt for my password and gives the error:

        The operation can’t be completed because you don’t have permission to access “file”.

    Interestingly, I can restore the file from this folder by dragging it directly to my local drive (instead of dragging its parent folder), but when I do so its permissions are changed to ben/staff. Here are the permissions after the restore for the three folders that worked correctly, and the file from ben-wheel that was changed to ben/staff:

        ben ~/Desktop/test-restore $ ls -leA
        total 16
        -rw-r--r--@ 1 ben  staff  6148 27 Nov 14:46 .DS_Store
        drwxr-xr-x  3 ben  staff   102 27 Nov 14:30 ben-staff
        -rw-r--r--  1 ben  staff     0 27 Nov 14:30 file
        drwxr-xr-x  3 root admin   102 27 Nov 14:31 root-admin
        drwxr-xr-x  3 root wheel   102 27 Nov 14:31 root-wheel

    Can anyone explain this behaviour? Why do Finder and the Time Machine UI break with the ben/wheel permissions? And why does cp -R work (even without sudo)?
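    For digging further: the numbered "0: group:everyone deny ..." lines are ACL entries, which Finder enforces and tries to carry across, while a plain cp -R (without -p) merely reads the files and creates fresh copies that never inherit the deny rules. The stock macOS tools for inspecting and stripping them would be (a hedged example; run it against a restored copy, not against the backup itself):

        ls -le restored-folder                     # POSIX permissions plus ACL entries
        sudo chmod -R -N restored-folder           # remove all ACL entries recursively
        sudo chown -R ben:staff restored-folder    # then normalise owner/group if wanted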


  • How do I troubleshoot a "Bad Request" in Apache2?

    - by Nick
    I have a PHP application that loads for all URLs except the home page. Visiting "https://my.site.com/" produces a "Bad Request" error message. Any other URL, for example "https://my.site.com/SomePage/", works just fine. It's only the home page that does not work.

    All pages use mod_rewrite and get routed through a single dispatch script, Director.php. Accessing Director.php directly also produces the "Bad Request" error. BUT all of the other requests go through Director, and they all work just fine (excluding the home page), so it can't be an issue with the Director.php script? OR can it? I'm not seeing anything in the Apache2 error log, and I'm not seeing any PHP errors in the PHP error log. I've tried changing the first line of Director.php to read:

        echo 'test'; exit();

    But I still get a "Bad Request". This is the rewrite log for a request to the home page:

        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (2) init rewrite engine with requested uri /
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/$' to uri '/'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$' to uri '/'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (1) pass through /
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) init rewrite engine with requested uri /Director.php
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) rewrite '/Director.php' -> '-[L,NC]'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/$' to uri '-[L,NC]'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$' to uri '-[L,NC]'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) local path result: -[L,NC]

    Apache2 access log:

        my.site.com:443 123.123.123.123 - - [18/Feb/2011:05:44:19 +0000] "GET / HTTP/1.1" 400 3223 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100723 Ubuntu/10.04 (lucid) Firefox/3.6.8"

    Any ideas? I don't know what else to try.

    UPDATE: Here's my vhost conf:

        RewriteEngine On
        RewriteLog "/LiveWebs/mysite.com/rewrite.log"
        RewriteLogLevel 5

        # Don't rewrite Crons folder
        ReWriteRule ^/Crons/ - [L,NC]
        ReWriteRule ^/phpmyadmin - [L,NC]
        ReWriteRule .php$ -[L,NC] # this is the problem!!

        RewriteCond %{REQUEST_URI} !^/images/ [NC]
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1 [L,QSA]

        RewriteCond %{REQUEST_URI} !^/images/ [NC]
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1&action=$2 [L,QSA]

    The problem is the line "ReWriteRule .php$ -[L,NC]". When I comment it out, the home page loads. The question is, how do I make URLs that actually end in .php go straight through (without breaking the home page)?
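    The rewrite log already gives the answer away: the subrequest for /Director.php is rewritten to the literal path '-[L,NC]'. Without a space between the substitution and the flags, Apache parses '-[L,NC]' as the replacement string rather than '-' (keep the URI unchanged) plus the [L,NC] flags, so the dispatcher is internally redirected to a nonsense URI and the request fails. A sketch of the corrected line, with the dot escaped for good measure:

        # '-' (leave the URI untouched) must be separated from the flags by a space
        RewriteRule \.php$ - [L,NC]

    That lets real .php requests pass straight through while the later Director.php rules keep working for the home page.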


  • Squid refresh_pattern won't cache "Expires: ..."

    - by Marcelo Cantos
    Background

    I frequent the OpenGL ES documentation site at http://www.khronos.org/opengles/sdk/1.1/docs/man/. Even though the content is completely static, it seems to force a reload on every single page I visit, which is very annoying. I have a squid 3.0 proxy set up (apt-get install squid3 on Ubuntu 10.04), and I added a refresh_pattern to force the pages to cache:

        refresh_pattern ^http://www.khronos.org/opengles/sdk/1\.1/docs/man/ … 1440 20% 10080 … override-expire ignore-reload ignore-no-cache ignore-private ignore-no-store

    This is all on one line, of course. While this appears to work for the XHTML documents (e.g., glBindTexture), it fails to cache the linked content, such as the DTD, some .ent files (?) and some XSL files. The delay in fetching these extra files delays rendering of the main document, so my principal annoyance isn't fixed. The only difference I can glean with these ancillary files is that they come with an Expires: header set to the current time, whereas the XHTML document has none. But I would have expected the override-expire option to fix this. I have confirmed that the documents have the same base URL. I have also truncated the pattern to varying degrees, with no effect.

    My questions

    Why does the override-expire option not seem to work?

    Is there a simple way to tell squid to unconditionally cache a document, no matter what it finds in the response headers?

    (Hopefully) relevant output

    cache.log:

        Jan 01 10:33:30 1970/06/25 21:18:27| Processing Configuration File: /etc/squid3/squid.conf (depth 0)
        Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'override-expire' in 'refresh_pattern' violates HTTP
        Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-reload' in 'refresh_pattern' violates HTTP
        Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-no-cache' in 'refresh_pattern' violates HTTP
        Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-no-store' in 'refresh_pattern' violates HTTP
        Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-private' in 'refresh_pattern' violates HTTP
        Jan 01 10:33:30 1970/06/25 21:18:27| DNS Socket created at 0.0.0.0, port 37082, FD 10
        Jan 01 10:33:30 1970/06/25 21:18:27| Adding nameserver 192.168.1.1 from /etc/resolv.conf
        Jan 01 10:33:30 1970/06/25 21:18:27| Accepting HTTP connections at 0.0.0.0, port 3128, FD 11.
        Jan 01 10:33:30 1970/06/25 21:18:27| Accepting ICP messages at 0.0.0.0, port 3130, FD 13.
        Jan 01 10:33:30 1970/06/25 21:18:27| HTCP Disabled.
        Jan 01 10:33:30 1970/06/25 21:18:27| Loaded Icons.
        Jan 01 10:33:30 1970/06/25 21:18:27| Ready to serve requests.

    access.log:

        Jun 25 21:19:35 2010.710   0 192.168.1.50 TCP_MEM_HIT/200 2452 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/glBindTexture.xml - NONE/- text/xml
        Jun 25 21:19:36 2010.263 543 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml1-transitional.dtd - DIRECT/74.54.224.215 -
        Jun 25 21:19:36 2010.276 556 192.168.1.50 TCP_MISS/304 370 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl - DIRECT/74.54.224.215 -
        Jun 25 21:19:36 2010.666 278 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-lat1.ent - DIRECT/74.54.224.215 -
        Jun 25 21:19:36 2010.958 279 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-symbol.ent - DIRECT/74.54.224.215 -
        Jun 25 21:19:37 2010.251 276 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-special.ent - DIRECT/74.54.224.215 -
        Jun 25 21:19:37 2010.332   0 192.168.1.50 TCP_IMS_HIT/304 316 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/ctop.xsl - NONE/- text/xml
        Jun 25 21:19:37 2010.332   0 192.168.1.50 TCP_IMS_HIT/304 316 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/pmathml.xsl - NONE/- text/xml

    store.log:

        Jun 25 21:19:36 2010.263 RELEASE -1 FFFFFFFF D3056C09B42659631A65A08F97794E45 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml1-transitional.dtd
        Jun 25 21:19:36 2010.276 RELEASE -1 FFFFFFFF 9BF7F37442FD84DD0AC0479E38329E3C 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl
        Jun 25 21:19:36 2010.666 RELEASE -1 FFFFFFFF 7BCFCE88EC91578C8E2589CB6310B3A1 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-lat1.ent
        Jun 25 21:19:36 2010.958 RELEASE -1 FFFFFFFF ECF1B24E437CFAA08A2785AA31A042A0 304 1277464777 -1 1277464777 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-symbol.ent
        Jun 25 21:19:37 2010.251 RELEASE -1 FFFFFFFF 36FE3D76C80F0106E6E9F3B7DCE924FA 304 1277464777 -1 1277464777 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-special.ent
        Jun 25 21:19:37 2010.332 RELEASE -1 FFFFFFFF A33E5A5CCA2BFA059C0FA25163485192 304 1277462871 1221139523 1277462871 text/xml -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/ctop.xsl
        Jun 25 21:19:37 2010.332 RELEASE -1 FFFFFFFF E2CF8854443275755915346052ACE14E 304 1277462872 1221139523 1277462872 text/xml -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/pmathml.xsl
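    A hedged reading of those logs: the failing fetches are TCP_MISS/304, i.e. the browser sent If-Modified-Since, squid had no stored copy, and the origin replied 304. A 304 carries no body, so there is nothing for squid to store (hence the RELEASE lines in store.log), and override-expire never even gets a say. One experiment along those lines is to pull a clean 200 through the proxy and have squid turn client validations into its own checks:

        # fetch without conditional headers; a cacheable 200 should then be stored
        squidclient -h 127.0.0.1 -p 3128 http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl

        # squid.conf: also convert client reloads into squid's own IMS requests
        refresh_pattern ^http://www.khronos.org/opengles/sdk/1\.1/docs/man/ 1440 20% 10080 override-expire override-lastmod reload-into-ims ignore-no-cache ignore-private ignore-no-store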


  • Monit won't run

    - by Yaniro
    I have two identical EC2 instances (the second is a replica of the first), running Gentoo. The first instance has monit running, which monitors a single process and some system resources and functions great. On the second instance, monit runs but quits right away. The configuration is similar on both instances, as are the versions of monit.

    monit.log shows:

        [GMT Oct 3 08:36:41] info : monit daemon with PID 5 awakened

    The final lines of strace monit show:

        write(2, "monit daemon with PID 5 awakened"..., 33) = 33
        time(NULL) = 1349252827
        open("/etc/localtime", O_RDONLY) = 4
        fstat64(4, {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        fstat64(4, {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb773a000
        read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\1\0\0\0\0"..., 4096) = 118
        _llseek(4, -6, [112], SEEK_CUR) = 0
        read(4, "\nGMT0\n", 4096) = 6
        close(4) = 0
        munmap(0xb773a000, 4096) = 0
        write(3, "[GMT Oct 3 08:27:07] info :"..., 33) = 33
        write(3, "monit daemon with PID 5 awakened"..., 33) = 33
        waitpid(-1, NULL, WNOHANG) = -1 ECHILD (No child processes)
        close(3) = 0
        exit_group(0) = ?

    No core dumps (ulimit -c shows unlimited). monit -v shows:

        monit: Debug: Adding host allow 'localhost'
        monit: Debug: Skipping redundant host 'localhost'
        monit: Debug: Skipping redundant host 'localhost'
        monit: Debug: Adding credentials for user 'xxxx'.
        Runtime constants:
        Control file = /etc/monitrc
        Log file = /var/log/monit/monit.log
        Pid file = /var/run/monit.pid
        Id file = /var/run/monit.pid
        Debug = True
        Log = True
        Use syslog = False
        Is Daemon = True
        Use process engine = True
        Poll time = 30 seconds with start delay 0 seconds
        Expect buffer = 256 bytes
        Event queue = base directory /var/monit with 100 slots
        Mail server(s) = xx.xxx.xx.xxx with timeout 30 seconds
        Mail from = (not defined)
        Mail subject = (not defined)
        Mail message = (not defined)
        Start monit httpd = True
        httpd bind address = Any/All
        httpd portnumber = 2812
        httpd signature = True
        Use ssl encryption = False
        httpd auth. style = Basic Authentication and Host/Net allow list
        Alert mail to = [email protected]
        Alert on = All events

        The service list contains the following entries:

        System Name = xxxx
        Monitoring mode = active
        CPU wait limit = if greater than 20.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert
        CPU system limit = if greater than 30.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert
        CPU user limit = if greater than 70.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert
        Swap usage limit = if greater than 25.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert
        Memory usage limit = if greater than 75.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert
        Load avg. (5min) = if greater than 2.0 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert
        Load avg. (1min) = if greater than 4.0 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert

        Process Name = xxxx
        Group = server
        Pid file = /var/run/xxxx.pid
        Monitoring mode = active
        Start program = '/etc/init.d/xxxx restart' timeout 20 second(s)
        Stop program = '/etc/init.d/xxxx stop' timeout 30 second(s)
        Existence = if does not exist 1 times within 1 cycle(s) then restart else if succeeded 1 times within 1 cycle(s) then alert
        Pid = if changed 1 times within 1 cycle(s) then alert
        Ppid = if changed 1 times within 1 cycle(s) then alert
        Timeout = If restarted 3 times within 5 cycle(s) then unmonitor
        Alert mail to = [email protected]
        Alert on = All events
        Alert mail to = [email protected]
        Alert on = All events

        -------------------------------------------------------------------------------
        monit daemon with PID 5 awakened

    I ran emerge --sync before emerge -va monit, which installed monit v5.3.2. When that didn't work, I downloaded v5.5 from their website and compiled from source, which did not work either.
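    A hedged reading of the evidence: "monit daemon with PID 5 awakened" is what monit prints when it finds an existing pid file and, assuming a daemon is already running, merely signals it and exits - exactly the clean exit_group(0) in the strace. The runtime constants also show a plausible cause: the pid file and the id file are the same path, /var/run/monit.pid, and the id file (a random identifier monit writes at startup) can clobber the stored PID. Giving them distinct paths in /etc/monitrc would rule that out:

        set pidfile /var/run/monit.pid
        set idfile  /var/run/monit.id

    Deleting the stale /var/run/monit.pid before the next start should then let the daemon come up normally.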


  • Graphics card initialisation problems when booting - requires a "double" boot

    - by DMA57361
    Problem Outline

    When booting from cold (my machine is disconnected from mains power when off, but leaving it connected doesn't help), the graphics card (a single PCI-e GeForce 460) will not initialise on the first boot, leaving me with the motherboard's on-board graphics (which kick in automatically if no PCI-e card is found). However, if I restart the computer - normally I do this by powering it off just after the numlock lights up on the keyboard (i.e., just after POST/BIOS and before Windows takes over), waiting for the system to whirr down, and powering up again - the graphics card will work correctly. Once double-booted in this manner the system seems to work correctly, with no noticeable problems. This is reproducible every time I try to boot and has been going on for about a month now.

    Background Information

    Sept 2010 - I suffered a hardware malfunction (crashes in Windows and graphics corruption on BIOS screens). By way of spare hardware I determined that replacing the PSU removed the issue, so I replaced the PSU with a brand new one of slightly higher power (460W replaced with 500W).

    Oct 2010 - The problem resurfaced. I purchased a new graphics card (GeForce 460), which removed the problem. The new graphics card immediately started having the boot initialisation problems mentioned. I presumed there was a motherboard fault all along, but because the system worked once booted, and I was temporarily out of spare money, I left the system alone and continued to use it.

    Early/Mid Dec 2010 - In the space of 5 days I received 3 instances of hard drive corruption (seemingly fixed by chkdsk and sfc in each case...). Since I was already under the impression the motherboard was faulty, I purchased a new one ASAP. This also required new RAM (as I dropped from 4 slots to 2 and didn't want to drop memory quantity).

    Past 3-4 weeks - With a brand new PSU, graphics card, motherboard and RAM I'm suffering the problem outlined above. So, what could be causing this and how can I resolve it?

    Additional Notes

    Once double-booted the system seems to work entirely correctly. The graphics card problem has occurred on two entirely different motherboards. I do not have the opportunity to test the graphics card in a different computer (I've only the old motherboard, which is dubious, or a really old desktop that still has an AGP port). Under load (i.e., modern games running long enough for temperatures to plateau) the system remains stable and performs as expected. The software that came with the new motherboard and SpeedFan both report that all voltages and temperatures are within nominal bounds, both when idle and under load. I've looked over the BIOS settings for my motherboard multiple times and can find nothing that helps. This system is configured to run with everything at standard levels - no overclocking. I've tried booting the system with only the mobo and graphics card connected (thinking maybe my new PSU was too weak for the new gfx card, even though it meets the quoted PSU requirements for the card) but the same problem persists (and really, if the PSU was weak I'd have problems with the system under load). When the gfx card does not initialise, the fan on its cooling unit is running, possibly slower than otherwise - but this measurement is by eye and so unreliable.


  • Apache outputs all URLs of a second domain as a subfolder of the primary domain name

    - by s_rathbone
    Hi all, would anyone be able to give me some guidance? Basically, I have a 'shared hosting' account with a large internet hosting provider, and my account lets me have multiple separate domains within this folder structure (note: not aliased domains and not subdomains).

    My goal is to have 2 domains set up. I have already purchased the two domain names I need: the first domain is the 'primary' domain name for the root folder (eg. www.example1.com) and the second domain name is set for one of its subfolders (eg. www.example2.com is set to the folder www.example1.com/sites/music).

    The problem is that when Apache returns a page of the second domain back to the browser, Apache writes the hyperlinks as if it's a subfolder of the first domain (eg. www.example2.com/index.html comes out as http://www.example1.com/sites/music/index.html).

    Now, I have done some reading on this, looking through "Apache: The Definitive Guide" (O'Reilly), and although it was useful, I couldn't really find the answer. I'm guessing this issue is most likely an Apache setup issue in httpd.conf, rather than an issue with the hosting company itself (which is why I'm posting it here), and I'm also guessing I might need to use something like the RewriteBase directive in .htaccess files... but I'm really not sure; I'm more of a Java programmer guy, and have been struggling with this for a couple of days. Any guidance would be REALLY appreciated.

    If it helps, my hosting company is GoDaddy, and my sites are hosted on Linux. My problem was originally with WordPress, which I reinstalled a number of times in various ways to correct the problem, but I've just done a test with a very simple static HTML page, and it still has the same issue with relative URLs like this:

        <html>
        <head></head><body><a href="images/dog.html">Pictures of Dogs</a></body>
        </html>

    However, it is fine if I hardcode the URLs like this:

        <html>
        <head></head><body><a href="http://www.example2.com/images/dog.html">Pictures of Dogs</a></body>
        </html>

    Thanks heaps, Steve R

    NOW FIXED

    OK, the problem has now been fixed, and I didn't need to modify any .conf or .htaccess files. The problem was that when I went to install the second application into a second domain from the GoDaddy site, one of the setup questions asks which site you want it installed to, and after that it asks for the desired folder path. However, the second domain name was already pointing to the correct subfolder of the primary domain. So when I started installing WordPress again and came to the menu to select which site it was for, and it listed only the primary domain as an option, I assumed that this was like a label of "which hosting account?", or "which primary domain will your application be installed under?", because I already knew that in the next step I was specifying the folder. In order to correct this, you must make sure that your second domain is added to your domain list so that it will be listed as an option during the installation process. For further details please read tystips.com/archives/52/how2-save-money-host-multiple-wordpress-blogs-on-a-single-godaddy-hosting-account/

    Read the article

  • Windows Backup fails with 0x80070002: "The system cannot find the file specified"

    - by James Johnston
    Windows 7 Backup is failing. When backing up even a single insignificant directory (e.g. I chose only the empty "Contacts" directory, leaving all other directories unchecked), I get this error within a few seconds and the backup fails. If I uncheck all files/directories and just do the system image, then the system image is backed up OK without issue. Backup destination is an external USB hard drive. Steps to reproduce and subsequent failure: Set up backup to go to external hard drive. Don't back up system image. Back up "Contacts" directory only for my profile. Start backup. Immediately view the status of the backup; it stays on "Creating a shadow copy..." for a few seconds, and then the backup fails. Click the Options button, and it says "Check your backup / The system cannot find the file specified." - with options to "Try to run backup again" or "Change backup settings". If I click "Show Details", then it says: Backup time: 4/12/2012 04:38 Backup location: My Book (D:) Error code: 0x80070002 An examination of the Event Log shows nothing useful beyond the following: Log Name: Application Source: Windows Backup Date: 4/12/2012 04:38:44 Event ID: 4104 Task Category: None Level: Error Keywords: Classic User: N/A Computer: JTJLaptop Description: The backup was not successful. The error is: The system cannot find the file specified. (0x80070002). Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Windows Backup" /> <EventID Qualifiers="0">4104</EventID> <Level>2</Level> <Task>0</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2012-04-12T04:38:44.000000000Z" /> <EventRecordID>23979</EventRecordID> <Channel>Application</Channel> <Computer>JTJLaptop</Computer> <Security /> </System> <EventData> <Data>The system cannot find the file specified. (0x80070002)</Data> <Binary>02000780E30500003F0900005B090000420ED1665C2BEE174B64529CB14610EA71000000</Binary> </EventData> </Event> What I have tried: ChkDsk on both C: (main drive) and D: (backup drive) doesn't find any errors. Running SFC /SCANNOW to run the system file checker. Checked the list of profiles at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList and ensured that each profile directory exists. I'm stumped; WHAT file can't be found and why is my backup failing? This is on a Lenovo T420 laptop.
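    A hedged diagnostic sketch (not from the original post): because the failure happens during "Creating a shadow copy...", it is worth confirming the Volume Shadow Copy writers are healthy and re-checking the profile keys programmatically rather than by eye. Both commands are standard Windows 7 tools, run from an elevated command prompt:

        rem Any VSS writer in a failed or unstable state can abort the
        rem backup before a single file is copied.
        vssadmin list writers

        rem Dump every ProfileList entry; look for profiles whose
        rem ProfileImagePath is blank or points at a missing directory.
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s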

    Read the article

  • Exchange 2010 OWA - a few questions about using multiple mailboxes

    - by Alexey Smolik
    We have an Exchange 2010 SP2 deployment and we need our users to be able to access multiple mailboxes in OWA. The problem is that a user (e.g. John Smith) needs to access not just somebody else's (e.g. Tom Anderson's) mailboxes, but his OWN mailboxes, e.g. in different domains: [email protected], [email protected], [email protected], etc. Of course it is preferable for the user to work with all of his mailboxes from a single window. Such mailboxes can be added as multiple Exchange accounts in Outlook; that works almost fine. But in OWA, there are problems: 1) In the left pane - as I've learned - we can open only Inbox folders from other mailboxes. No way to view all folders like in Outlook? 2) With Send-As permissions set, when trying to send a message from another address, that message is saved in the Sent Items folder of the mailbox that is opened in OWA, and not in the mailbox the message is sent from. The same thing with the trash can. Is there a way to fix that? Also, this problem exists in desktop Outlook when mailboxes are added automatically via the Auto Mapping feature, so that we need to turn it off and add the accounts manually. Is there a simpler workaround? 3) Okay, suppose we only open Inbox folders in the left pane. The problem is that the mailbox names shown there are formed from Display Name attributes. But those names are all identical! All the mailboxes are owned by John Smith, so they should all be named John Smith - so that a letter recipient sees "John Smith" in the "from" field, no matter what mailbox it is sent from. Also, the user knows his own name - no need to tell him. He wants to know what mailbox he works with. So we need a way to either: a) customize OWA to show the mailbox email address instead of the user Display Name, or b) make Exchange use another attribute to put in the "from" field when sending letters 4) Okay, we can switch between mailboxes using "Open Other Mailbox" in the upper-right corner menu. But: a) To select a mailbox we need to enter its name (or first letters). Is there a way to show a list of links to mailboxes the user has full access to? E.g. in the page header... b) If we start entering the first letters, we see a popup list with possible mailboxes to be opened. But it shows all mailboxes (apparently from the GAL), not only mailboxes the user has permission to open! How to filter that popup list? c) The same problem as in (3) with mailbox naming. We can see the opened mailbox email address ONLY in the page URL, which is insufficient for many users. In the left pane we see "John Smith", which is useless. 5) Each mailbox is tied to a separate user in AD. If one has several mailboxes, we need to have additional dummy AD accounts, create additional OUs to store them, etc. That's not very nice; is there any standardized, optimal way to build such a structure? We would really appreciate any answers or additional info for any of these questions. Thank you in advance.
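    On question 2's Auto Mapping side note: rather than removing and manually re-adding accounts, the mapping can be suppressed at the moment the permission is granted. A sketch in the Exchange Management Shell, with placeholder mailbox and user names (the -AutoMapping switch is available from Exchange 2010 SP1 onward, so it applies to this SP2 deployment):

        # Grant full access without letting Autodiscover auto-map the
        # mailbox into the user's Outlook profile.
        Add-MailboxPermission -Identity "john.smith.domain2" -User "john.smith" -AccessRights FullAccess -AutoMapping:$false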

    Read the article

  • eth0 and eth1 both assigned same IP on boot

    - by Banjer
    I have a physical SLES 11 SP2 server on a Sun Fire x4140 that is giving me problems with networking upon reboot. The NICs are onboard. The networking appears successful during boot, but network services such as nfs fail hard. This is because eth0 and eth1 are both receiving the same configuration and are both ifup-ed. Once everything times out and I'm at the console, ifconfig shows that eth0 and eth1 are UP and running with the same IP. Attempting to ping anything in that subnet fails. Restarting the network service fixes the issue. eth0 is the correct NIC that should be configured as primary, per the MAC address. Question: What's causing eth1 to be brought up with the same config as eth0? I do not have a config script set up for eth1: banjer@harp:~> ls -la /etc/sysconfig/network/ total 104 drwxr-xr-x 6 root root 4096 Jun 11 12:21 . drwxr-xr-x 6 root root 4096 Apr 10 09:46 .. -rw-r--r-- 1 root root 13916 Apr 10 09:32 config -rw-r--r-- 1 root root 9952 Apr 10 09:36 dhcp -rw------- 1 root root 180 Jun 11 12:21 ifcfg-eth0 -rw------- 1 root root 180 Jun 11 12:21 ifcfg-eth3 -rw------- 1 root root 172 Feb 1 08:32 ifcfg-lo -rw-r--r-- 1 root root 29333 Feb 1 08:32 ifcfg.template drwxr-xr-x 2 root root 4096 Apr 10 09:32 if-down.d -rw-r--r-- 1 root root 239 Feb 1 08:32 ifroute-lo drwxr-xr-x 2 root root 4096 Apr 10 09:33 if-up.d drwx------ 2 root root 4096 May 5 2010 providers -rw-r--r-- 1 root root 25 Nov 16 2010 routes drwxr-xr-x 2 root root 4096 Apr 10 09:36 scripts On a side note, eth3 is also configured with an IP in a different subnet, but this has not posed any problems. FYI the kernel module being used is forcedeth. banjer@harp:~> sudo cat /etc/sysconfig/network/ifcfg-eth0 BOOTPROTO='static' BROADCAST='' ETHTOOL_OPTIONS='' IPADDR='172.21.64.25/20' MTU='' NAME='MCP55 Ethernet' NETWORK='' REMOTE_IPADDR='' STARTMODE='auto' USERCONTROL='no' ONBOOT="yes" Here's eth3 in case you need to see it: banjer@harp:~> sudo cat /etc/sysconfig/network/ifcfg-eth3 BOOTPROTO='static' BROADCAST='' ETHTOOL_OPTIONS='' IPADDR='172.11.200.4/24' MTU='' NAME='MCP55 Ethernet' NETWORK='' REMOTE_IPADDR='' STARTMODE='auto' USERCONTROL='no' ONBOOT="yes" Perhaps it's something related to udev? 70-persistent-net.rules looks OK to me, but I may not understand it completely. banjer@harp:~> cat /etc/udev/rules.d/70-persistent-net.rules # This file was automatically generated by the /lib/udev/write_net_rules # program, run by the persistent-net-generator.rules rules file. # # You can modify it, as long as you keep each rule on a single # line, and change only the value of the NAME= key. # PCI device 0x10de:0x0373 (forcedeth) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4c", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2" # PCI device 0x10de:0x0373 (forcedeth) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4a", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0" # PCI device 0x10de:0x0373 (forcedeth) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4b", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1" # PCI device 0x10de:0x0373 (forcedeth) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:18:4f:8d:85:4d", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth3" # PCI device 0x1077:0x3032 (qla3xxx) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:c1:dd:0e:34:6c", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth4" Any other thoughts on what would cause this?
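    One way to narrow this down (a sketch based on the SLES sysconfig conventions visible above, not a confirmed fix): give eth1 an explicit config file that pins it down, so nothing can bring it up with inherited settings at boot:

        # /etc/sysconfig/network/ifcfg-eth1 (hypothetical)
        # Keep the interface down; it has no role on this host.
        BOOTPROTO='none'
        STARTMODE='off'

    If eth1 still comes up with eth0's address after that, the duplication is happening before the ifup scripts run - for example a udev rename race between eth0 and eth1 - rather than from a missing ifcfg file.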

    Read the article

  • Determining cause of high NFS/IO utilization without iotop

    - by Matt
    I have a server that is doing an NFSv4 export for users' home directories. There are roughly 25 users (mostly developers/analysts) and about 40 servers mounting the home directory export. Performance is miserable, with users often seeing multi-second lags for simple commands (like ls, or writing a small text file). Sometimes the home directory mount completely hangs for minutes, with users getting "permission denied" errors. The hardware is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5" 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as an LSI MegaSAS 9260). OS is CentOS 5.6, home directory partition is ext3, with options "rw,data=journal,usrquota". I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for the home directories. What I find curious, and suspicious, is that the sda device often has very high utilization, even though it only contains the OS. I would expect this virtual drive to be idle almost all the time. The system is not swapping, according to "free" and "vmstat". Why would there be major load on this device? Here is a 30-second snapshot from iostat: Time: 09:37:28 AM Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 44.09 0.03 107.76 0.13 607.40 11.27 0.89 8.27 7.27 78.35 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda2 0.00 44.09 0.03 107.76 0.13 607.40 11.27 0.89 8.27 7.27 78.35 sdb 0.00 2616.53 0.67 157.88 2.80 11098.83 140.04 8.57 54.08 4.21 66.68 sdb1 0.00 2616.53 0.67 157.88 2.80 11098.83 140.04 8.57 54.08 4.21 66.68 dm-0 0.00 0.00 0.03 151.82 0.13 607.26 8.00 1.25 8.23 5.16 78.35 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.67 2774.84 2.80 11099.37 8.00 474.30 170.89 0.24 66.84 dm-3 0.00 0.00 0.67 2774.84 2.80 11099.37 8.00 474.30 170.89 0.24 66.84 Looks like iotop is the ideal tool to use to sniff out these kinds of issues. But I'm on CentOS 5.6, which doesn't have a new enough kernel to support that program. I looked at Determining which process is causing heavy disk I/O?, and besides iotop, one of the suggestions said to do an "echo 1 > /proc/sys/vm/block_dump". I did that (after directing kernel messages to tmpfs). In about 13 minutes I had about 700k reads or writes, roughly half from kjournald and the other half from nfsd: # egrep " kernel: .*(READ|WRITE)" messages | wc -l 768439 # egrep " kernel: kjournald.*(READ|WRITE)" messages | wc -l 403615 # egrep " kernel: nfsd.*(READ|WRITE)" messages | wc -l 314028 For what it's worth, for the last hour, utilization has consistently been over 90% for the home directory drive. My 30-second iostat keeps showing output like this: Time: 09:36:30 PM Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 6.46 0.20 11.33 0.80 71.71 12.58 0.24 20.53 14.37 16.56 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda2 0.00 6.46 0.20 11.33 0.80 71.71 12.58 0.24 20.53 14.37 16.56 sdb 137.29 7.00 549.92 3.80 22817.19 43.19 82.57 3.02 5.45 1.74 96.32 sdb1 137.29 7.00 549.92 3.80 22817.19 43.19 82.57 3.02 5.45 1.74 96.32 dm-0 0.00 0.00 0.20 17.76 0.80 71.04 8.00 0.38 21.21 9.22 16.57 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 687.47 10.80 22817.19 43.19 65.48 4.62 6.61 1.43 99.81 dm-3 0.00 0.00 687.47 10.80 22817.19 43.19 65.48 4.62 6.61 1.43 99.82
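    To make the ~700k block_dump lines attributable, it helps to aggregate them by process instead of counting individual daemons by hand. A small sketch over the same messages file (the sed expression assumes the standard "name(pid): READ/WRITE block N on dev" block_dump format, so adjust it if your syslog prefix differs):

        # Count READ/WRITE events per process name, busiest first.
        egrep " kernel: .*(READ|WRITE)" messages \
          | sed 's/.* kernel: \([^(]*\)(.*/\1/' \
          | sort | uniq -c | sort -rn | head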

    Read the article

  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements: The network of machines are Amazon EC2 instances running Ubuntu. We access the machines with SSH. Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform). The system will not necessarily have a constant amount of machines. We may have to permanently or temporarily alter the amount of machines running. This is the reason why I'm looking into centralized authentication/storage. The implementation of this should be secure. We're unsure if users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access. Let's assume that their software could potentially be malicious for the sake of security. I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each. An older ServerFault post recommended NFS & NIS, though the combination has security problems according to this old article by Symantec. The article suggests moving to NIS+, but, as it is old, this Wikipedia article has cited statements suggesting a trend away from NIS+ at Sun. The recommended replacement is another thing I have heard of... LDAP. It looks like LDAP can be used to save user information in a centralized location on a network. NFS would still need to be used to cover the 'roaming home folder' requirement, but I see references of them being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm tending toward LDAP because another fundamental piece of our architecture, RabbitMQ, has an authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible. Kerberos is another secure authentication protocol that I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much about it. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos? I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University... Andrew File System, or AFS. OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS provides both requirements... I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token). Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? As the bold text pointed out, LDAP looks to be the best choice, but I'm particularly interested in the implementation details (Kerberos? NFS?) with respect to security.
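    On the "software to replace NFS, or lock it down" question: pairing NFSv4 with Kerberos addresses the classic weakness that plain NFS trusts clients to report UIDs honestly, because every access then requires a Kerberos ticket. As a rough sketch of the server side, assuming a working Kerberos realm is already in place (the export path and subnet are placeholders):

        # /etc/exports - sec=krb5p enforces Kerberos authentication and
        # encrypts traffic on the wire; sec=krb5i gives integrity only.
        /export/home  10.0.0.0/24(rw,sec=krb5p,no_subtree_check)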

    Read the article

  • iptables DNS resolution

    - by Favolas
    I have a virtual machine with Fedora 19 acting as a router. This machine has an interface (p8p1) with the IP 172.16.1.254 that is connected to another machine (IP 172.16.1.1) that's simulating the external network. I've installed snort 2.9.2.2, applied the snortsam-2.9.2.2.diff.gz patch and installed snortsam 2.70 on the router machine. In snort.conf, besides altering some RULE_PATH entries, I believe I've only added the following line to the file: output alert_fwsam: 127.0.0.1:898/password After doing this, these two commands: ifconfig p8p1 promisc /usr/local/snort/bin/snort -v -i p8p1 If I ping from the external network to the router IP, I can see the info about the pings. One of the rules that I have is icmp-info.rules that has this single line: alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP-INFO Echo Reply"; icode:0; itype:0; classtype:misc-activity; sid:408; rev:6;fwsam: src, 5 minutes;) snortsam.conf has this data: defaultkey password accept localhost keyinterval 30 minutes dontblock 192.168.1.1 # local network rollbackhosts 50 rollbackthreshold 20 / 30 secs rollbacksleeptime 1 minute logfile /var/log/snort/snortsam.log loglevel 3 daemon nothreads # important line for generating the blocks via iptables iptables p8p1 LOG bindip 127.0.0.1 Now I run this command: /usr/local/snort/bin/snort -u snort -i p8p1 -c /etc/snort/snort.conf -l /var/log/snort -Dq The terminal gives this message: Spawning daemon child... My daemon child 2080 lives... Daemon parent exiting (0) and when I run snortsam in the terminal I get this: SnortSam, v 2.70. Copyright (c) 2001-2009 Frank Knobbe. All rights reserved. Plugin 'fwsam': v 2.5, by Frank Knobbe Plugin 'fwexec': v 2.7, by Frank Knobbe Plugin 'pix': v 2.9, by Frank Knobbe Plugin 'ciscoacl': v 2.12, by Ali Basel <[email protected]> Plugin 'cisconullroute': v 2.5, by Frank Knobbe Plugin 'cisconullroute2': v 2.2, by Wouter de Jong <[email protected]> Plugin 'netscreen': v 2.10, by Frank Knobbe Plugin 'ipchains': v 2.8, by Hector A. Paterno <[email protected]> Plugin 'iptables': v 2.9, by Fabrizio Tivano <[email protected]>, Luis Marichal <[email protected]> Plugin 'ebtables': v 2.4, by Bruno Scatolin <[email protected]> Plugin 'watchguard': v 2.7, by Thomas Maier <[email protected]> Plugin 'email': v 2.12, by Frank Knobbe Plugin 'email-blocks-only': v 2.12, by Frank Knobbe Plugin 'snmpinterfacedown': v 2.3, by Ali BASEL <[email protected]> Plugin 'forward': v 2.8, by Frank Knobbe Parsing config file /etc/snortsam.conf... Linking plugin 'iptables'... Checking for existing state file "/var/db/snortsam.state". Found. Reading state file. Starting to listen for Snort alerts. and snortsam.log has an entry like this: 2013/10/25, 10:15:17, -, 1, snortsam, Starting to listen for Snort alerts. Now, from the external machine I do ping 172.16.1.254 and it starts showing the info, and an alert file is created in /var/log/snort/ that has the info about the pings. Something like: [**] [1:408:6] ICMP-INFO Echo Reply [**] [Classification: Misc activity] [Priority: 3] 10/25-10:35:16.061319 172.16.1.254 -> 172.16.1.1 ICMP TTL:64 TOS:0x0 ID:38720 IpLen:20 DgmLen:84 Type:0 Code:0 ID:1389 Seq:1 ECHO REPLY Also, if I instead run /usr/local/snort/bin/snort snort -v -i p8p1 I get this message: Running in packet dump mode --== Initializing Snort ==-- Initializing Output Plugins! Snort BPF option: snort pcap DAQ configured to passive. The DAQ version does not support reload. Acquiring network traffic from "p8p1". ERROR: Can't set DAQ BPF filter to 'snort' (pcap_daq_set_filter: pcap_compile: syntax error)!
    Fatal Error, Quitting.. So, these are my questions: Shouldn't snortsam block the ping? Is that DAQ error causing the problem? If so, how can I solve it?
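    Two hedged observations, not from the original thread: first, Snort treats trailing non-option arguments as a BPF filter, so the stray extra snort token in /usr/local/snort/bin/snort snort -v -i p8p1 is exactly what pcap_compile is rejecting - the command works without it, and that error is unrelated to the blocking problem. Second, to see whether snortsam's iptables plugin is firing at all, list the firewall rules right after triggering the alert (chain layout can vary by snortsam version, so the broad listing is the safe first step):

        # If snortsam processed the fwsam block, a rule against
        # 172.16.1.1 should appear here shortly after the ping.
        iptables -L -n -v --line-numbers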

    Read the article
