Search Results

Search found 17982 results on 720 pages for 'computed values'.


  • Resource Monitor (resmon) in Windows Server 2008 R2

    - by Clever Human
    In Windows Server 2008 R2's Resource Monitor, is there a way to set the scale of the various graphs to constant values instead of varying with the data? It seems to me that the utility of a graph is to get a quick overview of the values it is showing. So if I look at the CPU graph and the line is up near the top, I know immediately that something is using all my CPU and can go investigate what. I don't really care if the CPU is jumping between .01% and 2%. Likewise, if the network usage monitor is up near the top, I know that all my bandwidth is being used up, and can go figure out what is using it. But the way things are now, the graphs are meaningless because the scales constantly shift. If you look at the network usage graph, in one second it might have a scale topping out at 100 Kbps, and the next second a scale based on 1 Mbps! So... is there a registry key or something that will peg the scale of these graphs to logical maximums? (The question refers to the graph on the right-hand side of a screenshot that is not reproduced here.)


  • BASH Wildcard Expansion

    - by Aaron Copley
    I'm not really sure how to phrase this, and maybe that's why I can't find anything, but I want to reuse the values enumerated by a wildcard in a command. Is this possible? Scenario: $ ls /dir shows 1 2 3; the contents of /dir are the directories 1, 2, and 3. $ cp /dir/*/file . results in file being copied from /dir/1, /dir/2 and /dir/3 to here. What I would like to do is copy the files to a new destination name based on the wildcard expansion: $ cp /dir/*/file ???-file would result in /dir/*/file being copied to 1-file, 2-file, and 3-file. What I can't figure out is the ??? portion, to tell BASH I want to use the wildcard-expanded values. Using the wildcard in the target nets a cp error: cp: target `*-file' is not a directory. Is there something else in bash that can be used here? The find command has {} to use with -exec, which is similar to what I am looking for above.
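
    One possible approach, not from the original post: cp itself cannot refer back to the expanded wildcard, but a shell loop can capture each expanded path and rebuild the target name from it. A minimal sketch using the /dir/1, /dir/2, /dir/3 layout described above:

        for f in /dir/*/file; do
            d=$(basename "$(dirname "$f")")   # the part the wildcard matched: 1, 2, 3
            cp "$f" "./${d}-file"             # copies to 1-file, 2-file, 3-file
        done

    find ... -exec or GNU parallel can do the same job, but a plain for loop keeps the expanded value visible and reusable.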


  • Why are SMART error rates going down?

    - by Jeff Shattock
    I have a hard drive that's part of a Linux software raid5 array. SMART has reported that its multi_zone_error_rate was 0, then 1, then 3. So I figured I had better start backing up more frequently and prepare to replace the drive. Now, today, the multi_zone_error_rate of that very same drive is back down to 1. It seems that 2 errors unhappened while I wasn't looking. I've also seen similar behaviour by inspecting the syslog on the server:

        Jun 7 21:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun 7 21:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun 7 21:01:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun 8 02:31:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
        Jun 8 03:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
        Jun 8 03:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200

    These are raw values, not the human-useful values that smartctl -a produces, but the behaviour is similar: error rates changing, then undoing the change. None of these are the drive that had the multi_zone weirdness. I haven't seen any problems from the RAID; its most recent scrub (< 24 hours ago) came back totally clean. The only thing I can think of is that the SMART reporting circuitry on the drive isn't working properly all the time. The cables are in tight on the drive and board. What's going on here?
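
    A quick way to see the normalized and raw figures side by side, so the smartd log lines can be compared with what the drive actually reports (a sketch; the device name is taken from the log above):

        smartctl -A /dev/sdc | grep -i -E 'Seek_Error_Rate|Multi_Zone_Error_Rate'
        # VALUE/WORST are the normalized numbers smartd logs changes for;
        # RAW_VALUE is the vendor-specific raw counter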


  • Excel 2010 data validation warning (compatibility mode)

    - by Madmanguruman
    We have some legacy worksheets that were created in Excel 2003, which are used by LabVIEW-based test automation software. The current LabVIEW software can only handle the legacy .xls format, so we're forced to keep these worksheets as-is for the time being. We've migrated to Office 2010, and when working with these worksheets I see this warning: "The following features in this workbook are not supported by earlier versions of Excel. These features may be lost or degraded when you save this workbook in the currently selected file format. Click Continue to save the workbook anyway. To keep all of your features, click Cancel and then save the file in one of the new file formats." "Significant loss of functionality" "One or more cells in this workbook contain data validation rules which refer to values on other worksheets. These data validation rules will not be saved." When I click 'Find', some cells that do indeed have validation rules are highlighted, but those rules are all on the same worksheet! We're using simple list-based validation, with some cells off to the side containing the valid values (for example, cell B4 has a List with Source "=$D$4:$E$4"). This makes no sense to me whatsoever. First, the workbook was created in Excel 2003, so obviously we couldn't have used a feature that doesn't exist there. Second, the modifications we're making don't involve changing the validation rules at all. Third, the complaint Excel is making is simply incorrect: all of the rules are on the same worksheet as the cells they validate. As if the story wasn't bizarre enough, I went ahead and saved the worksheet with Excel 2010, then went to an old computer back in the lab and opened the document with Excel 2003. Guess what - the validations were untouched! My questions are: is this a legitimate bug in Excel 2010, or is this some exotic error in the legacy .xls worksheet that is confusing the heck out of Excel 2010? Has anyone else observed this issue when working in compatibility mode?


  • How should I monitor memory usage/performance in SunOS/Solaris?

    - by exhuma
    Last week we decided to add some SunOS (uname -a = SunOS bbs-sam-belair 5.10 Generic_127128-11 i86pc i386 i86pc) machines into our running munin instance. First off, the machines are pre-configured appliances, so I want to avoid touching the system too much without supervision of the service provider. But adding them to munin was fairly easy by writing a small socket service (if anyone is interested, I put it up on github: https://github.com/munin-monitoring/contrib/tree/master/tools/pypmmn). Yesterday I implemented/adapted the required plugins for our machines, and here the questions start. First, I have not found a way to determine detailed memory usage values. I get the total memory by running prtconf | grep Memory, and the free memory using vmstat. Fiddling together a munin plugin from that gives me a graph (not reproduced here) that is pretty much uninformative. Compare this to the default plugin for Linux nodes, which has a lot more detail; most importantly, it shows me how much memory is actually used by applications. So, first question: is it possible to get detailed memory information on SunOS with the default system tools (i.e. not using top)? On to the next puzzle: looking at the graphs, I noticed activity in the "Paging in/out" graphs even though the memory graph still shows unused memory. Upon further investigation, I found out that df reports that /tmp is mounted on swap. Digging around on the web, I understood that df will display it as swap, but in fact it's mounted as a tmpfs. Now I don't know if this explains the paging activity. The default munin plugin for Solaris uses kstat -p -c misc -m cpu_stat to get these values. I already find it strange that this uses the cpu_stat module, so maybe I simply misinterpret the "paging" graphs? Second question: do the paging graphs indicate that parts of the memory are paged to disk, or is the activity caused by file operations in /tmp?
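
    For the first question, one option worth trying (a sketch; these are stock Solaris 10 tools, though I can't verify they are available on the poster's appliances) is the kernel debugger's memstat dcmd, which breaks physical memory down into kernel, anon (application), exec/libs, page cache and free, plus the swap utilities for separating tmpfs usage from real paging space:

        # physical memory breakdown (needs root):
        echo ::memstat | mdb -k
        # swap reservation summary and the configured swap devices:
        swap -s
        swap -l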


  • Is there a free XML viewer to do "pivot tables"?

    - by FumbleFingers
    I have an *.xml file on a PC running Windows XP, with a structure something like:

        <movies id="my movies">
          <movie name="Unforgiven">
            <people star="Clint Eastwood" director="Clint Eastwood"/>
          </movie>
          <movie name="A Fistful of Dollars">
            <people star="Clint Eastwood" director="Sergio Leone"/>
          </movie>
          <movie name="J Edgar">
            <people star="Leonardo DiCaprio" director="Clint Eastwood"/>
          </movie>
        </movies>

    I want to open this file in a viewer utility which will show one line per movie, filtered by a condition such as director="Clint Eastwood", including the relevant value for movie name on each line. Note - that's just an example. My actual file has thousands of lines, each possibly several hundred bytes long, and there are many more levels and named values. The important thing is I want to apply a filter for a specified value of some field at some level, and I want to see one line for every case where that value occurs, showing the values for any fields I specify, even if they're stored at higher levels. I may be mistaken in saying what I want is a "pivot table" - I don't know what I should call it.
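
    Not a viewer, but as a point of comparison for what the filter would look like: a command-line tool such as xmlstarlet (which also has a Windows build) can produce exactly this kind of one-line-per-match listing. A sketch against the sample structure above; the element and attribute names are taken from that sample and would need adjusting for the real file:

        xmlstarlet sel -t -m "//movie[people/@director='Clint Eastwood']" -v "@name" -n movies.xml
        # prints: Unforgiven
        #         J Edgar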


  • PHP memory_limit local value does not match php.ini value

    - by Buttle Butkus
    CentOS system. Summary: I changed memory_limit in the master and local php.ini, and yet there is no change in the local value for one particular virtual host. Trying to improve performance, I set the memory_limit to 1024M in /etc/php.ini. phpinfo() shows Master and Local values for the other virtual hosts on the server as 1024M. Changing the value in /etc/php.ini changes all values, except one. One site is stuck with a local value of 256M. I thought I had found the problem: there is a php.ini file (which I didn't know about) in that site's root, and it had memory_limit = 256M. I changed it to 1024M. Problem solved? No. And now I don't know where to look. Obviously, I've restarted apache (/etc/init.d/httpd restart), and that usually does the trick. I also turned off the APC cache, though I don't think it would cache ini files. And finally, I tried adding this to the virtual host in httpd.conf: php_value memory_limit 536870912 (that would be 512 MB). And that had no effect whatsoever. What else could be the problem? Thanks.
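
    A few things worth checking (a sketch, not from the original post; the site path is a placeholder). The goal is to find every place PHP could still be picking up 256M for that one vhost, including per-directory overrides that beat both php.ini and httpd.conf:

        # which ini files this PHP build actually parses (CLI view, but shows extra .ini dirs):
        php --ini
        # any leftover memory_limit settings in Apache/PHP config drop-ins:
        grep -Rn "memory_limit" /etc/php.ini /etc/php.d /etc/httpd/conf /etc/httpd/conf.d 2>/dev/null
        # per-directory overrides inside the stubborn site (these win over the vhost value):
        find /var/www/stubborn-site -maxdepth 3 \( -name ".htaccess" -o -name ".user.ini" -o -name "php.ini" \) -print

    Application code can also call ini_set('memory_limit', ...) at runtime, which would override all of the above.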


  • calculate AUC (GAM) in R [migrated]

    - by ahmad
    I used the following script to calculate AUC in R:

        library(mgcv)
        library(ROCR)
        library(AUC)
        data1 = read.table("d:\\2005.txt", header=T)
        GAM <- gam(tuna ~ s(chla)+s(sst)+s(ssha), family=binomial, data=data1)
        gampred <- predict(GAM, type="response")
        rp <- prediction(gampred, data1$tuna)
        auc <- performance( rp, "auc")@y.values[[1]]
        auc
        roc <- performance( rp, "tpr", "fpr")
        plot( roc )

    But when I was running the script, the result is:

        > rp <- prediction(gampred, data1$tuna)
        Error in prediction(gampred, data1$tuna) : Format of predictions is invalid.
        > auc <- performance( rp, "auc")@y.values[[1]]
        Error in performance(rp, "auc") : object 'rp' not found
        > auc
        function (x, min = 0, max = 1)
        {
            if (any(class(x) == "roc")) {
                if (min != 0 || max != 1) {
                    x$fpr <- x$fpr[x$cutoffs >= min & x$cutoffs <= max]
                    x$tpr <- x$tpr[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:length(x$fpr)) {
                    ans <- ans + 0.5 * abs(x$fpr[i] - x$fpr[i - 1]) * (x$tpr[i] + x$tpr[i - 1])
                }
            }
            else if (any(class(x) %in% c("accuracy", "sensitivity", "specificity"))) {
                if (min != 0 || max != 1) {
                    x$cutoffs <- x$cutoffs[x$cutoffs >= min & x$cutoffs <= max]
                    x$measure <- x$measure[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:(length(x$cutoffs))) {
                    ans <- ans + 0.5 * abs(x$cutoffs[i - 1] - x$cutoffs[i]) * (x$measure[i] + x$measure[i - 1])
                }
            }
            return(as.numeric(ans))
        }
        <bytecode: 0x03012f10>
        <environment: namespace:AUC>
        > roc <- performance( rp, "tpr", "fpr")
        Error in performance(rp, "tpr", "fpr") : object 'rp' not found
        > plot( roc )
        Error in levels(labels) : argument "labels" is missing, with no default

    (Typing auc on its own printed the source of the AUC package's auc function, since the earlier assignment had failed.) Can anybody help me to solve this problem? Thank you in advance.


  • Ways of marking a total match

    - by user331898
    I have two columns of matched data. One column contains the ID and the other column contains whether there was a match (1) or no match (0) with that ID. Sometimes all rows with the same ID have a matched value of 1, and sometimes there is a mix of 0 and 1. I would like a third column to indicate where rows share the same ID and all matched values are 1. A sample of what I have (column number and title, then the data):

        COLUMN 1: ID    COLUMN 2: Match=1, No Match=0
        1               1
        1               0
        2               1
        2               1
        3               0
        3               0
        3               1

    This is what I would like:

        COLUMN 1: ID    COLUMN 2: Match=1, No Match=0    COLUMN 3: All ID Match & Match=1
        1               1                                N
        1               0                                N
        2               1                                Y
        2               1                                Y
        3               0                                N
        3               0                                N
        3               1                                N

    Is there a formula or way in Excel 2010 that would make this possible? I would still like to keep the rows intact. Appreciate your help. Thank you in advance.
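
    One formula that would produce the third column (a sketch assuming the sample data sits in A2:B8 with headers in row 1; adjust the ranges to the real sheet). It marks Y only when no row sharing the same ID has a 0 in the match column:

        in C2, then fill down to C8:
        =IF(COUNTIFS($A$2:$A$8,A2,$B$2:$B$8,0)=0,"Y","N")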


  • Network not working properly after running TCP Optimizer, but safe mode settings work perfectly. How to restore?

    - by michele
    I was experiencing some issues with my connection while playing online, so I ran TCP Optimizer on my PC (Windows 7 64-bit Professional), hoping the situation would improve. It didn't. Actually, I now get extremely slow page loading times, probably due to a very low RWIN value of 1024. I understand that Windows 7 has a system to automatically adjust the RWIN value when needed; the setting from netsh is "normal", so I guess something else must be wrong. I tried every automatic tool out there to restore Windows' default values, but had no success. I currently have what should be labeled as "default values" for everything TCP Optimizer initially changed, but the problem persists. The thing is, I just found out that running Windows in safe mode SOLVES the problem completely. The problem is that as soon as I reboot, I get the same issue all over again. So my question is: is there a way to use SAFE MODE network settings in NORMAL mode?
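
    For reference, the reset commands usually suggested for undoing this kind of TCP tweaking (a sketch; run from an elevated command prompt, reboot afterwards, and note this is not guaranteed to be the fix for this particular machine):

        netsh int tcp set global autotuninglevel=normal
        netsh int tcp show global
        netsh int ip reset
        netsh winsock reset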


  • Excel 2007 - Adding line breaks in a cell and no line over 50 characters

    - by Richard Drew
    I have notes stored in an Excel cell. I add line breaks and dates every time I add a new note. I need to copy this to another program, but it has a line limit of 50 characters. I want a line break for each new date and whenever a date's comment goes over 50 characters. I'm able to do one or the other, but I can't figure out how to do both. I'd prefer words not to be split up, but at this point I don't care. Below is some sample input; if needed for a SUBSTITUTE or REPLACE function, I could add a ~ before each date in my input as a delimiter.

    Sample Input (one note per line):

        07/03 - FU on query. Copies and history included. CC to Jane Doe and John Public
        06/29 - Cust claiming not to have these and wrong PO on query form. Responded with inv sent dates and locations, correct PO values, and copies.
        06/27 - New ticket opened using query form
        06/12 - Opened ticket with helpdesk asking status
        05/21 - Copy submitted to [email protected]
        05/14 - Copy sent to John Public and [email protected]

    Ideal Output (broken at each new date and again at the 50-character limit; the stray fragments such as "an d", "[email protected] om" and "email@custome r.com" below mark where those 50-character wraps fall):

        07/03 - FU on query. Copies and history included. CC to Jane Doe and John Public
        06/29 - Cust claiming not to have these and wrong PO on query form. Responded with inv sent dates an d locations, correct PO values, and copies.
        06/27 - New ticket opened using query form
        06/12 - Opened ticket with helpdesk asking status
        05/21 - Copy submitted to [email protected] om
        05/14 - Copy sent to John Public and email@custome r.com


  • What is the best way to recover from a mysql replication fail?

    - by Itai Ganot
    Today, the replication between our master MySQL DB server and the two replication servers dropped. I have a procedure here which was written a long time ago, and I'm not sure it's the fastest method to recover from this issue. I'd like to share the procedure with you; I'd appreciate your thoughts about it, and maybe you can even tell me how it can be done quicker.

    At the master:

        RESET MASTER;
        FLUSH TABLES WITH READ LOCK;
        SHOW MASTER STATUS;

    Copy the values of the result of the last command somewhere. Without closing the connection to the client (because it would release the read lock), issue the command to get a dump of the master:

        mysqldump mysq

    Now you can release the lock, even if the dump hasn't ended. To do so, perform the following command in the mysql client:

        UNLOCK TABLES;

    Now copy the dump file to the slave using scp or your preferred tool. At the slave, open a connection to mysql and type:

        STOP SLAVE;

    Load the master's data dump with this console command:

        mysql -uroot -p < mysqldump.sql

    Sync the slave and master logs:

        RESET SLAVE;
        CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;

    where the values of the above fields are the ones you copied before. Finally type:

        START SLAVE;

    To check that everything is working again, type SHOW SLAVE STATUS; and you should see:

        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes

    That's it! At the moment I'm at the stage of copying the DB from the master to the other two replication servers, and it has taken more than 6 hours to get to this point. Isn't that too slow? The servers are connected through a 1 Gb switch.
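
    One commonly used variation that avoids holding the global read lock for the whole dump, assuming the tables are InnoDB (a sketch, not part of the original procedure; the host name is a placeholder):

        # dump with a consistent snapshot and embed the binlog coordinates in the dump itself:
        mysqldump --single-transaction --master-data=2 --all-databases > master-dump.sql
        # or stream it straight into the slave and skip the scp step:
        mysqldump --single-transaction --master-data=2 --all-databases | mysql -h slave-host -u root -p

    With --master-data the CHANGE MASTER TO coordinates are written into the dump as a comment (=2) or an executable statement (=1), so SHOW MASTER STATUS and the manual copy step are no longer needed.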


  • Nginx : Proper use of limit_req_zone and limit_req

    - by xperator
    I have 2 websites running on a VPS. Their purpose is sharing music files and publishing news, and both of them use WordPress. What I am trying to do is prevent little hackers from flooding the webserver and putting enough stress on the server to make it crash. The problem is that after using limit_req_zone and limit_req my website became very slow; browsing the WordPress control panel takes a long, long time. I tried changing values but it didn't improve much. I guess the problem is WordPress, because it's the only script I am using on both the front and back end. Here is the last setting, which seems to be more responsive than the others:

        limit_req_zone $binary_remote_addr zone=flood:5m rate=10r/m;

        location ~ \.php$ {
            limit_req zone=flood burst=100 nodelay;
        }

    What are the optimal values that should be used in my case (WP)? I want the website to keep its normal behavior while, on the other hand, stopping lifeless people from flooding it. Another question: is it safe and sufficient to use limit_req only on php files?
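
    For context on why the admin area crawls with these numbers: rate=10r/m refills the bucket at one request every six seconds per IP, and a wp-admin page load fires several PHP requests (the page itself plus admin-ajax calls), so once the burst allowance is spent, legitimate back-end use keeps tripping the limit. A per-second rate with a small burst is the usual starting point; the values below are illustrative, not tuned for the poster's sites:

        limit_req_zone $binary_remote_addr zone=flood:10m rate=5r/s;

        location ~ \.php$ {
            limit_req zone=flood burst=10 nodelay;
            # optionally exempt /wp-admin/ (or logged-in users) via a separate location block
        }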



  • SBD killing both cluster nodes even when there are only small SAN network problems

    - by Wieslaw Herr
    I am having problems with SBD stonith in an openais-based cluster. Some background: the active/passive cluster has two nodes, node1 and node2, configured to provide an NFS service to users. To avoid split-brain problems, they are both configured to use SBD. SBD is using two 1MB disks available to the hosts via a multipath fibre-channel network. The problems start if something happens to the SAN network. For example, today one of the Brocade switches got rebooted and both nodes lost 2 out of 4 paths to each disk, which resulted in both nodes committing suicide and rebooting. This, of course, was highly undesirable because a) there were paths left, and b) even if the switch were out for only 10-20 seconds, a reboot cycle of both nodes takes 5-10 minutes and all NFS locks are lost. I tried increasing the SBD timeout values (to 10s+ values; dump attached at the end), however a "WARN: Latency: No liveness for 4 s exceeds threshold of 3 s" hints that something isn't working as I would expect it to. Here is what I would like to know: a) Is SBD working as it should, killing nodes even when 2 paths are still available? b) If not, is the attached multipath.conf file correct? The storage controller we use is an IBM SVC (IBM 2145); should there be any specific configuration for it (as in multipath.conf.defaults)? c) How should I go about increasing the timeouts in SBD? Attachments: multipath.conf and sbd dump (http://hpaste.org/69537)
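
    For question c, the usual pattern (a sketch; the device path is a placeholder and the numbers are examples, not recommendations for this cluster) is to inspect the on-disk SBD header and, with the cluster stopped, rewrite it with longer timeouts, keeping msgwait at least twice the watchdog timeout and the CIB's stonith-timeout larger than msgwait:

        # show the timeouts currently stored in the SBD slot header:
        sbd -d /dev/mapper/sbd_disk dump
        # re-initialise the header with a 60 s watchdog timeout and 120 s msgwait:
        sbd -d /dev/mapper/sbd_disk -1 60 -4 120 create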


  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 * 2 TB disks. The array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an xfs and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful:

    - writing inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, then the next 50 are written (according to the console display)
    - when I try to cancel the operation with Control+C it hangs for half a minute before it is really cancelled

    The performance of the disks individually is very good; I've run bonnie++ on each one separately with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel the values are only reduced by about 10 MB. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the filesystem creation operation. The server details:

        Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux
        mdadm - v2.6.7.1

    Does anyone have a suggestion on how to further debug this?
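
    Not a diagnosis, but one knob that directly affects how mkfs and later writes interact with a RAID5 set is filesystem-to-stripe alignment. A sketch for this particular geometry (4 disks in RAID5 = 3 data disks, 64 KiB chunk, 4 KiB blocks; the md device name is a placeholder):

        # stride = chunk / block = 64k / 4k = 16
        # stripe-width = stride * data disks = 16 * 3 = 48
        mkfs.ext3 -E stride=16,stripe-width=48 /dev/md0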


  • Auto Log-Off Windows users - Windows 2003 domain

    - by thehatter
    Hi! I am trying to make Windows clients automatically log off after some time. I have been trying to use winexit.scr, which I have seen working elsewhere in a similar environment. After working through these instructions (I did read the comments and noticed that the original ADM provided is buggy) I've had no joy whatsoever! Winexit.scr refuses to read any settings in the registry, even though, while using a test account, I can access the required reg key(s) and edit, add, and remove values. Essentially winexit.scr always uses its default values: a 30-second timeout and no forced log-out. What I really want is a 30-minute timeout with a forced log-out, closing all the user's apps etc. I've tried removing and re-adding the ADM template, creating the GPO from scratch several times, and giving various registry permissions - including full control to "Everybody" just for fun! Oh, clients are all Win XP SP3, the DC is Win 2003 R2 SP2. So, can anybody suggest something? Cheers!



  • Archive Manager, SQL 2005 and MaxTokenSize high CPU

    - by Tim Alexander
    So, I posted this question a few days ago: Impact of increasing the MaxTokenSize for Kerberos Tickets. Since then the plan was to test our settings on two member servers, one with IIS and one without. I set up two GPOs to configure the MaxTokenSize reg setting to 48000 and MaxFieldLength/MaxRequestBytes to 64200 (based on MS KB2020943, these are set at 4/3 * T + 200). The member server without IIS seemed to work ok (a devalued tape backup server). The IIS server, however, has had some strange repercussions. The IIS server hosts Quest Software Archive Manager (AM) 4.5, which communicates with SQL Server 2005 Enterprise on Server 2003 R2. After the changes all looked good, until the SQL Server hit 100% CPU. I have removed the GPOs, removed the reg values and even replaced them with defaults (12000 for the token size; I can't remember the other one, but it was in a blog post about the issue linked from my other post). No change. Bouncing the IIS server stops the high CPU, and a colleague has looked at the SQL server and it is definitely the AM connection taking up the time/work on the SQL server. I haven't changed the reg values on the SQL server or the DCs, but am reluctant to do so without understanding why this has happened. I am guessing it's to do with the overriding auth and group issue we have, but I am not seeing Kerberos errors in either event log. Has anyone seen something similar, or does anyone have some tips? We were definitely blindsided by the Kerberos issue and I am swimming against the tide to keep things functioning.
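
    To rule out stale or half-removed policy values, it may help to check what the affected box is actually running with. These are the registry locations the two settings live in according to Microsoft's documentation (a sketch to run on the IIS member server):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters" /v MaxTokenSize
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters" /v MaxFieldLength
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters" /v MaxRequestBytes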


  • Requests per second slower when using nginx for load balancing

    - by Ed Eliot
    I've set up nginx as a load balancer that reverse proxies requests to 2 Apache servers. I've benchmarked the setup with ab and am getting approx 35 requests per second, with requests distributed between the 2 backend servers (not using ip_hash). What is confusing me is that if I query either of the backend servers directly via ab I get around 50 requests per second. I've experimented with a number of different values in ab, the most common being 1000 requests with 100 concurrent connections. Any idea why traffic distributed across 2 servers would result in fewer requests per second than hitting either directly? Additional info: I've experimented with worker_processes values of between 1 and 8, worker_connections between 1024 and 8092, and have also tried keepalive 0 and 65. My main conf currently looks like this:

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        worker_rlimit_nofile 8192;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;

            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;

            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    I've got one virtual host (in sites-enabled) that redirects everything under / to the 2 backends across a local network.
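
    One thing that often accounts for a proxy benchmarking slower than a single backend is that every benchmarked request pays for a fresh client connection (keepalive_timeout 0) and a fresh proxy-to-backend connection. A sketch of the usual mitigation, assuming a reasonably recent nginx (the upstream keepalive directive needs 1.1.4 or later) and placeholder backend addresses, since the real vhost isn't shown in the post:

        upstream backends {
            server 192.168.0.11:80;
            server 192.168.0.12:80;
            keepalive 32;                     # pool of idle connections to the backends
        }

        server {
            listen 80;
            location / {
                proxy_pass http://backends;
                proxy_http_version 1.1;
                proxy_set_header Connection "";   # allow keep-alive to the upstreams
            }
        }

        # and in the http block, a non-zero client keepalive, e.g. keepalive_timeout 65;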


  • Ruby net:LDAP returns "code = 53 message = Unwilling to perform" error

    - by Yong
    Hi, I am getting the error "code = 53, message = Unwilling to perform" while traversing the eDirectory treebase = "ou=Users,o=MTC". My Ruby script can read about 126 entries from eDirectory and then it stops and prints out this error. I do not have any clue why this is happening. I am using the ruby net/ldap library version 0.0.4. The following is an excerpt of the code:

        require 'rubygems'
        require 'net/ldap'

        ldap = Net::LDAP.new :host => "10.121.121.112",
                             :port => 389,
                             :auth => {:method => :simple,
                                       :username => "cn=abc,ou=Users,o=MTC",
                                       :password => "123" }

        filter = Net::LDAP::Filter.eq( "mail", "*mtc.ca.gov" )
        treebase = "ou=Users,o=MTC"
        attrs = ["mail", "uid", "cn", "ou", "fullname"]

        i = 0
        ldap.search( :base => treebase, :attributes => attrs, :filter => filter ) do |entry|
          puts "DN: #{entry.dn}"
          i += 1
          entry.each do |attribute, values|
            puts " #{attribute}:"
            values.each do |value|
              puts " --->#{value}"
            end
          end
        end
        puts "Total #{i} entries found."
        p ldap.get_operation_result

    Here is the output and the error at the end. Thank you very much for your help.

        DN: cn=uvogle,ou=Users,o=MTC
         mail:
         --->[email protected]
         fullname:
         --->Ursula Vogler
         ou:
         --->Legislation and Public Affairs
         dn:
         --->cn=uvogle,ou=Users,o=MTC
         cn:
         --->uvogle
        Total 126 entries found.
        OpenStruct code=53, message="Unwilling to perform"


  • OpenSSL error while running punjab

    - by Hunt
    I ran punjab - a BOSH connection manager - using the twistd -y punjab.tac command on my CentOS box, but I am getting the following error:

        Unhandled Error
        Traceback (most recent call last):
          File "/usr/local/lib/python2.7/site-packages/twisted/application/app.py", line 652, in run
            runApp(config)
          File "/usr/local/lib/python2.7/site-packages/twisted/scripts/twistd.py", line 23, in runApp
            _SomeApplicationRunner(config).run()
          File "/usr/local/lib/python2.7/site-packages/twisted/application/app.py", line 386, in run
            self.application = self.createOrGetApplication()
          File "/usr/local/lib/python2.7/site-packages/twisted/application/app.py", line 451, in createOrGetApplication
            application = getApplication(self.config, passphrase)
        --- <exception caught here> ---
          File "/usr/local/lib/python2.7/site-packages/twisted/application/app.py", line 462, in getApplication
            application = service.loadApplication(filename, style, passphrase)
          File "/usr/local/lib/python2.7/site-packages/twisted/application/service.py", line 405, in loadApplication
            application = sob.loadValueFromFile(filename, 'application', passphrase)
          File "/usr/local/lib/python2.7/site-packages/twisted/persisted/sob.py", line 210, in loadValueFromFile
            exec fileObj in d, d
          File "punjab.tac", line 39, in <module>
            '/etc/pki/tls/cert.pem',
          File "/usr/local/lib/python2.7/site-packages/twisted/internet/ssl.py", line 68, in __init__
            self.cacheContext()
          File "/usr/local/lib/python2.7/site-packages/twisted/internet/ssl.py", line 78, in cacheContext
            ctx.use_privatekey_file(self.privateKeyFileName)
        OpenSSL.SSL.Error: [('x509 certificate routines', 'X509_check_private_key', 'key values mismatch')]

        Failed to load application: [('x509 certificate routines', 'X509_check_private_key', 'key values mismatch')]

    The relevant part of my punjab configuration is:

        sslContext = ssl.DefaultOpenSSLContextFactory(
            '/etc/pki/tls/private/ca.key',
            '/etc/pki/tls/cert.pem',
        )

    How can I resolve the above error?
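
    The OpenSSL error says the private key and the certificate do not belong to the same key pair. A quick way to confirm this outside of punjab (a sketch using the two paths from punjab.tac; for an RSA key the two digests must be identical):

        openssl rsa  -noout -modulus -in /etc/pki/tls/private/ca.key | openssl md5
        openssl x509 -noout -modulus -in /etc/pki/tls/cert.pem       | openssl md5

    If they differ, the factory is being pointed at a key that was not used to generate that certificate (the file names even suggest a CA key rather than the server key), which would explain the X509_check_private_key failure.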


  • I can't get my PC to start up the normal way.

    - by ssice
    I couldn't write a more accurate title. I am just unable to start the computer by pressing the power-on button. I checked the power supply and it seems to give good voltage values on every pin. And this is not a BIOS malfunction because of bad overclocking or anything else that may come to mind, and I will tell you why. It happens that an EPS (or any ATX-based) power supply can be powered on by the motherboard by jumping the 13th pin of the 24-pin ATX connector to COM/GND. I did that, after pushing the power-on button (which gave no visual response), and pwhaa! The machine turned on. I was able to read (and even write, if I wanted) BIOS values and then start any OS installed. The machine starts, so it's not any kind of misconfiguration; it seems to be something hardware related. I am able to power the machine on this way only if I have already pushed the power-on button: pushing the button alone does not power the machine, and of course jumping the pin without pushing the power-on button does not tell the motherboard anything, so the computer would not start up either. It's as if the logic that connects the power button to the 13th-pin-to-GND path cannot be activated. What can be the issue? How can I solve it? My configuration is as follows:

        CPU: AMD Phenom 9850 X4 Black Edition
        MB: ASUS Formula II AM2
        RAM: 2x2GB Corsair Dominator 5-5-5-15 2T @ 1066MHz DDR2 (tested also with only 1 module)
        GPU: 2x XFX nVIDIA GeForce 9600 GT XxX Alpha Dog Edition @ Core: 540MHz [SLi]
        Power Supply: Xilence 700W (ATX 12V 2.3 / EPS 12V 2.92 compatible)

    PS: I know the machine is like 2 years old. I hardly use it now, but my parents do.


  • Remove an apache alias subdirectory

    - by Hippyjim
    I'm using Apache 2 on Ubuntu 12.04. I added an alias for a subdirectory, to point to gitweb. I realised I should probably make it accessible only on https - so I removed the alias and restarted Apache. I can still navigate to http://xyz/gitweb - even with no alias in any of my config files. How do I remove it? EDIT: The config file looked like this before:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost

            DocumentRoot /home/administrator/webroot
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/administrator/webroot/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            Alias /gitweb/ /usr/share/gitweb/
            <Directory /usr/share/gitweb/>
                Options ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
                AllowOverride All
                order allow,deny
                Allow from all
                AddHandler cgi-script cgi
                DirectoryIndex gitweb.cgi
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    And this after:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost

            DocumentRoot /home/administrator/webroot
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/administrator/webroot/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>
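
    Two things worth ruling out before blaming Apache itself (a sketch, not from the original post): a stale alias definition pulled in from another enabled config, and the browser having cached the old response for /gitweb. On some Ubuntu releases the gitweb package ships its own Apache snippet under /etc/apache2/conf.d/, which would keep the alias alive independently of the vhost. Assuming the stock Ubuntu 12.04 layout:

        # is "gitweb" still mentioned anywhere Apache reads?
        grep -Rni "gitweb" /etc/apache2/
        # which vhosts and aliases are actually loaded after the restart?
        apache2ctl -S
        # bypass the browser cache entirely and look at the raw response:
        curl -sI http://xyz/gitweb/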


  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, so the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:

    - 2 tables
    - lots of INSERT
    - no UPDATE
    - some DELETE, once a day at most
    - some user-generated SELECT with JOIN
    - huge data set

    What would an optimal server configuration (software and hardware; I assume, for example, that RAID10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (like tables, indexes ...) if needed.
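
    On the software side, a sketch of postgresql.conf starting points that are commonly used for this kind of insert-heavy, read-mostly workload. The numbers assume a dedicated box with roughly 16 GB of RAM and are illustrative only, not tuned for the actual hardware (which isn't described in the post):

        shared_buffers = 4GB                  # roughly 25% of RAM
        effective_cache_size = 12GB           # what the OS page cache is expected to hold
        work_mem = 32MB                       # per sort/hash; helps the JOIN-heavy SELECTs
        maintenance_work_mem = 512MB          # faster index builds and vacuum
        checkpoint_segments = 32              # spread out checkpoint I/O under heavy INSERT
        checkpoint_completion_target = 0.9
        random_page_cost = 2.0                # lower than default if data sits on RAID10 / fast disks

    Indexes on the tag value column (and on the id columns used in the JOIN) will matter at least as much as these settings.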

