Daily Archives

Articles indexed Friday, September 7, 2012

Page 7 of 17

  • What's the straightforward way to implement one-to-many editing in list_editable in the Django admin?

    - by Nate Pinchot
    Given the following models:

        class Store(models.Model):
            name = models.CharField(max_length=150)

        class ItemGroup(models.Model):
            group = models.CharField(max_length=100)
            code = models.CharField(max_length=20)

        class ItemType(models.Model):
            store = models.ForeignKey(Store, on_delete=models.CASCADE, related_name="item_types")
            item_group = models.ForeignKey(ItemGroup, on_delete=models.CASCADE)
            type = models.CharField(max_length=100)

    Inlines handle adding multiple item_types to a Store nicely when viewing a single Store. The content admin team would like to be able to edit stores and their types in bulk. Is there a simple way to implement Store.item_types in list_editable which also allows adding new records, similar to horizontal_filter? If not, is there a straightforward guide that shows how to implement a custom list_editable template? I've been Googling but haven't been able to come up with anything. Also, if there is a simpler or better way to set up these models that would make this easier to implement, feel free to comment.
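    A minimal sketch of one common workaround, assuming the models above: list_editable only accepts concrete fields of the changelist's own model, so instead of squeezing the one-to-many relation into Store's changelist, register ItemType itself with list_editable (in the app's admin.py). New rows can still be added per store through the existing inline.

        from django.contrib import admin
        from .models import Store, ItemType

        class ItemTypeInline(admin.TabularInline):
            model = ItemType
            extra = 1  # one blank row for adding a new type per store

        class StoreAdmin(admin.ModelAdmin):
            inlines = [ItemTypeInline]  # per-store editing, as the asker already has

        class ItemTypeAdmin(admin.ModelAdmin):
            # Bulk editing across stores: every ItemType row editable in place.
            # list_editable fields must also appear in list_display, and the
            # first (link) column may not be editable, hence "id" up front.
            list_display = ("id", "store", "item_group", "type")
            list_editable = ("store", "item_group", "type")
            list_filter = ("store",)

        admin.site.register(Store, StoreAdmin)
        admin.site.register(ItemType, ItemTypeAdmin)

    This avoids a custom changelist template entirely; the trade-off is that the team bulk-edits on the ItemType list, filtered by store, rather than on the Store list itself.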

    Read the article

  • Monodroid and CI-Servers

    - by Tobias Schittkowski
    I would like to automate the test and release process for my Monodroid app via Jenkins. I found some information on using Jenkins with "normal" Android projects:

    https://jenkins-ci.org/content/getting-started-building-android-apps-hudson
    http://androiddevresources.com/blog/2012/04/01/building-an-android-app-with-jenkins/

    Does anyone have experience building a Monodroid app on Jenkins and running NUnit tests? Are there some ready-to-modify scripts?

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.)

    I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them.

    The recruiting problem

    I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash").

    The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem.

    Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you.

    But here's the real reason why the second experience is wrong: you're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C# or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software.

    The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years.

    So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners.

    The contractor problem

    I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, and there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors.

    The education problem

    I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter.

    As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it.

    So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. I'm hell bent on passing that experience to others.

    Process issues

    If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile; it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery; now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff.

    The prognosis

    I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • Why does dstat fail with the --tcp option?

    - by hugemeow
    Why does the --tcp option fail? Should I install some package? If yes, which package should I install? This question is not the same as http://stackoverflow.com/questions/10475991/tcp-sockets-double-messages

        dstat --tcp
        Module tcp has problems. (Cannot open file /proc/net/tcp6.)
        None of the stats you selected are available.

    What is the problem with the tcp module, and how do I find it? What should I do to fix this issue?
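    For diagnosis, a tiny sketch (an assumption based on the error text above, not on a reading of dstat's source) that checks for the files the tcp module appears to need. If /proc/net/tcp6 is missing, the kernel most likely has no IPv6 support loaded; modprobe ipv6, or a kernel built with IPv6, is the usual remedy.

        import os

        # The error names /proc/net/tcp6; the IPv4 counterpart is /proc/net/tcp.
        for path in ("/proc/net/tcp", "/proc/net/tcp6"):
            status = "present" if os.path.exists(path) else "MISSING"
            print(f"{path}: {status}")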

    Read the article

  • How to install Vips on CentOS?

    - by A4J
    I am trying to install Vips on my CentOS box. I've downloaded the latest files:

        wget http://www.vips.ecs.soton.ac.uk/supported/current/vips-7.30.0.tar.gz
        wget http://www.vips.ecs.soton.ac.uk/supported/current/nip2-7.30.1.tar.gz

    And on ./configure I get:

        configure: error: Package requirements (glib-2.0 >= 2.6 gmodule-2.0 >= 2.4 libxml-2.0 gobject-2.0) were not met:
        No package 'glib-2.0' found
        No package 'gmodule-2.0' found
        No package 'gobject-2.0' found

    However, I have tried yum install glib2 and rerun ./configure but get the same error. Am I doing something wrong? Does anyone know how to install it correctly on CentOS 6?

    Read the article

  • Remote Desktop Services Gateway Issue

    - by AVandelay05
    Alright fellow techies, here's the rundown. I have installed Server 2008 R2 Remote Desktop Services on a VM in my network. I installed the following RD role services: RD Session Host, Licensing, Connection Broker, Gateway, Web Access. When I set things up originally, the gateway server and RDWeb worked as they should locally. After getting things running locally (remoteserver.domainname.local) I wanted to test things externally. From the outside, I couldn't get things running (meaning I could connect to RD Web Access externally, but when I tried to run an app I would get the message "can't connect/find computer").

    Here's my setup for external access:

    - The VM has every RD Services role service installed on it, meaning it acts as gateway, RD Web Access, session host, licensing, the whole bit.
    - I made a self-signed certificate on the gateway server (gateway.domainname.net is the cert name).
    - Internally, I have a secondary forward-lookup zone called domainname.net with an A record "gateway" pointing to the local IP of the gateway server.
    - On our public DNS (domainname.net) I have an A record "gateway". This is to access RDWeb externally.
    - In IIS I have the following authentication settings. RDWeb: all disabled except for anonymous authentication. Rpc: all disabled except for basic and Windows. RpcWithCert: all disabled except for Windows authentication.
    - I have the necessary web access config in our SonicWALL TZ210 (HTTPS and RDP, external IP pointing to the local IP of the RDS server).
    - RAP and CAP have the correct user and computer groups, authentication, and allowed devices.

    After all of this, here's what happens accessing externally. I can log in correctly to RD Web Access (I've tried a bogus login; I can't log in with it, so that's working properly). I see the apps for use. I click on an app, click connect, the credential window opens, I put in the correct user creds, it tries to connect to the gateway server, but then the cred window comes back in view. I tried to reach a limit of failed logins, but never reached one, haha. So from the same external client machine I try to connect to the gateway through a Remote Desktop connection. I put in the correct gateway settings in the RD window, try to connect, and get the same results as I did in RD Web Access.

    I checked the event logs on the RD Services machine and saw the following event IDs around the time I tried to log in externally:

    ID 6037: "The program svchost.exe, with the assigned process ID 2168, could not authenticate locally by using the target name host/gateway.domainname.net. The target name used is not valid. A target name should refer to one of the local computer names, for example, the DNS host name. Try a different target name."

    ID 10 RADWebAccess: "RD Web Access was unable to access gateway.domainname.net, which is the server that is specified as running the RemoteApp and Desktop Connection Management service. Ensure that the computer account of the RD Web Access server is a member of the TS Web Access Computers security group on gateway.domainname.net"

    ID 4625: "An account failed to log on. Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: Administrator Account Domain: gateway.domainname.net Failure Information: Failure Reason: Unknown user name or bad password. Status: 0xc000006d Sub Status: 0xc000006a Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: USER-LAPTOP Source Network Address: External IP Source Port: 63125 Detailed Authentication Information: Logon Process: NtLmSsp Authentication Package: NTLM Transited Services: - Package Name (NTLM only): - Key Length: 0 This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. - Transited services indicate which intermediate services have participated in this logon request. - Package name indicates which sub-protocol was used among the NTLM protocols."

    I don't think the VM has a null SID; the SID of the VM and its physical host are different. I can access the blank page for RPC externally using the external gateway name. It seems like authentication is the problem. Also, is it a problem that the external name of the gateway server doesn't match the local name? The external name (which the cert is based on) is gateway.domainname.net and the internal name is remoteserver.domainname.local. That's the only thing I can think of that would be the problem, but the external name has to be different from the local one, right? Internally, I ping gateway.domainname.net and it gives me the correct local IP of the server. Now, there isn't an actual computer by that name in AD, but I don't know how I would achieve that? I hope I've been clear... any help would be appreciated. I think I'm close to achieving this. :)

    Read the article

  • How can I systematically shut down Windows services in order?

    - by cbmeeks
    We have this open source application that has three (3) services. For the purpose of this question, let's call them A, B, and C. Now, they have to be shut down in a specific order: A, then B, then C. If B shuts down before A then we run into all kinds of problems; the same is true if C shuts down before B or A. Plus, each service can take a different amount of time to shut down depending on how many users were using it. Oh, and this needs to be wrapped up in a DOS batch file or something a non-techy user could just double-click to initiate. (PowerShell is not out of the question, but I've never used it.) Also, I'm a C# coder, so that could be used too. Anyway, when the restart is initiated the following needs to happen:

    1) Initiate shutdown of service A
    2) When service A is down, it should trigger the shutdown of service B
    3) When service B is down, it should trigger the shutdown of service C
    4) When service C is down, it should trigger the START UP of service A
    5) When service A is UP, it should trigger the START UP of service B
    6) When service B is UP, it should trigger the START UP of service C

    So as you can see, each stop/start needs to wait on the previous one to be completely finished before moving on. And since each service can take a few seconds to a few minutes, we can't use any kind of timing tricks. Suggestions greatly appreciated. Thanks!
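    A minimal sketch in Python (the asker is open to anything scriptable): stop A, B, and C in order, waiting for each to fully stop before touching the next, then start them again in the same order. The service names are placeholders; "sc" is the standard Windows service control utility.

        import subprocess
        import time

        SERVICES = ["ServiceA", "ServiceB", "ServiceC"]  # hypothetical names

        def state(name):
            # Parse "sc query" output for the service's current state token.
            out = subprocess.run(["sc", "query", name],
                                 capture_output=True, text=True).stdout
            for token in ("STOP_PENDING", "START_PENDING", "STOPPED", "RUNNING"):
                if token in out:
                    return token
            return "UNKNOWN"

        def wait_for(name, target):
            # No timing tricks: poll until the service reports the target state.
            while state(name) != target:
                time.sleep(2)

        for svc in SERVICES:                 # A, then B, then C
            subprocess.run(["sc", "stop", svc])
            wait_for(svc, "STOPPED")

        for svc in SERVICES:                 # A, then B, then C again
            subprocess.run(["sc", "start", svc])
            wait_for(svc, "RUNNING")

    For the PowerShell route, note that Stop-Service blocks until the service has actually stopped, so a three-line PowerShell script would also satisfy the ordering requirement.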

    Read the article

  • Replace IIS 403 with 404 for Directory Listing

    - by dahlbyk
    Is it possible to have IIS (6 or 7.5) return a 404 Not Found (instead of 403 Forbidden) when a disallowed directory listing is requested? A security scanning service I use thinks the 403 is revealing something "potentially sensitive", when in fact it's just not a valid URL. My workaround is to drop a default.aspx into each directory that returns an empty 404 page, but there has to be a better way...

    Read the article

  • Sending email with Thunderbird + Postfix + Zarafa does not work

    - by Sven Jung
    I installed Zarafa on my vserver and use Postfix as the MTA. The webaccess works fine; I can receive and send emails, and receiving mail with Thunderbird (IMAP SSL/TLS) also works. But there is a problem sending emails with Thunderbird. I set up an account in Thunderbird with an IMAP SSL/TLS connection, which works fine, and a STARTTLS SMTP connection on port 25 for the outgoing mail server. If I try to send an email with Thunderbird I get an error: 5.7.1 Relay access denied. This is my mail.log:

        Sep 7 16:10:07 postfix/smtpd[6153]: connect from p4FE06C0A.dip.t-dialin.net[79.224.110.10]
        Sep 7 16:10:08 postfix/smtpd[6153]: NOQUEUE: reject: RCPT from p4FE06C0A.dip.t-dialin.net[79.224.110.10]: 554 5.7.1 <[email protected]>: Relay access denie$
        Sep 7 16:10:10 postfix/smtpd[6153]: disconnect from p4FE06C0A.dip.t-dialin.net[79.224.110.10]

    and this is my /etc/postfix/main.cf:

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        virtual_mailbox_domains = firstdomain.de, seconddomain.de
        virtual_mailbox_maps = hash:/etc/postfix/virtual
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_transport = lmtp:127.0.0.1:2003
        myhostname = mail.firstdomain.de
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = ipv4

    I don't know what to do, because sending mail to internal and external addresses works from the webaccess. Perhaps somebody can help me?
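    The log shows the client being rejected at RCPT time as an unauthenticated relay: Postfix only relays for mynetworks (here, essentially localhost), which is why webaccess on the server itself works while Thunderbird connecting from a dial-up address does not. The usual fix is SMTP authentication (smtpd_sasl_* settings plus permit_sasl_authenticated in smtpd_recipient_restrictions), which this main.cf does not configure. A hedged diagnostic sketch, assuming Python 3 on any client machine, to confirm the server currently advertises no AUTH after STARTTLS:

        import smtplib

        # Hostname taken from the question's main.cf (myhostname).
        with smtplib.SMTP("mail.firstdomain.de", 25, timeout=10) as smtp:
            smtp.ehlo()
            smtp.starttls()
            smtp.ehlo()  # re-EHLO after TLS to get the post-TLS capabilities
            print("AUTH advertised:", smtp.has_extn("auth"))

    If this prints False, Thunderbird has no way to authenticate, and every send from outside mynetworks will be refused as relaying.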

    Read the article

  • gunzip unexpected end of file

    - by Mark J Seger
    I like to write high-volume data directly to a gzip file via Perl. Works like a champ! However, on some rare occasions I've found that gunzip can't uncompress them, declaring an unexpected end-of-file. The thing is, I can uncompress such a file with a simple Perl script, which probably just misses the corrupted record(s) at the end. My question is: does anyone know of a way to use a standard utility to do the same thing? Maybe with a 'compress-as-much-as-you-can' switch? -mark
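    A sketch of the "decompress-as-much-as-you-can" recovery the asker's Perl script performs, using Python's zlib: stream-decompress and stop at the first error, keeping everything that decoded cleanly before the truncation.

        import sys
        import zlib

        # Usage: python recover.py damaged.gz recovered.out
        decomp = zlib.decompressobj(zlib.MAX_WBITS | 16)  # +16: expect a gzip header
        with open(sys.argv[1], "rb") as src, open(sys.argv[2], "wb") as dst:
            while True:
                chunk = src.read(65536)
                if not chunk:
                    break
                try:
                    dst.write(decomp.decompress(chunk))
                except zlib.error:
                    break  # corrupted/truncated tail reached; keep what we have

    As for a stock utility: zcat damaged.gz > recovered (or gzip -dc) writes everything it can decode to stdout before failing with the end-of-file error, so redirecting its output and ignoring the nonzero exit status achieves much the same thing.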

    Read the article

  • My network manager stopped running shortly after I installed VirtualBox 4.0, reporting "network manager not running"

    - by wangolo joel
    I fell into a snare when I downloaded and installed VirtualBox 4.0 on my Ubuntu 10.04. When I started VirtualBox, I saw that the network icon on my desktop showed a red X, indicating I am disconnected, and this happened shortly after I installed VirtualBox 4.0. I have searched the web and found some people who have faced the same problem, but the solutions that worked for them (restarting, starting, and stopping the network manager) didn't work for me. I was even thinking of uninstalling VirtualBox, but I still have no network connection. I truly need your help.

    Read the article

  • Send nginx X-Accel-Redirect request from remote server

    - by phingage
    I have 2 servers: the first (domain.com) is a Django/Apache server; the second (f1.domain.com) is a file server (nginx) where some files are protected and should be downloadable only by registered users. So I have set up an nginx server with:

        server {
            listen 80 default_server;
            server_name *.domanin.com;
            access_log /home/domanin/logs/access.log;
            location /files/ {
                internal;
                root /home/domanin;
            }
        }

    From Django I send a request via the X-Accel-Redirect header, but it doesn't work, I think because it comes from a remote server. How can I accomplish my task? Regards!
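    The catch here is that X-Accel-Redirect is acted on by the nginx instance that proxies the response carrying the header; it is not a redirect a browser or a second server can follow. Since domain.com runs Django behind Apache, no nginx ever sees the header, so f1.domain.com cannot serve the file this way. A minimal sketch of the Django side (view name and auth check are illustrative, not the asker's code), with the constraint annotated; the usual ways out are routing downloads through the nginx on f1.domain.com, or having Django hand out time-limited signed URLs that the file server validates.

        from django.http import HttpResponse, HttpResponseForbidden

        def protected_download(request, path):
            if not request.user.is_authenticated:
                return HttpResponseForbidden()
            response = HttpResponse()
            # Only honored if *this* response passes back through the nginx
            # that owns the internal /files/ location; with Apache in front,
            # the header is ignored.
            response["X-Accel-Redirect"] = "/files/%s" % path
            response["Content-Type"] = ""  # let nginx pick the type from the file
            return response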

    Read the article

  • Can I change the subnet on the VPN side of a server without having to change its whole LAN (to avoid collisions)?

    - by Gusty
    Ohai, I've got an XP server with a client connecting via VPN. The problem (as we all know) is that sometimes the subnets clash. Instead of changing the whole server network every time this happens, can't I just have the server appear to have a different subnet to VPN clients? I have no interest in accessing computers other than the server on its LAN. I tried turning off automatic DHCP in the server's VPN settings and changed it to 172.31.255.x. The client gets the address assigned all right and it can ping 172.31.255.1, but I cannot ping the server name (because the DNS on the client's VPN connection points at 192.168.0.1?). Wisdom? PS: everything worked until the client hit a network with the same subnet as the server's LAN. Thanks.

    Read the article

  • IIS 7.5 Site being redirected from hostname to IP

    - by TuxOtaku
    So here's the problem. I have a site in IIS that is being redirected from the site's hostname to its IP address. The problem is, I haven't even set up redirects at all for the site; and yet when I analyze the headers that come through as the page loads, I see clear as day, "302 Temporary Redirect". What could be causing this? I thought perhaps it was something in my application's DB (it's a PHP/MySQL application), but I have ruled that out. I also thought that it might be a rewrite rule somewhere, so I deleted all my rewrite rules as well.

    Read the article

  • How to set up Dell OMSA tools on Debian 6 (Squeeze) (PE2950)

    - by javano
    I assume I have set them up incorrectly or am missing a library or dependency. When I log into the OMSA web interface I can't see anything. Also, omreport tells me nothing:

        root@box:~# omreport storage controller
        No controllers found

    I assume these two use the same source of information, so whatever is wrong will fix them both. I set up OMSA as per these instructions. Also, I have compiled MegaCLI (as this is a PowerEdge 2950 with a PERC 6/i controller) and have used that to update the RAID firmware, so that works, but the Dell tools don't. What have I missed during setup?

        root@kvm1:~# cat /etc/issue
        Debian GNU/Linux 6.0 \n \l

        root@kvm1:~# uname -a
        Linux kvm1.mivoice.cust.vostron.net 3.4.9 #1 SMP Wed Aug 22 19:08:46 BST 2012 x86_64 GNU/Linux

    Read the article

  • CentOS 6.3 KVM: forwarding external IPs to guests

    - by user1111702
    I have a CentOS 6.3 server with KVM installed. The server has 4 external IPs and one NIC: 176.9.xxx.xx1, 176.9.xxx.xx2, 176.9.xxx.xx3, 176.9.xxx.xx4. I use the following configuration, with ifcfg-eth0 as slave to ifcfg-br0. The configuration in ifcfg-eth0 is:

        DEVICE=eth0
        ONBOOT=yes
        BRIDGE=br0
        HWADDR=14:da:e9:b3:8b:99

    and in ifcfg-br0:

        DEVICE=br0
        TYPE=Bridge
        BOOTPROTO=static
        BROADCAST=176.9.xxx.xxx
        IPADDR=176.9.xxx.xx1
        NETMASK=255.255.255.0
        SCOPE="peer 176.9.xxx.xxx"

    and I have 3 more aliases for br0. br0:1 gets the traffic from the second external IP:

        DEVICE=br0:1
        IPADDR=176.9.xxx.xx2
        NETMASK=255.255.255.248
        ONBOOT=yes

    br0:2 gets the traffic from the third external IP:

        DEVICE=br0:2
        IPADDR=176.9.xxx.xx3
        NETMASK=255.255.255.248
        ONBOOT=yes

    br0:3 gets the traffic from the fourth external IP:

        DEVICE=br0:3
        IPADDR=176.9.xxx.xx4
        NETMASK=255.255.255.248
        ONBOOT=yes

    The above settings work fine and I receive the traffic from all the external IPs. My problem is that I want to pass the traffic from each external IP to a specific virtual guest on my server, i.e. traffic that comes to 176.9.xxx.xx2 must pass to virtual machine 1, 176.9.xxx.xx3 to virtual machine 2, and 176.9.xxx.xx4 to virtual machine 3. Can you please help me achieve this? What are the settings on the host, and what should I do on the guests? Thank you in advance.
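    The question leaves the guest addressing open, so this is only a hedged sketch, not a verified recipe for this exact setup. On a routed/NAT host (with net.ipv4.ip_forward=1), one common approach is a DNAT/SNAT pair per public IP, mapping it to a guest's private address; the 192.168.122.x addresses below are hypothetical libvirt defaults. With a plain bridge, the simpler alternative is to configure each public IP directly inside its guest and skip iptables entirely.

        import subprocess

        # Public IPs from the question mapped to hypothetical guest addresses.
        FORWARDS = {
            "176.9.xxx.xx2": "192.168.122.11",  # virtual machine 1
            "176.9.xxx.xx3": "192.168.122.12",  # virtual machine 2
            "176.9.xxx.xx4": "192.168.122.13",  # virtual machine 3
        }

        for public, guest in FORWARDS.items():
            # Incoming traffic to the public IP is rewritten to the guest...
            subprocess.run(["iptables", "-t", "nat", "-A", "PREROUTING",
                            "-d", public, "-j", "DNAT",
                            "--to-destination", guest], check=True)
            # ...and the guest's outbound traffic leaves as its public IP.
            subprocess.run(["iptables", "-t", "nat", "-A", "POSTROUTING",
                            "-s", guest, "-j", "SNAT",
                            "--to-source", public], check=True)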

    Read the article

  • Pass HAProxy healthcheck requests as User-agent "LB-Check" to the backend webservers(apache)

    - by Joseph
    I have HAProxy set up in front of webservers (Apache) for load balancing. Healthchecks for these webservers are also configured in HAProxy:

        option httpchk HEAD /healthcheck.txt HTTP/1.0

    Is it possible to send these healthcheck requests to the backend webservers with an "LB-Check" User-Agent (or some other marker), so that I can distinguish them from other log entries? I don't want to use the "dontlog" option, as I don't want to miss these entries.
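    HAProxy's option httpchk accepts extra headers after the version string, written with an escaped CRLF, so something like option httpchk HEAD /healthcheck.txt HTTP/1.0\r\nUser-Agent:\ LB-Check should tag the checks (check your HAProxy version's documentation for the exact escaping). Once the checks carry that agent string, a small sketch for splitting an Apache combined-format log into healthchecks and real traffic, keeping both (no "dontlog"); the paths are placeholders:

        # Split an Apache combined log on the LB-Check user agent, which
        # appears quoted in the final field of each combined-format line.
        with open("/var/log/apache2/access.log") as log, \
             open("healthchecks.log", "w") as checks, \
             open("traffic.log", "w") as traffic:
            for line in log:
                (checks if '"LB-Check"' in line else traffic).write(line)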

    Read the article

  • limit_req causing 503 Service Unavailable

    - by Hermione
    I'm frequently getting 503 Service Unavailable when I have limit_req turned on. In my logs:

        [error] 22963#0: *70136 limiting requests, excess: 1.000 by zone "blitz", client: 64.xxx.xxx.xx, server: dat.com, request: "GET /id/85 HTTP/1.1", host: "dat.com"

    My nginx configuration:

        limit_req_zone $binary_remote_addr zone=blitz:60m rate=5r/s;
        limit_req zone=blitz;

    How do I resolve this issue? Isn't 60m already big enough? All my static files are hosted on Amazon S3.
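    The zone size (60m) only sets how much memory is reserved for tracking client states; it has nothing to do with the allowance. With no burst parameter, any request arriving faster than the 5r/s pace (roughly, within 200 ms of the previous one from the same client) is rejected outright, which is what "excess: 1.000" in the log records. The usual fix is a burst buffer, e.g. limit_req zone=blitz burst=10; (optionally with nodelay). A quick reproduction sketch, using the host and path from the log line above:

        from urllib.request import urlopen
        from urllib.error import HTTPError

        # Fire a rapid batch of requests; without a burst buffer, most of the
        # batch after the first request should come back as 503.
        for i in range(8):
            try:
                code = urlopen("http://dat.com/id/85", timeout=5).getcode()
            except HTTPError as e:
                code = e.code
            print(i, code)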

    Read the article

  • I want to deploy my php based web application with apache-ant. How can I do that?

    - by codeperl
    I Googled it, but unfortunately did not find a specific answer. I am a fan of the command line and typing, so I want to deploy my PHP-based web application with Apache Ant. How can I do that? I also want to practice deployment on my local PC; is that possible? There is Phing, which from what I've heard works on top of Apache Ant for PHP application deployment, but I want to face the hassle and write it by hand myself.

    Read the article

  • Why are my httpd mpm_prefork processes being reaped so quickly?

    - by Dan Pritts
    We've got a system running RHEL6, x64. We are using a local installation of Apache 2.2.22 from source. We serve primarily: mod_perl applications (with a local installation of Perl 5.16.0) and Tomcat applications proxied with mod_jk.

    Here is some context; the main question is below. All of this talks to an Oracle backend. We are having issues with Oracle becoming unresponsive. We think this is because we're hitting the maximum process limit in Oracle. We've upped the process limit, but now we are hitting memory pressure on the Oracle server. We have tons of Oracle sessions sitting idle. I can trace a bunch of them back to the httpd processes. We have mod_perl's Apache::DBI start up a new connection to the database with each httpd child that's spawned. We are concerned that these are not always getting closed out properly when the httpd's exit... and the httpd's are exiting very frequently. I know that it would be good to modify the mod_perl applications to use some better form of db connection pooling; we plan to pursue that but would like to solve our immediate problem sooner.

    So here's the main question. We are using the prefork MPM. The apache child processes are lasting at most a few minutes. Log analysis shows that each one is serving fewer than 50 clients before exiting; the last request each child serves is OPTIONS * HTTP/1.0 on some sort of internal connection; I'm under the impression that this is a "ping" from the master process. I've adjusted the MPM config as follows. I didn't want to raise MinSpareServers too high, because, after all, I'm trying to minimize the number of sessions to Oracle.

        MinSpareServers 5
        MaxSpareServers 30
        MaxClients 150
        MaxRequestsPerChild 10000

    Right now we're serving 250-300 requests per minute. We've got 21 httpd's running, the eldest (other than the master, owned by root) being 3 minutes old. This rate of reaping of the apache children really seems excessive. What could be causing it?

    Apache was built with:

        $ ./configure --prefix=/opt/apache --with-ssl=/usr/lib --enable-expires --enable-ext-filter --enable-info --enable-mime-magic --enable-rewrite --enable-so --enable-speling --enable-ssl --enable-usertrack --enable-proxy --enable-headers --enable-log-forensic

    Apache config info:

        % /opt/apache/bin/httpd -V
        Server version: Apache/2.2.22 (Unix)
        Server built:   Jul 23 2012 22:30:13
        Server's Module Magic Number: 20051115:30
        Server loaded:  APR 1.4.5, APR-Util 1.4.1
        Compiled using: APR 1.4.5, APR-Util 1.4.1
        Architecture:   64-bit
        Server MPM:     Prefork
          threaded:     no
            forked:     yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/prefork"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT="/opt/apache"
         -D SUEXEC_BIN="/opt/apache/bin/suexec"
         -D DEFAULT_PIDLOG="logs/httpd.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_LOCKFILE="logs/accept.lock"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="conf/mime.types"
         -D SERVER_CONFIG_FILE="conf/httpd.conf"

    Modules are compiled into apache rather than shared libs:

        % /opt/apache/bin/httpd -l
        Compiled in modules:
        core.c mod_authn_file.c mod_authn_default.c mod_authz_host.c mod_authz_groupfile.c mod_authz_user.c mod_authz_default.c mod_auth_basic.c mod_ext_filter.c mod_include.c mod_filter.c mod_log_config.c mod_log_forensic.c mod_env.c mod_mime_magic.c mod_expires.c mod_headers.c mod_usertrack.c mod_setenvif.c mod_version.c mod_proxy.c mod_proxy_connect.c mod_proxy_ftp.c mod_proxy_http.c mod_proxy_scgi.c mod_proxy_ajp.c mod_proxy_balancer.c mod_ssl.c prefork.c http_core.c mod_mime.c mod_status.c mod_autoindex.c mod_asis.c mod_info.c mod_cgi.c mod_negotiation.c mod_dir.c mod_actions.c mod_speling.c mod_userdir.c mod_alias.c mod_rewrite.c mod_so.c

    One final note: the Red Hat httpd, apr, and perl packages are all installed, but ldd shows that none of those libraries are linked with the running httpd.
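    Before tuning further, it may help to watch how the pool actually breathes. mod_status is compiled in (mod_status.c appears in the module list above), so assuming a <Location /server-status> with SetHandler server-status and ExtendedStatus On is enabled in httpd.conf, a small polling sketch can correlate child exits with idle culling by MaxSpareServers:

        import time
        from urllib.request import urlopen

        # Poll mod_status's machine-readable output and print worker counts;
        # a sawtooth in the idle count alongside short child lifetimes would
        # point at the parent reaping surplus idle children.
        while True:
            with urlopen("http://localhost/server-status?auto", timeout=5) as resp:
                fields = dict(line.split(": ", 1)
                              for line in resp.read().decode().splitlines()
                              if ": " in line)
            print(time.strftime("%H:%M:%S"),
                  "busy:", fields.get("BusyWorkers"),
                  "idle:", fields.get("IdleWorkers"))
            time.sleep(10)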

    Read the article

  • MS Clear Screen Saver doesn't work

    - by ufoq
    I have a problem with a really exotic thing: the Microsoft Clear Screen Saver. As its name suggests, it's a screen saver that's transparent (MS shipped it as part of the Windows 2000 Resource Kit). When you move the mouse or hit a key, the "lock" dialog appears. I would like to use this to view servers' desktops without needing to log in. I tested it on my XP machines, and it works flawlessly. But on W2K3 servers it doesn't work: after the screensaver timeout, the error message is displayed: "The Clear Screen Saver cannot display the user desktop after the workstation has been locked"

    Read the article

  • How to force mdadm to stop RAID5 array?

    - by lucek
    I have a /dev/md127 RAID5 array that consisted of four drives. I managed to hot-remove them from the array, and currently /dev/md127 does not have any drives:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid1 sdd1[0] sda1[1]
              304052032 blocks super 1.2 [2/2] [UU]
        md1 : active raid0 sda5[1] sdd5[0]
              16770048 blocks super 1.2 512k chunks
        md127 : active raid5 super 1.2 level 5, 512k chunk, algorithm 2 [4/0] [____]
        unused devices: <none>

    and:

        mdadm --detail /dev/md127
        /dev/md127:
                Version : 1.2
          Creation Time : Thu Sep 6 10:39:57 2012
             Raid Level : raid5
             Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
          Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
           Raid Devices : 4
          Total Devices : 0
            Persistence : Superblock is persistent
            Update Time : Fri Sep 7 17:19:47 2012
                  State : clean, FAILED
         Active Devices : 0
        Working Devices : 0
         Failed Devices : 0
          Spare Devices : 0
                 Layout : left-symmetric
             Chunk Size : 512K
            Number   Major   Minor   RaidDevice   State
               0       0       0        0         removed
               1       0       0        1         removed
               2       0       0        2         removed
               3       0       0        3         removed

    I've tried mdadm --stop /dev/md127, but:

        mdadm --stop /dev/md127
        mdadm: Cannot get exclusive access to /dev/md127: Perhaps a running process, mounted filesystem or active volume group?

    I made sure it's unmounted with umount -l /dev/md127 and confirmed that it indeed is unmounted:

        umount /dev/md127
        umount: /dev/md127: not mounted

    I've tried to zero the superblock of each drive and I get (for each drive):

        mdadm --zero-superblock /dev/sde1
        mdadm: Unrecognised md component device - /dev/sde1

    Here's the output of lsof | grep md127:

        lsof | grep md127
        md127_rai 276 root cwd DIR 9,0 4096 2 /
        md127_rai 276 root rtd DIR 9,0 4096 2 /
        md127_rai 276 root txt unknown /proc/276/exe

    What else can I do? LVM is not even installed, so it can't be a factor.

    Read the article

  • Some keys don't appear in xev

    - by sazary
    I can't change my screen brightness by pressing the brightness keys on the keyboard, but it does change through /sys/class/backlight/acpi_video0/brightness. So I started to diagnose the issue: I ran xev to see what happens when I press Fn+F5 or Fn+F6, which are the brightness controls, and xev didn't show anything. I must note that it does notify me when I press the volume controls, for example. Are the F5 and F6 keys working? Yes, but not when I press them with the Fn key. Do I have any entries in xmodmap for brightness? Yes:

        keycode 232 = XF86MonBrightnessDown NoSymbol XF86MonBrightnessDown
        keycode 233 = XF86MonBrightnessUp NoSymbol XF86MonBrightnessUp

    What's happening, and what should I do to correct it? I'm using a Vaio S series laptop with Kubuntu Precise on it. Thanks!

    Read the article

  • CD Drive not discovered

    - by user1009073
    I have a self-built computer. It uses a P6T Deluxe motherboard, which has both SATA and IDE ports. It was built several years ago and had an IDE CD/DVD drive. That drive started going bad (it would not burn CDs correctly), so I decided to replace it. I had difficulty finding an IDE DVD drive, so I bought a SATA DVD drive (Sony Optiarc 24x, Newegg URL: http://www.newegg.com/Product/Product.aspx?Item=N82E16827118067). I opened the computer and took out the old DVD drive. I left the IDE cable in place, connected to the motherboard, but it is not connected to any drives. I hooked up the new DVD drive with both power and a SATA data cable (SATA port 3 if I recall). When I power on my computer, the drive does NOT show up in Explorer. I can hit the DVD eject button and the drive will open up, so I know it is at least getting power. I thought maybe something in the BIOS. When I go to BIOS boot devices, it shows (1) floppy, (2) my hard drive, (3) ATAPI CD drive. The only other possible BIOS option I could find was under 'Storage Configuration': "Configure Storage as". My setting is RAID, since I am using two drives in a RAID configuration; the other options were IDE and AHCI. Other than trying to find an IDE DVD drive, is there anything else I can try? The drive does not show up at all in Windows Explorer. I did put in a CD thinking that might help, but nothing happened. Thanks, GS

    Read the article

  • iPod Touch 2g not recognised by iTunes (more information inside)

    - by Jason
    My iPod Touch isn't being recognised by iTunes. Details: iPod Touch: 2nd generation / Laptop: Windows Vista / iTunes: latest version. I have tried: using different cables; restarting my laptop and iPod; reinstalling iTunes; restarting the Apple services; putting my iPod into restore mode; and resetting my iPod. Still no luck getting it recognised by iTunes. This iPod works on other computers, and different iPods work on my laptop. When I stopped all of the Apple services on my laptop, I got a message from iTunes when I connected my iPod saying that I needed to enable the services for it to work. But when I did enable the services again, the iPod still didn't show up in iTunes. I can't think what else I can do to solve the problem. Any ideas? Thank you very much in advance!

    Read the article
