Search Results

Search found 39047 results on 1562 pages for 'process control'.

Page 367/1562 | < Previous Page | 363 364 365 366 367 368 369 370 371 372 373 374  | Next Page >

  • There is No Scrum without Agile

    - by John K. Hines
    It's been interesting for me to dive a little deeper into Scrum after realizing how fragile its adoption can be. I've been particularly impressed with James Shore's essay "Kaizen and Kaikaku" and the Net Objectives post "There are Better Alternatives to Scrum" by Alan Shalloway. The bottom line: You can't execute Scrum well without being Agile.
    Personally, I'm the rare developer who has an interest in project management. I think the methodology to deliver software is interesting, and that there are many roles whose job exists to make software development easier. As a project lead I've seen Scrum deliver for disciplined, highly motivated teams with solid engineering practices. It definitely made my job an order of magnitude easier. As a developer I've experienced huge rewards from having a well-defined pipeline of tasks that were consistently delivered with high quality in short iterations. In both of these cases Scrum was an addition to a fundamentally solid process and a huge benefit to the team.
    The question I'm now facing is how Scrum fits into organizations without solid engineering practices. The trend that concerns me is one of Scrum being mandated as the single development process across teams where it may not apply. And we have to realize that Scrum itself isn't even a development process. This is what worries me the most - the assumption that Scrum on its own increases developer efficiency when it is essentially an exercise in project management.
    Jim's essay quotes Tobias Mayer writing, "Scrum is a framework for surfacing organizational dysfunction." I'm unsure whether a Vice President of Software Development wants to hear that, reality notwithstanding. Our Scrum adoption has surfaced a great deal of dysfunction, but I feel the original assumption was that we would experience increased efficiency. It's starting to feel like a blended approach - Agile/XP techniques for developers, Scrum for project managers - may be a better fit. Or at least, a better way of framing the conversation.
    Technorati tags: Agile Scrum

    Read the article

  • Sound in Ubuntu 12.10 on Chromebook

    - by Paul Mennega
    I've got an HP Chromebook 14 that I've rooted and installed Ubuntu on via Crouton (upgraded to 12.10), and most things are working nicely. One issue that I am having trouble solving relates to sound. I've got sound output that I can control via the individual app (e.g. YouTube), but when trying to control the volume via the built-in xfce4-mixer, I get the following error:
        GStreamer was unable to detect any sound devices. Some sound system specific GStreamer packages may be missing. It may also be a permissions problem.
    Opening a terminal, I can run alsamixer and control the volume levels. It reports that the card is CRAS. I'd love to be able to do this from XFCE and the built-in mixer. I've tried re-installing the GStreamer packages, but no luck. It seems like I am close, but I'm not sure where to go from here.
    On a maybe unrelated note, plugging headphones into the headphone jack in Ubuntu also does not stop the sound from coming from the speakers, and nothing comes through the headphones. Switching back to ChromeOS, everything works as expected, so I think it's a driver issue.
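    One avenue worth trying - a guess on my part, not a verified fix - is to make sure the GStreamer ALSA bridge is actually installed, since xfce4-mixer enumerates sound cards through GStreamer:
        # assumption: the mixer needs the GStreamer ALSA plugin to see the CRAS card
        sudo apt-get install gstreamer0.10-alsa gstreamer0.10-plugins-good
        # then re-open the mixer and re-select the sound card in its settings
        xfce4-mixer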

    Read the article

  • How to apply verification and validation on the following example

    - by user970696
    I have been following verification and validation questions here with my colleagues, yet we are unable to see the slight differences, probably caused by the language barrier in technical English. An example:
    Requirement specification: The user wants to control the lights in 4 rooms by remote command sent from the UI for each room separately.
    Functional specification: The UI will contain 4 checkboxes labelled according to the rooms they control. When a checkbox is checked, the signal is sent to the corresponding light and a green dot appears next to the checkbox. When a checkbox is unchecked, the signal (turn off) is sent to the corresponding light and a red dot appears next to the checkbox.
    Let me start with what I learned here: Verification, according to many great answers here, ensures that the product reflects the specified requirements - as the functional spec is written by the producer based on requirements from the customer, it will be verified for completeness and correctness. Then the design document will be checked against the functional spec (does it design 4 checkboxes?), and the source code against the design (is there code for 4 checkboxes, functions to send the signals, etc. - is it traceable to requirements?).
    Okay, the product is built and we need to test it - validate. Here comes our understanding trouble: validation should ensure the product meets the requirements for its specific intended use, which is basically the business requirement (does it work? can I control the lights from the UI?), but testers will definitely work with the functional spec, making sure the checkboxes are there, working, labelled, etc. They are basically checking whether the requirements in the functional spec were met in the final product - isn't that verification? (It should not be; let's stick to ISO 12207, where only validation is the actual testing.)

    Read the article

  • Running NginX (with Apache) and cPanel/WHM

    - by ub3rst4r
    I was wondering if it's a good idea to be running NginX as the front-end webserver (on port 80) with Apache behind it (on port 8080), and cPanel/WHM being used as a control panel? I also installed Nginx Admin so the configuration for NginX is managed by WHM. The reason I am asking is because I came across an article (http://kbeezie.com/view/apache-with-nginx/) which explains how to set up this arrangement but states:
    If you are using a control panel based hosting such as cpanel/whm, this method is not advised. Most of the servers configuration is handled automatically in those cases, and making manual changes will likely lead to problems (plus you won’t get support from the control panel makers or your hosting provider in most cases).
    Does anyone have any past experience with this and can say whether it's a good or bad idea?
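    For reference, a minimal sketch of the front-end piece being discussed (illustrative only - Nginx Admin generates its own version of this, and the hostname is made up):
        server {
            listen 80;
            server_name example.com;
            location / {
                proxy_pass http://127.0.0.1:8080;   # Apache listening behind Nginx
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }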

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I am developing high-volume processing systems: mathematical models that calculate various parameters based on millions of records, calculated derived fields over millions of records, processing huge files of transactions, etc. I am well aware of unit testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes etc.), or some SAS process.
    What is the approach you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST) and then automatically run them overnight and check the results. But this is only for T-SQL, and continuous integration is hard. The bigger problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? I have an approach derived over the years, but maybe I am just not reading enough articles.
    So, banking, telecom, and risk developers out there: how do you test your mission-critical apps that process millions of records at day end, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct as you develop it? How do you achieve continuous integration in such an environment (personally, I never got there)? How do you test your map-reduce jobs, for example (I do not use Hadoop, but it is quite similar)? I hope this is not too open-ended a question. - Luke
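    As a concrete sketch of the overnight stored-procedure test pattern mentioned above (the schema, table, and column names are all hypothetical):
        -- one self-contained check; an overnight job EXECs every proc in the TEST schema
        CREATE PROCEDURE TEST.usp_CheckDerivedFields AS
        BEGIN
            DECLARE @bad INT;
            -- invariant the model must preserve over every processed record
            SELECT @bad = COUNT(*)
            FROM dbo.Transactions
            WHERE DerivedAmount <> ROUND(BaseAmount * Rate, 2);
            IF @bad > 0
                RAISERROR('usp_CheckDerivedFields: %d rows violate the invariant', 16, 1, @bad);
        END
    An overnight job can then execute every procedure in the TEST schema and fail the run on any error raised.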

    Read the article

  • Can't select hard disk off Windows 7 system image creator

    - by David
    When I try to create a system image in Windows 7 from the Control Panel (Control Panel\All Control Panel Items\Backup and Restore), I get the option to select a hard disk or a removable disk. I have 2 disks and wanted to create the image on my spare one. However, when I click refresh it doesn't show either of my disks, but it shows my CD-ROM under the removable disks area - anyone have this problem? Also, when I select a USB disk instead, it tries to image both my disks! I can't select my active Windows 7 installed disk! How pointless!
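    One workaround that may be worth trying (assuming the spare disk is NTFS-formatted and has a drive letter - E: below is just an example) is the command-line backup tool, which takes an explicit target:
        wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet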

    Read the article

  • How to configure Windows 8.1 start screen search?

    - by AlexV
    I'm using Windows 8.1, and when you are in the "start screen" (where all the tiles are), if you type something, a search panel appears on the right side with search results. In 8.0 it searched within "start menu" shortcuts and Control Panel elements only (if I remember correctly). In 8.1 it now searches "start menu" shortcuts, my libraries, and files, and no longer Control Panel items. Is there a way to configure this? I use this search solely to locate a "start menu" shortcut and/or Control Panel element, so I would like to narrow the search to only those.

    Read the article

  • What to do when 'dpkg --configure -a' fails with too many errors?

    - by rudivonstaden
    During an upgrade from lucid (10.04) to precise (12.04), the X session froze, and I have been trying to recover the upgrade to get a stable system. I have performed the following steps:
    - Used ssh to log in to the stalled system over the network.
    - Checked the contents of the /var/log/dist-upgrade directory. There was no activity on main.log, apt.log or term.log. top showed that process 'precise' was using about 3% CPU, but I could find no evidence that the upgrade process was still doing anything. 'dpkg' did not show up in top, but it came up with pgrep dpkg | xargs ps
    - Killed the 'dpkg' and 'precise' processes
    - Tried to recover the upgrade by running sudo fuser -vki /var/lib/dpkg/lock; sudo dpkg --configure -a. This was partially successful (some packages were configured), but failed with the message "Processing was halted because there were too many errors." I ran the same command a few times, and each time some packages were configured but others failed.
    - Tried running sudo apt-get -f install. It fails with similar errors to dpkg.
    The current situation is that dpkg --configure -a and sudo apt-get -f install fail with two kinds of error. Dependency issues, e.g.:
        dpkg: dependency problems prevent configuration of cifs-utils:
        cifs-utils depends on samba-common; however:
        Package samba-common is not configured yet.
        dpkg: error processing cifs-utils (--configure):
        dependency problems - leaving unconfigured
    Resource conflict, e.g.:
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    Additionally, there is reference to potential boot problems, so I'm not keen to reboot without fixing the install first:
        dpkg: too many errors, stopping
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic
        cryptsetup: WARNING: failed to detect canonical device of /dev/sda1
        cryptsetup: WARNING: could not determine root device from /etc/fstab
    So my question is, how do I get a working install when dpkg --configure -a fails?
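    A sketch of one way to break the deadlock, assuming samba-common really is the package blocking the chain and that fuser can identify the debconf lock holder:
        # configure the blocking dependency first, then its dependents
        sudo dpkg --configure samba-common
        sudo dpkg --configure cifs-utils
        # see what is holding the debconf database lock before retrying
        sudo fuser -v /var/cache/debconf/config.dat
        # then retry the full pass
        sudo dpkg --configure -a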

    Read the article

  • Locking down a server for shared internet hosting.

    - by Wil
    Basically I control several servers, and I only host either static websites or scripts which I have designed, so I trust them up to a point. However, I have a few customers who want to start using scripts such as WordPress or many others - and they want full control over their account. I have started with the basics - in php.ini I have locked things down and restricted functions such as the proc_* family - however, there is obviously a lot more I can do. Right now, using NTFS permissions, I am trying to lock down the server by running application pools and individual sites under their own users, but I feel like I am hitting brick walls... (My old question on Server Fault). At the moment, the only route I can think of is either to implement an off-the-shelf control panel - which will be expensive and, quite frankly, over the top - or to follow the Microsoft guide, which is really for an entire infrastructure, not for someone who just wants to lock down a few servers. Does anyone have any guides that can put me on the correct path?
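    For the php.ini side, a minimal sketch of the kind of lockdown being described (the function list and paths are illustrative and need tuning per host):
        ; disable process-spawning functions for untrusted customer code
        disable_functions = proc_open,popen,exec,shell_exec,system,passthru
        ; confine each site to its own directory tree (Windows path syntax)
        open_basedir = "C:\inetpub\sites\customer1\;C:\Windows\Temp\"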

    Read the article

  • txt file descriptor in lsof

    - by wfaulk
    In my experience, files that have the file descriptor of txt in lsof output are the executable file itself and shared objects. The lsof man page says that it means "program text (code and data)". While debugging a problem, I found a large number of data files (specifically, ElasticSearch database index files) that lsof reported as txt. These are definitely not executable files. The process was ElasticSearch itself, which is a Java process, if that helps point someone in the right direction. I want to understand how this process is opening and using these files such that they get reported this way. I'm trying to understand some memory utilization, and I suspect that these open files are related in some way to metrics I'm seeing. The system is Solaris 10 x86.
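    One check worth making - on the assumption (mine, not the man page's) that memory-mapped files can also be reported as txt, and that a Lucene-backed store like ElasticSearch mmap()s its index files - is to compare the process's address-space mappings against those open files:
        # Solaris 10: list the java process's mappings; mmap'ed index
        # files will appear here with their full paths
        pmap -x <pid> | grep <index-directory>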

    Read the article

  • SOA Composite Sensors : Good Practice

    - by angelo.santagata
    I was discussing an interesting design problem with a colleague of mine, Niall (his blog), on the topic of how to cancel an in-flight SOA Composite process. Obviously one way to do this is to cancel the process from Enterprise Manager (http://host:port/em), however we were thinking this isn't a "user friendly" way of doing it. If you look at Niall's blog you'll see he's highlighted a number of different APIs which give you the ability to manipulate the SCA instance, e.g.:
    - Code snippet to purge (delete) an instance
    - How to determine the instanceId from a composite sensor value using the "composite_sensor_value" table
    - How to determine a BPEL process status using the cube_instance table
    Now all of these require that you know the instanceId of your SOA Composite - how does one find this out? Well, the easiest way is to create a composite sensor on the SCA component. A composite sensor is simply a way of publishing a piece of business data as part of your composite. The magic here is that you can later query composites based on this value. So a good best practice is that for any composites you create, consider publishing a composite sensor value using a primary key of some sort, e.g. orderId; that way, if you need to manipulate/query composites, you can easily look up the instanceId using the sensor value.
    For information on how to create a composite sensor, see this documentation link.
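    As an illustrative lookup from a sensor value back to the instance (unverified - the column names are my assumption about the dehydration-store table mentioned above, so check them against your schema):
        SELECT composite_instance_id
        FROM   composite_sensor_value
        WHERE  sensor_name  = 'orderId'
        AND    string_value = '12345';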

    Read the article

  • My cpus are powered down periodically

    - by mgiammarco
    I post here because I am using Ubuntu, but this is probably a hardware problem. Since I bought my new setup with an AMD Athlon(tm) II X4 635 processor and an Asus M4A89TD PRO/USB3 motherboard with ECC RAM, I have had stuttering on videos. I was using Ubuntu 11.10, now Ubuntu 12.10. Looking at syslog I have found that periodically (I only notice it on videos, but it happens all the time) this happens:
        Mar 6 23:36:42 virtual1 kernel: [28564.375548] smpboot: CPU 1 is now offline
        Mar 6 23:36:42 virtual1 kernel: [28564.380751] smpboot: CPU 2 is now offline
        Mar 6 23:36:42 virtual1 kernel: [28564.394947] smpboot: CPU 3 is now offline
        Mar 6 23:36:48 virtual1 kernel: [28569.917021] smpboot: Booting Node 0 Processor 1 APIC 0x1
        Mar 6 23:36:48 virtual1 kernel: [28569.928015] LVT offset 0 assigned for vector 0xf9
        Mar 6 23:36:48 virtual1 kernel: [28569.928372] [Firmware Bug]: cpu 1, try to use APIC500 (LVT offset 0) for vector 0x400, but the register is already in use for vector 0xf9 on another cpu
        Mar 6 23:36:48 virtual1 kernel: [28569.928378] perf: IBS APIC setup failed on cpu #1
        Mar 6 23:36:48 virtual1 kernel: [28569.931305] process: Switch to broadcast mode on CPU1
        Mar 6 23:36:48 virtual1 kernel: [28569.934255] smpboot: Booting Node 0 Processor 2 APIC 0x2
        Mar 6 23:36:48 virtual1 kernel: [28569.945554] [Firmware Bug]: cpu 2, try to use APIC500 (LVT offset 0) for vector 0x400, but the register is already in use for vector 0xf9 on another cpu
        Mar 6 23:36:48 virtual1 kernel: [28569.945558] perf: IBS APIC setup failed on cpu #2
        Mar 6 23:36:48 virtual1 kernel: [28569.948124] process: Switch to broadcast mode on CPU2
        Mar 6 23:36:48 virtual1 kernel: [28569.949644] smpboot: Booting Node 0 Processor 3 APIC 0x3
        Mar 6 23:36:48 virtual1 kernel: [28569.960838] [Firmware Bug]: cpu 3, try to use APIC500 (LVT offset 0) for vector 0x400, but the register is already in use for vector 0xf9 on another cpu
        Mar 6 23:36:48 virtual1 kernel: [28569.960840] perf: IBS APIC setup failed on cpu #3
        Mar 6 23:36:48 virtual1 kernel: [28569.962953] process: Switch to broadcast mode on CPU3
    I have: updated the BIOS; tried all (really) BIOS options; changed the RAM; changed the PSU and CPU cooler; tried the 3.8.1 kernel. What can I do now? Please help me! Thanks, Mario

    Read the article

  • Java Components Landing Page and Documentation Updates

    - by joni g.
    The new Java Components page provides access to the documentation for tools that are available for monitoring, managing, and testing Java applications. Documentation for the new versions of the following tools is available:
    - JavaTest Harness 4.6. The JavaTest harness is a general-purpose, fully-featured, flexible, and configurable test harness that is suited for most types of unit testing. See the JavaTest tab for documentation.
    - SigTest 3.1. SigTest is a collection of tools that can be used to compare APIs and to measure the test coverage of an API. See the SigTest tab for documentation.
    The following tools are part of Oracle Java SE Advanced and Oracle Java SE Suite:
    - Java Mission Control and Java Flight Recorder 5.4 are supported in JDK 8u20. Java Flight Recorder and Java Mission Control together create a complete tool chain to continuously collect low-level and detailed runtime information, enabling after-the-fact incident analysis. See the JMC tab for documentation.
    - Advanced Management Console 1.0 is a new tool that is now available. AMC can be used to view information about the Java applets and Java Web Start applications running in your enterprise, and to create deployment rules and rule sets to manage the execution of these applications. See the AMC tab for documentation.
    - Usage Tracker tracks how Java Runtime Environments (JREs) are being used in your systems. See the Usage Tracker tab for documentation.

    Read the article

  • Release: Oracle Java Development Kit 8, Update 20

    - by Tori Wieldt
    Java Development Kit 8, Update 20 (JDK 8u20) is now available. This latest release of the Java Platform continues to improve upon the significant advances made in the JDK 8 release with new features, security and performance optimizations. These include: new enterprise-focused administration features available in Oracle Java SE Advanced; products offering greater control of Java version compatibility; security updates; and a very useful new feature, the MSI-compatible installer.
    Download | Release Notes | Java SE 8 Documentation
    New tools, features, and enhancements highlighted in JDK 8 Update 20 are:
    - Advanced Management Console: The Java Advanced Management Console 1.0 (AMC) is available for use with the Oracle Java SE Advanced products. AMC employs the Deployment Rule Set (DRS) security feature, along with other functionality, to give system administrators greater and easier control in managing Java version compatibility and security updates for desktops within their enterprise and for ISVs with Java-based applications and solutions.
    - MSI Enterprise JRE Installer: Available for Windows 64- and 32-bit systems in the Oracle Java SE Advanced products, the MSI-compatible installer enables system administrators to provide automated, consistent installation of the JRE across all desktops in the enterprise, free of user interaction requirements.
    - Performance: string de-duplication resulting in a reduced footprint, and improved support in G1 garbage collection for long-running apps.
    - A new 'force' feature in DRS (Deployment Rule Set), which allows system administrators to specify the JRE with which an applet or Java Web Start application will run. This is useful for legacy applications, so end users don't need to approve security exceptions to run them.
    - Java Mission Control 5.4, with new ease-of-use enhancements and launcher integration with Eclipse 4.4
    - JavaFX on ARM
    - Nashorn performance improvement by persisting bytecode after initial compilation
    There's much more information to be found in the JDK 8u20 Release Notes.

    Read the article

  • Video on Hyperion Tax Provision

    - by Lia Nowodworska - Oracle
    (in via Jan) EPM Information Development has asked us to remind you about the new video available for Hyperion Tax Provision. You can view it on the OracleEPMWebcasts YouTube channel here: http://bit.ly/1jxLlCy - an information-rich 4:40 minutes of your time, so please take a look.
    The video gives a brief overview of the main features of Hyperion Tax Provision. You will learn:
    - That the Tax Provision and Reporting System builds on the Hyperion Financial Close Reporting platform.
    - That much of the Tax Provision flow process is similar to the Financial Close process, and that the modules have been aligned to work together very closely.
    - That HTP enables you to integrate book and tax reporting on a common platform.
    - That it uses the technology of HFM, ties in with SmartView, and can be used with Hyperion Financial Reporting.
    - That the native integration between the financial system and the tax system creates transparency for tax departments and removes bottlenecks in the tax process.
    More technical information can be found here: Oracle Hyperion Tax Provision Data Sheet, Oracle Hyperion Tax Provision White Paper, Oracle Hyperion Tax Provision Documentation.
    If you have another 45 minutes to spare and want to get into greater detail, you can check out the recording of an Advisor Webcast that we did earlier last year. You can find it via KM Doc "Oracle Business Analytics Advisor Webcast Schedule and Archive Recordings" (Doc ID 1456233.1) - select the tab "Archived 2013"; it is the third from the top: "Oracle Hyperion Tax Provision - Features and Overview with Demo". If you have questions about that Advisor Webcast, you may participate in the Community Discussion about it.
    (layout and post: Torben, authorized: Lia)

    Read the article

  • Identifying the version of ATI Catalyst drivers

    - by Snark
    I have bought a new video card based on the ATI Radeon HD 5670 chipset. I couldn't make it work with the latest ATI Catalyst drivers found on their website; only the drivers found on the CD delivered with the card worked. How can I know the version of Catalyst that is installed on my PC (running Windows 7 64-bit)? The ATI Catalyst Control Center returns the following information:
        Driver Packaging Version          8.673-091110a-092263C
        Provider                          ATI Technologies Inc.
        2D Driver Version                 8.01.01.973
        2D Driver File Path               /REGISTRY/MACHINE/SYSTEM/ControlSet001/Control/CLASS/{4D36E968-E325-11CE-BFC1-08002BE10318}/0000
        Direct3D Version                  8.14.10.0708
        OpenGL Version                    6.14.10.9120
        Catalyst™ Control Center Version  2009.1110.2225.40230
    I do not recognize anything pointing to a "marketing version". The website says the current version of Catalyst is 10.2.

    Read the article

  • Java Magazine: Developer Tools and More

    - by Tori Wieldt
    The May/June issue of Java Magazine explores the tools and techniques that can help you bring your ideas to fruition and make you more productive. In “Seven Open Source Tools for Java Deployment,” Bruno Souza and Edson Yanaga present a set of tools that you can use now to drastically improve the deployment process on projects big or small—enabling you and your team to focus on building better and more-innovative software in a less stressful environment. We explore the future of application development tools at Oracle in our interview with Oracle’s Chris Tonas, who discusses plans for NetBeans IDE 9, Oracle’s support for Eclipse, and key trends in the software development space. For more on NetBeans IDE, don’t miss “Quick and Easy Conversion to Java SE 8 with NetBeans IDE 8” and “Build with NetBeans IDE, Deploy to Oracle Java Cloud Service.” We also give you insight into Scrum, an iterative and incremental agile process, with a tour of a development team’s Scrum sprint. Find out if Scrum will work for your team. Other article topics include mastering binaries in Maven-based projects, creating sophisticated applications with HTML5 and JSF, and learning to program with BlueJ. At the end of the day, tools don’t make great code—you do. What tools are vital to your development process? How are you innovating today? Let us know. Send a tweet to @oraclejavamag. The next big thing is always just around the corner—maybe it’s even an idea that’s percolating in *your* brain. Get started today with this issue of Java Magazine. Java Magazine is a FREE, bi-monthly, online publication. It includes technical articles on the Java language and platform; Java innovations and innovators; JUG and JCP news; Java events; links to online Java communities; and videos and multimedia demos. Subscriptions are free, registration required.

    Read the article

  • How do I log file system read/writes by filename in Linux?

    - by Casey
    I'm looking for a simple method that will log file system operations. It should display the name of the file being accessed or modified. I'm familiar with powertop, and it appears this works to an extent, in so much as it shows the user files that were written to. Are there any other utilities that support this feature? Some of my findings:
    - powertop: best for write-access logging, but more focused on CPU activity
    - iotop: shows real-time disk access by process, but not file name
    - lsof: shows the open files per process, but not real-time file access
    - iostat: shows the real-time I/O performance of disks/arrays, but does not indicate file or process
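    One more avenue that may fit (assuming the inotify-tools package is available for your distribution): inotifywait can stream per-file events in real time, as in this sketch:
        # watch a tree recursively and print each access/modify event with its filename
        inotifywait -m -r -e access -e modify --format '%w%f %e' /path/to/watch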

    Read the article

  • Moving from a static site to a CMS with new URLs and meta-data for pages

    - by Chris J
    Hi, I am in the process of rebuilding a site from static pages to a CMS which will be using mod_rewrite to generate new page URLs. In this process our marketing people and I have decided to tidy up the descriptions, keywords and titles. E.g.: a page whose URL is currently "website-name/about_us.html" and has a title of "website-name - something not quite page specific" will change to "website-name/about-us/" with title "about us - website-name", and may have a few keywords and the description changed. Our goal with updating the meta data is to improve our page rankings and try to keep in line with some best practices for SEO. Though our current page rankings are quite good in many respects, there is room for improvement. All of the pages will also have content changes (like rearranging heading tags, a new menu on all pages, new content in the footer, extra pieces of dynamic content relating to other pages). In this new site process I plan to use 301 redirects for all the old URLs, pointing to the new URLs. My question is: what can I expect to happen to the page rankings in Google, in the short term and the long term? Will this be like kicking off a new site which will have to build up trust over time, or will the original page rankings have an effect?
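    For the 301 piece, a minimal sketch using the example URL above (plain mod_alias syntax; a CMS or mod_rewrite rule set may express the same mapping differently):
        # map the retired static page to its new CMS path
        Redirect 301 /about_us.html /about-us/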

    Read the article

  • What causes random clicking/navigation sound while PC is idle?

    - by user14339
    On my Dell D630 laptop running Windows XP Pro SP3, I randomly hear the "Windows Navigation Start.wav" played. This happens even if the only application active on the PC at the time is Microsoft Outlook and no one is touching the keyboard or touchpad. The start navigation sound IS enabled for Internet Explorer (I have IE 7 installed), but it is not running (no window open, anyway). The clicks are rapid, as if being executed automatically by a script or process, and they always come in threes: "click-click, click". I don't see any changes on the display and I don't know how to figure out what's running that could cause this. I have tried to leave Windows Task Manager open in the hope that I'll see a process name jump to the top of the list briefly, but it doesn't happen that often and I guess I'm not quick enough. The PC is slower than I think it should be, but System Idle Process is at 96 - 99 during the event, and for most of the time when I'm not actually using the machine. Any ideas?

    Read the article

  • Error headers: ap_headers_output_filter() after putting cache header in htaccess file

    - by Brad
    Receiving error:
        [debug] mod_headers.c(663): headers: ap_headers_output_filter()
    after I included this within the htaccess file:
        # 6 DAYS
        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Cache-Control "max-age=518400, public"
        </FilesMatch>
        # 2 DAYS
        <FilesMatch "\.(xml|txt)$">
        Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>
        # 2 HOURS
        <FilesMatch "\.(html|htm)$">
        Header set Cache-Control "max-age=7200, must-revalidate"
        </FilesMatch>
    Any help is appreciated as to what I could do to fix this?

    Read the article

  • Server not responding to SSH and HTTP but ping works

    - by yes123
    Hello guys, I requested a hard reboot because neither SSH nor HTTP worked. Ping worked normally. Which logs should I check to understand what the problem was? Thanks! (Debian 6 on LAMP)
    Edit: my memory and swap:
        Mem:  4040068k total, 1114920k used, 2925148k free, 109212k buffers
        Swap: 1051384k total,       0k used, 1051384k free, 283820k cached
    4 GB RAM (and more than 1 TB of HDD). The cause dates from 2 days ago: look how the usage of swap goes up 60% in less than 10 hours. My control panel reports this as the top-5 memory-usage processes. If every apache2 process is 190 MB large, that sux, because if I run top I have 262 sleeping processes, most of them apache2! My Apache mpm_prefork settings are:
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit        1500
            MaxClients         1500
            MaxRequestsPerChild 2000
        </IfModule>
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 4
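    To put rough numbers on that suspicion (assuming ~190 MB resident per apache2 child, as the control panel suggests):
        # 262 sleeping children at ~190 MB each, vs 4 GB RAM + 1 GB swap
        awk 'BEGIN { printf "%.1f GB\n", 262 * 190 / 1024 }'   # prints 48.6 GB
    If those numbers hold, MaxClients 1500 allows a far larger worst case still, which would explain the box becoming unreachable while ping (answered in the kernel) kept working.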

    Read the article

  • Squid stale-while-revalidate not working when max-age=0

    - by Wiliam
    Squid 2.7 always reaches the backend; the expected behavior is to reach the backend via stale-while-revalidate only when the cache expires, not when the client sends max-age=0.
    Script:
        <?php
        header('Cache-Control: public, max-age=10, stale-if-error=200, stale-while-revalidate=500');
        header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
        sleep(2);
        die("OK");
    And squid config:
        # http_port public_ip:port accel defaultsite= default hostname, if not provided
        http_port 80 accel defaultsite=mydomain.com
        # IP and port of your main application server (or multiple)
        cache_peer 127.0.0.1 parent 8000 0 no-query allow-miss originserver name=main
        # Do not tell the world which squid version we're running
        httpd_suppress_version_string on
        # Remove the Cache-Control header for upstream servers
        header_access Cache-Control deny all
        #header_access Last-Modified deny all
        # log all incoming traffic in Apache format
        logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
        access_log /usr/local/squid/var/logs/squid.log combined all
        cache_effective_user squid
        refresh_pattern . 10080 90% 999999 ignore-no-cache override-expire ignore-private
        icp_port 0

    Read the article

  • Importing an existing project into Git

    - by Andy
    Background: During the course of developing our site (ASP.NET), we discovered that our existing source control (SourceGear Vault) wasn't working for us. So we decided to migrate to Git. The transition has been less than smooth, though. Our site is broken up into three environments: DEV, QA, and PROD. For the most part, DEV and the source control repo have been in sync with each other. There is one branch in the repo; if a page was going to be moved up to QA, then the file was moved manually, and the same with stuff that was ready for PROD. So our current QA and PROD environments do not correspond to any particular commit in the master branch. Clarification: The QA and PROD branches are not currently, nor have they ever been, in source control.
    The question: How do I move QA and PROD into Git? Should I forget about the history we've maintained up to this point and start over with a new repo? I could start with everything on PROD, then make a branch and pull in everything from QA, and then make another branch off of that with DEV. That way not only will the branches reflect the differences in the environments, they'll be in the right order chronologically, with the newest commits in the DEV branch.
    What I've tried so far: I thought about creating a QA branch off of the current master and using robocopy to make the working folder look like the current QA environment. This doesn't work because the new commit from QA will remove new files from DEV, and that will remove them when we merge up; I suspect there will be similar problems if I started QA at an earlier (though not exact) commit from DEV.
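    A minimal sketch of the PROD-first idea described above (the copy steps are elided, and the branch names are just the ones already mentioned):
        git init
        # copy the PROD tree into the working folder, then:
        git add -A && git commit -m "baseline: current PROD"
        git checkout -b qa
        # overwrite the working tree with the QA files, then:
        git add -A && git commit -m "QA state on top of PROD"
        git checkout -b dev
        # overwrite with the DEV files and commit likewise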

    Read the article

  • Minimum percentage of free physical memory that Linux require for optimal performance

    - by csoto
    Recently, we have been getting questions about the percentage of free physical memory that the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. The Linux system reports free memory in a different way, and most of those 25 GB (in the example) are available for user processes, e.g.:
        Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers
    MOS Doc ID 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains it (section 4, "Final Remarks"):
    Free Memory and Used Memory. Estimating the resource usage, especially the memory consumption of processes, is by far more complicated than it looks at first glance. The philosophy is that an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However, this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed, not at the time of process exit (you might start up another process soon that needs the same data), but upon demand.
    That said, focusing more specifically on the percentage question: apart from the memory the OS takes, how much free memory must be available on every node, at minimum, so that they operate normally? The answer: as a rule of thumb, 80% memory utilization is a good threshold; anything bigger than that should be investigated and remedied.
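    A quick way to see the "effectively available" figure described above on such a node (values in /proc/meminfo are in kB; this sketch predates the kernel's MemAvailable field):
        free -m
        # free + buffers + cached = memory reclaimable for user processes
        awk '/^MemFree|^Buffers|^Cached/ {sum += $2}
             END {printf "%.0f MB effectively available\n", sum/1024}' /proc/meminfo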

    Read the article
