Search Results

Search found 6189 results on 248 pages for 'garbage collection'.


  • Low-cost, Flexible Log Aggregation [closed]

    - by Dan McClain
    I'm starting to have quite the collection of Ubuntu VMs to manage. I'm investigating Puppet for managing their configuration, and apticron to let me know what's out of date, but the issue I feel I should deal with sooner rather than later is log aggregation. I'd like to stay in the free/open-source realm for now, since we don't have much budget for something like Splunk yet. In addition to syslog, I would like to collect application-specific logs (we run different apps on different machines, from nginx+Passenger for Rails, to Apache+Tomcat for Java, to PHP for ExpressionEngine, plus MySQL/PostgreSQL database servers), so that we can analyze the relevant data. For now, I'm just looking to get all the logs in one place.
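    For context, a minimal centralized-rsyslog sketch of the kind of setup being asked about (rsyslog ships with Ubuntu; the hostname logserver.example.com is a placeholder):

        # /etc/rsyslog.d/60-forward.conf on each client VM:
        # forward everything to the aggregator over TCP (@@ = TCP, @ = UDP)
        *.* @@logserver.example.com:514

        # /etc/rsyslog.conf on the aggregator:
        $ModLoad imtcp              # load the TCP listener module
        $InputTCPServerRun 514      # accept incoming logs on port 514

    Application logs that never pass through syslog can be tailed into the same stream with rsyslog's imfile module.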

    Read the article

  • Software for monitoring internal software?

    - by Tyler Eaves
    Is there any good software for monitoring the health of a collection of related software? Requirements are as follows: web-based, deployable on a standard Linux/BSD stack; configurable to support a variety of processes scheduled at various intervals; some sort of dashboard interface for monitoring status, viewing errors, etc. As an example, suppose we have a daily export that's scheduled to run at 6 AM each morning. After the export completes, it would POST a status message saying it had completed, passing in some sort of application key to identify the export. If that status message hadn't come in by, say, 6:30 AM, an e-mail might be sent, the application would go red on the dashboard, and so on. Applications should also be able to post error/warning messages. Basically, the goal is to be able to monitor all of our internal projects from one system, rather than through a multitude of e-mails, log files, etc. I suspect I'll probably end up writing this from scratch, but I thought I'd ask.
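    The check-in pattern described is simple to exercise from the job side; a minimal sketch (the endpoint URL and app_key parameter are hypothetical):

        # at the end of the 6 AM export job
        curl -X POST https://monitor.example.com/api/status \
             -d 'app_key=daily_export' -d 'status=completed'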

    Read the article

  • Testing home directory scripts by setting $HOME to the location of the test directory

    - by intuited
    I have an interdependent collection of scripts in my ~/bin directory, as well as a developed ~/.vim directory and some other libraries in other subdirectories. I've been versioning all of this using git, and have realized that it would be very easy and useful to develop and test new and existing scripts, vim plugins, etc. in a cloned repo, then pull the working code into my actual home directory with a merge. The easiest way to do this would seem to be to just change and export $HOME, e.g.

        cd ~/testing; git clone ~ home
        export HOME=~/testing/home
        cd ~
        screen -S testing-home
        # start vim, write/revise plugins, edit scripts, etc.
        # test revisions

    However, since I've never tried this before, I'm concerned that some programs, environment variables, etc. may end up using my actual home directory instead of the exported one. Is this a viable strategy? Are there just a few outliers that I should be careful about? Is there a much better way to do this sort of thing?
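    One concrete check worth doing before relying on the override (a sketch; getent and cut are standard on Linux): programs that resolve the home directory via getpwuid(3) ignore $HOME entirely, and this shows whether the two disagree:

        export HOME=~/testing/home
        echo "$HOME"                             # what $HOME-respecting programs use
        getent passwd "$(id -u)" | cut -d: -f6   # what getpwuid()-based programs use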

    Read the article

  • Add subtitles to Ogg Video

    - by Jaxau
    I've started to convert my DVD collection to Ogg Video (OGV), and so far it has gone quite well. However, I've heard that I can embed the subtitles inside the OGV file as well. How can I do this on Linux? On Windows there are several applications for it, but on Linux not so many. Any examples would be appreciated. Edit: I have the original VOBs, of course. PS. No, I don't give out my movies. Don't even ask. I paid for them, not you. :)
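    For reference, the route usually cited on Linux embeds the subtitles as a Kate stream using the libkate tools plus oggz-tools; a sketch, assuming the subtitles already exist as subs.srt (all file names are placeholders):

        # convert the SRT to a Kate stream, then mux it into the video
        kateenc -t srt -c SUB -l en -o subs.kate subs.srt
        oggz-merge -o movie-subbed.ogv movie.ogv subs.kate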

    Read the article

  • Apache 2.4 Prefork vs. PHP-FPM Event shows significant decrease in requests per second

    - by Mark
    On my Apache 2.4.2 server with a standard mod_php prefork setup, these are my server-status results:

        Current Time: Wednesday, 24-Oct-2012 19:36:24 CDT
        Restart Time: Wednesday, 24-Oct-2012 01:27:30 CDT
        Parent Server Config. Generation: 1
        Parent Server MPM Generation: 0
        Server uptime: 18 hours 8 minutes 54 seconds
        Total accesses: 14304233 - Total Traffic: 342.3 GB
        CPU Usage: u12584.6 s721.93 cu.66 cs3.43 - 20.4% CPU load
        219 requests/sec - 5.4 MB/second - 25.1 kB/request
        507 requests currently being processed, 355 idle workers

        ______KKKKR_K______W_KKC___CKK_K_K_W__CC_KKK_KK._K_K_KK._KKKK_K_
        K_____KK_KKKK_K_KK__K___KK_K___K_____CKKK_WK_K_____KCKK__K___K_K
        K_CK_K_K_____K__KKKK_K__K___K_KK_K_K_KKKCK____________KK_CK__KKK
        __C_KKKKKKK___CK___C_KKK_K__C__K_CK____KKK__K__K__K_K__KK_CK_K__
        _KKKKK_K_W__KK______K___K__W___C_K__K____KKKKKKKK.KKKKKKKCK_K___
        _C_KK_K_WK__K_KK__K__RK_KK___K____K_KK_K_K___RKC_KKKK___KKKC_K_W
        _C_KK_KK__W____KC__KKK__KKK___K___KKK_KK_K_KKW__K_KR_KK_KK__KKK_
        R__KKK__KKKKKK__K_KKKKK_K__K_K___KKW_________KK_K___KKK___KK.K_C
        KKKKKKW_____K__K_KKC_KCKK_K_KK_K__KK__K___K__KK_KK__________KK__
        __K___KK_K__K_C_KK_K___KK__KK__K__KCK_K__KK_________K_K_KK__.K__
        K_CKK.CCRW__KKKKKKKKKKKC__W____K___KWK_KK_KKC______.K_K_KK_KKKC_
        __KKK_W_KCKKK_K_K____CCCK__KC_KKKK_K____K_CK_K____K__K____KKK_KK
        KK___K_K_K__KW__KCKKKK____WKWK__K_KKRKK__C_K_KK_KK_K__KKCC_K__C_
        KK_K___K_KK______K_____CKK_K_______KK_CKCK__KKKKK____K__K..K____
        __KKWK_KW__KKK__K_KKK___K_KK_KKK__KK___KK___KK_KK___KK____KKWKKC
        KK_KKKK_................................

    When I switch to a PHP-FPM setup with the Event MPM, with no other variables changed, my requests/sec plummet and overall Apache responsiveness is garbage:

        Current Time: Wednesday, 24-Oct-2012 19:51:21 CDT
        Restart Time: Wednesday, 24-Oct-2012 19:48:03 CDT
        Parent Server Config. Generation: 1
        Parent Server MPM Generation: 0
        Server uptime: 3 minutes 18 seconds
        Total accesses: 18720 - Total Traffic: 307.1 MB
        CPU Usage: u16.57 s4.74 cu0 cs0 - 10.8% CPU load
        94.5 requests/sec - 1.6 MB/second - 16.8 kB/request
        15 requests currently being processed, 49 idle workers

        PID    Connections        Threads      Async connections
               total  accepting   busy  idle   writing  keep-alive  closing
        11701  114    no          10    22     0        66          38
        11702  134    no          5     27     0        81          48
        Sum    248                15    49     0        147         86

        __R_R__W___RRW________RR__R___W_W_______W_____W_____________R_R_

    Is there any obvious reason anyone can think of why this would be the case? I can provide any additional stats or server setup info to help out. I've tried tweaking everything up and down, and nothing really gets the PHP-FPM setup anywhere near a basic prefork/mod_php setup. Thanks!
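    For anyone comparing notes: the FPM side has its own process-manager limits in the pool configuration, which is the usual first place to look when an Event+FPM stack underperforms. An illustrative snippet of the relevant knobs only - the values below are guesses, not a recommendation:

        ; /etc/php-fpm.d/www.conf - process-manager settings (values are examples)
        pm = dynamic
        pm.max_children = 50      ; hard cap on concurrent PHP workers
        pm.start_servers = 10
        pm.min_spare_servers = 5
        pm.max_spare_servers = 15
        pm.max_requests = 500     ; recycle workers to contain memory leaks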

    Read the article

  • Automatically snapshotting AWS instances (or another backup strategy)

    - by user1172468
    I just realized that my AWS instance count has risen into the double digits. I'm currently backing up portions of my folders and DBs and moving them off to a backup instance. What I think I should be doing is taking snapshots of the instances automatically and persisting them on S3, so I have a rolling seven-day collection of daily backups. There is a question asking the same thing here, but the answers don't go into depth. The closest answer seems to be: use a cron job to snapshot the instance. So do I run the cron job on the instance itself, or do I keep a micro instance to run these snapshots? Could I get an example script or command for a Linux flavor? What software must I have installed to get this to run? Thanks.
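    A minimal sketch of the cron approach, assuming the AWS CLI is installed and credentialed on whichever machine runs the job (the volume ID below is a placeholder; EBS snapshots are persisted to S3-backed storage automatically, so no separate copy step is needed):

        # crontab entry: snapshot the volume every day at 02:00
        # (note the escaped % - cron treats a bare % specially)
        0 2 * * * aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "daily backup $(date +\%F)"

    Pruning to a rolling seven days can be a second job that filters aws ec2 describe-snapshots by start time and calls delete-snapshot on the old ones.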

    Read the article

  • Deleted the .AppleDouble files inside my Time Machine backups - are they still OK?

    - by Jon M
    My Ubuntu server is set up to emulate a Time Capsule (after a very long weekend following the instructions here, here and here). My MacBook Pro has been backing up happily to it for a month or so now, and all seems well. The other day I was tidying up the extraneous files from my music collection on the server, got a bit loose with the find command... and ended up deleting all the .AppleDouble files underneath '/', which included the Time Machine folder. Now, Time Machine still appears to work fine: it backs up regularly, I can look through all the previous versions of my files, and they seem to restore without trouble. My question is: by deleting the .AppleDouble files, have I actually broken anything? Is the TM data still good, or should I trash it and start fresh (i.e. with a new 'day 0' full backup)?

    Read the article

  • Application Compatibility Clients do not show in MSSQL database, but do show in \AppCompat\

    - by rjt
    Application Compatibility clients are not denied access to the central MSSQL database, and they are able to leave their own files in the \AppCompat\ share. Yet the only computer that shows up in the "Microsoft Application Compatibility Manager" database is the machine I initially created the .MSI installer from. The MSI pushed out successfully via GPO, and as I said there are tons of files in the \AppCompat\ share from many different computers. But only one PC shows up in the "Data Collection Manager" database, so I only have data from one machine. I could manually add all these machines (ADNETBIOSNAME\MACHINENAME221$) to the MSSQL AppCompat DB permissions list, or use an SQL command to do so in batch, but I suspect I must have missed something. Do you manually edit the MSI to set the credentials?

    Read the article

  • Music player alternative to Windows Media Player

    - by patjbs
    I find the UI and general usability of the prepackaged Windows Media Player clunky and at times incomprehensible. I used to use programs like Winamp and Sonique many years ago. What are some current alternatives to Windows Media Player? I don't have iTunes - I'm pretty happy ripping my own audio files from my album collection. I'm specifically looking for the following qualities/abilities:

        - easy sorting and manipulation of the media library
        - easy tagging of media - marking favorites, ratings, etc.
        - preferably lightweight

    Something like a Picasa for audio media.

    Read the article

  • Using Windows 7 drivers in Windows Server 2008 R2

    - by Robert Koritnik
    Maybe I should post this to http://www.serverfault.com. Windows 7 comes with all sorts of signed drivers, so there's a high probability that all drivers for your machine will be installed during system setup. Windows Server 2008 R2, on the other hand, doesn't, even though it's practically the same OS when it comes to drivers. I know this has a good reason: it's a server product, not a desktop one. But the thing is that many power users and developers run a server OS on their workstations, which are normally desktop machines, and would need the Windows 7 driver spectrum... Question: I remember reading about a trick on the internet where you first install Windows 7 on the machine, then do something to grab either the whole Windows 7 driver collection or just the installed drivers, and then install Windows Server 2008 R2 and use those drivers. The thing is: I can't seem to find these instructions on the internet any more. If anybody knows where they are, please provide the link for the rest of us.

    Read the article

  • Create mix CDs from MP3 files

    - by Dave Jarvis
    How would you write a script (preferably for the Windows command line) that:

        - examines thousands of MP3 files stored on a single drive (e.g., G:\)
        - randomizes the collection
        - populates a series of directories with up to 650 MB of songs each (without exceeding 650 MB)
        - uses every song exactly once
        - (optional) fills each directory as close to 650 MB as possible

    The DIR, COPY, and XCOPY commands have no explicit file-size switches. A few Google searches have turned up: file-size conditions in DOS; Cygwin and UWIN; DOS file sizes. It would be ideal if UNIX-like environments could be avoided. My question, then: how do you compare file (or directory) sizes using the Windows command line?

    Read the article

  • Failed to Upload a File

    - by CrazyNick
    User 'X' is the site-collection owner. He tries to upload a 500 KB file into a document library and gets the error "The server has aborted your upload. The files selected may exceed the server's upload size limit. If you are transferring a large group of files, try uploading fewer at a time." However, web-application owners are able to upload the file. What would be the issue? Any thoughts?

        Upload size limit for a file - 5 MB
        Site quota template set - 50 MB
        Used site quota - 10 MB

    Read the article

  • Should websites live in /var/ or /usr/ according to recommended usage?

    - by nbolton
    According to a guide on the Linux directory structure, /usr/ is for application files, and /var/ is for files that change (I assume this means "files that belong to the applications"). Is this correct? If this is the case then I'm a little torn between using either. A website is an application (if it's dynamic, so to speak), but in other cases it is just a collection of files used by Apache. The default www dir lives in /var/www/, so should we follow suit by using /var/websites/ (or something similar), or choose /usr/websites/ since they could be applications? This is a very trivial question, but it's bugging me nonetheless. For our case, I'm leaning toward /usr/web or something like that, since our websites are all applications. Update: This is for our company websites; it's not a shared hosting server, so we don't need to worry about separating them in /home/ or anything like that.

    Read the article

  • How to restore default iPod playlists on Amarok?

    - by obvio171
    I wanted to "reset" the collection on my iPod and ended up accidentally deleting, through Amarok, all the playlists, including the default ones like "Most Played" and "Highest Rated". Since these are dynamic playlists with a special meaning for iPod, I don't think creating new, normal playlists with the same name will bring their special behavior back. How do I restore them with the same dynamic functionality? Is there a way to do that on Amarok? Rhythmbox? GTKPod? Command line? P.S.: not entirely sure what the policy about iPod questions are, but this one in particular seems to me to be very computer-related because, although it's about interfacing with a device, everything has to be done on my computer, using standard PC libraries/programs, etc. If it's still off-topic, please point me to where I could post it.

    Read the article

  • Merge only one remote branch into a local branch with Mercurial

    - by Pepijn
    I want to manage some profiles as XML files in Mercurial repos. The setup I'm thinking of: each user has a repo with a branch where he manages his own profile, and a number of branches into which he can pull and merge other users' profiles. So, for example, I have my own profile branch and a branch labeled friends, into which I want to pull the profile branches of a few remote repos, to build up a collection of profiles. I figured out that since the repos are unrelated I need to use -f, but I can't figure out how to pull and merge only a single branch into another. I want something like:

        me                    friend            someone
        profile ---> friends <--- profile
               \-> family
                   friends <--- profile

    Is this even possible? Should I use separate repos instead? Is there a better solution?

    Read the article

  • mongoexport csv output array values

    - by 9point6
    I'm using mongoexport to export some collections into CSV files, but when I try to target fields that are members of an array, I cannot get it to export correctly. The command I'm using:

        mongoexport -d db -c collection --fieldFile fields.txt --csv > out.csv

    and the contents of fields.txt are similar to:

        id
        name
        address[0].line1
        address[0].line2
        address[0].city
        address[0].country
        address[0].postcode

    where the BSON data would be:

        {
          "id": 1,
          "name": "example",
          "address": [
            {
              "line1": "flat 123",
              "line2": "123 Fake St.",
              "city": "London",
              "country": "England",
              "postcode": "N1 1AA"
            }
          ]
        }

    What is the correct syntax for exporting the contents of an array?
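    For what it's worth, mongoexport expects MongoDB's dot notation rather than bracket indexing, so addressing the first array element as address.0.line1 may be the missing piece; a sketch of the adjusted fields.txt (untested against any particular server version):

        id
        name
        address.0.line1
        address.0.line2
        address.0.city
        address.0.country
        address.0.postcode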

    Read the article

  • How can I rip a DVD but merge/join episodes

    - by Nifle
    Some background: I have the Pink Panther Collection, with about 30 episodes on each DVD. I want to watch this on my $MobileDevice, so I converted it to m4v or avi. That went splendidly with both Handbrake and AutoGK, but the problem is that I want ONE file per DVD, and both Handbrake and AutoGK create one file per episode. So here, finally, is my question: does anyone know how to persuade Handbrake or AutoGK to create one video file with all the episodes? Or can anyone recommend another (free/cheap) tool for the job? Oh, and no cheating by telling me to join the files after conversion - I have never found a video joiner that didn't disappoint me (usually bad audio sync).

    Read the article

  • Nginx try_files or else continue matching against locations?

    - by Yang
    I'm wondering whether this is possible with Nginx: I just added a directory with a bunch of HTML files (foo.html, bar.html) that I'd like to serve with /foo, /bar, etc. If the URL doesn't match up with a file name I'd like to fall back to whatever the next best matching location would be. So I have: # This block is newly added. location ~ ^/([^/]+)$ { default_type text/html; alias /blah/$1.html; } # Our long list of existing subsystems below.... location /subscribe { proxy_pass http://127.0.0.1:5000; } location /upload { proxy_pass http://127.0.0.1:8090; proxy_read_timeout 99999; } location ~ /(data|garbage|blargh).* { proxy_pass http://127.0.0.1:8090; proxy_read_timeout 99999; auth_basic text; auth_basic_user_file /etc/nginx/htpasswd; } .... The problem is that the first regex now eats up the URLs that would've gone to other locations, as per the documented behavior of location. One approach is to maintain the full explicit list of files in the first location block, but this list is quite large and is always changing. Is there a way to check to see if the file exists first, and if not, then continue with what would've been the next-best location match? I took stabs using try_files (including using a @fallback and nesting locations in there) but I don't think it's capable of doing this. However I thought I'd ask here in case I'm missing something. (Or maybe there's another better approach altogether.)
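    For what it's worth, one common approximation (a sketch, untested against this exact config) scopes the lookup with try_files and falls back to a named location - though a named location is a single target, so it cannot resume full location matching, which is exactly the limitation described above:

        location / {
            root /blah;
            try_files $uri.html @apps;   # serve /foo from /blah/foo.html if present
        }
        location @apps {
            # single fallback; per-prefix routing still needs explicit locations
            proxy_pass http://127.0.0.1:8090;
        }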

    Read the article

  • Tag-based file organizer

    - by Richie
    I'm finishing professional school and over the years have acquired a pile of notes and articles that I'd like to hang onto. I'd like to add to them and build an archive of articles and files that may be useful down the road. I'd also like to organize this collection not only by simple grouping but also with tags; I feel that will make searching through it years later much easier. Any suggestions for software that would be good for this? Just a general file manager - something that supports tags attached to each file? Thanks

    Read the article

  • Struts 2 - problem with s:action tag

    - by Raghave
    Here is a small test application that does the following: it asks the user to enter his name and submit (index.jsp); the result of index.jsp is the welcome.jsp page, which asks the user to select his/her blood group.

    index.jsp

        <%@ page language="java" import="java.util.*" pageEncoding="ISO-8859-1"%>
        <%@ taglib prefix="s" uri="/struts-tags" %>
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html>
        <head>
        </head>
        <body>
        <form action="MyName">
        <s:textfield name="UserName" label="Enter Your Name"/>
        <s:submit/>
        </form><br>
        </body>
        </html>

    struts.xml

        <?xml version="1.0" encoding="UTF-8" ?>
        <!DOCTYPE struts PUBLIC
            "-//Apache Software Foundation//DTD Struts Configuration 2.1//EN"
            "http://struts.apache.org/dtds/struts-2.1.dtd">
        <struts>
            <package name="module1" namespace="" extends="struts-default">
                <action name="MyName" class="module1.User">
                    <result>/Welcome.jsp</result>
                </action>
                <action name="Blood_Group" class="module1.SelectBloodGroup" method="bloodGroupList"/>
            </package>
        </struts>

    SelectBloodGroup.java

        package module1;

        import java.util.ArrayList;
        import com.opensymphony.xwork2.ActionSupport;

        public class SelectBloodGroup extends ActionSupport {
            private ArrayList<BloodGroup> bglist;

            public String bloodGroupList() {
                bglist = new ArrayList<BloodGroup>();
                bglist.add(new BloodGroup("1", "A+"));
                bglist.add(new BloodGroup("2", "B+"));
                bglist.add(new BloodGroup("3", "AB+"));
                bglist.add(new BloodGroup("4", "O+"));
                bglist.add(new BloodGroup("5", "A-"));
                bglist.add(new BloodGroup("6", "B-"));
                bglist.add(new BloodGroup("7", "AB-"));
                bglist.add(new BloodGroup("8", "O-"));
                return "SUCCESS";
            }

            public ArrayList<BloodGroup> getBglist() {
                return bglist;
            }
        }

        class BloodGroup {
            private String id;
            private String bg;

            BloodGroup(String id, String bg) {
                this.id = id;
                this.bg = bg;
            }
        }

    welcome.jsp

        <%@ page language="java" import="java.util.*" pageEncoding="ISO-8859-1"%>
        <%@ taglib prefix="s" uri="/struts-tags" %>
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html>
        <head>
        </head>
        <body>
        <s:action name="Blood_Group" executeResult="false"/>
        <!-- *************** here is the problem *************** -->
        <s:select list="bglist" listKey="id" listValue="bg"/>
        <!-- *************************************************** -->
        </body>
        </html>

    Struts is unable to identify bglist as a collection, array, list or iterator. What should I assign to the list attribute in the s:select tag in welcome.jsp? What is wrong with the code? Please tell me in detail, and if you could, send me the corrected version. Why is the s:action tag not working? This is the error I am getting:

        Apr 13, 2010 1:49:19 PM org.apache.catalina.core.ApplicationDispatcher invoke
        SEVERE: Servlet.service() for servlet jsp threw exception
        tag 'select', field 'list': The requested list key 'bglist' could not be
        resolved as a collection/array/map/enumeration/iterator type.
        Example: people or people.{name} - [unknown location]

    Read the article

  • Database OR Array

    - by rezoner
    What is the exact point of using an external database system if I have simple relations (95% of queries depend on ID)? I am storing users and their stats. Why would I use an external database if I can have neat constructions like:

        db.users[32] = something

    An array of 500K users is not that big a burden on RAM. Pros are: no problematic asynchronicity (instant results); easy export/import; dealing with the database LITERALLY like a native object.

    P.S. and considerations: would it be faster or slower to do collection[3] than db.query("select ...? I am going to store it as a file or files. There is only ONE application/process accessing this data, and the code is executed line by line - please don't elaborate about locking. Please don't answer with database propositions, but with reasons to use an external DB over a native array/object - I have experience with a few databases; that's not the question. What I am building is a client/gateway/server(s) game. The gateway deals with all user data: processing, authenticating, writing statistics, etc. No other part of the software needs direct access to this data/database.

    Read the article

  • Defragment / Performance Monitor without Task Scheduler

    - by mjaggard
    My organisation has a policy of disabling Task Scheduler on all servers and workstations (don't ask; I tried once to wrestle the pig). I need to collect performance stats using Data Collector Sets in Windows 7 or Windows Server 2008, but the Performance Monitor interface requires Task Scheduler to be running. Is this possible, given that I'm not trying to schedule anything (except the collection of WMI information every 15 seconds, but I doubt it hands that task off to Task Scheduler)? Is there any way to trick it into thinking Task Scheduler is running? If not, is there any way to temporarily override the group policy to allow Task Scheduler to run? I've found that most group policy can be overridden in this way by an administrator by editing the registry. In exactly the same vein, I want to defragment a hard disk on one of my workstations, but I can't get it to start because of the dependency on Task Scheduler - is it possible to overcome this?

    Read the article

  • PDF creation software for docx and odt

    - by oxinabox.ucc.asn.au
    OK, I have a fairly large collection of docx and odt files - minutes from meetings, etc. Now I want to convert them to PDFs for distribution, and also into one combined PDF. At the moment I'm using Adobe Acrobat 8 (Pro, IIRC), and on another machine I'm using the Foxit PDF printer. To do this I have to print them each individually to PDF, and then I can combine them with Acrobat, because Acrobat doesn't support conversion straight from docx or odt to PDF - only via printing. This is annoying to do on a regular basis, since I don't keep the PDFs around (I have the originals source-controlled :-D) because they go out of date pretty quickly - I often have to go back and modify old versions (ridiculously often), e.g. when I find I've got something in the minutes wrong or want to add more context for clarification. Anyone got a better solution?
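    One scriptable alternative worth noting, assuming LibreOffice and poppler-utils are available (file names are illustrative): LibreOffice's headless mode converts docx/odt straight to PDF, and pdfunite concatenates the results:

        # batch-convert everything in the current directory, then combine
        libreoffice --headless --convert-to pdf *.docx *.odt
        pdfunite minutes-*.pdf combined.pdf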

    Read the article

  • Best way to rip DVD movies to ISO files

    - by alex
    I'm trying to back up my DVD collection. I have Handbrake, and will eventually experiment with the best settings to use. For now, I'd like to back up the DVDs to ISO files that I can mount and then run Handbrake on later, or burn back onto DVD should the original get damaged. I have a WD TV box that is capable of playing ISO files as well. What's the best program for doing this? I'm not so much concerned with file size.

    Read the article

  • Java max heap size: how much is too much?

    - by brad
    I'm having issues with a JRuby (rails) app running in tomcat. Occasionally page requests can take up to a minute to return (even though the rails logs processed the request in seconds so it's obviously a tomcat issue). I'm wondering what settings are optimal for the java heap size. I know there's no definitive answer, but I thought maybe someone could comment on my setup. I'm on a small EC2 instance which has 1.7g ram. I have the following JAVA_OPTS: -Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled My first thought is that Xmx is too high. If I only have 1.7gb and I allocated 1.5gb to java, i feel like I'll get a lot of paging. Typically my java process shows (in top) 1.1g res memory and 2g virtual. I also read somewhere that setting the Xms and Xmx to the same size will help as it eliminates time spend on memory allocation. I'm not a java person but I've been tasked with figuring out this problem and I'm trying to find out where to start. Any tips are greatly appreciated!! update I've started analyzing the garbage collection dumps using -XX:+PrintGCDetails When i notice these occasional long load times, the gc logs go nuts. the last one I did (which took 25s to complete) I had gc log lines such as: 1720.267: [GC 1720.267: [DefNew: 27712K->16K(31104K), 0.0068020 secs] 281792K->254096K(444112K), 0.0069440 secs] 1720.294: [GC 1720.294: [DefNew: 27728K->0K(31104K), 0.0343340 secs] 281808K->254080K(444112K), 0.0344910 secs] about 300 of them on a single request!!! Now, I don't totally understand why it's always GC'ng from ~28m down to 0 over and over.

    Read the article
