Search Results

Search found 4938 results on 198 pages for 'unix timestamp'.

Page 74/198 | < Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >

  • Java Web Server with Jetty - TCP Connections Taking Long

    - by daysleeper
    I have an application with fairly high traffic (20K req/min) running on the JVM with a Jetty servlet container on Ubuntu. Below is my Jetty configuration: 10 20 2000 2 When I analyze the network traffic, I see that establishing a TCP connection on the port Jetty is running on sometimes takes a long time. The slow connections vary between 3.0s and 9.0s. The port is configured to accept the maximum number of TCP connections. Do you know what might be causing the delay in accepting connections? Thanks
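
    One way to check whether the listener's accept queue is the bottleneck (a diagnostic sketch, not from the post; the port number is an assumption) is to look at the backlog counters on the server:

        # Listening sockets: Recv-Q is the current accept-queue depth,
        # Send-Q the configured backlog (8080 is a placeholder port).
        ss -ltn 'sport = :8080'

        # Kernel counters for dropped or overflowed listen queues.
        netstat -s | grep -i -E 'listen|SYNs'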

    Read the article

  • Symlink in Windows XP

    - by willson
    Hi there. The question is how to make something similar to a *nix symlink on Windows. It's really tedious to type the whole path to a file in the console (even using [Tab]; that's no help if you need to change language). Adding everything to PATH is tiring too. It would be great to make a symlink by running one command. Actually I'm looking for a console app.
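
    On XP specifically, two options come close (a sketch; the paths are examples, and junction.exe is the separate Sysinternals tool, not built in):

        rem Hard link to a file, built into XP via fsutil (both paths are examples).
        fsutil hardlink create C:\bin\np.exe C:\Tools\Notepad2\Notepad2.exe

        rem Directory junction using Sysinternals junction.exe.
        junction C:\link "C:\some\long\target\folder"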

    Read the article

  • The shell dotfile cookbook

    - by Jason Baker
    I constantly hear from other people about how much of the stuff they've used to customize their *nix setup they've shamelessly stolen from other people. So in that spirit, I'd like to start a place to share that stuff here on SO. Here are the rules: DON'T POST YOUR ENTIRE DOTFILE. Instead, just show us the cool stuff. One recipe per answer. You may, however, post multiple versions of your recipe in the same answer. For example, you may post a version that works for bash, a version that works for zsh, and a version that works for csh in the same answer. State which shells you know your recipe works with in the answer. Let's build this cookbook as a team. If you find out that an answer works with shells other than the one the author posted, edit that in. If you like an idea and rewrite it to work with another shell, edit the modified version into the original post. Give credit where credit is due. If you got your idea from someone else, give them credit if possible. And for those of you (justifiably) asking "Why do we need another one of these threads?": Most of what I've seen is along the lines of "post your entire dotfile." Personally, I don't want to parse through a person's entire dotfile to figure out what I want; I just want to know about the cool parts of it. It's also helpful to have a single dotfile thread. I think most of the stuff that works in bash will work in zsh, and it can be adapted to work with csh fairly easily.
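
    By way of illustration, a minimal recipe of the kind the question is after (not taken from any answer; known to work in bash and zsh):

        # mkcd: create a directory (and its parents) and cd into it in one step.
        mkcd() {
            mkdir -p -- "$1" && cd -- "$1"
        }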

    Read the article

  • find: What's up with basename and dirname?

    - by temp2290
    I'm using find for a task and I noticed that when I do something like this: find `pwd` -name "file.ext" -exec echo $(dirname {}) \; it prints only a dot for each match. When you s/dirname/basename/ in that command, you get the full pathnames. Am I screwing something up here, or is this expected behavior? I'm used to basename giving you the name of the file (in this case "file.ext") and dirname giving you the rest of the path.
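
    For what it's worth, the behavior follows from the shell expanding $(dirname {}) before find ever runs: dirname sees the literal string {} (no slash, so it prints .), while basename {} just echoes {} back, which find then replaces with the full path. A sketch of one way to run dirname per match instead (the file name is the question's example):

        # Run dirname inside the shell that -exec spawns, so it sees the real path.
        find "$(pwd)" -name "file.ext" -exec sh -c 'dirname "$1"' sh {} \;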

    Read the article

  • Notify via email if something goes wrong in the shell script

    - by Nevzz03
        fileexist=0
        for i in $( ls /data/read-only/clv/daily/Finished-HADOOP_EXPORT_&processDate#.done); do
            mv /data/read-only/clv/daily/Finished-HADOOP_EXPORT_&processDate#.done /data/read-only/clv/daily/archieve-wip/
            fileexist=1
        done
        --some other script below

    Above is the shell script I have, in which, in the for loop, I am moving some files. I want to notify myself via email if something goes wrong in the moving process, since I am running this script on the Hadoop cluster and it is quite possible that the cluster goes down while this is running, etc. So how can I have a better error-handling mechanism in this shell script? Any thoughts?
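
    A minimal sketch of one way to do it (the glob stands in for the post's &processDate# placeholder, and the address and subject are placeholders): test each mv and mail yourself on failure.

        for f in /data/read-only/clv/daily/Finished-HADOOP_EXPORT_*.done; do
            if ! mv "$f" /data/read-only/clv/daily/archieve-wip/; then
                echo "Failed to move $f on $(hostname) at $(date)" \
                    | mail -s "daily export move failed" you@example.com
                exit 1
            fi
            fileexist=1
        done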

    Read the article

  • What's the best multi-threaded application debugger for C++ apps?

    - by Coredumped
    I'm looking for a good multi-thread-aware debugger, capable of showing performance charts of application threads on Linux. I don't know if such a thing exists, perhaps as an Eclipse plugin. The idea would be to track per-thread memory allocation and CPU usage, as well as being able to interrupt a thread and examine its stack trace, local vars, etc. It does not have to be an Eclipse plugin or a free tool. Have any of you heard of something similar?
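
    As a baseline, plain gdb already covers the interrupt-and-inspect part; a session sketch (the pid is a placeholder):

        gdb -p 12345               # attach to the running process
        (gdb) info threads         # list all threads
        (gdb) thread apply all bt  # stack trace of every thread
        (gdb) thread 3             # switch to one thread
        (gdb) bt full              # its stack trace with local variables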

    Read the article

  • Is this safe? Is this OK to do in MySQL?

    - by alex
    I have always done this:

        mysqldump -hlocalhost -uuser -ppass MYDATABASE > /home/f/db_backup/MYDATABASE.sql
        mysql -uuser -ppass MYDATABASE < MYDATABASE.sql

    But if I do this instead... is this safe? Is this identical to the above?

        mysqldump -hlocalhost -uuser -ppass MYDATABASE | gzip > /home/f/db_backup/MYDATABASE.sql.gz
        zcat MYDATABASE.sql.gz | mysql -uuser -ppass MYDATABASE
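
    gzip is lossless, so once decompressed the piped dump is byte-for-byte what mysqldump wrote. A small sketch of checks you can add before trusting the backup (paths from the question):

        # Verify the compressed dump is intact.
        gzip -t /home/f/db_backup/MYDATABASE.sql.gz && echo "archive OK"

        # Restore exactly as in the question.
        zcat /home/f/db_backup/MYDATABASE.sql.gz | mysql -uuser -ppass MYDATABASE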

    Read the article

  • Using BASH - Find CSS block or definition and print to screen

    - by Brian
    I have a number of .css files spread across some directories. I need to find those .css files, read them, and if they contain a particular class definition, print it to the screen. For example, I'm looking for ".ExampleClass" and it exists in /includes/css/MyStyle.css; I would want the shell command to print .ExampleClass { color: #ff0000; }
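
    A sketch of one way to do it (the class name and search root are the question's examples; it assumes simply formatted CSS with the closing brace on its own line):

        # Print everything from a line mentioning .ExampleClass through the next "}".
        find . -name '*.css' -exec awk '/\.ExampleClass/,/}/' {} +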

    Read the article

  • How to manipulate a string variable in shell

    - by user558134
    Hi everyone! I have this variable in shell containing paths separated by a space: LINE="/path/to/manipulate1 /path/to/manipulate2" I want to add an additional path string at the beginning of the string, and also right after the space, so that the variable ends up something like this: LINE="/additional/path1/to/path/to/manipulate1 additional/path2/to/path/to/manipulate2" Any help appreciated. Thanks in advance
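
    A sketch of one way to do it in bash (the prefixes are placeholders): split on the space and prepend a per-path prefix.

        LINE="/path/to/manipulate1 /path/to/manipulate2"
        prefixes=("/additional/path1/to" "/additional/path2/to")

        NEW=""
        i=0
        for p in $LINE; do                 # relies on word splitting at the space
            NEW+="${prefixes[$i]}$p "
            i=$((i + 1))
        done
        NEW="${NEW% }"                     # drop the trailing space

        echo "$NEW"
        # /additional/path1/to/path/to/manipulate1 /additional/path2/to/path/to/manipulate2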

    Read the article

  • Text substitution (reading from a file and saving to the same file) on Linux with sed...

    - by Roger
    I want to read the file "teste", do some find & replace, and overwrite "teste" with the results. The closest I've got so far is:

        $ cat teste
        I have to find something
        This is hard to find...
        Find it wright now!
        $ sed -n 's/find/replace/w teste1' teste
        $ cat teste1
        I have to replace something
        This is hard to replace...

    If I try to save to the same file like this:

        $ sed -n 's/find/replace/w teste' teste

    or:

        $ sed -n 's/find/replace/' teste > teste

    the result will be a blank file... I know I am missing something very stupid, but any help will be welcome. UPDATE: Based on the tips given by the folks and this link: http://idolinux.blogspot.com/2008/08/sed-in-place-edit.html here's my updated code:

        sed -i -e 's/find/replace/g' teste
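
    The reason the redirection version produces an empty file is that the shell truncates teste before sed ever reads it. A sketch of the portable workaround, which is essentially what -i does for you behind the scenes:

        sed 's/find/replace/g' teste > teste.tmp && mv teste.tmp teste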

    Read the article

  • What's the use of 0 in the wait system call?

    - by Supereme
    Hi, the syntax for the wait system call is pid = wait(&var), where pid is the process id of the child and var is the variable that will contain the reason the child exited. But what happens when we use wait((int *)0)? What does it mean exactly? Thank you.
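
    Passing a null pointer simply tells wait() that you don't care about the status value; the call still blocks until a child terminates. A minimal sketch:

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            pid_t pid = fork();
            if (pid == 0) {               /* child */
                _exit(42);
            }
            pid_t done = wait(NULL);      /* same as wait((int *)0): status is discarded */
            printf("child %d finished\n", (int)done);
            return 0;
        }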

    Read the article

  • Test whether a remote file is a directory

    - by soField
        HOSTNAME=$1
        #missing files will be created by chk_dir
        for i in `cat filesordirectorieslist_of_remoteserver`
        do
            isdir=remsh $HOSTNAME "if [ -d $i ]; then echo dir; else echo file; fi"
            if [ $isdir -eq "dir" ]
            then
                remsh $HOSTNAME "ls -d $i | cpio -o" | cpio -id
            else
                remsh $HOSTNAME "ls | cpio -o" | cpio -id
            fi
        done

    I need a simple solution for checking whether a remote file is a directory or a regular file. Thanks
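
    A sketch of the two fixes the snippet needs (remsh, $HOSTNAME and $i are taken from the question): capture the remote command's output with command substitution, and compare it as a string with =, not the numeric -eq.

        isdir=$(remsh "$HOSTNAME" "[ -d $i ] && echo dir || echo file")
        if [ "$isdir" = "dir" ]; then
            echo "$i is a directory on $HOSTNAME"
        else
            echo "$i is a regular file on $HOSTNAME"
        fi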

    Read the article

  • How can I map UIDs to user names using Perl library functions?

    - by Mike
    I'm looking for a way of mapping a uid (the unique number representing a system user) to a user name using Perl. Please don't suggest grepping /etc/passwd :) Edit: As a clarification, I wasn't looking for a solution that involves reading /etc/passwd explicitly. I realize that under the hood any solution would end up doing this, but I was searching for a library function to do it for me.
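
    For reference, Perl's built-in getpwuid() does exactly this lookup through the system's password database. A minimal sketch (the UID is an example):

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $uid  = 1000;                      # example UID
        my $name = getpwuid($uid);            # scalar context returns the login name
        print defined $name ? "$uid => $name\n" : "no user with uid $uid\n";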

    Read the article

  • Best way to install web applications (e.g. Jira) on Unixes?

    - by gineer
    Can you share some pointers on the best way, or best practices, to install a web application on Unix systems? For example: where to place the app and its data, how to configure it to be secure and easy to back up, etc. One suggestion I already know of is to set up a unique user for each app. The app in question is Jira on FreeBSD, but more general suggestions are also welcome.
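
    As a sketch of the "unique user per app" suggestion on FreeBSD (names and paths are examples, not a recommendation for any particular layout): create an unprivileged account for Jira and give it ownership of only its own tree.

        pw useradd jira -c "Jira service account" -d /usr/local/jira -s /usr/sbin/nologin
        mkdir -p /usr/local/jira
        chown -R jira /usr/local/jira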

    Read the article

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE DATE(timestamp) >= '20100101'
          AND DATE(timestamp) <  '20110101'
        GROUP BY day;

    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

        GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
          ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
                Sort Key: (date("timestamp"))
                Sort Method: external merge  Disk: 372496kB
                ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                      Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried to index on the date expression:

        CREATE INDEX ON actions (DATE(timestamp));

    which cuts the runtime by about 50%:

        HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
          ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
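
    Since roughly 17 of the 20 million rows fall inside the range, the sequential scan itself is hard to beat; the big win in the second plan was losing the external-merge sort. One further route (a sketch, not from the post; the rollup table is an assumption) is to maintain a per-day summary and query that instead:

        -- Table and column names follow the question; actions_daily is hypothetical.
        CREATE TABLE actions_daily (
            day   date PRIMARY KEY,
            count bigint NOT NULL
        );

        INSERT INTO actions_daily (day, count)
        SELECT DATE(timestamp), COUNT(*)
        FROM actions
        GROUP BY DATE(timestamp);

        -- The original query becomes a cheap range scan over a few hundred rows.
        SELECT day, count
        FROM actions_daily
        WHERE day >= '2010-01-01' AND day < '2011-01-01';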

    Read the article

  • What is the point of having a key_t if what will be the key to access shared memory is the return value of shmget()?

    - by devoured elysium
    When using shared memory, why should we care about creating a key

        key_t ftok(const char *path, int id);

    in the following bit of code?

        key_t key;
        int shmid;

        key = ftok("/home/beej/somefile3", 'R');
        shmid = shmget(key, 1024, 0644 | IPC_CREAT);

    From what I've come to understand, what is needed to access a given piece of shared memory is the shmid, not the key. Or am I wrong? If what we need is the shmid, what is the point in not just creating a random key every time?

    Edit: In the linked text one can read:

        What about this key nonsense? How do we create one? Well, since the type key_t is actually just a long, you can use any number you want. But what if you hard-code the number and some other unrelated program hard-codes the same number but wants another queue? The solution is to use the ftok() function which generates a key from two arguments.

    Reading this, it gives me the impression that what one needs to attach to a shared-memory block is the key. But this isn't true, is it? Thanks
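
    That impression is the confusing part: the key is only used to locate or create the segment, while attaching always goes through the id that shmget() returns. A minimal sketch (the path is an example):

        #include <stdio.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>

        int main(void) {
            key_t key  = ftok("/tmp", 'R');                /* example path */
            int shmid  = shmget(key, 1024, 0644 | IPC_CREAT);
            char *data = shmat(shmid, NULL, 0);            /* attach by id, not by key */
            if (data == (char *)-1) {
                perror("shmat");
                return 1;
            }
            shmdt(data);
            return 0;
        }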

    Read the article

  • malloc in kernel

    - by yoavstr
    When I try to call malloc in a kernel module I get yelled at by the compiler. The call:

        res = (ListNode *)malloc(sizeof(ListNode));

    and the compiler's complaint:

        /root/ex3/ex3mod.c:491: error: implicit declaration of function ‘malloc’

    What should I do?
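
    Kernel code cannot call the userspace malloc(); the usual replacement is kmalloc()/kfree() from <linux/slab.h>. A sketch using the question's ListNode type:

        #include <linux/slab.h>

        static ListNode *alloc_node(void)
        {
                ListNode *res = kmalloc(sizeof(*res), GFP_KERNEL);
                if (!res)                /* kmalloc can fail: always check */
                        return NULL;
                return res;
        }

        /* ... and later, when the node is no longer needed: kfree(res); */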

    Read the article

  • Create a backup file descriptor?

    - by BobTurbo
        stdinBackup = 4;
        dup2(0, stdinBackup);

    Currently I am doing the above to 'back up' stdin so that it can be restored from the backup later, after stdin has been redirected somewhere else. I have a feeling that I am doing a lot wrong (e.g. arbitrarily assigning 4 is surely not right). Can anyone point me in the right direction?
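
    A minimal sketch of the usual pattern: let dup() pick a free descriptor instead of hard-coding 4, then restore with dup2() (somefile_fd stands for the already-opened redirect target).

        #include <unistd.h>

        void with_redirected_stdin(int somefile_fd)
        {
            int stdin_backup = dup(STDIN_FILENO);   /* kernel picks a free fd */
            dup2(somefile_fd, STDIN_FILENO);        /* redirect stdin */
            /* ... read the redirected input ... */
            dup2(stdin_backup, STDIN_FILENO);       /* restore the original stdin */
            close(stdin_backup);
        }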

    Read the article

  • After dup2, stream still contains old contents?

    - by BobTurbo
    So if I do:

        dup2(0, backup);      // backup stdin
        dup2(somefile, 0);    // somefile has four lines of content
        fgets(...stdin);      // consume one line
        fgets(....stdin);     // consume two lines
        dup2(backup, 0);      // switch stdin back to keyboard

    I am finding at this point that stdin still contains the two lines I haven't consumed. Why is that? Because there is just one buffer no matter how many times you redirect? How do I get rid of the two lines left, but still remember where I was in the somefile stream when I want to go back to it?
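
    The leftover lines live in stdin's stdio buffer, which belongs to the FILE*, not to file descriptor 0, so dup2() swaps the descriptor underneath without touching the buffered bytes. One way around it (a sketch; "somefile" is a placeholder name) is to skip the redirection and give the file its own stream, so its buffer and position are independent of the keyboard's:

        #include <stdio.h>

        int main(void) {
            char line[256];
            FILE *f = fopen("somefile", "r");
            if (!f) { perror("fopen"); return 1; }

            fgets(line, sizeof line, f);        /* first line of the file    */
            fgets(line, sizeof line, f);        /* second line               */

            fgets(line, sizeof line, stdin);    /* keyboard input, untouched */

            fgets(line, sizeof line, f);        /* picks up at the third line */
            fclose(f);
            return 0;
        }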

    Read the article
