Search Results

Search found 36619 results on 1465 pages for 'damn small linux'.


  • What TERM to use to get rid of color escape codes?

    - by slivu
    Is there a way to get rid of escape codes in terminal output? That is, even if a script sends those codes, the terminal should ignore them and display the text as is, without colors, bold, etc. I need to display terminal output on an HTML page. For now I'm using JavaScript to remove the escape codes, but it has become clunky because I receive the output character by character and have to wait until all of it has arrived before updating the page, which leads to weird effects.
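
    On the consumer side, the stripping itself is simple; below is a rough sketch in C++ of the same filtering the asker currently does in JavaScript. It only handles the common CSI ("ESC [") form of escape sequence and is an illustration, not a complete parser. Setting TERM=dumb before running the script can also discourage well-behaved programs that consult terminfo from emitting color codes in the first place.

        #include <cstddef>
        #include <iostream>
        #include <string>

        // Remove ANSI CSI sequences such as "\x1b[1;31m" from a chunk of output.
        std::string strip_ansi(const std::string &in) {
            std::string out;
            for (std::size_t i = 0; i < in.size(); ++i) {
                if (in[i] == '\x1b' && i + 1 < in.size() && in[i + 1] == '[') {
                    i += 2;
                    // skip parameter/intermediate bytes until the final byte (0x40-0x7e);
                    // the final byte itself is consumed by the loop increment
                    while (i < in.size() && (in[i] < 0x40 || in[i] > 0x7e))
                        ++i;
                } else {
                    out += in[i];
                }
            }
            return out;
        }

        int main() {
            std::string colored = "\x1b[1;31mhello\x1b[0m world";
            std::cout << strip_ansi(colored) << "\n";   // prints "hello world"
            return 0;
        }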

    Read the article

  • SSH (OpenSSH) questions

    - by Camran
    I have an Ubuntu 9.10 server. Firstly, is OpenSSH the same as sshd? Secondly, in the terminal, when typing whereis sshd I get this:

        whereis sshd
        /usr/sbin/sshd

    When typing whereis openssh I get this:

        whereis openssh
        /usr/lib/openssh

    How do I know whether I have OpenSSH installed? Also, some tutorials online suggest opening sshd_config, but this is all I get:

        whereis sshd_config
        /usr/share/man/man5/sshd_config.5.gz

    What should I do? As you pointed out when answering my other question about security, what matters is how you configure SSH. Is there any guide for this? How should I configure it? I will be the only user of this server, by the way. If you need more input, let me know and I will update this question. Thanks

    Read the article

  • Editing java.security - says it is [read only] (Fedora)

    - by jax
    I am trying to edit the file java.security, but it opens as read-only. I am running as the root user, but I think this is happening because a Java process is currently using the file. How can I temporarily stop the process, edit the file, and then start Java up again? I am using Fedora.

    Read the article

  • Will modules installed with the insmod command persist after rebooting?

    - by apache
    Here is how the book I'm reading describes the insmod utility: "The program loads the module code and data into the kernel, which, in turn, performs a function similar to that of ld, in that it links any unresolved symbol in the module to the symbol table of the kernel. Unlike the linker, however, the kernel doesn’t modify the module’s disk file, but rather an in-memory copy." It sounds like the module won't persist, since the change is only made to an in-memory copy, but I'm not sure.

    Read the article

  • Slow data transfer using SSH

    - by Floste
    The server is an Ubuntu Server 11.04 machine running sshd. SSH works fine for console programs, but data transfer is slow, which is very annoying when transferring large files. I tried two different client programs and changed the port, but the speed is always the same. I know the server can transfer data a lot faster over SSL, which as far as I know also uses AES, so I configured my SSH client to use AES too, but it had no effect. Why is SSH several times slower than SSL, and is there a way to improve SSH transfer speed?

    Read the article

  • The amount of memory used by each process

    - by tuxsmouf
    I have a MySQL server running Debian with 2 GB of RAM. I would like to know the amount of memory used by each process. I thought ps -aux was the right command and options for it, but it only shows about 90 MB used across several processes, while free -m tells me that 1400 MB are in use. Is there a way to get a better view of the processes and the memory used by each of them?

    Read the article

  • Command-line LVM issue on CentOS 5

    - by alex-M
    Using the LVM GUI I am able to set things up as follows (the resulting fstab entries and df output):

        /dev/var-v0l/var        /var      ext3  defaults  1 2
        /dev/varopt-vol/var-opt /var/opt  ext3  defaults  1 2

        $ df
        /dev/mapper/var-v0l    103208224  1881092  96084460   2% /var
        /dev/mapper/varopt-vol 103208224   192252  97773300   1% /var/opt

    But when I create the volumes from the command line with LVM, I cannot get the same result:

        $ df
        df: `/var/opt': No such file or directory
        /dev/mapper/var-v0l    103208224  1881092  96084460   2% /var

    What am I missing?

    Read the article

  • How to change default user (ubuntu) via CloudInit on AWS

    - by Gui Ambros
    I'm using CloudInit to automate the startup of my instances on AWS. I followed the (scarce) documentation available at http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/annotate/head%3A/doc/examples/cloud-config.txt and the examples in /usr/share/doc/cloud-init, but I still haven't figured out how to change the default username (ubuntu, id 1000). I know I could write a script that manually deletes the default ubuntu user and adds my own, but that seems counterintuitive given that CloudInit exists exactly to automate the initial setup. Any ideas?

    Read the article

  • Why is ksoftirqd using 100% of the CPU?

    - by Yegor
    I'm running Fedora Core release 12. I'm always seeing ksoftirqd/x (x being 0-9) at the top of the process list, at 100% CPU. The server has a bonded 2 Gbit connection and serves files from an SSD array; it is currently pushing about 1.6 Gbit. Server load is around 1.5 (dual quad-core), and iowait is non-existent.

    Read the article

  • Automatically kill processes that use 95%+ of resources over time? (Ubuntu)

    - by data_jepp
    I don't know about your computer, but when mine is working properly no process uses 95%+ of the CPU over a long period. I would like some failsafe that kills any process behaving like that. This comes to mind because when I woke up this morning my laptop had been crunching all night on a stray Chromium child process. This can probably be done as a cron job, but before I make a full-time job out of creating something like this I thought I should check here. :) I hate reinventing the wheel.
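
    A minimal sketch of one possible approach, in C++: sample per-process CPU usage with GNU ps (the pid and pcpu output keywords) and send SIGTERM to anything above a threshold. The 95% cutoff, the choice of SIGTERM, and running it from cron are illustrative assumptions; a real watchdog would also want a whitelist and several consecutive samples before killing anything.

        #include <signal.h>
        #include <stdio.h>
        #include <unistd.h>

        int main() {
            const double threshold = 95.0;   // illustrative cutoff
            // GNU ps: print pid and cumulative %CPU for every process, no header line
            FILE *ps = popen("ps -eo pid,pcpu --no-headers", "r");
            if (!ps) { perror("popen"); return 1; }

            long pid; double pcpu;
            while (fscanf(ps, "%ld %lf", &pid, &pcpu) == 2) {
                if (pcpu >= threshold && pid != getpid()) {
                    fprintf(stderr, "killing pid %ld (%.1f%% CPU)\n", pid, pcpu);
                    kill((pid_t)pid, SIGTERM);   // ask politely first
                }
            }
            pclose(ps);
            return 0;
        }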

    Read the article

  • Simple C++ thread with a parameter (no .NET)

    - by Marc Vollmer
    I've searched the internet for a while now and found different solutions, but they all either don't really work or are too complicated for my use. I used C++ until about two years ago, so I might be a bit rusty. :D I'm currently writing a program that posts data to a URL; it only posts the data, nothing else. For posting I use curl, but it blocks the main thread, and while the first post is still running a second post may need to start. In the end there are about 5-6 post operations running at the same time. Now I want to push the curl posting into another thread, one thread per post, and the thread should receive a string parameter with the content to push. This is where I'm stuck. I tried the WINAPI for Windows, but it crashes when reading the parameter (in my example the spawned thread is still running while the main thread has already reached the system("pause") call). It would be nice to have a cross-platform solution, because it will run under both Windows and Linux! Here's my current code:

        #define CURL_STATICLIB
        #include <curl/curl.h>
        #include <curl/easy.h>
        #include <cstdlib>
        #include <iostream>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string>
        #if defined(WIN32)
        #include <windows.h>
        #else
        //#include <pthread.h>
        #endif

        using namespace std;

        void post(string post) {                // function to post to the url
            CURL *curl;                         // curl object
            CURLcode res;                       // CURLcode object
            curl = curl_easy_init();            // init curl
            if (curl) {                         // is curl initialized?
                curl_easy_setopt(curl, CURLOPT_URL, "http://10.8.27.101/api.aspx");  // set url
                string data = "api=" + post;    // concatenate post data strings
                curl_easy_setopt(curl, CURLOPT_POSTFIELDS, data.c_str());            // post data
                res = curl_easy_perform(curl);  // execute
                curl_easy_cleanup(curl);        // cleanup
            } else {
                cerr << "Failed to create curl handle!\n";
            }
        }

        #if defined(WIN32)
        DWORD WINAPI thread(LPVOID data) {      // WINAPI thread
            string pData = *((string*)data);    // convert LPVOID to string [THIS FAILS]
            post(pData);                        // post it with curl
        }
        #else
        // Linux version
        #endif

        void startThread(string data) {         // function to start the thread
            string pData = data;                // some test
        #if defined(WIN32)
            CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)thread, &pData, 0, NULL);  // start a Windows thread with WINAPI
        #else
            // Linux version
        #endif
        }

        int main(int argc, char *argv[]) {
            // the post data to send
            string postData = "test1234567890";
            startThread(postData);              // start the thread
            system("PAUSE");                    // don't close the console window
            return EXIT_SUCCESS;
        }

    Does anyone have a suggestion? Thanks for the help!
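
    A minimal cross-platform sketch of the threading part, assuming a C++11 compiler (std::thread works on both Windows and Linux). std::thread copies its arguments into the new thread, so the string can be passed by value and the dangling-pointer problem with &pData goes away. The stand-in post() below just prints; the real body would make the curl calls exactly as in the code above.

        // build with e.g.: g++ -std=c++11 -pthread demo.cpp
        #include <iostream>
        #include <string>
        #include <thread>
        #include <vector>

        // Stand-in for the curl-based post() from the question.
        void post(std::string body) {
            std::cout << "posting: " << body << "\n";
        }

        int main() {
            std::vector<std::thread> workers;
            // One thread per post; each thread gets its own copy of the string.
            for (int i = 0; i < 5; ++i)
                workers.emplace_back(post, "test" + std::to_string(i));
            for (auto &w : workers)
                w.join();   // wait for all posts to finish before exiting
            return 0;
        }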

    Read the article

  • How to force two processes to run on the same CPU?

    - by kovan
    Context: I'm programming a software system that consists of multiple processes. It is written in C++ under Linux, and the processes communicate with each other using Linux shared memory. As usual in software development, performance optimization is done in the final stage, and that is where I hit a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one physical CPU) it was only able to use 3 cores, wasting 25% of the CPU power on the former and more than 60% on the latter. After a lot of research, and after ruling out mutex and lock contention, I found that the time was being wasted on shmdt/shmat calls (detaching from and attaching to shared memory segments). After some more research I found that these CPUs, which are usually AMD Opterons and Intel Xeons, use a memory architecture called NUMA, which basically means that each processor has its own fast "local" memory, while accessing memory attached to other CPUs is expensive. After running some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread within them. This seems to kill performance, as processes are constantly accessing memory that belongs to other processes. Question: Is there any way to force pairs of processes to execute on the same CPU? I don't mean forcing them to always execute on one particular processor, as I don't care which one they run on, although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule its "brother" process (the one it communicates with through shared memory) on that same processor, so that performance is not penalized.
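
    One way to approximate this on Linux is CPU affinity: pin both cooperating processes to the same core (or at least to cores on the same NUMA node) with sched_setaffinity. A minimal sketch, assuming the two PIDs are already known; pinning both to core 0 is an illustrative simplification, not a recommendation, and the taskset utility does the same thing from the shell without any code.

        // Linux-only: build with g++ pin.cpp -o pin, run as: ./pin <pid1> <pid2>
        #ifndef _GNU_SOURCE
        #define _GNU_SOURCE
        #endif
        #include <sched.h>
        #include <sys/types.h>
        #include <cstdio>
        #include <cstdlib>

        // Pin one process to a single CPU core.
        static bool pin_to_cpu(pid_t pid, int cpu) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            return sched_setaffinity(pid, sizeof(set), &set) == 0;
        }

        int main(int argc, char **argv) {
            if (argc != 3) {
                std::fprintf(stderr, "usage: %s <pid1> <pid2>\n", argv[0]);
                return 1;
            }
            // Put both communicating processes on core 0 so the shared memory
            // they exchange stays local to one NUMA node.
            for (int i = 1; i <= 2; ++i) {
                pid_t pid = (pid_t)std::atol(argv[i]);
                if (!pin_to_cpu(pid, 0))
                    std::perror("sched_setaffinity");
            }
            return 0;
        }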

    Read the article

  • POSIX AIO Library and Callback Handlers

    - by Charles Salvia
    According to the documentation on aio_read/aio_write, there are basically two ways that the AIO library can inform your application that an async file I/O operation has completed: 1) you can use a signal, or 2) you can use a callback function. I think that callback functions are vastly preferable to signals, and would probably be much easier to integrate into higher-level multi-threaded libraries. Unfortunately, the documentation for this functionality is a mess, to say the least. Some sources, such as the man page for the sigevent struct, indicate that you need to set the sigev_notify data member in the sigevent struct to SIGEV_CALLBACK and then provide a function handler. Presumably, the handler is invoked in the same thread. Other documentation indicates you need to set sigev_notify to SIGEV_THREAD, which will invoke the callback handler in a newly created thread. In any case, on my Linux system (Ubuntu with a 2.6.28 kernel) SIGEV_CALLBACK doesn't seem to be defined anywhere, but SIGEV_THREAD works as advertised. Unfortunately, creating a new thread to invoke the callback handler seems really inefficient, especially if you need to invoke many handlers. It would be better to use an existing pool of threads, similar to the way most network I/O event demultiplexers work. Some versions of UNIX, such as QNX, include a SIGEV_SIGNAL_THREAD flag, which allows you to invoke handlers using a specified existing thread, but this doesn't seem to be available on Linux, nor does it even seem to be part of the POSIX standard. So, is it possible to use the POSIX AIO library in a way that invokes user handlers in a pre-allocated background thread/threadpool, rather than creating/destroying a new thread every time a handler is invoked?
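
    For reference, a minimal sketch of the SIGEV_THREAD notification path that does work with glibc's POSIX AIO (link with -lrt); the file name and buffer size are just examples. It illustrates the per-notification thread behaviour discussed above rather than a thread-pool workaround.

        // g++ aio_demo.cpp -lrt -o aio_demo
        #include <aio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <cstdio>
        #include <cstring>

        static char buf[4096];

        // Invoked by the AIO implementation, in a new thread, when the read completes.
        static void on_complete(union sigval sv) {
            struct aiocb *cb = static_cast<struct aiocb *>(sv.sival_ptr);
            std::printf("read %zd bytes\n", aio_return(cb));
        }

        int main() {
            int fd = open("/etc/hostname", O_RDONLY);   // example file
            if (fd < 0) { std::perror("open"); return 1; }

            struct aiocb cb;
            std::memset(&cb, 0, sizeof(cb));
            cb.aio_fildes = fd;
            cb.aio_buf    = buf;
            cb.aio_nbytes = sizeof(buf);
            cb.aio_offset = 0;
            cb.aio_sigevent.sigev_notify          = SIGEV_THREAD;   // callback, not a signal
            cb.aio_sigevent.sigev_notify_function = on_complete;
            cb.aio_sigevent.sigev_value.sival_ptr = &cb;

            if (aio_read(&cb) != 0) { std::perror("aio_read"); return 1; }
            sleep(1);   // crude: give the completion callback time to run
            close(fd);
            return 0;
        }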

    Read the article

  • Creating customized .dmg files upon download

    - by Marten
    I want to distribute a cross-platform application for which the executable file is slightly different, depending on the user who downloaded it. This is done by having a placeholder string somewhere in the executable that is replaced with something user-specific upon download. The webserver that has to do these string replacements is a Linux machine. For Windows, the executable is not compressed in the installer .exe, so the string replacement is easy. For uncompressed Mac OS X .dmg files, this is also easy. However, .dmg files that are compressed with either gzip or bzip2 are not so easy. For example, in the latter case, the compressed .dmg is not one big bzip2-compressed disk image, but instead consists of a few different bzip2-compressed parts (with different block sizes) and a plist suffix. Also, decompressing and recompressing the different parts with bzip2 does not result in the original data, so I'm guessing Apple uses some different parameters to bzip2 than the command-line tool. Is there a way to generate a compressed .dmg from an uncompressed one on Linux (which does not have hdiutil)? Or maybe another suggestion for creating customized applications without pregenerating them? It should work without any input by the user.

    Read the article

  • Including variables inside curly braces in a Zend config ini file on Linux

    - by Dave Morris
    I am trying to include a variable in a .ini file setting by surrounding it with curly braces, and Zend complains that it cannot parse the file properly on Linux. It works properly on Windows, though:

        welcome_message = Welcome, {0}.

    This is the error that is thrown on Linux:

        Uncaught exception 'Zend_Config_Exception' with message 'Error parsing /var/www/html/portal/application/configs/language/messages.ini on line 10' in /usr/local/zend/share/ZendFramework/library/Zend/Config/Ini.php:181
        Stack trace:
        #0 /usr/local/zend/share/ZendFramework/library/Zend/Config/Ini.php(201): Zend_Config_Ini->_parseIniFile('/var/www/html/p...')
        #1 /usr/local/zend/share/ZendFramework/library/Zend/Config/Ini.php(125): Zend_Config_Ini->_loadIniFile('/var/www/html/p...')
        #2 /var/www/html/portal/library/Ingrain/Language/Base.php(49): Zend_Config_Ini->__construct('/var/www/html/p...', NULL)
        #3 /var/www/html/portal/library/Ingrain/Language/Base.php(23): Ingrain_Language_Base->setConfig('messages.ini', NULL, NULL)
        #4 /var/www/html/portal/library/Ingrain/Language/Messages.php(7): Ingrain_Language_Base->__construct('messages.ini', NULL, NULL, NULL)
        #5 /var/www/html/portal/library/Ingrain/Helper/Language.php(38): Ingrain_Language_Messages->__construct()
        #6 /usr/local/zend/share/ZendFramework/library/Zend/Contr in

    We are able to get the error to go away on Linux if we surround the braces with quotes, but that seems like a strange solution:

        welcome_message = Welcome, "{"0"}".

    Is there a better way to solve this issue for all platforms? Thanks for your help, Dave

    Read the article

  • Embedded systems code with good unit tests?

    - by rmk
    I am looking at approaches to unit testing embedded systems code written in C. At the same time, I am also looking for a good unit-test framework that I can use; it should have a reasonably small number of dependencies. Are there any great open-source products with good unit tests?
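
    For a sense of how small such a harness can be, here is a minimal dependency-free sketch (plain C-style code that also builds as C++); the CHECK macro and the sample add() function are made up for illustration, not taken from any particular framework. Real frameworks aimed at embedded work (Unity, CppUTest, and similar) add fixtures, mocks, and reporting on top of essentially the same idea.

        #include <stdio.h>

        static int tests_run = 0, tests_failed = 0;

        // Minimal check macro: record and report a failure, keep running.
        #define CHECK(cond)                                                   \
            do {                                                              \
                ++tests_run;                                                  \
                if (!(cond)) {                                                \
                    ++tests_failed;                                           \
                    printf("FAIL %s:%d: %s\n", __FILE__, __LINE__, #cond);    \
                }                                                             \
            } while (0)

        // Example code under test (made up for illustration).
        static int add(int a, int b) { return a + b; }

        static void test_add(void) {
            CHECK(add(2, 2) == 4);
            CHECK(add(-1, 1) == 0);
        }

        int main(void) {
            test_add();
            printf("%d tests, %d failures\n", tests_run, tests_failed);
            return tests_failed ? 1 : 0;
        }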

    Read the article

  • Finding missing symbols in libstdc++ on Debian/squeeze

    - by Florian Le Goff
    I'm trying to use a pre-compiled library provided as a .so file. The file is dynamically linked against a few libraries:

        $ ldd /usr/local/test/lib/libtest.so
            linux-gate.so.1 => (0xb770d000)
            libstdc++-libc6.1-1.so.2 => not found
            libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb75e1000)
            libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7499000)
            /lib/ld-linux.so.2 (0xb770e000)
            libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb747c000)

    Unfortunately, in Debian/squeeze there is no libstdc++-libc6.1-1.so.* file, only a libstdc++.so.* file provided by the libstdc++6 package. I tried symlinking (using ln -s) libstdc++-libc6.1-1.so.2 to the libstdc++.so.6 file. It does not work; a batch of symbols appears to be missing when I try to link my .o files against this library:

        /usr/local/test/lib/libtest.so: undefined reference to `__builtin_vec_delete'
        /usr/local/test/lib/libtest.so: undefined reference to `istrstream::istrstream(int, char const *, int)'
        /usr/local/test/lib/libtest.so: undefined reference to `__rtti_user'
        /usr/local/test/lib/libtest.so: undefined reference to `__builtin_new'
        /usr/local/test/lib/libtest.so: undefined reference to `istream::ignore(int, int)'

    What would you do? How can I find out which library exports those symbols?

    Read the article
