Search Results

Search found 27515 results on 1101 pages for 'embedded linux'.

Page 430/1101 | < Previous Page | 426 427 428 429 430 431 432 433 434 435 436 437  | Next Page >

  • how to remove background layer of djvu file

    - by Jon
    Hello, I've downloaded some files from the Internet Archive. They come in different file formats and most of the time I use PDF. However, sometimes the scans are saved in colour instead of b/w, which makes them difficult or impossible to read on a dedicated ebook reader. In that case I download the djvu files, since on the PC you can select which layer (color, bw, fore, back) you want to see. Selecting the bw layer gives excellent results. However, the ebook reader does not have this option. The question is: how can I remove/extract a layer from the djvu file and save only that layer? So far I've tried two approaches: 1) Select bw in the djvu viewer on the PC and print to a PostScript file, followed by a ps2pdf conversion. This works, but generates a fairly large PDF file. Sure, I could upload it to any2djvu again, but that seems like too much manual work for each file. 2) I tried the shared annotation feature and set (mode bw). This works on the PC as desired but is ignored on the ebook reader, as the other layers are still present. Any help or suggestions would be greatly appreciated.
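
    A possible command-line route for batch conversion (a sketch, assuming DjVuLibre's ddjvu tool is installed; file names are placeholders):

        # Render only the black-and-white (stencil) layer of each page straight to PDF
        ddjvu -format=pdf -mode=black input.djvu output-bw.pdf
        # Batch-convert every .djvu file in the current directory
        for f in *.djvu; do ddjvu -format=pdf -mode=black "$f" "${f%.djvu}-bw.pdf"; done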

    Read the article

  • How to force rsync to use destination directory as root

    - by thepurplepixel
    I have a simple script to one-way-sync files/folders within a directory:

        #!/bin/bash
        HOST='<hostname>'
        USER='<username>'
        DIR='/downloads/'
        SOURCE='/srv/torrents'

        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats --progress -i $SOURCE $HOST:$DIR
        find $SOURCE -type d -empty -prune -exec rmdir -p \{\} \;

    However, when this rsync operation runs, it creates a folder named torrents inside /downloads on the destination machine. How can I force rsync to put all the folders and files from /srv/torrents (remote) into /downloads/ (local) instead of creating /downloads/torrents as a separate directory?
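
    The usual cause is the missing trailing slash on the source path: with a trailing slash, rsync copies the contents of the directory rather than the directory itself. A sketch based on the script above:

        # '$SOURCE/' (with the slash) syncs the contents of /srv/torrents,
        # not the torrents directory itself, into $DIR on the remote host
        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats --progress -i "$SOURCE/" "$HOST:$DIR"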

    Read the article

  • Configuring postfix with Gmail

    - by MultiformeIngegno
    This is what I did: sudo apt-get install postfix. This is my /etc/postfix/main.cf:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=no
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        myhostname = tsXXX561.server.topcloud.it
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination =
        relayhost = [smtp.gmail.com]:587
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        default_transport = smtp
        relay_transport = smtp
        inet_protocols = all
        # SASL Settings
        smtp_use_tls=yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_tls_CAfile = /etc/postfix/cacert.pem

    Then I created the file /etc/mailname with my hostname as its content: tsXXX561.server.topcloud.it. Then I created the file /etc/postfix/sasl_passwd:

        [smtp.gmail.com]:587 [email protected]:gmail_password

    Then:

        sudo postmap /etc/postfix/sasl/passwd
        sudo cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem
        service postfix restart

    Still nothing gets sent... I'm on Ubuntu Server 12.04.
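
    One thing that may be worth double-checking (a sketch, not a confirmed diagnosis): the password file was created as /etc/postfix/sasl_passwd, but postmap was run against /etc/postfix/sasl/passwd, so the .db map Postfix is configured to read may never have been built:

        # Rebuild the hash map next to the file Postfix actually reads,
        # lock down its permissions, and restart Postfix
        sudo postmap hash:/etc/postfix/sasl_passwd
        sudo chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
        sudo service postfix restart
        # Watch the mail log while sending a test message
        tail -f /var/log/mail.log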

    Read the article

  • How can I enforce directory space limits in an OpenVZ system?

    - by George
    The title says it all. I have some programs on a server (CentOS 4 under OpenVZ) that use a directory as a temp directory but pay no attention to how large it grows. I want to enforce a limit, e.g. this folder cannot exceed 300 MB. I would use quota, but OpenVZ does not support loop devices for mounting a file as a filesystem. Any other solutions (apart from scripting a periodic delete of files in the directory)? Editing the application's code to implement such functionality is not entirely out of the question, if it can be done easily and no other way exists. It's written in C++, but I don't know how to implement it.
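
    One workaround sometimes suggested (a sketch, assuming tmpfs is permitted inside the container and the temp data can live in memory; the path is a placeholder):

        # Mount a size-capped tmpfs over the temp directory
        mount -t tmpfs -o size=300m tmpfs /path/to/tempdir
        # Make it persistent across reboots via /etc/fstab
        echo 'tmpfs /path/to/tempdir tmpfs size=300m 0 0' >> /etc/fstab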

    Read the article

  • Server suddenly running out of entropy

    - by Creshal
    Since a reboot yesterday, one of our virtual servers (Debian Lenny, virtualized with Xen) has constantly been running out of entropy, leading to timeouts etc. when trying to connect over SSH / TLS-enabled protocols. Is there any way to check which process(es) is/are eating up all the entropy? Edit: what I tried: 1) adding additional entropy sources (time_entropyd, rng-tools feeding urandom back into random, pseudorandom file accesses), which netted about 1 MiB of additional entropy per second; the problems persisted. 2) Checking for unusual activity via lsof, netstat and tcpdump: nothing, and no noticeable load. 3) Stopping daemons, restarting permanent sessions, rebooting the entire VM: no change in behaviour. What in the end worked: waiting. Since about noon yesterday there have been no connection problems. Entropy is still somewhat low (128 bytes peak), but TLS/SSH sessions have no noticeable delay anymore. I'm slowly switching our clients back to TLS (all five of them!), but I don't expect any change in behaviour now.
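
    For watching the pool and adding a userspace entropy source (a sketch; haveged availability on Lenny would need checking):

        # See how much entropy is currently in the kernel pool (updates every second)
        watch -n 1 cat /proc/sys/kernel/random/entropy_avail
        # A userspace entropy daemon can keep the pool topped up
        apt-get install haveged && /etc/init.d/haveged start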

    Read the article

  • startx error no desktop manager

    - by WikiWitz
    I have BackTrack 5 R2 KDE. I booted into recovery mode and did a failsafe xorg configuration. After that, I cannot load the KDE desktop when I enter the startx command after logging in. Whenever I run startx (as root), the result resembles the following (not the actual output; I drew a mock-up in MS Paint because I cannot take a screenshot): the screen is just black with the icon in the upper left corner, and the other pop-up menu appears when left-clicking the mouse. I tried the cp xorg.conf.failsafe xorg.conf advice from other websites with no luck. I have also tried the 'reconfigure option(s)' from the recovery mode with no success.
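
    One more thing that may be worth trying (a sketch; back up first, and run it from a console with X stopped): move the generated config out of the way so X falls back to autodetection, or generate a fresh skeleton to edit.

        # Let X autodetect the hardware instead of using the failsafe config
        mv /etc/X11/xorg.conf /etc/X11/xorg.conf.broken
        startx
        # Alternatively, generate a fresh skeleton config (written to the current directory)
        Xorg -configure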

    Read the article

  • Remote Program (via ssh) suspends when leaving client computer

    - by Philipp F
    I'm working with MATLAB on a remote computer, logging in via ssh -X remotepc and starting MATLAB with matlab &. When I start a long-running computation and leave the computer, the process seems to get suspended (after roughly 30 minutes away), so there is almost no progress overnight. As soon as I come back and wake up the client, the remote process continues the calculation. I can see this from the load-average values (uptime). Why is that, and how can I change this behaviour?
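
    A common way to decouple the computation from both the SSH session and the client's X display (a sketch; the script name is a placeholder):

        # Run MATLAB headless so it does not depend on the forwarded X connection,
        # and detach it from the terminal so a suspended session cannot stall it
        nohup matlab -nodisplay -nosplash -r "run('myscript.m'); exit" > matlab.log 2>&1 &
        # Or keep an interactive session alive inside screen/tmux and detach before leaving
        screen -S matlabjob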

    Read the article

  • Out Of Memory Error - Magento

    - by robobobobo
    Ok, normally I understand when my server gives me out-of-memory errors, but this one has me stumped! I'm running a Magento-based site with one or two plugins; the rest is pretty basic. The site runs and loads fine with no issues. However, in the backend under Configuration - Payment Methods it gives me the following out-of-memory error: Fatal error: Out of memory (allocated 39059456) (tried to allocate 85 bytes) in ########/Varien/Simplexml/Element.php on line 84. Now this is where I'm confused: it has already allocated more than it tried to allocate, am I reading that correctly? So how is it running out of memory? My server has 6 GB of RAM, an SSD and 2 CPUs running WHM with a few other low-traffic sites on it. I set my PHP memory limit to 100 MB, then 1000 MB and finally unlimited, but all to no avail! I'm completely lost here and would really appreciate some expertise on this. Cheers
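
    One diagnostic direction that may be worth ruling out (a sketch, not a confirmed cause): PHP prints "Out of memory" rather than "Allowed memory size of N bytes exhausted" when the operating system refuses the allocation, which usually points at a per-process limit (or a 32-bit PHP build) rather than PHP's memory_limit. The web server user on cPanel/WHM is often 'nobody'; adjust as needed:

        # Check the OS-level limits of the web server user
        # (look at 'max memory size' and 'virtual memory')
        sudo -u nobody bash -c 'ulimit -a'
        # Confirm which memory_limit the PHP build actually loaded
        php -i | grep -i memory_limit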

    Read the article

  • Why can't I get out of display mirror mode?

    - by Roy Smith
    I've been running Ubuntu (10.04.1 LTS, 64-bit) for a while and just replaced my hardware with a faster machine with an ATI Radeon HD 5700 video card. I've got twin 1920 x 1080 displays. I downloaded the latest driver (ati-driver-installer-10-9-x86.x86_64.run) from the ATI web site and installed that. I've gone through a few rounds of playing with /etc/X11/xorg.conf, and can't get things right. At the moment, it's in display mirroring mode, and I can't figure out how to get it out of mirror mode. If I run Monitor Preferences, there's a "Same image in all monitors" checkbox. If I uncheck that, the little preview window switches to show two monitors. When I click Apply, it asks me to log out and log back in again. When I do that, I'm right back to mirrored mode. What's really weird is that I'm currently running a copy of xorg.conf from a coworker's machine. He's got identical hardware, and his display works fine. So, I'm inclined to think there's something else going on other than the conf file. Any ideas what might be wrong?
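
    If the fglrx driver installed its aticonfig utility, one way people typically set up an extended (non-mirrored) layout is to let it rewrite xorg.conf (a sketch; the exact flags should be checked against aticonfig --help, and xorg.conf should be backed up first):

        # Back up the current config, then ask aticonfig for a dual-head layout
        sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
        sudo aticonfig --initial=dual-head --screen-layout=right
        # Log out and back in (or restart X) so the new xorg.conf takes effect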

    Read the article

  • FTP server (vsftpd) with a web GUI

    - by manutenfruits
    I want to build a file server so users can upload and download mostly multimedia, but also common files. Right now I have an Arch installation with vsftpd, and I'm about to install miniDLNA for multimedia sharing. The only problem is that FTP doesn't seem to fit my needs, because it almost always forces users onto a client such as FileZilla to make the server friendly. I have been looking for a web frontend for vsftpd, but apart from management interfaces there's nothing. I need a frontend, accessible from a browser, through which users can navigate through the folders in an easier and more elegant way than the plain FTP listing browsers show by default. It should let users upload files and, as an awesome extra, let them play the multimedia directly in the browser. For this I am willing to drop FTP if needed; I've heard about HTTP file servers but don't know much about them. I could code everything myself, but there's got to be something out there already.

    Read the article

  • SELinux - Allow multiple services access to same /home/dir

    - by Mike Purcell
    I currently have SELinux enabled and have been able to configure Apache to allow access to /home/src/web with a chcon command granting the 'httpd_sys_content_t' type. But now I am trying to serve the rsyslogd.conf file from the same directory, and every time I start rsyslogd I see an entry in my audit log saying that rsyslogd was denied access. My question is: is it possible to grant two applications access to the same directory while still keeping SELinux enabled? Current perms on /home/src:

        drwxr-xr-x. src src unconfined_u:object_r:httpd_sys_content_t:s0 src

    Audit log message:

        type=AVC msg=audit(1349113476.272:1154): avc: denied { search } for pid=9975 comm="rsyslogd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349113476.272:1154): arch=c000003e syscall=2 success=no exit=-13 a0=7f9ef0c027f5 a1=0 a2=1b6 a3=0 items=0 ppid=9974 pid=9975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="rsyslogd" exe="/sbin/rsyslogd" subj=unconfined_u:system_r:syslogd_t:s0 key=(null)

    -- Edit -- I came across this post, which is sort of what I am trying to accomplish. However, when I viewed the list of allowed sebool params, the only one relating to syslog was syslogd_disable_trans (SELinux Service Protection). It seems I could keep the current SELinux type on the /home/src/ dir and set syslogd_disable_trans to false. I wonder if there is a better approach?
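
    One generic way to deal with a denial like this while keeping SELinux enforcing (a sketch; review the generated .te file before installing, since the module allows exactly what was denied):

        # Turn the logged AVC denials for rsyslogd into a small local policy module
        grep rsyslogd /var/log/audit/audit.log | audit2allow -M rsyslogd_local
        # Inspect rsyslogd_local.te, then install the module
        semodule -i rsyslogd_local.pp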

    Read the article

  • CentOS PHP sessions not working even though the PHP info page says they are

    - by Blake
    I have PHP installed properly from the Remi repo on CentOS 6 (64-bit). The phpinfo() page shows sessions as installed and working, yet I get this error: Fatal error: Call to undefined function session_create() in /var/www/lighttpd/index.php on line 1. I've tried multiple reinstalls and different PHP RPMs, yet nothing gets sessions going. How can I get PHP sessions working?
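
    One detail that may be worth ruling out (an observation, not a confirmed diagnosis): the built-in API is session_start(); session_create() is not a standard PHP function, so this fatal error can appear even when the session extension is loaded correctly. A quick sanity check (the test file path is a placeholder):

        # Confirm the session extension is compiled/loaded
        php -m | grep -i session
        # Minimal test page: if this prints a session id, sessions themselves work
        echo '<?php session_start(); var_dump(session_id());' > /var/www/lighttpd/sessiontest.php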

    Read the article

  • some novice questions about VPS

    - by Camran
    I ordered a VPS today and I am still setting it up; I have just installed Apache, PHP and MySQL. Some questions: 1) I don't understand which folder my website should be in. Should I upload my site to the 'www' folder or the htdocs folder? They are in entirely different directories. 2) How do I even upload files? With an FTP program? How do I install one, and which one? Thanks
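
    A couple of quick checks along these lines (a sketch; paths and hostnames are placeholders and depend on the distribution's Apache layout):

        # 1) Ask Apache which directory it actually serves as the document root
        grep -Ri "DocumentRoot" /etc/httpd /etc/apache2 2>/dev/null
        # 2) Files can be uploaded over the existing SSH access, no FTP server needed
        scp -r ./mysite/ user@your-vps-ip:/var/www/html/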

    Read the article

  • Rsync and lazy mode?

    - by fabien-barbier
    Since transferring or copying a file that is still being written to can corrupt the transferred copy, can we define a time interval in which rsync checks each file in a given directory to see whether it has changed within that interval? Files that have not changed during that interval would be transferred, while those that have changed would not. Can I do that with rsync, or with another tool? Is there a script that adds this functionality to rsync? Thanks
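
    rsync itself has no "only if unchanged for N minutes" option, but the usual workaround is to pre-filter with find and feed the resulting list to rsync (a sketch; the paths and the 10-minute threshold are placeholders):

        # Only transfer files whose contents have not changed in the last 10 minutes
        cd /srv/source && \
        find . -type f -mmin +10 -print0 | \
        rsync -av --files-from=- --from0 . user@remotehost:/srv/dest/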

    Read the article

  • Creating software RAID on spare internal drives with Fedora

    - by Wizzard
    Hi there, I have two internal 80 GB drives which are blank and just sitting in the case. I have tried googling for the steps, but I can only find how to set up RAID while first installing Fedora, not how to do it on a system that is already set up. These are two new (old) drives that are blank; the system is not on them, so it should really be as simple as formatting them and binding them into a RAID array, but I can't find any information. Any clues?
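
    On an already-installed system, software RAID is normally created with mdadm (a sketch; /dev/sdb and /dev/sdc are placeholders for the two spare drives, and these commands destroy any data on them):

        # Mirror the two spare drives into /dev/md0, then put a filesystem on it
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        mkfs.ext4 /dev/md0
        mkdir -p /mnt/raid && mount /dev/md0 /mnt/raid
        # Record the array so it is assembled on boot
        mdadm --detail --scan >> /etc/mdadm.conf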

    Read the article

  • Processes spawned by taskset not respecting environment variables

    - by jonesy16
    I've run into an issue where an Intel-compiler-generated program that I'm running with taskset has been putting its temporary files into the working directory instead of /tmp (defined by the environment variable TMPDIR). Run by itself, it works correctly. Run with taskset (e.g. taskset -c 0 <program>), it seems to completely ignore the TMPDIR environment variable. I verified this with a quick bash script, test.sh:

        #!/bin/bash
        echo $TMPDIR

    When run by itself:

        $ export TMPDIR=/tmp
        $ test.sh
        /tmp

    When run through taskset:

        $ export TMPDIR=/tmp
        $ taskset -c 1 test.sh
        ""

    Another test: if I export the TMPDIR variable inside my script and then use taskset to spawn a new process, it doesn't know about that variable:

        #!/bin/bash
        export TMPDIR=/tmp
        taskset -c 1 sh -c export

    When run, the list of exported variables does not include TMPDIR, yet this works correctly with any other exported environment variable. If I diff the output of export and taskset -c 1 bash -c export, I see four changes: the taskset-spawned export doesn't have LD_LIBRARY_PATH or NLSPATH (an Intel compiler variable), SHLVL is 3 instead of 1, and TMPDIR is missing. Can anyone tell me why?
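
    A workaround that is sometimes enough while tracking down the root cause (a sketch; it simply re-injects the variable into the pinned process explicitly and makes the difference easy to inspect):

        # Pass TMPDIR explicitly into the environment of the pinned process
        taskset -c 0 env TMPDIR=/tmp ./program
        # Compare what actually reaches the child with and without taskset
        env | sort > without_taskset.txt
        taskset -c 1 env | sort > with_taskset.txt
        diff without_taskset.txt with_taskset.txt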

    Read the article

  • NTP daemon or ntpdate doesn't synchronize

    - by user2862333
    I'm having some problems with synchronization with an NTP server. 1) The NTP daemon doesn't sync the system clock at all, even though it's running (confirmed with /etc/init.d/ntp status). Forcing to sync with ntpd -q or ntpd -gq does not work either. 2) Stopping the NTP daemon and syncing manually with ntpdate does give me the following output: ~# ntpdate -d 0.debian.pool.ntp.org 6 Nov 16:48:53 ntpdate[4417]: ntpdate [email protected] Sat May 12 09:07:19 UTC 2012 (1) transmit(79.132.237.5) receive(79.132.237.5) transmit(85.234.197.2) receive(85.234.197.2) transmit(194.50.97.34) receive(194.50.97.34) transmit(79.132.237.1) receive(79.132.237.1) transmit(79.132.237.5) receive(79.132.237.5) transmit(85.234.197.2) receive(85.234.197.2) transmit(194.50.97.34) receive(194.50.97.34) transmit(79.132.237.1) receive(79.132.237.1) transmit(79.132.237.5) receive(79.132.237.5) transmit(85.234.197.2) receive(85.234.197.2) transmit(194.50.97.34) receive(194.50.97.34) transmit(79.132.237.1) receive(79.132.237.1) transmit(79.132.237.5) receive(79.132.237.5) transmit(85.234.197.2) receive(85.234.197.2) transmit(194.50.97.34) receive(194.50.97.34) transmit(79.132.237.1) receive(79.132.237.1) server 79.132.237.5, port 123 stratum 2, precision -20, leap 00, trust 000 refid [79.132.237.5], delay 0.05141, dispersion 0.00145 transmitted 4, in filter 4 reference time: d624e3b1.f490b90d Wed, Nov 6 2013 16:50:09.955 originate timestamp: d624e457.eaaf787c Wed, Nov 6 2013 16:52:55.916 transmit timestamp: d624e36c.4a7036fd Wed, Nov 6 2013 16:49:00.290 filter delay: 0.08537 0.05141 0.05151 0.06346 0.00000 0.00000 0.00000 0.00000 filter offset: 235.6038 235.6087 235.6095 235.6068 0.000000 0.000000 0.000000 0.000000 delay 0.05141, dispersion 0.00145 offset 235.608782 server 85.234.197.2, port 123 stratum 2, precision -20, leap 00, trust 000 refid [85.234.197.2], delay 0.05151, dispersion 0.00336 transmitted 4, in filter 4 reference time: d624e3e7.dc6cd02b Wed, Nov 6 2013 16:51:03.861 originate timestamp: d624e458.1c91031f Wed, Nov 6 2013 16:52:56.111 transmit timestamp: d624e36c.7da1d882 Wed, Nov 6 2013 16:49:00.490 filter delay: 0.05765 0.07750 0.06013 0.05151 0.00000 0.00000 0.00000 0.00000 filter offset: 235.6048 235.6014 235.6035 235.6078 0.000000 0.000000 0.000000 0.000000 delay 0.05151, dispersion 0.00336 offset 235.607826 server 194.50.97.34, port 123 stratum 3, precision -23, leap 00, trust 000 refid [194.50.97.34], delay 0.03021, dispersion 0.00090 transmitted 4, in filter 4 reference time: d624e38d.2bce952c Wed, Nov 6 2013 16:49:33.171 originate timestamp: d624e458.4dbbc114 Wed, Nov 6 2013 16:52:56.303 transmit timestamp: d624e36c.b0d38834 Wed, Nov 6 2013 16:49:00.690 filter delay: 0.03030 0.03636 0.03091 0.03021 0.00000 0.00000 0.00000 0.00000 filter offset: 235.6095 235.6085 235.6098 235.6105 0.000000 0.000000 0.000000 0.000000 delay 0.03021, dispersion 0.00090 offset 235.610589 server 79.132.237.1, port 123 stratum 3, precision -20, leap 00, trust 000 refid [79.132.237.1], delay 0.05113, dispersion 0.00305 transmitted 4, in filter 4 reference time: d624dfcb.6acea332 Wed, Nov 6 2013 16:33:31.417 originate timestamp: d624e458.838672ad Wed, Nov 6 2013 16:52:56.513 transmit timestamp: d624e36c.e405181c Wed, Nov 6 2013 16:49:00.890 filter delay: 0.06345 0.05113 0.05681 0.05656 0.00000 0.00000 0.00000 0.00000 filter offset: 235.6087 235.6038 235.6010 235.6074 0.000000 0.000000 0.000000 0.000000 delay 0.05113, dispersion 0.00305 offset 235.603888 6 Nov 16:49:00 ntpdate[4417]: step time server 79.132.237.5 offset 
235.608782 sec

    Clearly, ntpdate can reach the NTP server(s), but after checking the clock it hasn't changed and is still displaying the wrong time. Any ideas about what the problem might be would be much appreciated.
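
    A couple of diagnostics that may help narrow this down (a sketch; an offset of roughly 235 seconds is within the range that ntpd started with -g should be willing to step, so it is also worth ruling out something else resetting the clock, e.g. the VM host or another time service):

        # Ask the running daemon what it thinks of its peers; '*' marks the selected peer
        ntpq -pn
        # Stop the daemon, force a one-shot step, and compare the clock before and after
        /etc/init.d/ntp stop
        date; ntpd -gq; date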

    Read the article

  • How to route broadcast packets from a machine with two network interfaces on the same subnet

    - by Syam
    I run RHEL 5 and have two NICs on one machine connected to the same subnet: eth0 192.168.100.10 and eth1 192.168.100.11. My application needs to receive and transmit UDP packets (both unicast and broadcast) via these interfaces. I've found a way to handle the ARP problem, and I've added routes to handle the routing problem: ip rule add from 192.168.100.10 lookup 10; ip route add table 10 default src 192.168.100.10 dev eth0 (and similarly, table 11 for eth1). The problem is that only unicast packets get routed properly; broadcast packets always go out through eth0. I tried removing the entries for 192.168.100.0 and 192.168.100.255 from table 255 and adding them to my tables, but then I see ARP requests being sent for packets to 192.168.100.255 (obviously, no nodes respond and nobody gets any data). Due to several techno-political issues I'm stuck with this configuration and can't change subnets or try something different. I've tried SO_BINDTODEVICE and it works, but I'd prefer a solution that doesn't need my application to run as root. Is there a way to get this working? Any help is highly appreciated.
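
    Two directions that might be worth sketching out (assumptions: the table numbers follow the rules above, the broadcast route syntax should be verified against RHEL 5's iproute2, and file capabilities may not be available on a stock RHEL 5 kernel):

        # Give each per-source routing table its own broadcast route, so the table
        # selected by the 'from' rule also knows how to send to 192.168.100.255
        ip route add broadcast 192.168.100.255 dev eth0 src 192.168.100.10 table 10
        ip route add broadcast 192.168.100.255 dev eth1 src 192.168.100.11 table 11
        # Alternative to running the whole app as root: SO_BINDTODEVICE needs CAP_NET_RAW,
        # which can be granted to the binary on kernels with file-capability support
        setcap cap_net_raw+ep /path/to/application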

    Read the article

  • CentOS send mail with external SMTP server and without local daemons

    - by Vilx-
    I've got a little old server with CentOS 6.5 on it. The hardware is old and crappy, but enough for what it has to do, which consists of SSH (+SFTP), Apache, PHP and MySQL. Still, I'm trying to cut away everything I can. One thing it does not need to do is be an SMTP server: there are no mailboxes on it and nobody will ever route mail through it. However, I do want it to send me an email when something goes wrong, and the web pages will send emails from PHP. So that brings me to the question: can I set up the mail system so that there isn't an expensive mailer daemon sitting in the background with queues and whatnot, but rather every email is directly and immediately delivered to an external SMTP server? And how do I go about it?
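
    One common pattern for this (a sketch, assuming a sendmail-compatible relay-only client such as msmtp is acceptable and installable, e.g. from EPEL; hostnames and credentials below are placeholders): no queue, no listening daemon, every message handed straight to the external SMTP server.

        # /etc/msmtprc -- relay everything to an external SMTP server
        defaults
        auth           on
        tls            on
        tls_trust_file /etc/pki/tls/certs/ca-bundle.crt
        account        default
        host           smtp.example.com
        port           587
        from           server@example.com
        user           server@example.com
        password       secret

        # Point PHP at it in php.ini so mail() works without a local MTA:
        # sendmail_path = /usr/bin/msmtp -t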

    Read the article

  • How to find the proper codecs for Xubuntu?

    - by smwikipedia
    I have just installed Xubuntu, and I feel that using it to play an MP3 is like killing myself twice. I tried to play one with Exaile, the player bundled with Xubuntu, but it says I need to install some MPEG codecs. I found so many dependencies with sudo apt-cache depends. How do I install them? One by one?! Many thanks.
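
    The usual shortcut is the restricted-extras metapackage, which pulls in the common codec set in one command (a sketch; the GStreamer package name varies a little between releases):

        # Metapackage with MP3 and other common codec support for Xubuntu
        sudo apt-get update
        sudo apt-get install xubuntu-restricted-extras
        # If Exaile still complains, the GStreamer "ugly" plugin set covers MP3 playback
        sudo apt-get install gstreamer0.10-plugins-ugly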

    Read the article

  • What are the minimum required modules to run WordPress

    - by Mister IT Guru
    Recently a 'consultant' came in to talk to the bean counters at my place of employment about being more efficient with our IT infrastructure. They suggested that, to be more efficient, we should only load the Apache modules that are actually required on our web servers. (This is just one of thousands of suggestions.) The bean counters are very excited and are prepared for me to spend the time investigating this avenue of cost cutting. I don't mind the mundane exercise; I see it as a learning experience! Which leads me to the actual question: how can I determine the minimum required Apache modules for a PHP-based application without going through the code or resorting to plain old trial and error?
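
    A practical starting point (a sketch rather than a definitive list, since the modules WordPress needs depend on its .htaccess rewrite rules and the plugins in use; a2dismod assumes a Debian-style layout, elsewhere comment out the matching LoadModule line instead):

        # List everything Apache currently loads
        apachectl -M
        # Disable one candidate module at a time, verify the config still parses, then reload
        a2dismod status
        apachectl configtest && service apache2 reload
        # Exercise the site (front end, admin, permalinks, uploads) before disabling the next one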

    Read the article
