Search Results

Search found 24933 results on 998 pages for 'arch linux'.


  • How to understand cpu family/model/stepping fields in /proc/cpuinfo

    - by Victor Sorokin
    I have the following in cpuinfo: processor : 0 vendor_id : AuthenticAMD cpu family : 15 model : 107 model name : AMD Athlon(tm) 64 X2 Dual Core Processor 5600+ stepping : 2 According to the Wikipedia page there are two kinds of 5600+ -- one using 90nm technology, the other 65nm. How can I tell which one I have? There seems to be no direct correspondence between the contents of cpuinfo and the info on the Wikipedia page. AMD's site seems to use yet another naming scheme for processors. How can I map the values of family, model and stepping from cpuinfo to the data available on Wikipedia/AMD?
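
    For reference, AMD's revision guides and the Wikipedia tables key off the hex values of these fields: family 15 is 0Fh (K8), and model 107 is 6Bh, which is commonly listed as the 65 nm "Brisbane" core, while the 90 nm "Windsor" parts report different model numbers -- worth double-checking against AMD's revision guide. A minimal sketch of pulling the raw fields and showing them in hex:

        # Show CPU 0's family/model/stepping and their hex equivalents (a sketch).
        grep -m1 '^cpu family' /proc/cpuinfo
        grep -m1 '^model[[:space:]]*:' /proc/cpuinfo    # excludes the "model name" line
        grep -m1 '^stepping' /proc/cpuinfo
        printf 'family %X, model %X, stepping %X\n' 15 107 2    # -> F, 6B, 2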

    Read the article

  • Encrypt tar file asymmetrically

    - by DerMike
    I want to achieve something like tar -c directory | openssl foo > encrypted_tarfile.dat I need the openssl tool to use public key encryption. I found an earlier question about symmetric encryption at the command promt (sic!), which does not suffice. I did take a look in the openssl(1) man page and only found symmetric encryption. Does openssl really not support asymmetric encryption? Basically many users are supposed to create their encrypted tar files and store them in a central location, but only few are allowed to read them.
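
    For what it's worth, openssl does support asymmetric encryption, but RSA alone cannot encrypt a stream of arbitrary size, so the usual pattern is hybrid: a random symmetric key for the data, encrypted with the recipient's public key (openssl smime -encrypt wraps the same idea in one command). A rough sketch with placeholder file names:

        # Hybrid encryption sketch: AES for the tarball, RSA for the AES key.
        openssl rand -hex 32 > session.key
        tar -c directory | openssl enc -aes-256-cbc -pass file:session.key > encrypted_tarfile.dat
        openssl rsautl -encrypt -pubin -inkey recipient_pub.pem -in session.key -out session.key.enc
        shred -u session.key    # only session.key.enc plus the private key can recover the data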

    Read the article

  • How to Configure Sendmail / Webmin for second IP?

    - by user310594
    Hi, LAMP, CentOS 5.4, Webmin. Until recently I have had all domains using "server1.example.com". Now I have newdomain.com on second.ip.address.works (works for DNS, that is). Please tell me how to set up sendmail so that mail is sent from the second IP address. This is new for me: if I need to create a second server called "server2.domain2.com", then please tell me exactly how, since I'm only experienced with one server per VPS. Whether "server2.domain2.com" needs to be created or not, here is exactly what is needed: mail being sent from domains using ns1.example.com needs to be sent from that server and that IP; mail being sent from domains using nsother.example2.com should be sent from that IP; plus how to set up the second server / hostname, if needed. Thank you.
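
    If the immediate goal is just to make outgoing mail leave from the second address, sendmail's ClientPortOptions setting pins the source IP of outbound SMTP connections; note it applies to all outgoing mail, not per-domain, and the macro/parameter names below are from memory and should be verified against sendmail's cf/README. A sketch for sendmail.mc:

        dnl # Bind outbound SMTP connections to the second IP (global, not per-domain).
        CLIENT_OPTIONS(`Family=inet, Addr=second.ip.address.works')dnl
        dnl # Rebuild and restart afterwards, e.g.:
        dnl #   m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf && service sendmail restart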

    Read the article

  • Adding user to chroot environment

    - by Neo
    I've created a chroot system on my Ubuntu machine using schroot and debootstrap, based on a minimal Ubuntu. However, I can't seem to add a new user to this chroot environment. Here is what happens. I enter schroot as root and add a new user (I tried both the adduser and useradd commands). The username shows up in the /etc/passwd file and I can 'su' to the new user. So far so good. When I log out of schroot and re-enter schroot, the user I created has vanished! There is no mention of that user in /etc/passwd either. How do I make the new user permanent?
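
    This is the expected behaviour for schroot chroot types that run each session on a throwaway copy (file, lvm-snapshot, btrfs-snapshot, or union overlays): changes vanish when the session ends. Permanent changes go into the source chroot instead; a sketch, assuming a clonable chroot named "minimal":

        # Make the change in the source chroot, not in a throwaway session:
        schroot -c source:minimal -u root -- adduser newuser
        # New sessions cloned from the source should now see the user:
        schroot -c minimal -- getent passwd newuser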

    Read the article

  • SSL Handshake negotiation on Nginx terribly slow

    - by Paras Chopra
    I am using Nginx as a proxy to 4 Apache instances. My problem is that SSL negotiation takes a lot of time (600 ms). See this as an example: http://www.webpagetest.org/result/101020_8JXS/1/details/ Here is my Nginx conf: user www-data; worker_processes 4; events { worker_connections 2048; use epoll; } http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; sendfile on; keepalive_timeout 0; tcp_nodelay on; gzip on; gzip_proxied any; server_names_hash_bucket_size 128; } upstream abc { server 1.1.1.1 weight=1; server 1.1.1.2 weight=1; server 1.1.1.3 weight=1; } server { listen 443; server_name blah; keepalive_timeout 5; ssl on; ssl_certificate /blah.crt; ssl_certificate_key /blah.key; ssl_session_cache shared:SSL:10m; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; ssl_prefer_server_ciphers on; location / { proxy_pass http://abc; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } The machine is a VPS on Linode with 1 GB of RAM. Can anyone please tell me why the SSL handshake is taking ages?
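
    Before tuning, it may help to separate network round trips from crypto cost: 600 ms is roughly two extra round trips plus an RSA operation on a small VPS. A quick diagnostic sketch (the hostname is a placeholder):

        # End-to-end time for a single bare handshake:
        time openssl s_client -connect blah.example.com:443 < /dev/null > /dev/null
        # Full (non-resumed) handshakes per second the server can sustain:
        openssl s_time -connect blah.example.com:443 -new -time 10
        # Raw RSA signing speed on this VPS:
        openssl speed rsa2048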

    Read the article

  • Very high memory usage, but not claimed by any process?

    - by SharkWipf
    While stress-testing LVM on one of our Debian servers, I came across an issue where memory usage would climb to the point of running the server out of memory, yet no process would claim that memory. See http://i.imgur.com/cLn5ZHS.png, and see http://serverfault.com/a/449102/125894 for an explanation of the colors used in htop. Why is this happening? And is there any way to see what process is using the memory? Htop is configured not to hide any processes, so what is it that htop is missing? In this particular case, I can say fairly certainly that it is caused, directly or indirectly, by lvmcreate, lvmremove or dmsetup, as that is what I was stress-testing. Do note that this question is not about solving the LVM problem, but about why the memory isn't claimed by any process. Stopping all LVM commands does bring the memory back down to <600MB.
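
    Memory that no process owns is usually kernel-side -- slab caches, device-mapper structures, page tables -- none of which htop attributes to a PID. A sketch of the standard places to look (generic procfs views, not an LVM-specific diagnosis):

        # Kernel-side memory htop cannot pin on a process:
        grep -E 'Slab|SReclaimable|SUnreclaim|PageTables|VmallocUsed' /proc/meminfo
        # The biggest slab caches (one-shot output):
        slabtop -o | head -n 20
        # Raw per-cache accounting (root only):
        sort -k3 -n -r /proc/slabinfo | head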

    Read the article

  • OSS Router firmwares

    - by Cherian
    DD-WRT, OpenWrt, Tomato or other third-party firmware projects? What are the compelling reasons to choose between these? I used to be a great DD-WRT fan until I realized that the author was deceiving users by publishing it as OSS while making it very cumbersome to download the source and change it (it requires you to download GBs of source files). Also, their bandwidth monitoring feature was part of the paid version, which IMHO is a killer. Having said that, DD-WRT just worked, and I think that’s great.

    Read the article

  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on LUKS over software RAID 5. The filesystem was operating "just fine" for several years when I began to run out of space. I had a 9T volume on 6x2T drives. I began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the LUKS container, and when I unmounted and tried to resize2fs I was given the message that the filesystem was dirty and needed e2fsck. Without thinking I just did e2fsck -y /dev/mapper/candybox and it began spewing all kinds of "inode being removed" type messages (I can't remember exactly). I killed e2fsck and tried to remount the filesystem to back up the data I was concerned about. When trying to mount at this point I get: # mount /dev/mapper/candybox /candybox mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so Looking back at my older logs I noticed the filesystem was giving this error each time the machine booted: kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another) and each attempt left this in my log: EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440) EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729) EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845) ... EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098) BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939] Attempts to restart e2fsck result in: # e2fsck /dev/mapper/candybox e2fsck 1.41.14 (22-Dec-2010) e2fsck: Group descriptors look bad... trying backup blocks... candy: recovering journal e2fsck: unable to set superblock flags on candy At this point, I decided it best to order some more drives and make an image using ddrescue. Now, two weeks later, I have an image of the LUKS partition in a .img file. # ls -lh total 14T -rw-r--r-- 1 root root 14T Oct 25 01:57 candybox.img -rw-r--r-- 1 root root 271 Oct 20 14:32 candybox.logfile After numerous attempts using everything I could find online I could not coerce e2fsck to do anything on the image, so I used mkfs.ext4 -L candy candybox.img -m 0 -S and I was able to mount the dirty filesystem read-only without the journal and recover 960G of data. It gave all kinds of errors about various directories not existing and so forth, but I was able to get some stuff, which gave me some hope! I then ran e2fsck again and it had to recreate the root inode and gave a massive list of group count corrections; I accepted the root inode creation and said no to everything else, leaving a completely empty filesystem. I re-ran it again and said yes to all questions, with the same result, but now a "clean" but empty filesystem. extundelete gives me 0 recoverable inodes found. And now I'm stuck again; I can't come up with any methods other than dropping to something like photorec, which will give me an absolute mess given how large the filesystem was. I'm willing to re-copy the image from the original array and start over, if I can get any suggestions or ideas on a way to get more of my files back.
    I wish I could give more detailed logs of the commands that were run, but the output has long since scrolled past except for what gets logged to syslog, and my memory is not as detailed given the timeframe this has occurred over. Any help is greatly appreciated!
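
    For anyone in a similar spot: working on a loop device over a copy of the image keeps further experiments non-destructive, and backup-superblock locations can be derived rather than guessed. A sketch, assuming a 4K block size and default mkfs parameters (both worth verifying with dumpe2fs):

        # Work on a copy; leave the original image untouched.
        cp --sparse=always candybox.img candybox.work.img
        losetup /dev/loop0 candybox.work.img
        # Dry run: print where backup superblocks would sit with default parameters.
        mke2fs -n /dev/loop0
        # Then point fsck at a specific backup superblock, e.g. 32768 for 4K blocks:
        e2fsck -b 32768 -B 4096 /dev/loop0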

    Read the article

  • LAMP server permissions on a development server

    - by user101289
    I run a LAMP server on an Ubuntu laptop I use only for development. I am not greatly concerned with security, since the server is never accessible outside the local network, and it's turned off when I'm not using it. My question is: what is the simplest and 'best' way to set up permissions/users/groups so that when my own user creates, edits or writes files in the webroot, I won't need to go through and chmod/chown everything back to the www-data user? Should I add myself to the www-data group? Or chown the webroot to www-data:myself? Or is there a best practice for this situation so I don't have to keep re-setting the ownership of these files? Thanks
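
    One common pattern for a single-developer box is to join the www-data group, group-own the webroot, and set the setgid bit on directories so new files inherit the group. A sketch, assuming the webroot is /var/www and the user is "myself" (both placeholders):

        sudo usermod -aG www-data myself            # log out/in for the new group to apply
        sudo chown -R myself:www-data /var/www
        sudo chmod -R g+rw /var/www
        sudo find /var/www -type d -exec chmod g+s {} +   # new files inherit the www-data group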

    Read the article

  • Execute encrypted files but don't let anybody read them.

    - by Stebi
    I want to provide a virtual machine image with an installed web application. The user should be able to boot the VM (not log in, just boot) and a webserver should start automatically. The point is, I want to hide the (Ruby) source code of the web application from everyone, as there is no obfuscator for Ruby. I thought I could use file system encryption to encrypt the directory with the source code (or even a whole partition), but the webserver user must be able to read it automatically after booting. Nobody is allowed to log in as the webserver user (or any other user), so no one else can read the contents. My questions now are: Is this possible? Because I give away the whole VM, everybody could mount its virtual disks and read them (except the encrypted one). Is it then possible to find the key the webserver user needs to decrypt the files and decrypt them manually? Or is it safe to give such a VM away? The problem is that everything needed to decrypt must be included somewhere in the VM, or else the webserver cannot start automatically. Maybe I'm completely wrong and you have another tip for securing the source code.

    Read the article

  • Ubuntu/Gnome: Open an application in a specific workspace

    - by bguiz
    How do I tell an application to open in a specific workspace? More info: I like to have my C++ IDE in workspace 2, my Java IDE in workspace 3, and my email, browser and miscellaneous in workspace 4. I also use a shell script that executes upon login: #!/bin/bash gnome-terminal & # WS 1 netbeans-6-9-1 & # WS2 qtcreator-2-0-1 & # WS 3 firefox & # WS 4 thunderbird & # WS 4 Of course, currently it all opens in the current workspace... Is there a way for me to specify which workspace each command should start in? Thanks in advance!
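
    GNOME itself does not take a workspace argument at launch, but wmctrl can move a window once it appears (devilspie can do the same by rule). A sketch extending the login script, assuming wmctrl is installed, workspaces are numbered from 0, and the quoted window titles match (they are guesses):

        #!/bin/bash
        gnome-terminal &
        netbeans-6-9-1 &
        qtcreator-2-0-1 &
        firefox &
        thunderbird &
        sleep 20                              # crude: wait for the windows to appear
        wmctrl -r "NetBeans" -t 1             # -r matches a window title, -t is the target workspace
        wmctrl -r "Qt Creator" -t 2
        wmctrl -r "Mozilla Firefox" -t 3
        wmctrl -r "Thunderbird" -t 3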

    Read the article

  • Mount multiple folders with NFSv4 on CentOS

    - by microchasm
    I'm trying to get NFSv4 working here. Machine 1 (server): I have a folder and, in it, 2 other folders I'm trying to share independently: /shared/folder1 /shared/folder2 The problem is, I can't seem to figure out how to mount the folders independently on the client. (Machine 1 - server) /etc/exports: /var/shared/folder1 192.168.200.101(rw,fsid=0,sync) /var/shared/folder2 192.168.200.101(rw,fsid=0,sync) ... exportfs -ra (Machine 2 - client) /etc/fstab: 192.168.200.201:/folder1/ /home/nfsmnt/folder1 nfs4 rw 0 0 ... mount /home/nfsmnt/folder1 mount.nfs4: 192.168.200.201:/folder1/ failed, reason given by server: No such file or directory The folder is there. I'm positive. I think there is something simple I'm missing, but I'm totally missing it. It seems like there should be a way in fstab to tell NFS which folder on the server I want to mount. But I can only find references to what looks like a root mount point (e.g. 192.168.1.1:/), which I assume is handled by exports on the server. But even with the folders set up in exports, there doesn't seem to be an apparent way to pick and choose which gets mounted. Is it not possible to mount separate folders from the same server to different mount points on the client? Any help appreciated.
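
    With NFSv4, only one export should carry fsid=0 -- it becomes the pseudo-root -- and the client then mounts paths relative to that root. A sketch, assuming /var/shared is made the pseudo-root:

        # /etc/exports on the server: one pseudo-root, two child exports
        /var/shared           192.168.200.101(rw,fsid=0,sync,no_subtree_check)
        /var/shared/folder1   192.168.200.101(rw,sync,no_subtree_check)
        /var/shared/folder2   192.168.200.101(rw,sync,no_subtree_check)

        # /etc/fstab on the client: paths are relative to the fsid=0 root
        192.168.200.201:/folder1  /home/nfsmnt/folder1  nfs4  rw  0 0
        192.168.200.201:/folder2  /home/nfsmnt/folder2  nfs4  rw  0 0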

    Read the article

  • How to share drive space from vmware server 2 host to a guest?

    - by matnagel
    In the VMware Tools in the guest there is an option to access shares from the host. What is the way to create such shares on a VMware Server 2 host? I did not find it anywhere in Infrastructure Web Access. I also went through the VMware Server 2 user's guide but did not see it mentioned. Can you help? This is a 64-bit Ubuntu Server 8.04 LTS host.

    Read the article

  • How to write an image of a floppy disk to a flash drive?

    - by Usman Ajmal
    I have created an image of a floppy disk by executing: dd if=/dev/fd0 of=/home/myFloppy.img My floppy disk is no longer working. So I am wondering whether it's possible to write the image of that floppy to a flash drive so that I can boot my machine from the flash drive. My machine's BIOS has the option of 'Boot from USB'.
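
    Writing the image straight onto the stick with dd only boots if the BIOS can emulate a USB floppy; otherwise SYSLINUX's memdisk can boot the .img from a normally prepared USB stick. A minimal sketch of the direct write (the device name is a placeholder -- double-check it, dd overwrites whatever it points at):

        sudo dd if=/home/myFloppy.img of=/dev/sdX bs=512    # /dev/sdX = the flash drive, not a partition
        sync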

    Read the article

  • How to query a DHCP server to get the local DNS servers

    - by Dan Berlyoung
    I have a ClarkConnect (CentOS-based) box running as my home router on a RR connection. I had the DNS servers set up to use Google's DNS servers. I want to change them back to the local DNS servers, but I can't find an obvious/easy way to get those addresses short of a) reconfiguring the router's network to DHCP them (I would rather not interrupt everyone) or b) calling their tech support (kill me now!). Is there a command-line tool/command I can use to query the DHCP server on the external NIC to see what DNS servers it would set me up with, without munging my existing setup?
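
    One low-impact trick is to run dhclient with a do-nothing script, so the lease is fetched and printed but nothing on the box gets reconfigured; the variables below are the standard ones dhclient-script receives, but treat this as a sketch to try at a quiet moment first:

        # Create a script that only prints what the DHCP server offered:
        printf '#!/bin/sh\necho "dns=$new_domain_name_servers routers=$new_routers"\n' > /tmp/dhcp-peek.sh
        chmod +x /tmp/dhcp-peek.sh
        # -sf: use our script; -d: stay in foreground; -1: one attempt; eth1 = external NIC (placeholder)
        sudo dhclient -sf /tmp/dhcp-peek.sh -d -1 eth1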

    Read the article

  • Three server processes consume no more than 50% of Dual Core CPU

    - by thor
    I have three processes running on an Intel Core 2 Duo CPU. From watching the output of 'top' and graphs of CPU load (drawn by MRTG, data collection via SNMP) I can see that CPU load is never more than 50%, and, most of the day, when those processes are busy, CPU load has a ceiling at 50%. I mean, CPU load grows up to 50% in the morning and stays there until late evening. My first thought was that only one core was being used at 100%, thus giving 50% across both CPUs. But there are three processes running, and from 'top' I see that both cores are being loaded, so this is not the case. schedtool shows that CPU affinity for those three processes is at the default, 0x03, allowing them to use both cores. If I force one process to one core (schedtool -a 0x01), and the two others to the second (schedtool -a 0x02), cumulative usage grows beyond 50%. Why do three processes seem to consume only 50% of two cores? Why does forcing them onto different CPUs allow usage to grow higher? Any hints? P.S. The processes in question are Counter-Strike servers.
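
    It may be worth separating "one core pegged" from "both cores half busy" with per-core and per-process views before digging further. A quick sketch using sysstat and top (the game-server binary name is a placeholder):

        mpstat -P ALL 1                   # per-core utilisation, once per second (sysstat package)
        top -b -n 1 | head -n 20          # snapshot; interactive top shows per-core lines after pressing '1'
        schedtool $(pidof hlds_linux)     # confirm the affinity actually in effect for the server process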

    Read the article

  • Streaming audio/video on a publicly hosted server increases bandwidth usage

    - by Eka
    I have a website hosted on a public server (without any streaming content), using public hosting instead of private because it's cheaper. But public hosting has limitations compared to private hosting, such as monthly bandwidth usage (1 GB), disk space, CPU usage, etc. I am planning to embed videos and audio (from other websites like YouTube) in my already existing website. My question is: if a client streams an embedded video/audio (hosted on another website) from my website, does my bandwidth usage change?

    Read the article

  • Alternative to the tee command without STDOUT

    - by aef
    I quite often use | sudo tee FILENAME to be able to write or append to a file for which superuser permissions are required. Although I understand why it is helpful in some situations that tee also sends its input to STDOUT again, I have never actually used that part of tee for anything useful. In most situations, this feature only causes my screen to be filled with unwanted jitter if I don't go the extra step and manually silence it with tee 1> /dev/null. My question: Is there a command around which does exactly the same thing as tee, but by default does not output anything to STDOUT?
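
    There is no widely shipped "tee without stdout", but a tiny wrapper gives the same effect (as would sponge from moreutils, or dd of=FILE). A sketch of a shell function -- the name "stee" is made up -- that could live in ~/.bashrc:

        # stee: like sudo tee, but silent; pass -a to append.
        stee() { sudo tee "$@" > /dev/null; }

        # usage:
        echo "nameserver 8.8.8.8" | stee -a /etc/resolv.conf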

    Read the article

  • Foremost custom file type not accepted by -t argument

    - by Channel72
    I'm trying to recover a deleted file on an ext3 file system using the foremost utility. The file I want to recover is a hpp C++ source code file. However, foremost does not automatically support the hpp file extension, so I have to add it to the config file. So, following the instructions on the man page, I add the following line to the config file: hpp n 50000 include include ASCII Then I run foremost as follows: $foremost -v -T -t hpp -i /dev/md0 -o /home/recover/ Instead of doing anything, it just displays the help message. If I change the hpp to htm or jpg, it works. So apparently foremost isn't accepting the custom file type I added into the config file. But I've looked over this dozens of times now, and I can't see what I'm doing wrong. I'm following the instructions exactly. Why doesn't foremost recognize the new file type I added to the config file?
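
    For what it's worth, foremost's -t switch only knows the built-in types; types defined in the configuration file are picked up by pointing -c at that file and leaving -t off entirely -- this is from a reading of the man page rather than a tested fix:

        foremost -v -T -c /etc/foremost.conf -i /dev/md0 -o /home/recover/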

    Read the article

  • Ubuntu list of packages installed

    - by becomingGuru
    I am moving to a new laptop with Ubuntu Lucid from an old laptop that has Ubuntu Karmic. I want to look at the list of all the installed packages and selectively install them on the new laptop. What is the best way to go about it? Thanks in advance.
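
    The usual dpkg round-trip looks like the sketch below; editing the intermediate text file is the "selective" part, and apt then installs whatever remains marked:

        # On the old (Karmic) laptop:
        dpkg --get-selections > packages.txt
        # Copy packages.txt over, prune it in an editor, then on the new (Lucid) laptop:
        sudo dpkg --set-selections < packages.txt
        sudo apt-get dselect-upgrade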

    Read the article

  • How do I set up a virtual host?

    - by user1698332
    My router redirects port 80 to port 8080. This is my virtual hosts file: NameVirtualHost *:80 <VirtualHost *:80> DocumentRoot /home/admins/lampstack-5.3.16-0/apps/wordpress ServerName example.com ServerAlias www.example.com </VirtualHost> I can access my website by entering "mywebsite.com:8080" but I cannot access it by entering "mywebsite.com" For further information, this is a part of my httpd.conf: Listen 8080 Servername localhost:8080 DocumentRoot "/home/admins/lampstack-5.3.16-0/apache2/htdocs <Directory /> Options FollowSymLinks AllowOverride None Order deny, allow deny from all </Directory> <Directory "/home/admins/lampstack-5.3.16-0/apache2/htdocs"> Options FollowSymLinks AllowOverride None Order allow, deny allow from all </Directory>
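
    Apache only answers on the ports named by Listen, so a *:80 virtual host never matches while the server listens on 8080; one way out is to declare the vhost on 8080 and let the router keep translating 80 to 8080. A sketch using the paths from the question:

        Listen 8080
        NameVirtualHost *:8080

        <VirtualHost *:8080>
            DocumentRoot /home/admins/lampstack-5.3.16-0/apps/wordpress
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>
        # Note: "Order deny, allow" needs the space removed -- Apache expects "Order deny,allow".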

    Read the article

  • Sending mail results in "Sender address rejected: Domain not found"

    - by user1281413
    The setup: a WHM/cPanel CentOS 5 server running Exim and Courier for mail services, and BIND for domain name services. I recently moved servers. The old server was running a HIGHLY similar configuration, and all accounts were ported via WHM. However, the server is unable to send, and sometimes receive, email. Errors I am seeing (when I do get an error mail back) state: 450 4.1.8 : Sender address rejected: Domain not found Edit for clarity: this is the error response from remote mail servers. Numerous independent mail servers come back with the same error. (The email address is merely one valid example.) My first instinct, of course, was to check the domain records. However, k-t.org appears to have a valid record (including an MX record), even after running it through domain checks on a completely different server elsewhere and online. Note that the issue appears to happen with all the domains hosted on the server, not just k-t.org. I have also ensured that a PTR record was created. My Googling has only led me to people who had fairly basic DNS mistakes, but either I'm blind/dumb (possible, DNS is not my strong suit), or it's something a bit more arcane. I've run out of ideas, and I can't seem to find anything that could explain why servers are unable to resolve the domains. There doesn't seem to be anything missing or incorrect.
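
    Since the rejection comes from remote servers, what matters is what the outside world resolves for the sender domain and for the HELO name Exim announces, not what the box itself sees. A diagnostic sketch using a public resolver (k-t.org as in the question; the reverse-lookup IP is a placeholder):

        dig +short A  k-t.org @8.8.8.8           # does the bare domain resolve publicly?
        dig +short MX k-t.org @8.8.8.8           # is there a public MX record?
        dig +short NS k-t.org @8.8.8.8           # are the delegated nameservers answering?
        hostname --fqdn                          # the name the server announces in HELO
        dig +short A "$(hostname --fqdn)" @8.8.8.8
        dig +short -x 203.0.113.10 @8.8.8.8      # PTR of the sending IP (placeholder address)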

    Read the article
