Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.


  • Is there a man-in-the-middle attacking my server machine?

    - by GongT
    My server has worked fine for about half a year, but a strange thing started happening a few hours ago. The server has two IP addresses, 58.17.85.19 and 117.21.178.19. When I browse to http://58.17.85.19, nothing is different from before, but http://117.21.178.19 returns a "302 Object moved" and ends up in a redirect loop. I ran some tests with ($cmd = "wget http://117.21.178.19/?xx=$RANDOM --max-redirect 0 -S --no-cache -O -"). Step by step: running $cmd on my PC and on a friend's PC (we live on opposite sides of China, far apart) - got 302. Running $cmd on the server itself - got 200 OK (the content is the correct output of index.php). Running $cmd on another server in the same computer room - got 200 OK. Building an HTTP request by hand over telnet from my PC - got 200 OK. After shutting down php-fpm: $cmd on my PC - still 302; $cmd on the server - 502 Bad Gateway. After shutting down nginx: $cmd on both the server and my PC - Connection refused. I then created an iptables rule refusing any connection to 58.17.85.19:80, ran nc -l 80 -k -vvv on the server, and ran $cmd from my PC. nc showed that the server accepted the connection ("Connection from [my ip]") and that my connection was then closed ("Remove fd xx from list"), yet wget still dumped out a response - got 302. I know that normally nc accepts the connection, dumps the HTTP request from the client, and the client then waits for a response; the connection stays open forever (in fact the client eventually closes it because of a timeout), because nc can't send any response. So where did my request go? Who sent a response to the client? Is there some virus on my server? If so, why doesn't 58.17.85.19 have this error? Or am I being attacked by a man in the middle?
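
    One way to narrow this down from the outside (a rough diagnostic sketch, not a diagnosis; the probes are assumptions and can be extended) is to hit both addresses from several vantage points and compare the status line, the response headers, and the hop distance, since an in-path box often shows up as different headers or a different TTL on only one address:

      #!/bin/bash
      # Compare what each address returns and how far away it appears to be
      for ip in 58.17.85.19 117.21.178.19; do
          echo "== $ip =="
          # Status line plus the first few response headers (curl does not follow redirects by default)
          curl -s -o /dev/null -D - "http://$ip/?xx=$RANDOM" | head -n 5
          # TTL of the echo reply; a mismatch between the two addresses can hint at an extra hop
          ping -c 1 "$ip" | grep -o 'ttl=[0-9]*'
      done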

    Read the article

  • Can't ping IP over bridge

    - by tmn29a
    I'm unable to ping another host over a bridge I created, and I can't see what the error is. It's a remote machine running Debian stable with some backports, on which I want to set up DHCP on the new subnet 172.30.xxx.xxx to be used for KVM guests. ifconfig:
    bond0     Link encap:Ethernet  HWaddr e4:11:5b:d4:94:30
              inet addr:10.54.2.84  Bcast:10.54.2.127  Mask:255.255.255.192
              inet6 addr: fe80::e611:5bff:fed4:9430/64 Scope:Link
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
              RX packets:34277 errors:0 dropped:0 overruns:0 frame:0
              TX packets:18379 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:2638709 (2.5 MiB)  TX bytes:2887894 (2.7 MiB)
    br0       Link encap:Ethernet  HWaddr f2:fc:4d:7f:15:f0
              inet addr:172.30.254.66  Bcast:172.30.254.127  Mask:255.255.255.192
              inet6 addr: fe80::f0fc:4dff:fe7f:15f0/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:252 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:10800 (10.5 KiB)
    Pings:
    ping -I br0 172.30.xxx.65
    PING 172.30.xxx.65 (172.30.xxx.65) from 172.30.xxx.66 br0: 56(84) bytes of data.
    --- 172.30.xxx.65 ping statistics ---
    3 packets transmitted, 0 received, 100% packet loss, time 2017ms
    ping -I bond0 172.30.254.65
    PING 172.30.xxx.65 (172.30.xxx.65) from 10.54.2.84 bond0: 56(84) bytes of data.
    64 bytes from 172.30.x.65: icmp_req=1 ttl=64 time=0.599 ms
    64 bytes from 172.30.x.65: icmp_req=2 ttl=64 time=0.575 ms
    64 bytes from 172.30.x.65: icmp_req=3 ttl=64 time=0.565 ms
    --- 172.30.x.65 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 1999ms
    rtt min/avg/max/mdev = 0.565/0.579/0.599/0.031 ms
    Route:
    Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
    172.30.x.64     *               255.255.255.192  U     0      0    0   br0
    10.54.x.64      *               255.255.255.192  U     0      0    0   bond0
    default         10.54.x.65      0.0.0.0          UG    0      0    0   bond0
    default         172.30.x.65     0.0.0.0          UG    0      0    0   br0
    The interfaces file (cat /etc/network/interfaces):
    auto lo br0
    iface lo inet loopback
    # Bonding Interface
    auto bond0
    iface bond0 inet static
        address 10.54.x.84
        netmask 255.255.255.192
        network 10.54.x.64
        gateway 10.54.x.65
        slaves eth0 eth1
        bond_mode active-backup
        bond_miimon 100
        bond_downdelay 200
        bond_updelay 200
    iface br0 inet static
        bridge_ports bond0
        address 172.30.x.66
        broadcast 172.30.x.127
        netmask 255.255.x.192
        gateway 172.30.x.65
        bridge_maxwait 0
    If you need more info please ask. Thanks for your help!
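
    A few things that might be worth checking (a hedged sketch, not a confirmed fix): whether bond0 really ended up enslaved to br0, whether having addresses and default routes on both bond0 and br0 sends replies out the wrong interface, and whether the peer answers ARP on the 172.30.x.0/26 side at all. For example:

      # Show which ports the bridge actually contains
      brctl show br0
      # Does ARP for the peer resolve over the bridge? (the "x" octet is redacted in the question)
      arping -I br0 -c 3 172.30.x.65
      # Which route and source address does the kernel pick for that destination?
      ip route get 172.30.x.65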

    Read the article

  • using "touch" to create directories?

    - by user62367
    1) in the "A" directory: find . -type f a.txt 2) in the "B" directory: cat a.txt | while read FILENAMES; do touch "$FILENAMES"; done 3) Result: the 2) "creates the files" [i mean only with the same filename, but with 0 Byte size] ok. But if there are subdirs in the "A" directory, then the 2) can't create the files in the subdir, because there are no directories in it. Question: is there a way, that "touch" can create directories?

    Read the article

  • How do I use the command line and wmctrl to make a window larger than the screen to get a huge screenshot?

    - by Mnebuerquo
    I use a program which makes a large image which I have to scroll to view. The program has no way to save the image, and I have no access to the source to modify it. The only way I have to get the image from the program is by screenshot. My goal is to save the full size image without having to piece together individual screenshots. I'm using this script to try taking a screenshot:
    #!/bin/bash
    window=$(wmctrl -l | grep "Program$" | awk '{print $1}')
    wmctrl -v -i -r $window -e '0,0,0,6030,5828'
    wmctrl -i -a $window
    import -window $window ~/Desktop/screenshot.png
    This uses wmctrl to get the window id ($window) for a window named "Program". It then tries to resize the window to the desired dimensions. It uses imagemagick (import) to save a screenshot.png on the user's Desktop. All of this works except the resize step. I can resize the window using wmctrl -r -e, but sizes greater than the screen size don't work. I'm using Ubuntu 10.04 and the Gnome Desktop. I run two monitors, but I've tried this with one of them disabled. Is there a way to resize the window larger than my screen to get a huge screenshot?
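
    One detail that sometimes blocks this (a hedged guess, not a confirmed fix): a maximized window ignores resize requests, and many window managers clamp windows to the screen or to the size hints the application advertises. Dropping the maximized state before the resize is cheap to try:

      window=$(wmctrl -l | grep "Program$" | awk '{print $1}')
      # A maximized window ignores -e; clear that state first
      wmctrl -i -r "$window" -b remove,maximized_vert,maximized_horz
      wmctrl -i -r "$window" -e '0,0,0,6030,5828'
      # If the WM still clamps the size, running the app on a larger virtual
      # display (e.g. a big Xvfb screen) and importing from there is another route.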

    Read the article

  • How to safely use grub rescue> in Fedora 16? System does not boot anymore

    - by YumYumYum
    When I boot my PC I get the following from my Fedora 16 install. I have tried the steps below, but none of them let me boot any more. Any help please? I am completely stuck.
    Grub loading.
    Welcome to GRUB!
    error: file not found.
    Entering rescue mode...
    grub rescue>
    grub rescue> ls
    (hd0) (hd0,gpt3) (hd0,gpt2) (hd0,gpt1)
    grub rescue> ls (hd0,gpt2)/
    ./ ../ lost+found/ memtest86+-4.20 grub2/ System.map-3.1.0-0.rc3.git0.0.fc16.i686 config 3.1.0.0.rc3.git0.0.fc16.i686 grub/ vmlinuz-3.1.0.0.rc3.git0.0.fc16.i686 elf-memtest86+-4.20 initramfs-3.1.0.0.rc3.git0.0.fc16.i686.img initramfs-3.1.0.0.rc4.git0.0.fc16.i686.img System.mpa-3.1.0.0.rc3.git0.0.fc16.i686 config-3.1.0.0.rc3.git0.0.fc16.i686 vmlinuz-3.1.0.0.rc3.git0.0.fc16.i686
    grub rescue> set prefix=(hd0,gpt2)/boot/grub
    grub rescue> set root=(hd0,gpt2)
    grub rescue> insmod normal
    error: unknown filesystem. (or sometimes "error: file not found.")
    grub rescue> normal
    unknown command 'normal'
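
    For what it's worth, the listing of (hd0,gpt2)/ shows grub2/ and the kernels at the top level, which suggests that partition is a separate /boot; in that case the prefix should not contain /boot. A hedged sequence to try from the rescue prompt, and then a proper bootloader reinstall once the system is up again (the disk name /dev/sda is an assumption):

      grub rescue> set prefix=(hd0,gpt2)/grub2
      grub rescue> set root=(hd0,gpt2)
      grub rescue> insmod normal
      grub rescue> normal
      # once booted, as root:
      grub2-install /dev/sda
      grub2-mkconfig -o /boot/grub2/grub.cfg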

    Read the article

  • Is it reasonable to make a RAID-1 array with a ram disk and a physical disk to maximize read performance and protect data?

    - by Petr Pudlák
    In one of the answers on SO (I forgot which one) I've seen a suggestion to make a RAID-1 array composed of a RAM disk and a physical partition. By adding the physical partition with --write-mostly and enabling --write-behind the system should read everything instantly from the RAM disk but still save all data to the physical partition so that the data are preserved and the RAID array can be assembled again after reboot. Is such a setup reasonable? Will it perform any better in some scenario than having just the physical partition and perhaps tweaking the kernel to favor disk cache (swappiness and vfs_cache_pressure)?
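
    For reference, a mirror along those lines might be put together roughly like this (a sketch only; the device names are placeholders, --write-behind requires a write-intent bitmap, and the array has to be re-assembled from the persistent member after every reboot because the RAM disk starts out empty):

      # /dev/ram0 is the fast member (brd module, sized via rd_size); /dev/sdb2 is the persistent one
      mdadm --create /dev/md0 --level=1 --raid-devices=2 \
            --bitmap=internal --write-behind=4096 \
            /dev/ram0 --write-mostly /dev/sdb2
      mkfs.ext4 /dev/md0
      mount /dev/md0 /mnt/fast

    Whether this beats simply letting the page cache hold the hot data is exactly the open question here: the kernel already keeps recently read blocks in RAM, so the win, if any, would mostly be for a working set that must never be evicted.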

    Read the article

  • How to get the PID of a process started by /bin/su -c

    - by crash3k
    I'm writing an init.d script for a Java app, but the Java app should be run as another user. (The OS I'm using is Debian Squeeze.) I already have this:
    /bin/su - $USER - c "cd $PATH;echo $PASSWORD | $JAVA -Xmx256m -jar $PATH/app.jar -d > /dev/null" &
    PID=$!
    /bin/su - $USER - c "echo $PID > $PIDFILE"
    But this of course only saves the PID of the /bin/su process instead of the PID of the Java process it starts.
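
    One common workaround (a sketch; $APPDIR is a hypothetical stand-in for the question's $PATH variable, since reusing PATH would clobber the shell's command search path) is to let the shell that su starts write the PID itself, so $! is expanded inside that shell rather than outside:

      /bin/su - "$USER" -c "cd '$APPDIR'; $JAVA -Xmx256m -jar app.jar -d > /dev/null 2>&1 & echo \$! > '$PIDFILE'"

    On Debian, start-stop-daemon can also do the user switch, the backgrounding, and the PID file in one step:

      start-stop-daemon --start --chuid "$USER" --background \
          --make-pidfile --pidfile "$PIDFILE" \
          --exec "$JAVA" -- -Xmx256m -jar "$APPDIR/app.jar" -d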

    Read the article

  • Switch between network configurations via command line in fedora 17

    - by Mike Fairhurst
    I have two different setups I use on my work laptop: one enables Synergy over an Ethernet SSH tunnel with my work computer on the local network, and the other opens an HTTP tunnel to my work computer from outside the network. When I have wifi enabled at work, my laptop seems to prefer it, which makes Synergy run incredibly slowly; at home I must use wifi. I have scripts that start my SSH tunnels, add my SSH keys, start up other programs like Synergy, and close themselves when I shut my laptop. However, every day I have to start my routine by opening gnome-control-center and turning on the ethernet connection. I have tried route add and ifup, and none of it works, so I dove into gnome-control-center's source code and found that it enables the connection with libnm's nm_client_activate_connection method and some libnm-specific structs that I am having trouble tracking down. I'm not much of a C programmer, and I'm not familiar with either GTK or libnm. Does anybody know what Fedora 17 does with ethernet connections to fully enable them? Or does anybody know what libnm does to fully enable an ethernet connection? Do I have to write a C program against libnm to fully emulate whatever gnome-control-center is doing?
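
    NetworkManager's command-line client drives the same machinery as gnome-control-center, so this may be scriptable without touching C (a sketch; the connection name is a placeholder, and the last line uses the older nmcli syntax that shipped around Fedora 17):

      nmcli con                               # list the configured connections
      nmcli dev                               # list devices and their state
      nmcli con up id "Wired connection 1"    # activate the wired profile
      nmcli nm wifi off                       # optionally disable wifi while docked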

    Read the article

  • Debian Wheezy: installing from sources or repositories? upgrading to new software release?

    - by user269842
    a. For some software, I'm wondering whether it is wiser to install from source or from the official repositories when both are available, e.g. GLPI, FusionInventory, and monitoring tools like Nagios. I tried both for GLPI (compiled from source and installed from the repositories), and I also installed Zabbix from source. b. What about new software releases providing enhancements: is it better to keep the release installed from the repositories/compiled from source, or is there a 'best practice' like downloading the new release and compiling it again (I really have no clue)? Could someone make this clearer for me? Thanks!

    Read the article

  • How to display password policy information for a user (Ubuntu)?

    - by C.W.Holeman II
    The Ubuntu documentation (Ubuntu 9.04 Server Guide, Security, User Management) states that there is a default minimum password length for Ubuntu: "By default, Ubuntu requires a minimum password length of 4 characters". Is there a command for displaying the current password policies for a user, in the way the chage command displays the password expiration information for a specific user?
    > sudo chage -l SomeUserName
    Last password change                               : May 13, 2010
    Password expires                                   : never
    Password inactive                                  : never
    Account expires                                    : never
    Minimum number of days between password change     : 0
    Maximum number of days between password change     : 99999
    Number of days of warning before password expires  : 7
    This would be preferable to examining the various places that control the policy and interpreting them by hand, since that process could contain errors; a command that reports the composed policy could be used to verify the policy-setting steps.
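
    There doesn't appear to be a single "show the composed policy" command, but the pieces can at least be dumped from the files that define them (a sketch; the paths are the usual Debian/Ubuntu ones and may differ by release):

      # Aging defaults applied when accounts are created
      grep -E '^PASS_(MAX_DAYS|MIN_DAYS|WARN_AGE)' /etc/login.defs
      # Length/complexity rules enforced at password-change time
      grep -E 'pam_(unix|cracklib|pwquality)' /etc/pam.d/common-password
      # Per-user aging values, as in the question
      sudo chage -l SomeUserName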

    Read the article

  • How to increase acpiphp slots?

    - by Eil
    On RHEL 5.5 there are 31 ACPI PCI hotplug slots by default:
    acpiphp: Slot [1] registered
    ...
    acpiphp: Slot [31] registered
    Is there a way to increase this number? I haven't been able to find an argument to supply to modprobe, or a sysctl knob to tweak, but based on some Google sleuthing I believe there must be a way to get more slots. (For the curious, this is just preliminary experimentation to see how many virtual disks I can hot-add to a running KVM guest.)
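
    A hedged note rather than an answer: acpiphp only registers the hotplug slots described in the ACPI tables handed to the system, so for a KVM guest the limit likely comes from the virtual BIOS/QEMU rather than from a driver parameter. Counting what is actually exposed is at least easy:

      # Hotplug slots the ACPI tables expose, as seen from the running system
      ls /sys/bus/pci/slots/ | wc -l
      dmesg | grep -c 'acpiphp: Slot'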

    Read the article

  • iptables rule on INPUT between 2 ethernet cards on the same host

    - by user1495181
    I have 2 ethernet cards in the same host, connected directly to each other with a LAN cable. I set eth0 with IP 192.168.1.2 and eth1 with IP 192.168.1.1, and I set this rule:
    iptables -A INPUT -p tcp -j NFQUEUE --queue-num 0
    There are no other rules (I ran iptables -X and -F). I send a TCP SYN packet (with a C++ program using a raw socket) from 192.168.1.2 to 192.168.1.1. In Wireshark I see that the packet is received on eth0, but the iptables rule above does not apply to it. When I send the packet to a remote host and apply the same rule on that remote host, it works correctly. So I guess this is because both cards are in the same host. I need an iptables INPUT rule for a local card (destination and source on the same machine), to simplify a test. Did I guess the problem correctly? Is there a way to work around it? PS: connecting them via a switch didn't help; the rule still wasn't applied. Running on Ubuntu. tcpdump shows the packet:
    10:48:42.365002 IP 192.168.1.2.38550 > 192.168.1.1.34298: Flags [S], seq 0, win 5840, length 0
    but iptables logging like this catches nothing:
    iptables -A INPUT -p tcp -j LOG --log-prefix '*****************'
    iptables -A OUTPUT -p tcp -j LOG --log-prefix '#################'
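
    The guess is probably right: the kernel delivers traffic between two local addresses over the loopback path, so it never really crosses the cable or hits the INPUT hook the way traffic from a remote host does. One way to make the two cards behave like two separate hosts for the test (a sketch using the question's addresses, assuming an iproute2 recent enough for network namespaces) is to move one of them into its own namespace:

      # Put eth1 into a separate network namespace
      ip netns add peer
      ip link set eth1 netns peer
      ip netns exec peer ip addr add 192.168.1.1/24 dev eth1
      ip netns exec peer ip link set eth1 up
      ip addr add 192.168.1.2/24 dev eth0
      ip link set eth0 up
      # Packets to 192.168.1.1 now really leave through eth0; rules added inside
      # the namespace see them on INPUT there
      ip netns exec peer iptables -A INPUT -p tcp -j NFQUEUE --queue-num 0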

    Read the article

  • Linux: Ubuntu is no longer the most popular distribution according to DistroWatch; is Unity to blame? Meteoric rise of Mint

    Linux: Ubuntu is no longer the most popular distribution according to DistroWatch. Is Unity to blame? A meteoric rise for Mint. The Ubuntu Linux distribution is losing ground according to DistroWatch's annual report, updated this week. Over the last 12 months, it is Mint Linux that tops the table of the most popular distributions. Ubuntu comes in second, but its decline accelerated in the last month, placing it fourth behind Fedora,

    Read the article

  • rsync without password, none of the Google (Server Fault) tutorials worked

    - by Jake Armstrong
    I need to use rsync for a daily backup operation, and in the past (on different servers) I managed to just use an RSA key, etc., but now none of the tutorials from Google (Server Fault) work at all: it keeps asking me for a password. I have Webmin and SSH/root access to both servers. My steps: create a key on server 1; send key.pub to server 2; add key.pub to .ssh/authorized_keys; chmod 700 .ssh/authorized_keys; go back to server 1 and try rsync, and it keeps asking for a password... The rsync command: rsync -avz -e ssh file.txt root@server2:/root. EDIT: well, I cleaned everything up, and this time, instead of giving the key a custom name, I used the standard one on server 1, sent the .pub to server 2, and it worked like a charm... So the answer is that server 1's ssh wasn't even using the right key.
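
    For the record, a custom-named key can still work if the client is told about it explicitly (a sketch; the key path is a placeholder):

      # On server 1: create and install a dedicated backup key
      ssh-keygen -t rsa -f ~/.ssh/backup_key -N ""
      ssh-copy-id -i ~/.ssh/backup_key.pub root@server2
      # Point rsync's ssh at that key explicitly
      rsync -avz -e "ssh -i ~/.ssh/backup_key" file.txt root@server2:/root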

    Read the article

  • Compiz: Switching focus by application instead of by window

    - by Ivan Vucica
    I got used to the OS X way of doing things (separate shortcuts for switching between applications and for switching between the current application's windows). Is there a way to get Compiz to have a shortcut (such as Super+Tab) that switches between applications ("window groups") instead of between windows? I already got the "Scale" plugin (an Exposé clone) to display only windows from the current window group, which proves there is a way to group by application, but I cannot find a way to get the "Application Switcher" to switch between these groups rather than between the windows themselves.

    Read the article

  • The best way to make a full system dump on CentOS [duplicate]

    - by tester3
    This question already has an answer here: Centos 5 Full backup (1 answer). I am on CentOS 6.5 with a lot of software and services installed and working. I've also got a lot of configs that damaged my brain, and I don't want to do all of that again :) So, can anyone please advise the best way to make a full system dump with all data, so that I only need to copy it over to a new system to get everything ready on the other machine, or something like that? P.S. The data on my HDD is encrypted, and I'd like an encrypted dump too. Please help :)
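
    One low-tech possibility (a sketch only, with placeholder paths; it excludes pseudo-filesystems and keeps the dump encrypted with a symmetric GPG passphrase) is a tar archive of the root filesystem piped through gpg. Restoring onto another machine would still mean re-creating partitions and reinstalling the bootloader, so dedicated imaging tools are worth comparing against this:

      tar --one-file-system -cpzf - \
          --exclude=/proc --exclude=/sys --exclude=/dev \
          --exclude=/run --exclude=/tmp --exclude=/mnt / \
        | gpg --symmetric --cipher-algo AES256 -o /mnt/backup/full-system.tar.gz.gpg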

    Read the article

  • What is the best way to handle the multitude of different logs created all around the place?

    - by Low Kian Seong
    I run a few applications which create their own logs, and I also run cron scripts on the same server to import data for my app. When these cron jobs error out, the default behaviour is to email the user that runs the cron job. There are just too many places I need to check (logs and mail) for things that might have gone wrong. My question is: what is the best way to handle this? Or, even better, is there a log-parser application that will go through all the system logs and surface only the things that really went wrong, instead of me having to go through them daily?
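
    Two small pieces that can cut the noise down (a sketch; logwatch is one common choice for a daily log summary, and the wrapper below is a hypothetical helper): a summarizer run once a day over the system logs, plus a cron wrapper that only mails when a job actually fails.

      #!/bin/sh
      # cronwrap: run a command, keep its output, and mail it only on failure
      out=$(mktemp)
      if ! "$@" > "$out" 2>&1; then
          mail -s "cron job failed: $*" admin@example.com < "$out"
      fi
      rm -f "$out"

    Used as, e.g., "cronwrap /usr/local/bin/import-data.sh" in the crontab, so a successful run stays silent.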

    Read the article

  • How to remove a bad disk from LVM2 with the least data loss on the other PVs?

    - by Walkman
    I had an LVM2 volume group on two disks. The larger disk became corrupt, so I can't pvmove. What is the best way to remove it from the group while saving the most data from the other disk? Here is my pvdisplay output:
    Couldn't find device with uuid WWeM0m-MLX2-o0da-tf7q-fJJu-eiGl-e7UmM3.
    --- Physical volume ---
    PV Name               unknown device
    VG Name               media
    PV Size               1,82 TiB / not usable 1,05 MiB
    Allocatable           yes (but full)
    PE Size               4,00 MiB
    Total PE              476932
    Free PE               0
    Allocated PE          476932
    PV UUID               WWeM0m-MLX2-o0da-tf7q-fJJu-eiGl-e7UmM3
    --- Physical volume ---
    PV Name               /dev/sdb1
    VG Name               media
    PV Size               931,51 GiB / not usable 3,19 MiB
    Allocatable           yes (but full)
    PE Size               4,00 MiB
    Total PE              238466
    Free PE               0
    Allocated PE          238466
    PV UUID               oUhOcR-uYjc-rNTv-LNBm-Z9VY-TJJ5-SYezce
    So I want to remove the unknown device (not present in the system). Is it possible to do this without a new disk? The filesystem is ext4.
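
    A commonly suggested sequence for a missing PV (a sketch only; it permanently discards whatever extents lived on the lost disk, so any LV that touched them loses that data and the surviving filesystem will need repair):

      # Drop the missing PV from the volume group
      vgreduce --removemissing --force media
      # Re-activate what is left and inspect it
      vgchange -ay media
      lvs media
      # Check/repair the filesystem before mounting; <lv-name> is a placeholder
      e2fsck -f /dev/media/<lv-name>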

    Read the article

  • How to understand cpu family/model/stepping fields in /proc/cpuinfo [closed]

    - by Victor Sorokin
    I have the following in cpuinfo:
    processor       : 0
    vendor_id       : AuthenticAMD
    cpu family      : 15
    model           : 107
    model name      : AMD Athlon(tm) 64 X2 Dual Core Processor 5600+
    stepping        : 2
    According to the Wikipedia page there are two kinds of 5600+: one made on 90nm technology, the other on 65nm. How can I tell which one I have? There seems to be no direct correspondence between the contents of cpuinfo and the info on the Wikipedia page, and the AMD site seems to use yet another naming scheme for its processors. How can I map the family, model and stepping values from cpuinfo to the data available on Wikipedia/AMD?
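
    One small thing that may help with the mapping (hedged, since the actual lookup tables have to come from AMD's revision guides or the Wikipedia CPUID pages): those sources usually quote family and model in hexadecimal, while /proc/cpuinfo prints them in decimal, so converting first makes the comparison easier:

      # Print family/model/stepping in decimal and hex for easier comparison
      awk -F': *' '/^cpu family\t/ || /^model\t/ || /^stepping\t/ { gsub(/\t/,"",$1); printf "%-10s %4d (0x%x)\n", $1, $2, $2 }' /proc/cpuinfo | sort -u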

    Read the article

  • Fixing Broken Groups

    - by themaestro
    Hey, I just got onto a new project with the student government at my university, and we're trying to get our web server into a more workable state. The current problem is that all of us, for some reason, have sudo power on the server, but we can't write or create files anywhere on the server (as far as we can tell). Our groups are currently as follows:
    /srv/ice/db$ groups goshri sshamim rmenezes
    goshri : goshri
    sshamim : sshamim ptx
    rmenezes : rmenezes ptx
    daifotis : daifotis ptx
    We added a few of us to ptx because we thought that might give us write access, but it didn't. We have a bunch of webapps running on this server, but since it's a university, things change hands quickly. What can we do to give us read access?
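
    Group membership only helps if the files themselves are group-accessible, and a new membership is not picked up until the next login. A sketch of the usual shared-group setup (the path and group come from the question; the exact permission bits are an assumption):

      # Give the shared group ownership and write access to the tree
      sudo chgrp -R ptx /srv/ice
      sudo chmod -R g+rwX /srv/ice
      # setgid on directories so new files inherit the ptx group
      sudo find /srv/ice -type d -exec chmod g+s {} +
      # log out and back in (or run `newgrp ptx`) so new group memberships apply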

    Read the article

  • getting a weird error whenever I try restarting apache

    - by Binny Zupnick
    I'm trying to install Apache, PHP5, MySQL, and phpMyAdmin. I'm following a tutorial, but this error keeps happening. Here's the error:
    apache2: Syntax error on line 227 of /etc/apache2/apache2.conf:
    Could not open configuration file /etc/phpmyadmin/apache.conf: No such file or directory
    Action 'configtest' failed.
    The Apache error log may have more information.
    ...fail!
    I've tried removing all of them and reinstalling, but to no avail. I'm pulling my hair out over this, so thanks in advance! =) Edit: during the tutorial I screwed up and deleted something, lol, so I know that's the issue; I just don't know what to do about it now.
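
    Since apache2.conf still includes the missing /etc/phpmyadmin/apache.conf, either that file has to come back or the Include has to go. Two hedged options (Debian/Ubuntu-style commands; the exact Include text may differ, so check line 227 of apache2.conf first):

      # Option 1: purge and reinstall phpMyAdmin so its config files are regenerated
      sudo apt-get purge phpmyadmin
      sudo apt-get install phpmyadmin
      # (sudo dpkg-reconfigure phpmyadmin can also rewrite its Apache snippet)

      # Option 2: comment out the dangling Include, then retest and restart
      sudo sed -i 's|^\(Include /etc/phpmyadmin/apache.conf\)|#\1|' /etc/apache2/apache2.conf
      sudo apache2ctl configtest && sudo service apache2 restart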

    Read the article
