Daily Archives

Articles indexed Thursday, June 28, 2012

Page 1 of 18

  • Debian, 6rd tunnel, and connection troubles

    - by Chris B
    Long story short, I am having issues with IPv6 using a 6rd tunnel with my ISP, Charter Business. They offer a 6rd tunnel that I think I have properly set up, but the server doesn't reply to every IPv6 request. When the server's network interfaces have been idle for about 10 minutes, IPv6 stops accepting inbound connections. To re-allow it, I must log into the server and make an outbound IPv6 connection (normally a ping) to start it back up. What's weird is that if I run iptraf while it's not working, it still shows the inbound IPv6 packet; the server is just not replying, and I can't figure out why. Also, if I try to access my server over IPv6 from a house about 1 mile away on the same ISP, it is never able to connect. It always times out, but again iptraf shows an inbound IPv6 packet; the server just does not reply. To test whether my server is accessible over IPv6 I always have to use my vzw 4G phone (they use IPv6) or ipv6proxy dot net.

    Here is all of the configuration information my ISP gives for their tunnel server:

        6rd Prefix = 2602:100::/32
        Border Relay Address = 68.114.165.1
        6rd prefix length = 32
        IPv4 mask length = 0

    Here is my /etc/network/interfaces for IPv6 (x's used to mask real addresses):

        auto charterv6
        iface charterv6 inet6 v4tunnel
            address 2602:100:189f:xxxx::1
            netmask 32
            ttl 64
            gateway ::68.114.165.1
            endpoint 68.114.165.1
            local 24.159.218.xxx
            up ip link set mtu 1280 dev charterv6

    Here is my iptables config:

        *filter
        :INPUT DROP [0:0]
        :fail2ban-ssh - [0:0]
        :OUTPUT ACCEPT [0:0]
        :FORWARD DROP [0:0]
        :hold - [0:0]
        -A INPUT -p tcp -m tcp --dport 22 -j fail2ban-ssh
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m multiport -j ACCEPT --dports 80,443,25,465,110,995,143,993,587,465,22
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT
        -A fail2ban-ssh -j RETURN
        -A INPUT -p icmp -j ACCEPT
        COMMIT

    And last, here is my ip6tables firewall config:

        *filter
        :INPUT DROP [1653:339023]
        :FORWARD DROP [0:0]
        :OUTPUT ACCEPT [60141:13757903]
        :hold - [0:0]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m multiport --dports 80,443,25,465,110,995,143,993,587,465,22 -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT
        -A INPUT -p ipv6-icmp -j ACCEPT
        COMMIT

    So, in summary:

    1. iptraf always shows IPv6 traffic, so it is always making it to the server.
    2. The server stops replying on IPv6 after no traffic for a while (roughly 10 minutes), until an outbound connection is made; then the process repeats.
    3. The server is NEVER accessible via the same ISP (yet iptraf still shows the IPv6 request).

    Notes: When I try to access it from the same ISP from across town, even with iptables and ip6tables allowing ALL inbound traffic, this is what iptraf shows:

        IPv6 (92 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0
        ICMP dest unrch (port) (120 bytes) from 24.159.218.xxx to 97.92.18.xxx on eth1

    It's strange, like it's trying to forward to the LAN (eth1 is LAN, eth0 is WAN), even with the server's domain name mapped to the IPv6 address in the hosts file. With iptables set up normally with the above configurations, it only shows this:

        IPv6 (100 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0

    I'm REALLY stuck on this, and any help would be GREATLY appreciated.
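
    In the meantime, I'm considering codifying the manual workaround as a cron job; it is a stopgap, not a fix for the root cause, and the ping target below is just a well-known pingable IPv6 host, nothing Charter-specific:

        # /etc/cron.d/ipv6-keepalive -- generate outbound IPv6 traffic every
        # 5 minutes so the tunnel never sits idle long enough to break:
        */5 * * * * root ping6 -c 1 2001:4860:4860::8888 >/dev/null 2>&1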

    Read the article

  • Running command transparently over ssh

    - by jnsg
    By transparently I mean forwarding of:

    1. stdin, stdout and stderr
    2. standard signals (SIGHUP or SIGINT would be great for a start)

    As an example, consider these invocations of a (pointless) local and remote command:

        $ `cat - > /dev/null; sleep 10` < /local/file
        $ ssh user@host "cat - > /dev/null; sleep 10" < /local/file

    I can interrupt the first one with ^C just fine. But if I try this during the second one, it only affects ssh, leaving the command running on the remote server if cat has already finished. I know about launching ssh with -t, but this way I can't send data via stdin. Is this possible with ssh alone at all?
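
    For what it's worth, forcing a pty with a second -t does stop the lingering process, but it breaks stdin, which is exactly why it doesn't count as transparent (a sketch of the trade-off, not a solution):

        # -tt forces pty allocation even though local stdin is not a terminal,
        # so the remote command gets SIGHUP when ssh dies and ^C no longer
        # leaves orphans. But the pty layer mangles the data stream: cat never
        # sees a clean EOF, newlines get translated, control bytes in the file
        # are interpreted. Only safe for plain-text input, if that.
        $ ssh -tt user@host 'cat - > /dev/null; sleep 10' < /local/file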

    Read the article

  • Mail Enabled SharePoint 2010 Loop Detected

    - by vlannoob
    I have set up a small SharePoint 2010 deployment and it is working fine, for now. I have run through one of the more popular step-by-step guides to mail-enable the install. What I have now is internal and external mail addressed to my mail-enabled list hitting my Exchange 2010 server (on another Win2k8 R2 box) and sitting in the Submission queue with a "Loop Detected" error; the messages progress no further. Everything appears OK as per the guide: I have set up an SMTP role on the SharePoint box, and I have set up a new Send Connector on the Exchange 2010 server, both as per the guide. Any ideas on troubleshooting here?

    Read the article

  • Open MySQL to any connection on Ubuntu

    - by ThomasReggi
    I simply want to open up MySQL so it is accessible from any IP address. I have already commented out the bind-address line in /etc/mysql/my.cnf. I have already set up the user account within MySQL. I have no clue what's stopping me from connecting. The more trouble this turns out to be, the more I realize how much of a security risk it is, and I get that; I just want to be able to do it temporarily. I think the iptables firewall is the last thing preventing me from achieving this, but

        sudo iptables -A INPUT -p tcp -m tcp --dport 3306 -j ACCEPT

    is seemingly doing nothing.
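
    For the record, here is the checklist I'm working through (the GRANT line is only an example pattern, and one common reason an appended rule "does nothing" is that an earlier REJECT/DROP shadows it):

        # Verify mysqld is actually listening on all interfaces, not 127.0.0.1:
        sudo netstat -tlnp | grep :3306
        # Insert the rule at the top of INPUT so nothing shadows it:
        sudo iptables -I INPUT 1 -p tcp --dport 3306 -j ACCEPT
        sudo iptables -L INPUT -n --line-numbers   # confirm where the rule landed
        # And the MySQL account needs a remote-capable host part, e.g.:
        # GRANT ALL ON mydb.* TO 'myuser'@'%' IDENTIFIED BY '...';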

    Read the article

  • Configure IPv6 routing

    - by godlark
    I've got IPv6 addresses from SixXS. My host is connected to the SixXS network over an AICCU tunnel (the "sixxs" interface). My host's address is 2001:6a0:200:172::2; the host on the other end of the tunnel is 2001:6a0:200:172::1. On my host, IPv6 is fully accessible. The problem is configuring IPv6 networking for the VMs. I use VirtualBox; the VM (Ubuntu) uses tap1, bridged on the host by br0:

        #!/bin/sh
        PATH=/sbin:/usr/bin:/bin:/usr/bin:/usr/sbin
        # create a tap
        tunctl -t tap1
        ip link set up dev tap1
        # create the bridge
        brctl addbr br0
        brctl addif br0 tap1
        # set the IP address and routing
        ip link set up dev br0
        ip -6 route del 2001:6a0:200:172::/64 dev sixxs
        ip -6 route add 2001:6a0:200:172::1 dev sixxs
        ip -6 addr add 2001:6a0:200:172::2/64 dev br0
        ip -6 route add 2001:6a0:200:172::2/64 dev br0

    Host routing table:

        2001:6a0:200:172::1 dev sixxs  metric 1024
        2001:6a0:200:172::/64 dev br0  proto kernel  metric 256
        2001:6a0:200:172::/64 dev br0  metric 1024
        2000::/3 dev sixxs  metric 1024
        fe80::/64 dev eth0  proto kernel  metric 256
        fe80::/64 dev sixxs  proto kernel  metric 256
        fe80::/64 dev br0  proto kernel  metric 256
        fe80::/64 dev tap1  proto kernel  metric 256
        default via 2001:6a0:200:172::1 dev sixxs  metric 1024

    Guest: interface eth1 (it is connected to tap1):

        auto eth1
        iface eth1 inet6 static
            address 2001:6a0:200:172::3
            netmask 64
            gateway 2001:6a0:200:172::2

    Guest routing table:

        2001:6a0:200:172::/64 dev eth1  proto kernel  metric 256
        fe80::/64 dev eth0  proto kernel  metric 256
        fe80::/64 dev eth1  proto kernel  metric 256
        default via 2001:6a0:200:172::2 dev eth1  metric 1024

    The guest can ping the host, the host can ping the guest, and the host can ping 2001:6a0:200:172::1, but the guest cannot ping 2001:6a0:200:172::1. When the guest tries to ping, I can capture its packets on the host (with tcpdump), but the host does not send them on to 2001:6a0:200:172::1. What have I missed in the configuration?
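
    One guess I want to rule out: the host has to route between br0 and sixxs, and Linux will not forward IPv6 packets unless forwarding is switched on:

        # on the host -- check, then enable, IPv6 forwarding:
        sysctl net.ipv6.conf.all.forwarding
        sysctl -w net.ipv6.conf.all.forwarding=1
        # persist across reboots:
        echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf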

    Read the article

  • How to verify a self-signed certificate from another server using openssl?

    - by ntsue
    I am new to openssl and I am having some trouble verifying (from a client machine) an FTP server using SSL with a self-signed certificate. I generated the .cer file by going to my server in IIS and exporting the certificate without the private key. I believe that this is all I should need on the client side, right? I use the following command to verify the certificate:

        openssl verify ftp.cer

    and the error that I get back is:

        error 20 at 0 depth lookup:unable to get local issuer certificate

    I tried this as well:

        openssl verify -CAfile ftp.cer ftp.cer

    but received the same error. From what I understand about SSL, this is happening because I have no chain of trust that connects to this server. By default, openssl did not install any trusted CAs, and this is fine; I would just like to tell it to trust this server. I tried various tutorials telling me how to add a certificate authority, including this one here, but the instructions are for Linux and include adding a symlink, and I am trying to do this in Windows. If anyone could provide any guidance on how to do this, or enlighten me if I am not understanding something correctly, I would greatly appreciate it. Thanks!
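
    A couple of checks I'm looking at, in case the export format or the issuer is the problem (these are guesses, not a confirmed diagnosis):

        # Is the certificate really self-signed? subject and issuer must
        # match for it to act as its own CA:
        openssl x509 -in ftp.cer -noout -subject -issuer

        # IIS can export DER-encoded .cer files, while openssl defaults to
        # PEM, so a conversion may be needed first:
        openssl x509 -inform der -in ftp.cer -out ftp.pem
        openssl verify -CAfile ftp.pem ftp.pem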

    Read the article

  • What is Active Directory and how does it work?

    - by MDMarra
    This is a Canonical Question about Active Directory. What is Active Directory? What does it do and how does it work? I find myself explaining some of what I assume is common knowledge about it almost daily. This question will, hopefully, serve as a canonical question and answer for most basic Active Directory questions. If you feel that you can improve the answer to this question, please edit away.

    Read the article

  • Strategy for using snapshots to back up Ubuntu Linux server?

    - by MountainX
    I need some backup advice for my home file server. Here are the mount points, volume groups, logical volumes and used/total space of all the volumes on my Ubuntu 8.10 home file server:

        /         vgA/lvRoot           [7.5G/50G]
        /tmp      vgB/lvTmp            [195M/30G]
        /var      vgB/lvVar            [780M/30G]
        swap      vgB/lvSwap           [16.00 GB]
        /media1   vgC/lvMedia1         [400G/975G]
        /media2   vgC/lvMedia2         [75G/295G]
        /boot     partition (no volume group)  [95M/200M]
        /video    partition (no volume group)  [450G/950G]
        /backups  vgD/lvBackupTarget   [800G/925G]
        /home     vgE/lvHome           [85G/200G]

    I have just added a 2.0 TB external USB drive that I would like to use to back up everything. (It will be a close fit to get it all on one 2.0 TB drive; I actually have a second external USB drive if needed.) I'd like to back up /, /var, /media1, /media2 and /home. I'll deal with /boot and /video separately, since they are not logical volumes. For all the logical volumes I'm anticipating taking snapshots and then copying those snapshots to the 2.0 TB external USB drive. I have never done a task like that before. If I do that, I could use the tutorial I found here: http://www.howtoforge.com/linux_lvm_snapshots

    My questions are:

    1. What is the best overall strategy? Is it LVM snapshots, as I'm assuming?
    2. How should I prepare, subdivide and mount the 2.0 TB external USB drive?
       a. Should I create one or more regular partitions, or should I create a physical volume with one or more logical volumes?
       b. Would it be advisable to exactly mirror the source PV/LV layout on the external drive, and if so, is this a good strategy?
    3. What's the best way to get the snapshots onto the external drive? dd?

    Even though this is a strategy question, feedback with actual commands is appreciated. I need step-by-step, cookbook-style help, because I don't do much server admin work.

    (Background: This is a home file server that I have rarely had to touch in about 2 years. It has done its job without much intervention. The really old PC that I used to back everything up recently failed, so I'm replacing it with the external USB drive(s), and I'd like to upgrade my backup strategy at the same time. Previously, I just copied stuff from /backups over to the other computer, which would not have made things very easy in a real restore situation. The /backups mount point contains backup copies of "most" of the important data on a file-by-file basis, but it does not contain copies of /boot, etc. BTW, the actual internal HDD that holds /backups is separate from the other storage devices.)

    EDIT: I'll propose a strategy. The idea came from a comment here: LVM mirroring VS RAID1. "LVM mirrors are for replication of a logical volume to a different physical volume. It's essentially meant to 'move the data to a different disk'. The mirror is then broken..." That would fit my requirements well. Here is an ideal situation:

    1. establish the LV mirror on the external drive
    2. break the link with the mirror
    3. create a (persistent) snapshot on the mirror
    4. after a week, resync the mirror with the original source and update the mirror
    5. break the link and create another snapshot on the mirror

    Obviously, the mirror will be like a weekly full backup, and the snapshots on the mirror will represent earlier points in time. If this would work, and if it would be time-efficient, it would give a nice full and differential type backup on the external drive based on LVM. I have not heard of a strategy like this before. Will it work? Could it be scripted? Thoughts?

    EDIT 2: Creating Portable DiskSafes With LoopbackFS And LVM Snapshots. This article seems intriguing: http://www.howtoforge.com/creating-portable-disksafes-with-loopbackfs-and-lvm-snapshots Unfortunately, I don't understand exactly how to map those ideas to the strategy I'm proposing above. I'm going to ask this last bit as a separate question. I will leave my original question in place because I still desire feedback on the overall best strategy. At this moment I'm assuming it is LVM mirroring in the style of "Creating Portable DiskSafes with LVM Snapshots", but that might be wrong.
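
    For reference, the basic snapshot-then-copy step I have in mind for one volume looks like this (volume names are from my layout above; the snapshot size and mount points are illustrative):

        # 1. Snapshot /home's LV, reserving space for writes made during the backup:
        lvcreate --size 5G --snapshot --name lvHome-snap /dev/vgE/lvHome
        # 2. Mount it read-only and copy it to the external drive:
        mkdir -p /mnt/snap
        mount -o ro /dev/vgE/lvHome-snap /mnt/snap
        rsync -aHAX --delete /mnt/snap/ /mnt/external/home/
        # 3. Clean up -- snapshots degrade write performance while they exist:
        umount /mnt/snap
        lvremove -f /dev/vgE/lvHome-snap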

    Read the article

  • User Friendly port knocker (port knocking client) for Windows?

    - by Ekevoo
    It seems "It's me" is the most popular port-knocking client for Windows... except it sucks. It works for console-savvy users such as me, but, unsurprisingly, all my users hate console windows, and I know better than to force one upon them. I would love a port knocker for Windows that is windowed, has launchers, and is easily provisionable (i.e. I tell my user to paste some settings, or to import some file by double-clicking it). To be honest, just not being console-based would be enough.
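
    For context, the knock itself is trivial from a console (the host and port sequence below are made up); what I'm after is this, wrapped in a friendly window:

        # one "knock": touch each secret port in order, then connect
        for p in 7000 8000 9000; do
            nc -z -w 1 server.example.com "$p"
        done
        ssh user@server.example.com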

    Read the article

  • tproxy squid bridge very slow when cache is full

    - by Roberto
    I have installed a bridging tproxy proxy on a fast server with 8GB RAM. The traffic is around 60Mb/s. When I first start the proxy (with the cache empty) it works very well, but when the cache becomes full (a few hours later) the bridge becomes very slow: the traffic drops below 10Mb/s and the proxy server becomes unusable. Any hints as to what may be happening? I'm using:

        linux-2.6.30.10
        iptables-1.4.3.2
        squid-3.1.1

    compiled with these options:

        ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info \
            --datadir=/usr/share --localstatedir=/var/lib --sysconfdir=/etc/squid \
            --libexecdir=/usr/libexec/squid --localstatedir=/var --datadir=/usr/share/squid \
            --enable-removal-policies=lru,heap --enable-icmp --disable-ident-lookups \
            --enable-cache-digests --enable-delay-pools --enable-arp-acl --with-pthreads \
            --with-large-files --enable-htcp --enable-carp --enable-follow-x-forwarded-for \
            --enable-snmp --enable-ssl --enable-async-io=32 --enable-linux-netfilter \
            --enable-epoll --disable-poll --with-maxfd=16384 \
            --enable-err-languages=Spanish --enable-default-err-language=Spanish

    My squid.conf:

        cache_mem 100 MB
        memory_pools off
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        acl to_localhost dst ::1/128
        acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
        acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
        acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
        acl localnet src fc00::/7       # RFC 4193 local private network range
        acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
        acl net-g1 src xxx.xxx.xxx.xxx/24
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow net-g1        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost
        http_access deny all
        http_port 3128
        http_port 3129 tproxy
        hierarchy_stoplist cgi-bin ?
        cache_dir ufs /var/spool/squid 8000 16 256
        access_log none
        cache_log /var/log/squid/cache.log
        coredump_dir /var/spool/squid
        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern ^gopher:        1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
        refresh_pattern .

    I have this issue when the cache is full, but I do not really know if that is the actual cause. Thanks in advance, and sorry for my English. roberto
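
    One idea I am considering (a guess at the cause, not a certainty): the ufs store type does its disk I/O synchronously in Squid's main loop, which tends to hurt once the cache fills and eviction starts. Since the build above already has --enable-async-io, switching to the threaded store is a one-line experiment:

        # in squid.conf -- aufs uses a thread pool for disk I/O instead of
        # blocking the main event loop (same directory layout as before):
        cache_dir aufs /var/spool/squid 8000 16 256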

    Read the article

  • Hadoop install cluster

    - by chnet
    Is it possible to run Hadoop on multiple nodes when those nodes have Hadoop installed under different paths? I noticed that tutorials such as Michael Noll's (http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/) install Hadoop on every node under the same path. I tried to run Hadoop multi-node with each node having Hadoop installed in a different path, but without success. It seems Hadoop assumes by default that the installation paths are the same: when I execute start-dfs.sh, it tries to start the Hadoop daemons on the slave nodes from the same path as on the master, whereas the installation paths differ. Which file should I modify to reflect the installed location on each node?
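
    The workaround I am considering (paths below are only examples) is to fake a common path with a symlink on every node, rather than teach the start scripts about per-node paths:

        # on each node, make the real install reachable at one common path:
        sudo ln -s /opt/node-local/hadoop-1.0.3 /usr/local/hadoop
        # then configure and start everything relative to that path,
        # e.g. in the hadoop user's ~/.bashrc on every node:
        export HADOOP_HOME=/usr/local/hadoop
        export PATH=$PATH:$HADOOP_HOME/bin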

    Read the article

  • Apache httpd.conf handle multiple domains to run the same application

    - by John Stewart
    So what we are looking for is the ability to do the following: we have an application that can load certain settings based on the domain it is being accessed from. So if you come from xyz.com we show one logo, and if you come from abc.com we show a different logo. The code is the same and runs from the same server; it just detects the domain at runtime. Now we want to get a dedicated server (any suggestions?) that will let us point all the domains we want at it (we change the DNS for the domains to that of our server), so that when a user goes to any of those domains they run the same application. As far as I can understand, we will need to create a "VirtualHost" in Apache to handle this. Can we create a wildcard VirtualHost that catches all the domains? I am not an expert with Apache at all, so please forgive me if this turns out to be a silly question. Any detailed help would be great. Thanks
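
    To make the question concrete, is a catch-all like this the right idea (Apache 2.2-era syntax; names and paths are placeholders)? Since it is the only name-based vhost, every Host header should fall through to it, and the application can still read the requested domain from the Host header:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName app.example.com
            ServerAlias *
            DocumentRoot /var/www/app
        </VirtualHost>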

    Read the article

  • Need help creating advanced context menu command in Windows 7 (x64)

    - by Craig
    I found out about ForceBindIP and I really love it, so much that I am using it regularly enough that typing the same command into a prompt over and over again is getting painful. Here's ForceBindIP and what it does: http://www.r1ch.net/stuff/forcebindip/ I'm on 64-bit Windows 7 Home Premium. What I want to do is add a right-click context-menu item so that when I browse items in Windows Explorer, or on my desktop, I can automate a ForceBindIP command (through the prompt). I am permanently connected to two networks: one over Ethernet, and one over wireless; the Ethernet network takes priority. What I want is a "Run through wireless network" context-menu item that sends the right-clicked file through this command:

        ForceBindIP {5F657824-9E3B-46E5-C21E-F52585R6457E} "[path to right-clicked file here]"

    It will need to run that command in C:\Windows\SysWOW64. I've no experience at all playing with the Windows registry or writing batch files, anything of that sort. I was wondering if anyone here would be kind enough to assist me?
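
    From what I've read so far, I believe the registry side would look roughly like this .reg file (untested; the key name is arbitrary, the GUID is the one from above, and it assumes ForceBindIP.exe lives in C:\Windows\SysWOW64 as described). Saving it and double-clicking it should add the menu entry for all file types, with %1 expanding to the clicked file's full path:

        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\*\shell\Run through wireless network]

        [HKEY_CLASSES_ROOT\*\shell\Run through wireless network\command]
        @="C:\\Windows\\SysWOW64\\ForceBindIP.exe {5F657824-9E3B-46E5-C21E-F52585R6457E} \"%1\""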

    Read the article

  • In Windows 8, can you use a different default browser for Metro/WinRT apps than for normal desktop apps?

    - by Joel Coehoorn
    I'm playing with the Windows 8 Consumer Preview, and one thing I've noticed is that by default the Metro/WinRT apps respect my choice of Chrome as my default browser. That's probably a good thing as the default, out-of-the-box behavior for Windows. However, what I'm finding as I play with the preview is that when I'm using a Metro/WinRT/tiled app (and only when I'm using one of these apps) I would prefer that internet links opened from within those apps use the Metro version of Internet Explorer. The issue isn't so much that I like IE here as that the transition between the Metro world and the desktop world is jarring; I want to limit those transitions. Perhaps when the Metro version of Firefox is released I might prefer it instead. The point is that I want a different default browser setting for the WinRT stuff than for the legacy desktop stuff. Is this possible?

    Read the article

  • Java file manager won't load in Firefox

    - by Arthur
    I am using Webmin and I can't get the file manager to load in Firefox. It is a simple, Java-based file manager. When I try to load it I get the following error:

        This module requires java to function, but your browser does not support java

    Internet Explorer works fine. Java is installed, and I have the same problem on both Windows and Linux. Java seems to work fine with everything else, with the exception of webcams. Any advice on the issue would be appreciated.

    Edit: I just checked, and it doesn't work in Chrome either.
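
    On the Linux side, one thing I plan to check (the paths and JRE version below are only examples): Firefox only "supports Java" if the JRE's NPAPI plugin, libnpjp2.so, is linked into a plugins directory it scans:

        # see whether Firefox has any Java plugin registered:
        ls /usr/lib/mozilla/plugins ~/.mozilla/plugins 2>/dev/null | grep -i -e java -e npjp
        # if not, link the one shipped with the JRE (path is an example):
        ln -s /usr/java/jre1.6.0_31/lib/amd64/libnpjp2.so ~/.mozilla/plugins/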

    Read the article

  • Linux: Simulate Serial Connection from Arduino

    - by shanet
    I'm trying to simulate the serial connection from an Arduino to a Processing applet, since I don't have an Arduino at the moment. Simply put, I'm trying to send bytes from Bash to a serial connection (on /dev/ttyS0), which the Processing applet will pick up as it would from an Arduino. I tried the answer to this question: How can I send data to the serial port from a Linux shell?, but it's simply not working, and I don't know how to go about debugging something like this since I've never played with serial connections before. Any advice? Thanks much.
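
    One approach I've started looking at is socat, which can create a linked pair of pseudo-terminals to stand in for the missing hardware; Processing opens one end while test bytes go into the other (the device names differ on every run):

        # prints the names of the two linked PTYs, e.g. /dev/pts/3 and /dev/pts/4:
        socat -d -d pty,raw,echo=0 pty,raw,echo=0
        # in another shell, pretend to be the Arduino on one end...
        printf 'hello\n' > /dev/pts/4
        # ...while the Processing sketch opens the other end (/dev/pts/3)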

    Read the article

  • Windows 7 Multi-NIC woes

    - by Eric
    I have Comcast Business Internet here. It gives me 5 static IPs. Most of the machines in my house connect to a router like in every other household; it has a 192.168.117.x subnet, a DHCP server, etc., and all is well. However, a second machine on MY desk has a live Internet IP. Up until yesterday, this machine was running XP Pro: the primary NIC was manually set to 192.168.117.241 with no gateway, and the secondary NIC was manually set to 173.x.x.171 with a gateway of 173.x.x.174. This worked just fine for years. Yesterday I replaced that XP machine with a brand-new Windows 7 x64 box and configured it the same way: the onboard NIC was given a static 192.168.117.x address with no gateway, and the secondary NIC was given a live Internet IP address with the proper router, etc.

    Two problems. First, the internal network (192.168.117.x) is listed as a Public network because there's no gateway, which means no HomeGroup, no file sharing, none of that; and from what I'm reading, I can't change it. Second, the machine reports the "router" IP address as its address, and not the address it's supposed to have. I'm ready to tear my hair out over this. Any ideas?

    Read the article

  • connect 2.1 stereo speakers to LG LCD-TV (5500 series)

    - by rMaero
    I bought a pair of 2.1 speakers for my dad's TV, an LG 32LE5500. When I installed them, they sounded worse than the integrated ones, and that's when I realized the subwoofer didn't work at all and both speakers produced lower volume than the internal ones. The audio output jack is labeled "H/P" (standing for headphones, with a matching symbol). Before buying, I checked this output with my phone's headphones and it worked, so I figured it would work with a set of speakers, since it's a standard audio output. I guess it's literally for headphones only and not for any other kind of sound device. The only other audio output is the optical digital one, so I can't use that, at least not with these speakers. Am I screwed, or is there a workaround?

    Read the article

  • Can I set up multiple accounts on DD-WRT? [closed]

    - by Greg Ros
    Possible Duplicate: Can I set up multiple accounts on DD-WRT? I want to set up multiple accounts on DD-WRT (accounts meaning username/password pairs). Specifically, I want one to be used primarily for remote web management (though there is no reason to restrict the account to that). Is this possible? If so, how do I go about it? I'm running:

        Router Model: TP-Link TL-WR1043ND
        Firmware Version: DD-WRT v24-sp2 (08/07/10) std - build 14896

    Read the article

  • How to jump past an ad IP in Firefox?

    - by chris
    I use Firefox with a lot of anti-ad extensions, like Adblock, NoScript, Ghostery, etc. I keep finding the IP 220.191.158.69 in NoScript's list, and I want to ban it. I have checked it and searched for it: it belongs to a China Telecom ad service. I guess NoScript can ban this IP already, but when I disallow it, sometimes a web page won't load on the first try and I need to refresh two or more times. I hate it... So I hope there is some way, browser extension, or script that can jump past this damn IP.
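
    If it helps, I'd even settle for blocking it outside the browser. On a Linux box, something like this should make connections to that IP fail instantly, instead of hanging the page load:

        # reject outright so the browser gets an immediate error, not a timeout:
        sudo iptables -A OUTPUT -d 220.191.158.69 -j REJECT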

    Read the article

  • How to use sudo with rcp command to copy files from linux host to HP-UX host?

    - by Justin
    I'm having an issue where, when I try to use sudo to rcp some files from a Linux host to an HP-UX host (note that the destination directory requires root access to write to), I get the following error from the HP-UX side:

        remshd: Login incorrect.

    I should note that the passwords for the Linux host and the HP-UX host are different. The command never gives me a chance to enter the proper HP-UX password; it fails straight to this error.
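
    From what I've read, rcp never prompts for passwords at all; remshd authenticates via host equivalence, so I suspect the HP-UX side needs the Linux client listed in root's .rhosts (the hostname below is a placeholder):

        # on the HP-UX host, as root (root's home is /):
        echo "linuxclient.example.com root" >> /.rhosts
        chmod 600 /.rhosts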

    Read the article

  • Is dual-booting an OS more or less secure than running a virtual machine?

    - by Mark
    I run two operating systems on two separate disk partitions on the same physical machine (a modern MacBook Pro). In order to isolate them from each other, I've taken the following steps:

    1. Configured /etc/fstab with ro,noauto (read-only, no auto-mount)
    2. Fully encrypted each partition, with a separate encryption key for each (committed to memory)

    Let's assume that a virus infects my first partition unbeknownst to me. I log out of the first partition (which encrypts the volume), and then turn off the machine to clear the RAM. I then un-encrypt and boot into the second partition. Can I be reasonably confident that the virus has not / cannot infected both partitions, or am I playing with fire here? I realize that MBPs don't ship with a TPM, so a boot-loader infection going unnoticed is still a theoretical possibility. However, this risk seems about equal to the risk of the VMware/VirtualBox hypervisor being exploited when running a guest OS, especially since the MBP line uses UEFI instead of BIOS. This leads to my question: is the dual-partitioning approach outlined above more or less secure than using a virtual machine for isolation of services? Would that change if my computer had a TPM installed?

    Background: Note that I am of course taking all the usual additional precautions, such as checking for OS software updates daily, not logging in as an admin user unless absolutely necessary, running real-time antivirus programs on both partitions, running a host-based firewall, monitoring outgoing network connections, and so on. My question is really a public check to see if I'm overlooking anything here, and to figure out whether my dual-boot scheme actually is more secure than the virtual-machine route. Most importantly, I'm just looking to learn more about security issues.

    EDIT #1: As pointed out in the comments, the scenario is a bit on the paranoid side for my particular use case. But think about people who may be in corporate or government settings and are considering using a virtual machine to run services or applications that are considered "high risk". Are they better off using a VM or a dual-boot scenario as I outlined? An answer that effectively weighs the pros and cons of that trade-off is what I'm really looking for in an answer to this post.

    EDIT #2: This question was partially fueled by debate about whether a virtual machine actually protects a host OS at all. Personally, I think it does, but consider this quote from Theo de Raadt on the OpenBSD mailing list:

        x86 virtualization is about basically placing another nearly full
        kernel, full of new bugs, on top of a nasty x86 architecture which
        barely has correct page protection. Then running your operating
        system on the other side of this brand new pile of shit. You are
        absolutely deluded, if not stupid, if you think that a worldwide
        collection of software engineers who can't write operating systems
        or applications without security holes, can then turn around and
        suddenly write virtualization layers without security holes.
        -- http://kerneltrap.org/OpenBSD/Virtualization_Security

    By quoting Theo's argument, I'm not endorsing it. I'm simply pointing out that there are multiple perspectives here, so I'm trying to find out more about the issue.
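
    (For concreteness, the fstab arrangement in step 1 looks roughly like this on each OS; the device, mount point and filesystem type are placeholders and will vary with how the other OS's partition is formatted:)

        # in /etc/fstab, covering the *other* OS's partition:
        /dev/disk0s4  /mnt/otheros  hfs  ro,noauto  0 0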

    Read the article

  • How to run Android-x86 project's ISO in VirtualBox with ethernet?

    - by Shiki
    I managed to find a way just days ago, but I had to leave my other PC, and now I have no clue how to get it working again. Basically, you get the image and install it in a VirtualBox guest. The problem is that when you launch the VM, there is no internet connection, neither with NAT nor Bridged, and I tried all the emulated network cards too. Since an internet connection is crucial for Android development, I have to get this working (as I said, I managed to fix it once). I'm using:

    1. the 4.0 RC1 images from Android-x86
    2. VirtualBox
    3. Eclipse 4.2 Juno with the latest Android ADT
    4. Android SDK v18, upgraded to 19 via the package manager

    I have seen a lot of different builds on the net for Android in VirtualBox. I have checked Buildroid, for example, but there is no network connection there either. I imported the virtual machine just as the how-to said. The extension pack is also installed and up to date.
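
    The fix I vaguely remember involved bringing the NIC up by hand from the Android-x86 console (the DNS server below is just an example; the console keys can vary by build):

        # Alt-F1 switches to a root console in the Android-x86 image
        netcfg eth0 up
        netcfg eth0 dhcp
        setprop net.dns1 8.8.8.8
        # Alt-F7 returns to the UI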

    Read the article

  • Files missing after copying from one sdcard to another using Ubuntu 12.04

    - by Abhi Kandoi
    I have two Kingston SD cards, 8GB and 32GB: the 8GB for my HTC Sensation and the 32GB for my (Chinese) tablet. I soon ran out of space on my phone and decided to swap the data between the cards. I copied the data from the 32GB card to my PC (running Ubuntu 12.04), deleted everything on the 32GB card, and copied the data from the 8GB card onto it. Finally, I copied the data from my PC to the 8GB card (after first deleting everything that was on it). Now the problem is that the 8GB card has its data exactly as expected, but the 32GB card has data missing: all my photos (not copied or posted on Facebook) and songs are lost. Please help; what can I do to get my data back? I have Android Revolution HD (ICS 4.0.3) on my rooted HTC Sensation (India). I suspect it has something to do with my data loss.
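
    What I'm considering so far: if the files were actually deleted (rather than never copied), and nothing has overwritten the card since, a file carver may get the photos and music back (the device name below is a placeholder; check dmesg for the real one):

        sudo apt-get install testdisk    # photorec ships in the testdisk package
        sudo photorec /dev/mmcblk0       # or /dev/sdX -- whichever is the 32GB card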

    Read the article

  • Yet again: "This device can perform faster" (Samsung Galaxy Tab 2)

    - by Mike C
    I've been doing a lot of research with no reasonable solution. Please excuse the length of my post.

    When I plug my Galaxy Tab 2 (7" / Wi-Fi only / Android ICS) into my Windows 7 64-bit machine, I (almost always) get the warning popup that "This device can perform faster," and in fact, transfers onto the Tab in this mode are slow. The two times I've managed to get a high-speed connection, the transfer occurred at the expected speed; I just don't know how to get that high-speed connection reliably. (The first time was the first time I connected the Tab; the second time, I was fiddling around, unplugging and plugging it in again.)

    That popup is telling me that the device is USB 2.0, but that Windows thinks I've connected it to a USB 1.x port. In fact, every USB port (there are ten) on this system is USB 2.0. It's an ASUS M3A78-EMH mobo from late 2008. I'm not sure what the chipset is; the CPU is an AMD Athlon 4850e, but I've seen this message reported for non-AMD systems. (Every mobo reference I've seen in reports of this has been for Asus, but of course most reporters aren't including that info at all.) The Windows 7 installation is just a couple weeks old (I had a disk crash), but I saw the same warning on the WinXP/64 install that preceded it.

    In Device Manager, there are two "Standard Enhanced PCI to USB Host Controller" nodes, which are the actual high-speed controllers. There are also five "Standard OpenHCD USB Host Controller" nodes, which I have determined are virtual USB 1.x controllers embedded in the "Enhanced" controllers. (In Device Manager, I'm using View | Devices by Connection.) My high-speed thumb drives, external disks, and iPod all show up as subnodes of the "Enhanced" controllers; the keyboard, mouse, and USB speakers show up under the "OpenHCD" ones, and this is true no matter which ports these devices are plugged into. The Tab shows up under an OpenHCD node, unsurprisingly. It appears as a threesome: a top-level "Mobile USB Composite device" with two children, "Galaxy Tab 2" and "Mobile USB Modem". (I have no idea what the modem device implies or how I might use it, but I don't care about it either; I just want the Tab to reliably connect at high speed.)

    On the Tab, the USB support has a switch between PTP and MTP, the latter being the default and my preferred mode (I'm usually hooking it up for music sync). I have tried connecting it as PTP, however, and it still connects as USB 1.x. (As PTP, only the "Galaxy Tab 2" device appears: no Composite, no Modem.) If it's plugged in as MTP and I change the setting to PTP, Windows unloads and reloads the device, and voila: the Tab appears under an "Enhanced" node, but it eventually reloads again and shows an exclamation icon on the device; Properties then shows "This device cannot start." The same happens if I plug it in as PTP and then change to MTP; in that case, only the Tab itself shows the exclamation, not the other two devices.

    One thing I have not tried, and would really prefer to avoid, is installing the "beta" chipset driver available on the Asus website, which is dated 2009. Windows tells me it has the most up-to-date drivers for the Tab and for the chipset, and I'm inclined to believe that. I suspect the problem is with the Samsung drivers, or possibly the hardware.

    One suggestion I saw elsewhere which might possibly pertain is to ensure the USB cable is properly shielded; however, the Tab has one of those misbegotten 30-pin, not-quite-an-iPod connectors, and I don't know if I could find a third-party one. It seems unlikely that this cable is improperly shielded, though. (Is there a way to test that?)

    So, my question is: does anyone know how to get this working as one might reasonably expect it to?

    Read the article
