I want to do two things:
Set up a virtual machine on Windows 7 to run Ubuntu
Set up a way for the virtual machine to read the Windows disk, or for Windows to have read/write access to the virtual machine's disk. My goal is to have a place where both Ubuntu and Windows can read and write.
What software is good for this task? Are there free programs that can run virtual machines?
Also if my machine is running Windows 7 64-bit, can I install Ubuntu 32-bit? Or am I forced to use Ubuntu 64-bit? Or does it not matter?
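To make the shared-read/write part concrete, this is the kind of setup I'm imagining, assuming VirtualBox turns out to be the tool (the VM name "UbuntuVM" and the folder paths are just placeholders I made up):

```shell
# on the Windows host: share C:\vmshare with the guest under the name "vmshare"
VBoxManage sharedfolder add "UbuntuVM" --name vmshare --hostpath "C:\vmshare"

# inside the Ubuntu guest (requires the Guest Additions to be installed):
sudo mkdir -p /mnt/vmshare
sudo mount -t vboxsf vmshare /mnt/vmshare
```

Both sides would then read and write the same directory.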
I have set up a single laptop (I have 10 of these exact models) for a demonstration next week. I want to copy the configuration (Fedora 13 with the software for the demonstration) to the 9 other machines. Is this simply a dd operation? I may be assuming I need more than is really necessary.
Thanks for the assistance!
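For reference, my understanding of the dd approach is a raw block-for-block copy, as in the sketch below; I use two scratch files here in place of real disks, but on the actual laptops the device nodes (something like /dev/sda) would take their place:

```shell
# stand-ins for the source and target disks; on real hardware these would be
# device nodes like /dev/sda (the configured laptop) and /dev/sdb (the clone)
src=$(mktemp)
dst=$(mktemp)
head -c 1M /dev/urandom > "$src"

# raw copy; bs=4M keeps the number of read/write calls low,
# conv=fsync forces the data to disk before dd exits
dd if="$src" of="$dst" bs=4M conv=fsync status=none

# verify the clone is byte-identical to the source
cmp -s "$src" "$dst" && result=identical
echo "$result"
rm -f "$src" "$dst"
```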
Tomcat is leaving me with CLOSE_WAIT sockets which ultimately saturate the maximum number of connections.
I've tried many methods in my client and server code to get rid of these to no avail, including closing connections, calling System.gc(), etc.
Now I'm trying to find a way to simply time these out quickly in the OS. I've got conntrack working, but am not sure how to use that to kill these connections. I've also set /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_close_wait to 1, which of course is too low but the connections persist.
Is there a way to kill these zombie sockets?
Running Ubuntu.
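For anyone following along, this is how I've been watching the stuck sockets (using ss from iproute2; netstat -tan | grep CLOSE_WAIT shows the same on older boxes):

```shell
# list all TCP sockets currently stuck in CLOSE_WAIT; CLOSE_WAIT means the
# local application still holds the socket open, so the kernel won't reap it
ss -tan state close-wait

# count them, to watch the leak grow over time
count=$(ss -tan state close-wait | tail -n +2 | wc -l)
echo "$count sockets in CLOSE_WAIT"
```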
I have two CentOS 5 servers running iptables and apf.
I am having issues connecting with ssh from server 1 to server 2.
I can connect from server 1 to a third server and from that third server to both 1 and 2.
In all cases I am using the IP address and not a host name.
I have stopped iptables and apf on all servers and it doesn't seem to change anything.
What is the best way to debug this process?
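To show what I've been attempting so far, these are the kinds of checks I know of (hostnames and interface are placeholders for my real ones):

```shell
# verbose client-side trace: shows at which stage the connection stalls
# (TCP connect, key exchange, authentication, ...)
ssh -vvv user@server2

# check raw TCP reachability of the ssh port, independently of ssh itself
nc -zv server2 22

# watch the wire from server 1 while attempting the connection
tcpdump -ni eth0 host server2 and port 22
```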
I generate more than 50GB of cache files on my RHEL server (the typical file size is 200 KB, so the number of files is huge). When I try to delete these files it takes 8-10 hours.
However, the bigger issue is that the system load goes critical for those 8-10 hours. Is there any way to keep the system load under control during the deletion?
I tried using
nice -n19 rm -rf *
but that doesn't help with the system load.
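For what it's worth, the variant I'm experimenting with now: nice only lowers CPU priority, which isn't the bottleneck here, while ionice's idle class targets the disk contention directly, and find -delete avoids expanding the huge file list into one argv the way rm -rf * does. Sketch below on a scratch directory:

```shell
# scratch directory standing in for the real cache tree
cache=$(mktemp -d)
for i in $(seq 1 200); do echo data > "$cache/file$i"; done

# -c3 = idle I/O class: the deletion only gets disk time when nothing else
# wants it; find -delete removes files one by one without a giant argv
ionice -c3 nice -n19 find "$cache" -mindepth 1 -delete

# the tree is empty again, so rmdir succeeds
rmdir "$cache"
```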
How can I specify a port for the destination IP? When I do scp -p 0000 it still tries to connect on port 22, not the one I'm specifying.
scp svn_backup.tgz [email protected]:/path/to/new/svn/
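For reference, I believe what I was missing is that scp takes the port as uppercase -P, while lowercase -p means "preserve times and modes" (unlike ssh, where lowercase -p is the port). With a stand-in port of 2222:

```shell
# -P (capital) selects the remote port; lowercase -p is silently accepted
# but only preserves file times/modes, which is why it still went to 22
scp -P 2222 svn_backup.tgz [email protected]:/path/to/new/svn/
```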
I have run the command to back up 7 accounts, and I want to quit that command while it's running.
How can I quit it from the command line?
I want it to stop backing up all of the accounts, not just the current one; otherwise I have to keep pressing the interrupt again until it has cycled through every account.
Hello All,
I have Fedora Core 10 installed on a PC, occupying the whole disk (no free space). I want to add Ubuntu 10.04 desktop edition.
Does Ubuntu 10.04 allow you to resize existing partitions during the installation process in order to free up some space for the Ubuntu installation? Without losing or trashing existing data, obviously.
Thanks.
Hi,
I have to install jre1.5.0_10, but jre1.6.0_16 is already installed.
How can I remove jre1.6.0_16?
The application I am trying to access requires jre1.5.0_10.
Please help.
I have a Fedora server on which I want to spawn an interactive Python script after the user logs on. This script will ask the user for various types of information for configuring the system, or it will find the previous config file and show them the predefined information. Originally I was going to put this in rc.local or run it from init.d, but that messed up the boot due to how the script is spawned. So I would like this script to run as soon as the user logs in to GNOME. I've searched around quite a bit and found this answer, which appears to be exactly what I want, but it isn't working the way I want it to. Below is my entry.
[Desktop Entry]
Name=MyScript
GenericName=Script for initial configuration
Comment=I really want this to work
Exec=/usr/local/bin/myscript.sh
Terminal=true
Type=Application
X-GNOME-Autostart-enabled=true
Whenever I log in, nothing happens. So I then did a test: I modified "myscript.sh" to just echo some text to a file, and that worked fine. So it appears the part that isn't working is the script popping open a terminal and waiting for the user's input. Are there any additional options I need to add to make this work? I can confirm that when I run /usr/local/bin/myscript.sh from the CLI it works fine. I have also tried adding "StartupNotify=true", still with no luck.
Edit @John -
I tried moving my Exec= to /usr/local/bin/myscript-test and this is what myscript-test contains.
#!/bin/bash
xterm -e /usr/local/bin/myscript.sh
Again, when I just run myscript-test it works fine. However, when I put it in my autostart, nothing happens.
Edit 2 -
I did a few more tests and it started working, but I had to remove Terminal=true before the xterm would pop up. Thanks for your help.
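For anyone finding this later, this is the working combination I ended up with, merged into a single entry (paths as in my setup above; note Terminal=true is gone since xterm provides the terminal itself):

```
[Desktop Entry]
Name=MyScript
GenericName=Script for initial configuration
Exec=xterm -e /usr/local/bin/myscript.sh
Terminal=false
Type=Application
X-GNOME-Autostart-enabled=true
```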
I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice with servers) is Debian Lenny with Apache2, PHP5, and MySQL5.
The file transfer works correctly, but once the file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!).
I know it is unwise to set 777 permissions on public accessible files, but even that will not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a working solution.
I tried changing the owner of my site files to www-data per this post, but that also does not work.
My user is mike, and it still does not work whether the owner of the files is mike or root.
Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
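In case it helps someone reproduce this: my suspicion (an assumption on my part) is that the 600 mode comes from the umask of the process doing the FTP write. A quick shell demonstration of how the umask decides the mode of newly created files, using scratch files only:

```shell
tmp=$(mktemp -d)

# with umask 077, the group/other bits are masked off and a new file
# comes out 600 - exactly the symptom I'm seeing
( umask 077; touch "$tmp/restricted.jpg" )

# with umask 022, the file comes out 644 and the web server can read it
( umask 022; touch "$tmp/readable.jpg" )

stat -c '%a %n' "$tmp"/*.jpg
rm -r "$tmp"
```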
I need to allow members of the group 'ftpusers' to be able to change permissions on all objects inside a certain directory. I was looking into how to do it but all I have found is how to do it on BSD:
chmod +a "ftpgroup allow writesecurity" /some/dir
I need exactly the same thing, but for Debian GNU/Linux.
Thanks
I'm trying to find the best way to upgrade from PHP 5.1 to 5.3. The CRM software I am running on this server requires this upgrade, or else I probably wouldn't even perform it, because it seems like it's going to be trickier than I hoped. Being still new to the programming world, these routine upgrades are still worrisome to me.
I am running apache 2.2.6 (Fedora), PHP 5.1.6 and MySQL 5.0.27 on this server.
I've got some legacy code which checks that a certain file is read-only and readable only by its owner, i.e. permissions set to 0400. I also need to give read-only access to this file to some other user on the system. I'm trying to set extended ACLs, but this also changes the 'regular' permission bits in a strange way:
$ ls -l hostkey.pem
-r-------- 1 root root 0 Jun 7 23:34 hostkey.pem
$ setfacl -m user:apache:r hostkey.pem
$ getfacl hostkey.pem
# file: hostkey.pem
# owner: root
# group: root
user::r--
user:apache:r--
group::---
mask::r--
other::---
$ ls -l hostkey.pem
-r--r-----+ 1 root root 0 Jun 7 23:34 hostkey.pem
And after this the legacy code starts complaining that the file is group-readable (while it actually is not!).
Is it possible to set the extended ACLs in such a way that some other user will also have read-only access, while the file will appear to have only 0400 'regular' permissions?
I've tried multiple pieces of DHCP client software in attempting to set up a solution for allocating a /127 prefix to virtual machines, so that each maintains its own layer 2 isolation. Because there would only be one host assigned to each network, a /64 is impractical.
While the prefix size could reasonably be anywhere in the /64-/127 range, the crux of the problem has been the same regardless of the software used: the DHCP call to bring up the interface uses the address advertised by DHCPv6 and inserts two routes: the /127 given by the router advertisement packets, and a /64 as well.
Any thoughts on why I'm getting the additional route added across dhcp client vendors?
I've started a headless virtualbox instance with this command:
VBoxManage startvm "Ensembl67VirtualMachine" --type headless
Waiting for VM "Ensembl67VirtualMachine" to power on...
VM "Ensembl67VirtualMachine" has been successfully started.
It is set up with Remote Desktop Server Port:5555 with Authentication Method: Null and Extended Features: Allow Multiple Connections and it's now running, but I don't know how to connect to it from the same laptop where it's running. I would like to be able to have it running on a terminal. I tried this but nothing happens:
rdesktop localhost:5555
ERROR: localhost: unable to connect
rdesktop 192.168.1.1:5555
Any ideas?
I am running Ubuntu 9.10 and the latest version of Skype, but I have a problem with my webcam. When I try the webcam with Cheese it works fine, but when I try to use it in Skype, Skype crashes. Can anybody help me?
I recently updated VMware Player from 5.0.2 to 6.0.0, and now the "full screen" menu item is always grayed out. It happens with all my VMs, regardless of the guest OS. "Enter Unity" is still available, for some reason.
When I edit a VM to start up in full screen it works, but once I leave full screen I can't go back, which is really annoying. I've no clue where to start looking for the cause.
I'm seeing this on Ubuntu 13.04 and OpenSUSE 12.3.
I read here that this should work, but it doesn't:
//usage: wall [file]
root@sys:~> mesg
is y
root@sys:~> wall "who's out there"
wall: can't read who's out there.
If mesg is set to y, what's preventing me from broadcasting a string? Note, I did confirm that the file option works:
root@sys:~> wall test
Broadcast Message from root@sys
(/dev/pts/1) at 15:23 ...
Who's out there?
Teach me knowledge please.
mjb
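Partial answer to my own question, in case it helps: my assumption is that this version of wall only accepts a file argument, but it reads stdin when no file is given, so a plain string can be piped in:

```shell
# wall treats its argument as a file name here, so feed the string on stdin
echo "who's out there" | wall
```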
We have an ongoing problem here at work. We have a lot of websites set up on shared hosts; our CMS writes many files to these sites and allows users of the sites to upload files, etc.
The problem is that when a user uploads a file on the site, the owner of that file becomes the webserver, which prevents us from being able to change permissions etc. via FTP.
There are a few workarounds, but really what we need is a way to set a sticky owner, if that's possible, on new files and directories that are created on the server. E.g., rather than PHP writing the file as user apache, it takes on the owner of the parent directory.
I'm not sure if this is possible (I've never seen it done.) Any ideas?
We're obviously not going to get a login for apache to the server, and I doubt we could get into the apache group either.
Perhaps we need a way of allowing apache to at least set the group of a file; that way we could set the group to our FTP user in PHP and set 664 and 775 for any files that are written?
Cheers,
John.
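To partly answer the group half of my own question: as far as I know, owner inheritance isn't possible on Linux, but the setgid bit on a directory makes new files and subdirectories inherit the directory's group. A sketch with a scratch directory:

```shell
shared=$(mktemp -d)

# 2 in 2775 is the setgid bit: new entries inherit the directory's group
chmod 2775 "$shared"

# anything created inside (e.g. by the apache user) picks up the group of
# $shared rather than the creator's primary group; subdirectories also
# inherit the setgid bit itself
mkdir "$shared/uploads"
stat -c '%a %G %n' "$shared/uploads"
rm -r "$shared"
```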
I have been dual-booting Windows 7 and Ubuntu on my laptop for the past 3 years, and all was working fine until I started getting logs like
ata1.00: status: { DRDY ERR }
ata1.00: error: { UNC }
ata1.00: configured for UDMA/133
sd 0:0:0:0: [sda] Unhandled sense code
sd 0:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 0:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]
I figured it was a hardware problem and ignored it as long as I could, until the HD crashed on me. Then I got a brand new HD and installed Windows and Ubuntu afresh on it, but the problem still persists. Any help?
For security reasons, I have two workstations in front of me, and I can only transfer data between them through a USB key.
As you can imagine, this gets tiresome quickly, but the most annoying part is having to unmount the key before removing it. Not unmounting it results in missing files most of the time, even if I remove it a while after having last written to it.
Now, since the keys are only used for transferring smallish files, and each file is basically written once and read once, I don't need the fancy-pants caching infrastructure that makes clean unmounting a necessary step. And since the data is always a copy of something I have at hand, I don't care if the filesystem croaks from time to time.
But the system doesn't need to force that on me anyway: it could simply make sure everything is committed within a second, and work synchronously. Then when I remove the key, nothing is lost. Is there a way to do this?
I would appreciate any other tips on handling this situation.
Edit: it appears the situation has changed between RHEL 5 and Fedora up to F11 on one hand, and F12 on the other. The latter uses DeviceKit-disks, and I haven't quite figured out how to do this there. The method provided below using gconf does not work anymore.
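For reference, the closest I've found to "no write cache" is a mount option rather than a desktop setting; for a FAT-formatted key, an fstab line like the following (the device name and mount point are guesses for my setup) makes writes hit the stick almost immediately:

```
# 'flush' (FAT-specific) writes dirty data out as soon as files are closed;
# 'sync' is stricter still, but is said to wear the flash faster
/dev/sdb1  /media/usbkey  vfat  user,noauto,flush  0  0
```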
How would I do a QoS setup where a certain low-priority data stream would get up to X Mbps of bandwidth, but only if the current total bandwidth (of all streams/classes) on this interface does not exceed X? At the same time, other data streams / classes must not be limited to X.
The use case is an ISP billing the traffic by calculating the bandwidth average over 5 minute intervals and billing the maximum. I would like to keep the maximum usage to a minimum (i.e. quench the bulk transfer during interface busy times) but get the data through during idle/low traffic times.
Looking at the frequently used classful schedulers CBQ, HTB and HSFC I cannot see a straightforward way to accomplish this.
Hi,
I have two systems with IP addresses, say 64.103.56.1 (A) (device name wlan0) and 64.103.225.18 (B). Every time I ping B from my system A, I want it to be routed via a router, say with address 10.0.0.251 (C) (I want this to be my next hop to reach B), but this router is on a different subnetwork than the two systems. How do I do this?
Things I tried:
I used 'route add -host B gw C wlan0' and got an error saying "no such process exists or no such device found".
I pinged C and ran a traceroute, and found that the gateway address on my side is 63.103.236.3 (D), so I added another entry, 'route add -host C gw D wlan0'. I was able to do this without any error!
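My current understanding (please correct me if wrong): the first command fails because C isn't on A's subnet, so the kernel refuses it as a gateway; the workaround I stumbled into, adding a host route to C first, can be written directly with iproute2, using the addresses from my question:

```shell
# tell the kernel that 10.0.0.251 (C) is reachable directly on wlan0,
# even though it lies outside wlan0's configured subnet
ip route add 10.0.0.251/32 dev wlan0

# now C is accepted as a next hop for the host route to B
ip route add 64.103.225.18/32 via 10.0.0.251 dev wlan0

# or, in a single step, using the onlink flag:
# ip route add 64.103.225.18/32 via 10.0.0.251 dev wlan0 onlink
```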