Search Results

Search found 13375 results on 535 pages for 'agile tools'.


  • Managing Many External Hosts Using EC2 and Route 53

    - by futureal
    Looking for a "best practice" answer to managing externally-addressable hosts using the combination of Amazon EC2 and Amazon Route 53, without using an Elastic IP for each host. In my scenario I will have 30+ hosts that need to be accessible from outside EC2, so directly using internal DNS will not work.

    In the past, I have addressed hosts by assigning an Elastic IP to the host (let's say, 55.55.55.55) and then creating an associated A record. For example, to create "ec2-corp01.mydomain.com" I might do:

        ec2-corp01.mydomain.com. A 55.55.55.55 300

    Then on that EC2 instance I would assign the Elastic IP of 55.55.55.55, and everything works fine. Of course, to make this work I need one Elastic IP per instance, which is something I'd like to avoid if possible; I'd like the infrastructure to be more dynamic. So my thought is to try something like:

    - Create a script that queries the internal EC2 tools to determine an instance's private hostname
    - On instance boot, call that script to determine the hostname, and then use the command-line Route 53 interface to find and update the record for that host to its current hostname
    - Since the record has a relatively low TTL (say 300 as above, or 5 minutes), the change should take effect pretty quickly

    Is this a good idea? Is there a better or more widely accepted way to handle it? If it IS a good idea, what type of record should I be creating? A CNAME that points to the internal host, like ec2-55-55-55-55.compute-1.amazonaws.com? Is an A record better or worse? Thanks!
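
    For illustration, a boot-time sketch along those lines, assuming the EC2 instance metadata service and the AWS CLI's route53 commands are available (the zone ID, record name and the choice of the AWS CLI are assumptions, not details from the question):

        #!/bin/bash
        # Hypothetical boot script: UPSERT a CNAME pointing at this instance's
        # current public hostname. Zone ID, record name and TTL are placeholders.
        ZONE_ID="Z0000000000EXAMPLE"
        RECORD="ec2-corp01.mydomain.com."
        TTL=300

        # The instance metadata service reports the current public DNS name.
        TARGET=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)

        aws route53 change-resource-record-sets \
          --hosted-zone-id "$ZONE_ID" \
          --change-batch "{\"Changes\":[{\"Action\":\"UPSERT\",
            \"ResourceRecordSet\":{\"Name\":\"$RECORD\",\"Type\":\"CNAME\",
            \"TTL\":$TTL,\"ResourceRecords\":[{\"Value\":\"$TARGET\"}]}}]}"

    Run on boot (rc.local, cloud-init, or similar), a script like this keeps the record pointed at whatever public hostname the instance received.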

    Read the article

  • If spaces in filenames are possible, why do some of us still avoid using them?

    - by Chris W. Rea
    Somebody I know expressed irritation today regarding those of us who tend not to use spaces in our filenames, e.g. NamingThingsLikeThis.txt -- despite most modern operating systems supporting spaces in filenames. Non-technical people must look at filenames created by geeks and wonder where we learned English. So, what are the reasons that spaces in filenames are avoided or discouraged? The most obvious reason I could think of, and why I typically avoid them, is the extra quoting required on the command line when dealing with such files. Are there any other significant reasons, other than the practice being a vestigial preference?

    UPDATE: Thanks for all your answers! I'm surprised how popular this was. So, here's a summary: Six Reasons Why Geeks Prefer Filenames Without Spaces In Them

    1. It's irritating to put quotes around them when referenced on the command line (or elsewhere).
    2. Some older operating systems didn't support them, and we old dogs are used to that.
    3. Some tools still don't support spaces in filenames at all, or don't support them well. (But they should.)
    4. It's irritating to escape spaces where spaces must be escaped, such as in URLs.
    5. Certain unenlightened services (e.g. file hosting, webmail) remove or replace spaces anyway!
    6. Names without spaces can be shorter, which is sometimes desirable as path lengths are limited.
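
    For what it's worth, reasons 1 and 4 in one tiny shell illustration (generic commands, nothing taken from the original thread):

        # With a space, quoting or escaping is mandatory...
        cp "My Quarterly Report.txt" backup/
        cp My\ Quarterly\ Report.txt backup/
        # ...and the name has to be percent-encoded once it lands in a URL:
        #   http://example.com/files/My%20Quarterly%20Report.txt

        # Without spaces, neither is needed:
        cp MyQuarterlyReport.txt backup/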

    Read the article

  • What's hogging my CPU?

    - by endolith
    Ubuntu's System Monitor applet shows 100% CPU usage continuously. If I click it, the resources tab shows it at 100% continuously, too. If I go to processes, though, to find out which process is the culprit, there is nothing above 10%. If I run top there is nothing above 10%. I try killing lots of things, but it continues at 100%. How can I find out what's hogging the CPU?

    This is an unusual situation on a computer I use daily, which normally only hits 100% CPU when I'm doing something that requires it (like loading 32 Firefox tabs), after which it goes back to a normal idle level. It's not a new install or anything. It shouldn't be maxed out. I'm not sure when it started or if I changed something that caused it to happen. Normally I would use top or System Monitor and find the process that had gone out of control, but I can't find anything with those tools this time. It persists after reboots and everything. And the processor is obviously hot, so it's not an erroneous reading.

    Update: I tried killing any process I saw active again, and killing vino-server finally fixed the problem, even though it never went above 5%. I had enabled Remote Desktop a few days ago (and have obviously now disabled it). How did it manage to use 100% CPU while top only showed it as 5% or so? How do I identify the culprit in the future?

    Looks like I'm not the only one: still a problem in both jaunty & karmic. Interestingly, both System Monitor & htop do not show the sum of individual processes being anywhere near 100% CPU.
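
    For future hunting, a few generic commands (not specific to this machine; pidstat comes from the sysstat package) that show per-thread and sampled CPU use, which sometimes catches what the default top view misses:

        # Show threads individually; a single busy thread inside a
        # quiet-looking process shows up here.
        top -H

        # Per-process CPU sampled once per second, five samples.
        pidstat 1 5

        # One-shot list sorted by lifetime-average CPU, including threads
        # and kernel threads.
        ps -eLo pcpu,pid,tid,comm --sort=-pcpu | head -20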

    Read the article

  • Ubuntu 12.04 on VMware Player loses network configuration

    - by d4ryl3
    I've been having this issue for 2 weeks now with my VMware Player-hosted Ubuntu 12.04. I only use it for my LAMP stack. I had no issues with it until about 2 weeks ago, when it started losing its network configuration almost every day (at least once per day). On boot it shows:

        Waiting for network configuration...
        Waiting up to 60 more seconds for network configuration...
        Booting system without full network configuration...

    Then when I do ifconfig -a it doesn't show an IP address, and I can't get online. The only resolutions I've found so far were either to reinstall VMware Tools or to run the VMware Player installer and choose Repair. This is frustrating because even when the issue is resolved after doing either of those steps, the IP address changes. Then I have to update the remote configuration of my IDE (NetBeans) and my database manager. What could possibly cause this? Please help. Thank you.

    Additional details: I'm using a laptop with Windows 7 and connected to the office WiFi, which is unrestricted as far as I know. Thanks again.
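
    One generic way to at least stop the address from moving around is to give the guest a static IP in /etc/network/interfaces. This is a sketch only -- the interface name, subnet and gateway below are placeholders that would have to match the VM's virtual network, not values from the original post:

        # /etc/network/interfaces (Ubuntu 12.04 style, ifupdown)
        auto eth0
        iface eth0 inet static
            address 192.168.88.10      # pick a free address in the VM network
            netmask 255.255.255.0
            gateway 192.168.88.2       # VMware NAT/host-only gateway, if applicable
            dns-nameservers 192.168.88.2

        # Apply without a reboot:
        #   sudo ifdown eth0 && sudo ifup eth0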

    Read the article

  • Limiting bandwidth on internal interface on Linux gateway

    - by Jack Scott
    I am responsible for a Linux-based (it runs Debian) branch office router that takes a single high-speed Internet connection (eth2) and turns it into about 20 internal networks, each with a separate subnet (192.168.1.0/24 to 192.168.20.0/24) and a separate VLAN (eth0.101 to eth0.120). I am trying to restrict bandwidth on one of the internal subnets that is consistently chewing up more bandwidth than it should. What is the best way to do this?

    My first try at this was with wondershaper, which I heard about on SuperUser here. Unfortunately, this is useful for exactly the opposite situation to the one I have... it's useful on the client side, not on the Internet side. My second attempt was using the script found at http://www.topwebhosts.org/tools/traffic-control.php, which I modified so the active part is:

        tc qdisc add dev eth0.113 root handle 13: htb default 100
        tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbps
        tc class add dev eth0.113 parent 13: classid 13:2 htb rate 3mbps
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip dst 192.168.13.0/24 flowid 13:1
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip src 192.168.13.0/24 flowid 13:2

    What I want this to do is restrict the bandwidth on VLAN 113 (subnet 192.168.13.0/24) to 3mbit up and 3mbit down. Unfortunately, it seems to have no effect at all! I'm very inexperienced with the tc command, so any help getting this working would be appreciated.
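
    Two generic tc gotchas may be relevant here (offered as a sketch, not a verified fix for this exact box): in tc units, "mbps" means megabytes per second while "mbit" means megabits per second, and a root qdisc only shapes traffic leaving that interface, so the upload direction has to be shaped on the WAN-facing interface instead. A revised version along those lines might look like:

        # Download direction: traffic leaving eth0.113 towards 192.168.13.0/24.
        tc qdisc add dev eth0.113 root handle 13: htb default 1
        tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbit ceil 3mbit
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 \
            match ip dst 192.168.13.0/24 flowid 13:1

        # Upload direction: traffic from the subnet leaving the WAN interface eth2.
        # Unmatched traffic has no default class, so it passes unshaped.
        tc qdisc add dev eth2 root handle 20: htb
        tc class add dev eth2 parent 20: classid 20:1 htb rate 3mbit ceil 3mbit
        tc filter add dev eth2 protocol ip parent 20:0 prio 1 u32 \
            match ip src 192.168.13.0/24 flowid 20:1

    One caveat: if the box NATs outbound traffic, the src match on eth2 sees the post-NAT address; the usual workaround is to mark the packets in iptables (-t mangle -j MARK) and classify with a tc fw filter instead.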

    Read the article

  • Permission denied when running Rails app in VirtualBox Ubuntu guest with files on Windows host

    - by Ola Tuvesson
    I think I'm close to having my dev environment set up exactly the way I want, but one final snag remains. I'm running VirtualBox on a Windows 7 64-bit host, with my dev environment inside an Ubuntu 12.04 guest. I want to keep the files for my projects on the host filesystem - partly so I can access them when the Ubuntu guest is not running, but also so I can use Tortoise and other Windows-based tools (cough Photoshop), and it also eases my backup scheme somewhat.

    So I've got a folder "Rails" on my NTFS drive, which I've shared (Samba) from the host with a user specifically created for the Ubuntu guest. The mount point has been set up and an entry added to fstab (cifs), using a credentials file and the options iocharset=utf8,mode=0777,dir_mode=0777. This mounts fine and my Ubuntu user has both read and write permissions to the contents. But when I try to start my Rails app I get permission errors on any files the app needs to write to (e.g. the log file) - why is that? Are there any major conceptual flaws with this approach? Would I be better off using the VBox "shared folders" function?
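
    For comparison, a generic cifs fstab entry for this kind of setup usually spells out ownership explicitly and, for apps that take byte-range locks on files, disables them with nobrl. The share name, mount point, uid/gid and credentials path below are placeholders, not values from the post:

        # /etc/fstab -- hypothetical example entry (must stay on one line)
        //WINHOST/Rails /home/dev/Rails cifs credentials=/home/dev/.smbcredentials,iocharset=utf8,uid=1000,gid=1000,file_mode=0775,dir_mode=0775,nobrl 0 0

        # Remount after editing:
        #   sudo umount /home/dev/Rails && sudo mount /home/dev/Rails

    The uid/gid mapping matters because CIFS has no real Unix ownership to expose by default, and nobrl matters if anything in the app (SQLite, some loggers) tries to take file locks, which can otherwise fail with permission-style errors.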

    Read the article

  • Distributing a custom command line tool to enterprise servers

    - by Jeremy Baker
    I've been tasked with building a command line tool that we will be providing to our enterprise customers so that they can use the API to upload data to our platform. The API works with standard cURL requests, so I can do most of the basic functionality with simple bash scripting, although I would like to provide something that is solid and really makes it easy for them to use; I don't know what I don't know. It's been a good 8 years since I've really done any serious sysadmin work. Most of the good tools I use these days are written in Ruby or Python and have a standard distribution process (Gems, for example). However, I know RHEL and other platforms have their own package managers.

    Finally, the question: in today's day and age, what language / distribution method should I consider in order to cover the widest range of platforms without having to build completely different versions for each platform? I'd also love any general feedback you have about building similar projects, or links to projects that you think do a good job of this now and have open source code that I could read. Thanks in advance!

    Read the article

  • Accessing network shares on Windows7 via SonicWall VPN client

    - by Jack Lloyd
    I'm running Windows 7 x64 (fully patched) and the SonicWall 4.2.6.0305 client (64-bit, claims to support Windows 7). I can log in to the VPN and access network resources (e.g. SSH to a machine that lives behind the VPN). However, I cannot access shared filesystems; Windows is refusing to do discovery on the VPN network. I suspect part of the problem is that Windows persistently considers the VPN connection to be a 'public network'. Normally you can open the Network and Sharing Center and modify this setting, but it does not give me a choice for the VPN. So I did the expedient thing and turned on file sharing for public networks. I also disabled the Windows firewall for good measure. Still no luck.

    I can access the server directly by putting \\192.168.1.240 in the taskbar, which brings up the list of shares on the server. However, trying to open any of the shares simply tells me "Windows cannot access \\192.168.1.240\share You do not have permission to access ..."; it never asks for a domain password.

    I also tried Windows 7's native VPN functionality - it couldn't successfully connect to the VPN at all. I suspect this is because SonicWall is using some obnoxious special/undocumented authentication system; I had similar problems trying to connect on Linux with the normal IPsec tools there.

    What magical invocation or control panel option am I missing that will let this work? Are there any reasonable debugging strategies? I'm feeling quite frustrated at Windows' tendency to not give me much useful information that might let me understand what it is trying to do and what is going wrong.

    Read the article

  • Centralized Windows/Mac Patch Management that is easy to use

    - by BiggsTRC
    I'm looking for advice on which patch management solutions you would recommend based upon your experience, and which ones you would not recommend. We have a mixed network of Windows and Mac clients. Our central servers are all Windows servers, although I have considered putting in a Mac server to better handle our Mac clients.

    The issue we are facing currently is that we need to maintain the patches on all of our third-party applications. Right now we use WSUS, which handles patching of Windows and some Microsoft products, but that is about it. I need something to cover the other applications, specifically things like Adobe products (Reader, Flash, Dreamweaver, etc.). Our network isn't that big (maybe 200 clients) and I don't have a person to dedicate just to patching and maintaining a patch management solution, so very large and complicated solutions like System Center are most likely out.

    I have recently been looking at Dell's KACE K1000 solution (http://www.kace.com/products/systems-management-appliance/). It seems simple and it provides a lot of tools in one package that I would like/need as well. I like the fact that it is self-contained in an appliance and that it is designed for situations like mine. However, I'm not sure if this is the best solution. I've also looked some at Shavlik's NetChk solution (http://www.shavlik.com/netchk-protect.aspx), but I don't need an anti-virus product. However, it looks like they might have a very good patch database.

    My question is this: what are your thoughts on these two products? Are there better products out there? Are there issues that I'm not considering? I want something that is very good at patching a broad range of products, that is simple to use, that takes a minimal amount of management (like WSUS), and that (hopefully) works with Mac and Windows.

    Read the article

  • Add bookmarks to Delicious and Google Bookmarks at the same time

    - by BrianH
    I have used delicious.com (or back then, del.icio.us) to store my bookmarks for a long time now, and I love it. I was looking through some of my Google services and realized they have a bookmarking service that integrates with your Google searches (I thought they had a bookmarking service before, but it went away? Maybe not). I like Delicious just fine - I'm not interested in leaving. But I also like how my Google bookmarks are highlighted (and, I'm guessing, brought to the top) in my search results so I can easily tell if I've bookmarked a site (kind of like the "promote up" feature). I can't even count the number of times I've searched for a site only to find I'd been there months or years ago. If sites I've bookmarked in the past are highlighted in my search results, it makes it easier to pick which search result to go to.

    My question is around bookmarking tools: is there a bookmarklet or Firefox addon that will let me save a bookmark to multiple services at the same time, in this case Google and Delicious? Or maybe a service to sync my Delicious bookmarks to Google Bookmarks on a regular basis? I have used the Delicious addon since the beginning - it would just be nice to add a bookmark to multiple services with one addon. For that matter, it would be nice to add Evernote into the mix: click one button to save the page to Evernote, and bookmark the page in Google and Delicious.

    EDIT on 7/30/2009 - Summary: A proposed solution is to use the Delicious addon and the GMarks addon to keep the two services in sync. I was not able to get the two addons to keep everything in sync, so it was also suggested to use the Google Toolbar with the Delicious addon. Although I personally have reservations about letting Google know about every single site I visit, I believe this solution will work, so I am accepting it as the answer. I still wish there was a solution that would let you post a bookmark/page to multiple services at the same time (Delicious, Google, Evernote, Digg, Diigo, etc.). Thanks!

    Read the article

  • deploy LAMP config to new boxes with low/no effort

    - by user1444233
    I'm spending a lot of time setting up new CentOS 6 instances. I use a VCS (Subversion) for most of the config files and all of the webapp source files (GitHub), but even with excellent package managers (like yum, npm, easy_install, etc.) it still takes time. I'd like to get to the point where I could try out a new potential web host by just signing up for an account, logging in and automatically sucking my standardised config onto the box.

    I know there are a set of tools that can help:

    - Puppet
    - Chef
    - Vagrant

    and a set of services that sell solutions:

    - Jumpbox: http://www.jumpbox.com/
    - BitNami Cloud: http://bitnami.org/cloud

    I don't mind investing time in learning a new tool, but as a no-budget start-up, I'm keen to keep monthly costs down. My biggest concern is that time spent on the server config is time away from the codebase, and that's where I think my team and I should be investing our energy, at least until we get funded and scale up a bit. I'd be grateful for some recommendations on which way to jump on config:

    - Stick with SSH and manual deploys, at least until you get big.
    - Bite the bullet and learn (say) Puppet. You may only use it 8-10 times, but it pays to have such an easily tunable server bootstrap.
    - Don't bother; just pay the $100/month for a standard config service. It'll cost you $1000/year, but you should focus on the code.

    Other questions in this domain: I use quite a complex stack (Drupal, Zend Server, MySQL, PHP, MongoDB, Python, Django), but are there standard(ish) setups that include these or that I could build upon more quickly? Are the configs optimised for small, medium, large VPS (1GB, 4GB, 16GB)? How secure are they?
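
    As a middle ground between fully manual setup and learning a configuration-management tool, a plain bootstrap script kept in the same VCS as everything else can go a long way. A minimal sketch for a fresh CentOS 6 box; the package list, repository URL and paths are placeholders, not details from the question:

        #!/bin/bash
        # Hypothetical one-shot bootstrap for a new CentOS 6 instance.
        set -e

        # Base packages for a LAMP-ish stack; adjust to taste.
        yum -y install httpd php php-mysql mysql-server git subversion

        # Pull the standardised config and app code from version control.
        svn checkout https://svn.example.com/serverconfig /opt/serverconfig
        cp -r /opt/serverconfig/etc/httpd/conf.d/* /etc/httpd/conf.d/

        # Enable and start services.
        chkconfig httpd on && service httpd start
        chkconfig mysqld on && service mysqld start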

    Read the article

  • Windows Server 2003 DHCP not handing out IPs

    - by SnOrfus
    I'm trying to set up a home server (to tinker with) as a domain controller. I've set up the domain and I've installed DHCP and set up a scope without any exclusions (with the default range of 192.168.0.1-254). My client machine is a Windows 7 (RC) machine and it has a connection but can't get an IP address. Even if I set the IP to a static 192.168.0.2, there is still no connectivity. I can ping it from the server, but pinging the server from the client just times out. The only thing between the server and the client is a 24 port switch (D-Link DES-1024D).

    Edit: OK, it turned out that the interfaces were set up backwards in the NAT settings (the internal NIC connection was set to public and the external NIC connection was set to private). I changed this and all was OK.... sort of. The problem now is: if I set a static IP on the client (where I am typing this from) all is fine. BUT when I set it to get an address from DHCP, I get a correct IP from the server (192.168.0.2) but there is no internet on the client; I can still ping the server fine from the client (which makes sense, because I was able to get an IP from it).

    Edit: I ended up just removing the Routing and DHCP server roles and going with ICS for the time being, until I get my hands on some better learning tools.

    Read the article

  • How do I stop MSYS from transforming my compiler options?

    - by Carl Norum
    Is there a way to stop MSYS/MinGW from transforming what it thinks are paths on my command lines? I have a project that's using nmake & Microsoft Visual Studio 2003 (yeecccch). I have the build system all ported and ready to go for GNU make (and tested with Cygwin). Something weird is happening to my compiler flags when I try to compile in an MSYS environment, though. Here's a simplified example:

        $ cl /nologo
        Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.10.6030 for 80x86
        Copyright (C) Microsoft Corporation. All rights reserved.

        /out:nologo.exe
        C:/msys/1.0/nologo
        LINK : fatal error LNK1181: cannot open input file 'C:/msys/1.0/nologo.obj'

    As you can see, MSYS is transforming the /nologo compiler switch into a Windows path and then sending that to the compiler. I really don't want this to happen - in fact I'd be happy if MSYS never transformed any paths; my build system had to take care of all that when I first ported to Cygwin. Is there a way to make that happen?

    It does work to change the command to

        $ cl -nologo

    which produces the expected results, but this build system is very large and very painful to update. I really don't want to have to go in and change every use of a / for a flag to a -. In particular, there may be tools that don't support the use of - at all, and then I'll really be stuck. Thanks for any suggestions!
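
    Two generic workarounds exist for MSYS's argument mangling (offered as possibilities to test rather than a confirmed fix for MSYS 1.0 with this toolchain): doubling the leading slash, which MSYS collapses back to a single one instead of converting it to a path, and, on the newer MSYS2, turning conversion off wholesale via an environment variable:

        # Workaround 1: a doubled slash survives as a single slash after conversion.
        cl //nologo

        # Workaround 2 (MSYS2 only): disable path conversion for all arguments
        # of commands run in this shell.
        export MSYS2_ARG_CONV_EXCL="*"
        cl /nologo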

    Read the article

  • No Outbound Internet on Windows Home Server

    - by Kyle B.
    Could someone provide some steps for me to check my internet connection on my Windows Home Server? It seems to have intermittent connectivity issues and I am unsure of how to diagnose the problem, because it is a headless (no monitor, no keyboard) machine, so the only way to get to the device is via remote desktop (which works fine). When connected to the machine, it doesn't pull up any microsoft.com sites; some other sites it does pull up (e.g. gmail.com) and some it doesn't (stackoverflow.com). To make matters more complicated, it has worked intermittently in the past for reasons unknown.

    Are there tools I can use to properly diagnose the reason for the connection failure? I can ping 127.0.0.1 just fine, and I have internet working on my other router-connected machines, so I'm not sure why this one would fail. Any suggestions would be much appreciated and up-voted :)

    Edit: thanks for the suggestions guys, I'm going to try these tonight and will update my post.

    Edit #2: I'm hoping this is a more permanent fix, but I have both changed the port on the router and restarted the router at the same time. The internet (for the moment) appears to be working. I will be sure to try everything we have discussed should this problem persist. Thanks, Kyle

    Read the article

  • Windows 32-bit and 64-bit and GPT

    - by MrLane
    I know similar questions have been asked before across several sites, but the answers, at least to me, have been confusing and conflicting. My understanding has always been that 64-bit Windows will create and use GPT disks just fine, but will not boot from them without a UEFI BIOS. Also, my understanding WAS that 32-bit Windows could not use GPT at all and so is always restricted to 2.2TB disks, which was another reason to move to 64-bit on top of the 4GB memory limit.

    But I have now read that this isn't correct: 32-bit Windows will create and use GPT disks just as 64-bit does. The only restriction is that you can't boot 32-bit Windows from them even if you DO have a UEFI BIOS? I don't think much of the literature has explained this well. There are several tools floating around for creating virtual disks or 2.2+.8GB partition schemes and such for 32-bit systems - why, when it seems you can use GPT in 32-bit Windows anyway?

    It also seems that people blame MS for lagging behind with respect to all of this, but it seems the issue is with BIOS manufacturers not supporting UEFI rather than MS not supporting GPT... Is my new understanding now correct?

    Read the article

  • How to determine which ports are open/closed on a FIREWALL?

    - by Rahl
    It seems no one has asked this question before (most regard host-based firewalls). Anyone familiar with port scanning tools (e.g. nmap) knows all about SYN scanning, FIN scanning, and the like to determine open ports on a host machine. The question, though, is how do you determine the open ports on a firewall itself (disregarding whether the host you're trying to connect to behind the firewall has those particular ports open or closed)? This is assuming the firewall is blocking your IP connection.

    Example: we all communicate with serverfault.com through port 80 (web traffic). A scan on a host would reveal port 80 is open. If serverfault.com is behind a firewall and still allows this traffic through, then we can assume the firewall has port 80 open also. Now let's assume the firewall is blocking you (e.g. your IP address is on the deny list or is missing from the allow list). You know port 80 has to be open (it works for appropriate IP addresses), but when you (the disallowed IP) attempt any scanning, the firewall drops all of the port scan packets (including those to port 80, which we know to be open).

    So, how might we accomplish a direct firewall scan to reveal open/closed ports on the firewall itself, while still using the disallowed IP?
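
    As general background (not an answer from the thread): nmap's ACK scan is built to map firewall rules rather than open services, reporting ports as "filtered" or "unfiltered"; and when your own address is blocked outright, the idle (zombie) scan lets the probes appear to come from a different, allowed host. Generic invocations, with placeholder hostnames:

        # Map which ports a firewall passes (vs. filters) without caring about
        # the services behind it: ACK scan marks ports "unfiltered" or "filtered".
        nmap -sA -p 1-1024 firewall.example.com

        # Idle/zombie scan: probes appear to come from an idle third host, so the
        # firewall's treatment of *that* host's IP is what gets measured.
        nmap -Pn -sI zombiehost.example.com -p 80,443 firewall.example.com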

    Read the article

  • Tips on setting up a virtual lab for self-learning networking topics

    - by Harry
    I'm trying to self-learn the following topics on Linux (preferably Fedora):

    - Network programming (using the sockets API), especially across proxies and firewalls
    - Proxies (of various kinds: transparent, HTTP, SOCKS...), firewalls (iptables) and 'basic' Linux security
    - SNAT, DNAT
    - Network administration power tools: nc, socat (with all its options), ssh, openssl, etc.

    Now, I know that, ideally, it would be best if I had 'enough' physical nodes and physical network equipment (routers, switches, etc.) for this self-learning exercise. But obviously I don't have the budget or the physical space, nor do I want to be wasteful -- especially when things could perhaps be simulated/emulated in a Linux environment. I have one personal workstation, a single-homed Fedora desktop with 4GB memory, 200+ GB disk, and a 4-core CPU. I may be able to get 3 to 4 additional low-end Fedora workstations. But all of these -- including mine -- will always remain strictly behind our corporate firewall :-(

    Now, I know I could use VirtualBox-based virtual nodes, but I don't know if there are any better alternatives disk- and memory-footprint-wise. Would you be able to give me some tips or suggestions on how to get started setting up this little budget- and space-constrained 'virtual lab' of mine? For example, how would I create virtual routers? Has someone attempted this sort of thing before: namely, creating a virtual network lab behind a corporate firewall for learning/development/testing purposes?

    I hope my question is not vague or too open-ended. Basically, right now, I don't know how to best leverage the Linux environment and the various 'goodies' it comes with, while buying physical devices only when it is absolutely necessary.
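
    One zero-cost way to get 'virtual routers' without full VMs is Linux network namespaces (requires a reasonably recent iproute2). A generic sketch; all names and addresses below are made up for illustration:

        # Two "hosts" and a "router", all as network namespaces on one machine.
        ip netns add hostA
        ip netns add hostB
        ip netns add router

        # Virtual cables: one veth pair per link.
        ip link add a0 type veth peer name r0
        ip link add b0 type veth peer name r1
        ip link set a0 netns hostA
        ip link set b0 netns hostB
        ip link set r0 netns router
        ip link set r1 netns router

        # Address the links and bring them up.
        ip netns exec hostA  ip addr add 10.0.1.2/24 dev a0
        ip netns exec hostB  ip addr add 10.0.2.2/24 dev b0
        ip netns exec router ip addr add 10.0.1.1/24 dev r0
        ip netns exec router ip addr add 10.0.2.1/24 dev r1
        ip netns exec hostA  ip link set a0 up
        ip netns exec hostB  ip link set b0 up
        ip netns exec router ip link set r0 up
        ip netns exec router ip link set r1 up

        # Turn the middle namespace into a router and point the hosts at it.
        ip netns exec router sysctl -w net.ipv4.ip_forward=1
        ip netns exec hostA ip route add default via 10.0.1.1
        ip netns exec hostB ip route add default via 10.0.2.1

        # Test: hostA should now reach hostB through the "router".
        ip netns exec hostA ping -c 3 10.0.2.2

    The same pattern scales to proxies, NAT boxes and firewalls: run iptables, socat or a SOCKS proxy inside the "router" namespace and experiment from the "host" namespaces, all behind the corporate firewall.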

    Read the article

  • Lost partition after restarting

    - by nxhoaf
    I have Windows 7 Professional with a service pack installed on my laptop, a Lenovo ThinkPad T420. After formatting the disk and installing Windows 7 (as above), I went to Computer -- Manage -- Storage -- Disk Management to split my 300GB C partition into two partitions: C (162GB) and E (140GB). It worked fine for about two days. Today, when I turned on my computer, I was very surprised to find that the E partition has disappeared. I can confirm that I didn't do anything unusual yesterday, and before I shut down my computer everything was fine.

    In general, here is what I did over those two days (from the point where I formatted the disk and installed Windows):

    1. Format the 300GB hard disk
    2. Install Windows 7
    3. Install Eclipse, DB2, ... (I'm a developer)
    4. Install some other tools (OpenOffice, Skype...)
    5. Install PGP (http://www.symantec.com/encryption) <--- I'm forced to use that due to my company policy
    6. Use Computer -- Manage -- Storage -- Disk Management to split my 300GB C partition into two partitions, as described above

    It worked quite well for the last two days. Until today... Can you please help me recover my lost partition? Thank you! For more info, here is my partition info: You can also see the image here

    Read the article

  • Upgrading from SQL2000 database to SQL Express 2008 R2

    - by itwb
    Hi, we have a web application which uses an MSSQL 2000 backend database. We are currently paying a ridiculous amount for shared hosting, with the database costs alone running us $150 per month (an extra 100MB of MSSQL space is $40 per month). Our database size is 896.38 MB.

    I am looking at getting a Virtual Private Server and upgrading the database to a MSSQL 2008 Express database. I am aware that the Express version is limited to a 10GB database (with R2) and is constrained to a single CPU. I have also been offered SQL Server 2008 Web Edition for $19 per month, but I cannot find many details on the difference between Express and Web. Any suggestions here?

    What I would also like to know is: if we upgrade to a MSSQL 2008 database, are there any issues with possible data transformations in the future? I.e., is it possible to download and mount it with SQL Server 2008 Standard edition? I'm more concerned about how to get data in and out of the database through SQL management tools. Are there any other issues that I might face? Thanks, Mike

    Read the article

  • Troubleshooting wireless connection problem / site survey?

    - by johnnyb10
    I just started in the IT department of a small company (200 users) and it's clear that one of the main problems driving everyone crazy is the spotty nature of the wireless connectivity throughout the office, particularly in certain conference rooms. This is a huge problem because the connection often drops during important presentations to clients. I was hired to help ease the load on the existing IT admin, who has done a great job but is overloaded with many other tasks to deal with, so I would like to try to help out with this wireless issue.

    I am looking for advice on the best way to solve this problem -- a realistic troubleshooting methodology that does not require me to spend any money. So far, I've experimented with Ekahau HeatMapper, which is free and helps create a site survey. But I'm not exactly sure what I'm looking for, or whether there are other programs/tools/methods I should try as well. Any advice would be greatly appreciated.

    [Some background: The wireless setup consists of an HP ProCurve Mobility MSM (710?) controller that controls 10 access points throughout the building. There are three virtual wireless networks configured on the controller: one seems to be a default that cannot be changed, one is for internal employees and authenticates via Active Directory, and the third is a guest network for visitors. When I use HeatMapper, these show up as three different SSIDs, with different MAC addresses, all on the same channel. At first I thought maybe this would cause interference, but this seems to be the way the controller works; apparently, it automatically configures the channels to avoid interference from the other APs on the network.]

    Read the article

  • Installing VirtualBox on BackTrack 5

    - by m0skit0
    I'm getting this error when running VirtualBox's installation script:

        $ sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run
        Verifying archive integrity... All good.
        Uncompressing VirtualBox for Linux installation...........
        VirtualBox Version 4.1.14 r77440 (2012-04-12T16:20:44Z) installer
        Removing previous installation of VirtualBox 4.1.14 r77440 from /opt/VirtualBox
        Installing VirtualBox to /opt/VirtualBox
        tar: Record size = 8 blocks
        Python found: python, installing bindings...
        Building the VirtualBox kernel modules
        Error! Bad return status for module build on kernel: 3.2.6 (i686)
        Consult the make.log in the build directory /var/lib/dkms/vboxhost/4.1.14/build/ for more information.
        ERROR: binary package for vboxhost: 4.1.14 not found

    Here's the log:

        $ cat /var/lib/dkms/vboxhost/4.1.14/build/make.log
        DKMS make.log for vboxhost-4.1.14 for kernel 3.2.6 (i686)
        Sun May 13 14:32:52 CEST 2012
        make: Entering directory `/usr/src/linux-headers-3.2.6'
        /usr/src/linux-headers-3.2.6/arch/x86/Makefile:39: /usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu: No such file or directory
        make: *** No rule to make target `/usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu'.  Stop.
        make: Leaving directory `/usr/src/linux-headers-3.2.6'

    The /usr/src/linux-headers-3.2.6/arch/x86/ directory:

        $ ls /usr/src/linux-headers-3.2.6/arch/x86/
        Kconfig        Makefile  ia32    lguest    mm        pci       tools  video
        Kconfig.cpu    boot      kernel  lib       net       platform  um     xen
        Kconfig.debug  crypto    kvm     math-emu  oprofile  power     vdso

    Makefile references on "cpu":

        $ cat /usr/src/linux-headers-3.2.6/arch/x86/Makefile | grep cpu
        include $(srctree)/arch/x86/Makefile_32.cpu
        # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)

    Before upgrading to 3.X I didn't have this problem; the script would install VB correctly. Any ideas on what might be causing this? Thanks in advance!
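
    The log points at a headers tree that is missing arch/x86/Makefile_32.cpu, which the x86 Makefile includes on 32-bit builds. A common workaround (an assumption about this particular headers package rather than a confirmed fix, and the kernel.org URL is a guess at where the matching tarball lives) is to drop that one file in from the matching kernel source and re-run the build:

        # Fetch the source matching the running 3.2.x kernel and copy the one
        # missing makefile into the headers directory DKMS is building against.
        cd /tmp
        wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.2.6.tar.bz2
        tar xjf linux-3.2.6.tar.bz2 linux-3.2.6/arch/x86/Makefile_32.cpu
        sudo cp linux-3.2.6/arch/x86/Makefile_32.cpu \
            /usr/src/linux-headers-3.2.6/arch/x86/

        # Then re-run the VirtualBox installer (or: sudo dkms autoinstall).
        sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run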

    Read the article

  • Lighttpd - byte range request doesn't work. can't stream mp4

    - by w-01
    I am attempting to use the latest Flowplayer (if it could work it would be pretty awesome, btw): http://flowplayer.org. One of the cool things about it is that it uses the new HTML5 video element and supports random seeking/playback. In order to do this, you need a byte-range-request-capable server on the backend. Luckily I'm using Lighttpd 1.5.0 on the backend. Unfortunately the current behavior is that when I do a random seek, the video simply restarts itself from the beginning.

    The docs say: "For HTML5 video you don't have to do any client side configuration. If your server supports byte range requests then seeking should work on the fly. Most servers including Apache, Nginx and Lighttpd support this."

    On my page, using the Chrome web developer tools, I can see that when the video is requested, the server response headers indicate it is able to accept byte ranges:

        Accept-Ranges: bytes

    When I do a random seek in the player, I can see that byte ranges are requested appropriately in the request header:

        Range: bytes=5668-10785

    I can also verify the moov atom is at the front of the video file. My question here is whether there is something else on the lighttpd side I'm missing in order to enable byte-range requests? The reason I ask is that the current behavior suggests lighttpd simply doesn't understand the byte range request and is just serving the video from the beginning.

    Update: it's clearer to put this here. As per RJS' suggestion I ran a curl command. In the response it looks like lighttpd is working as expected:

        Content-Range: bytes 1602355-18844965/18844966
        Content-Length: 17242611
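
    For reference, a generic way to test range support from the command line (the file URL is a placeholder): send an explicit Range header and check that the server answers 206 Partial Content with a matching Content-Range:

        # Ask for a slice from the middle of the file and show only the headers.
        curl -s -D - -o /dev/null -H "Range: bytes=5668-10785" \
            http://example.com/video.mp4

        # Expected (roughly):
        #   HTTP/1.1 206 Partial Content
        #   Content-Range: bytes 5668-10785/18844966
        #   Accept-Ranges: bytes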

    Read the article

  • Wordpress hacked. Disabled hacked site but bad traffic continues [closed]

    - by tetranz
    Possible Duplicate: My server's been hacked EMERGENCY

    My Ubuntu 10.04 LTS VPS has been hacked, probably via a WordPress site. I was alerted to it when I noticed the incoming traffic was unusually high. A WordPress site was littered with eval(base64_decode(...)) code in lots of files. My fault: I had some files writeable by www-data which shouldn't have been. I've disabled that site (a2dissite ... and restart Apache). This has reduced it, but I am still getting some malware-type traffic.

    My server runs several WordPress and Drupal sites and a home-grown PHP site. I have captured traffic with tcpdump and looked at it in Wireshark. It's reaching out to the login page of some Joomla sites, trying multiple logins. The traffic stops when I stop Apache. If I a2dissite every site and reload (not restart) Apache, the traffic continues. At that point I have no virtual hosts running and no DocumentRoot in my apache2.conf, so I don't know how Apache is still running something. I have searched the other sites with grep for likely looking PHP code with no success. I may have missed it, but I haven't found anything suspicious in the Apache logs. I have mod_status running; I haven't really seen anything much there except that someone is still trying to POST to the theme page on the disabled WordPress site, but they now get a 404.

    What should I be looking for? Are there any tools or whatever which would give me more info about how Apache is generating that traffic? Thanks
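
    A few generic places to look in this situation (standard commands, not advice from the original thread; the paths assume a stock Debian/Ubuntu Apache layout):

        # Which Apache worker owns the outbound connections, and to where?
        sudo netstat -ntp | grep apache2
        sudo lsof -i -nP | grep apache2

        # Injected PHP typically hides behind eval/base64 -- search every docroot.
        sudo grep -rl "eval(base64_decode" /var/www /srv/www 2>/dev/null

        # Attackers often persist via cron or freshly dropped .php files.
        sudo ls -la /var/spool/cron/crontabs/
        sudo find /var/www -name "*.php" -mtime -14 -ls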

    Read the article

  • GRE Tunnel over IPsec with Loopback

    - by Alek
    I'm having a really hard time trying to establish a VPN connection using a GRE over IPsec tunnel. The problem is that it involves some sort of "loopback" connection which I don't understand -- let alone am able to configure -- and the only help I could find relates to configuring Cisco routers.

    My network is composed of a router and a single host running Debian Linux. My task is to create a GRE tunnel over an IPsec infrastructure, which is particularly intended to route multicast traffic between my network, which I am allowed to configure, and a remote network, for which I only have a form containing some setup information (IP addresses and phase information for IPsec). For now it suffices to establish communication between this single host and the remote network, but in the future it will be desirable for the traffic to be routed to other machines on my network.

    As I said, this GRE tunnel involves a "loopback" connection which I have no idea how to configure. From my previous understanding, a loopback connection is simply a local pseudo-device used mostly for testing purposes, but in this context it might be something more specific that I do not have knowledge of. I have managed to properly establish the IPsec communication using racoon and ipsec-tools, and I believe I'm familiar with the creation of tunnels and the addition of addresses to interfaces using ip, so the focus is on the GRE step. The worst part is that the remote peers do not respond to ping requests, and debugging the general setup is very difficult due to the encrypted nature of the traffic.

    There are two pairs of IP addresses involved: one pair for the GRE tunnel peer-to-peer connection and one pair for the "loopback" part. There is also an IP range involved, which is supposed to provide the final IP addresses for the hosts inside the VPN. My question is: how (or if) can this setup be done? Do I need some special software or another daemon, or does the Linux kernel handle every aspect of the GRE/IPsec tunneling? Please inform me if any extra information could be useful. Any help is greatly appreciated.
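
    For what it's worth, the Cisco notion of a "loopback" in this kind of design is usually just a stable /32 address used as the GRE tunnel's source and destination; on Linux the same idea can be approximated with a dummy interface plus a plain GRE tunnel, with IPsec (racoon) left to encrypt the GRE traffic. This is a generic sketch under that assumption, with every address made up for illustration:

        # "Loopback" address: a stable /32 on a dummy interface.
        modprobe dummy
        ip addr add 10.255.0.1/32 dev dummy0
        ip link set dummy0 up

        # GRE tunnel keyed to the two loopback /32s (local vs. remote side).
        ip tunnel add gre1 mode gre local 10.255.0.1 remote 10.255.0.2 ttl 255
        ip addr add 172.16.0.1/30 dev gre1      # tunnel peer-to-peer pair
        ip link set gre1 up

        # Route the remote VPN range through the tunnel.
        ip route add 192.168.100.0/24 dev gre1

        # The loopback /32s themselves must be reachable between the two ends,
        # e.g. via a static route over the IPsec-protected path (placeholder peer):
        ip route add 10.255.0.2/32 via 203.0.113.2

    Whether the remote end expects the GRE endpoints to be the loopback /32s or the outer addresses is exactly the kind of detail the provided IPsec form should pin down, so treat the addresses above purely as a shape to adapt.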

    Read the article
