Search Results

Search found 15099 results on 604 pages for 'runtime environment'.


  • Bridging VirtualBox over OpenVPN TAP adapter on Windows

    - by Sean Edwards
    I'm trying to configure a virtual machine (VirtualBox guest running Backtrack 4) with a bridged adapter over a VPN connection. The VPN is hosted by the cybersecurity club at my university, and connects to a sandboxed LAN designed for penetration testing against various servers that the club has built. My host (Windows 7 Ultimate) connects to the VPN fine and is assigned an IP through DHCP, but for some reason the VM can't do the same thing, and I'm not sure why. It's as if OpenVPN is filtering out packets from a MAC address it doesn't recognize. I want the virtual machine to bridge over the VPN connection, because our IT office has very strict policies about what you can and can't do on the network. I want to be able to run active attacks (ARP spoofing, nmap, Nessus scans) in the sandbox environment without risking the traffic accidentally going over the university network and getting my internet access revoked. Bridging over the VPN connection and running all attacks from inside the VM would solve that problem. Any idea why the host can use this interface, but the VM can't?
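
    In case it helps anyone hitting the same wall: the first thing worth checking is that the guest's adapter really is bridged to the TAP-Windows interface (not the physical NIC) and that promiscuous mode is allowed on it, since a default "deny" there drops frames from any MAC other than the host's, which matches the symptom described. A minimal sketch (the VM name and adapter label are assumptions; list the real ones with the first command, and run modifyvm with the VM powered off):

        VBoxManage list bridgedifs
        VBoxManage modifyvm "Backtrack4" --nic1 bridged --bridgeadapter1 "TAP-Win32 Adapter V9" --nicpromisc1 allow-all

    Note too that this can only work if the club's OpenVPN server runs in TAP (layer 2) mode: a TUN tunnel only carries the host's own IP traffic, so a bridged guest MAC would never make it across regardless of the VirtualBox settings.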


  • How can I fix puppet refusing to start and asking for "master.pp"?

    - by cwd
    I'm using the very latest version of puppet and have been following the Apress "Pro Puppet" guide step by step. I have installed puppet: sudo aptitude install ruby libshadow-ruby1.8, then sudo aptitude install puppet puppetmaster facter. I have edited /etc/puppet/puppet.conf to set the certname under [master]: certname=puppet.mydomain.com. I have edited /etc/hosts and added the following line: 127.0.0.1 puppet.mydomain.com puppet. I have set the hostname of the server: echo "puppet.mydomain.com" > /etc/hostname followed by hostname -F /etc/hostname. Then I try to run puppet from the command line: puppet master --verbose --no-daemonize. And puppet gives me this error: Could not parse for environment production: Could not find file /master.pp. I'm running all commands with sudo, and the last line of the error message always says that it can't find master.pp, with the path before it pointing to my current working directory. What am I doing wrong? I should also mention that I don't have a DNS record set up for puppet.mydomain.com - I saw some online documentation mentioning this might be a problem - however I was fairly sure that the hosts file would let me get around that.
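
    One quick thing to rule out (a guess based on the shape of the error, not a confirmed diagnosis): on old Puppet releases the bare puppet binary behaves like puppet apply, so "puppet master" gets read as "apply the manifest master.pp", which would produce exactly this "Could not find file /master.pp" complaint relative to the working directory. Checking the version, and starting the daemon by its explicit name on old packages, narrows it down:

        puppet --version
        # On 2.6 and later the subcommand form should work:
        sudo puppet master --verbose --no-daemonize
        # On older (0.25.x) packages the master has its own binary:
        sudo puppetmasterd --verbose --no-daemonize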


  • Could not find rake-10.1.0 in any of the sources

    - by spuder
    I've got a Ruby on Rails application (GitLab) which is installed via Puppet. Everything on the test system runs fine, but production generates an error about rake:

        Running /home/git/gitlab-shell/bin/check
        Could not find rake-10.1.0 in any of the sources
        Run `bundle install` to install missing gems.

    Here is the full rake check:

        root@gitlab:/home/git# sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production
        Checking Environment ...
        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes
        Checking Environment ... Finished
        Checking GitLab Shell ...
        GitLab Shell version >= 1.7.1 ? ... OK (1.7.1)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ...
        Could not find rake-10.1.0 in any of the sources
        Run `bundle install` to install missing gems.
        gitlab-shell self-check failed
        Try fixing it:
        Make sure GitLab is running;
        Check the gitlab-shell configuration file: sudo -u git -H editor /home/git/gitlab-shell/config.yml
        Please fix the error above and rerun the checks.
        Checking GitLab Shell ... Finished
        Checking Sidekiq ...
        Running? ... yes
        Number of Sidekiq processes ... 1
        Checking Sidekiq ... Finished
        Checking GitLab ...
        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ...
        Spencer Owen / bar ... yes
        Projects have satellites? ...
        Spencer Owen / bar ... can't create, repository is empty
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.4)
        Checking GitLab ... Finished

    The step 'gitlab-shell check' effectively runs the following command. If I run that command manually, everything passes:

        root@gitlab:/home/git/gitlab# sudo -u git -H /home/git/gitlab-shell/bin/check
        Check GitLab API access: OK
        Check directories and files:
        /home/git/repositories: OK
        /home/git/.ssh/authorized_keys: OK

    I have verified that rake is in fact installed:

        root@gitlab:/home/git/gitlab# gem install rake -v 10.1.0
        root@gitlab:/home/git/gitlab# bundle install
        root@gitlab:/home/git/gitlab# sudo -u git -H gem install rake -v 10.1.0
        root@gitlab:/home/git/gitlab# sudo -u git -H bundle install

    Ruby is installed with update-alternatives:

        root@gitlab:/home/git/gitlab# sudo -u git -H ruby --version
        ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-linux]
        root@gitlab:/home/git/gitlab# sudo -u git -H ls -l `which ruby`
        lrwxrwxrwx 1 root root 22 Oct 8 20:26 /usr/bin/ruby -> /etc/alternatives/ruby
        root@gitlab:/home/git/gitlab# sudo -u git -H gem --version
        2.1.10
        root@gitlab:/home/git/gitlab# sudo -u git -H ls -l `which gem`
        lrwxrwxrwx 1 root root 21 Oct 10 20:50 /usr/bin/gem -> /etc/alternatives/gem

    I've tried the solution mentioned in the links below, to allow shared gems:

        http://stackoverflow.com/questions/19284914/bundle-exec-fails-with-could-not-find-rake-10-1-0-in-any-of-the-sources
        http://stackoverflow.com/questions/18978002/could-not-find-rake-with-bundle-exec

        root@gitlab:/home/git/gitlab# cat /home/git/gitlab/.bundle/config
        ---
        BUNDLE_FROZEN: '1'
        BUNDLE_PATH: vendor/bundle
        BUNDLE_WITHOUT: development:test:postgres
        BUNDLE_DISABLE_SHARED_GEMS: '1'

    I've exhausted Google, so I'm hoping for someone more familiar with Ruby to offer any ideas how to resolve the error: Could not find rake-10.1.0 in any of the sources
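
    One avenue worth trying (a hedged sketch, not a confirmed fix): the .bundle/config above points the app at vendor/bundle with shared gems disabled, so gems installed with a plain "gem install" are invisible to bundle exec; installing into the bundle from the application directory as the git user, with the same exclusions, is the usual remedy. The --deployment and --without flags are standard Bundler options; the BUNDLE_GEMFILE line is an assumption about how gitlab-shell's check picks up its gem environment when it runs outside /home/git/gitlab.

        cd /home/git/gitlab
        sudo -u git -H bundle install --deployment --without development test postgres
        # If the check still fails when invoked from gitlab-shell, try exporting
        # BUNDLE_GEMFILE=/home/git/gitlab/Gemfile in the environment that runs it.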


  • Can a company use VPN to spy on me?

    - by orokusaki
    I'm about to work with a company on a development project, but they first need to set up a pretty complicated environment, and they suggested using a VPN to work on my machine to do this. Should I be concerned that somebody can just watch me work? It would be embarrassing if somebody could witness my work habits (e.g. asking questions on SO and researching all day is part of my daily work regimen, and makes me feel like a noob, but it keeps me sharp; I also listen to conspiracy videos all day, and RadioLab podcasts :). Is a VPN going to introduce this possibility, and if so, is there a way around it? EDIT: Also, is there a way I can always tell when somebody is connected to my computer over the VPN?


  • Why is VMware vSphere Client hanging at the console?

    - by David
    When I use the vSphere Client (against an ESXi 4.1 server) it hangs a lot. It's always at the same place - using the console on a VM. The console stops responding - and a lot of times, the management window stops responding as well. The operating system won't close the window (as the application isn't responding) and requires a force close. I've had the console refuse to give up the mouse, and have had the console freeze with the cursor in the top left (before any VM software starts, maybe?). The environment is this: the system is a Dell laptop running Ubuntu 10.10 (Maverick) fully updated with VirtualBox OSE running Windows XP (also fully updated). The vSphere Client is running in the Windows XP VM. Got all that? How do I keep these hangs from happening? It is very aggravating. I'm still trying just to get the VM loaded with Ubuntu Server...


  • kinit gives me a Kerberos ticket, but no AFS token

    - by Tomas Lycken
    I'm trying to set up access to my university's IT environment from my laptop running Ubuntu 12.04, by (mostly) following the IT department's guides on AFS and Kerberos. I can get AFS working well enough so that I can navigate to my home folder (located in the nada.kth.se cell of AFS), and I can get Kerberos working well enough to forward tickets and authenticate me when I connect with ssh. However, I don't seem to get any AFS tokens locally, on my machine, so I can't just go to /afs/nada.kth.se/.../folder/file.txt on my machine and edit it. I can't even stand in /afs/nada.kth.se/.../folder and run ls without getting Permission denied errors. Why doesn't kinit -f [email protected] give me an AFS token? What do I need to do to get one?
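
    For what it's worth, this is expected behaviour rather than a misconfiguration: kinit only obtains the Kerberos ticket, and a separate step has to turn it into an AFS token. With OpenAFS that step is aklog (shipped in the openafs-krb5 package on Ubuntu - the package name is my assumption, and the principal below is a placeholder for the one redacted above):

        sudo apt-get install openafs-krb5     # provides aklog
        kinit -f user@NADA.KTH.SE             # obtain the Kerberos ticket
        aklog nada.kth.se                     # convert it into an AFS token for the cell
        tokens                                # verify that the token is now present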


  • How to integrate Windows Server 2008 R2's NPS with Cisco switches?

    - by Massimo
    I need to evaluate in a lab environment the use of Windows Server 2008 R2's NPS for 802.1x authentication with Cisco Catalyst 3750 switches; the general idea is to only let clients connect to the company network if they can provide valid domain logon credentials, placing them in a restricted VLAN instead if they can't. NAP would also be a bonus, but it can be evaluated later; the main point now is only 802.1x authentication. Although I have very good knowledge of Windows and Active Directory (on the Microsoft side) and quite good knowledge of Catalyst switches (on the Cisco side), I'm totally new to 802.1x; I'd really like some general guidelines and help here, and some sort of implementation guide would also be very useful.
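
    Not a full guide, but for orientation: the switch side usually amounts to pointing the Catalyst at the NPS box as a RADIUS server and enabling dot1x per access port, while the NPS side is adding the switch as a RADIUS client with the same shared secret and writing a network policy that grants access on valid domain credentials (GUI-driven, so no snippet for that part). The sketch below uses the older-style IOS commands found on 3750 12.2 images - exact syntax varies between releases, and the address, secret and VLAN number are placeholders - with the restricted VLAN handled by the guest/auth-fail VLAN features:

        aaa new-model
        aaa authentication dot1x default group radius
        radius-server host 10.0.0.10 auth-port 1812 acct-port 1813 key PLACEHOLDER-SECRET
        dot1x system-auth-control
        !
        interface GigabitEthernet1/0/1
         switchport mode access
         dot1x port-control auto
         dot1x guest-vlan 99
         dot1x auth-fail vlan 99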


  • basic picture editing software for Mac OS X

    - by George2
    I am using a MacBook Pro running Mac OS X 10.5. I am new to this development environment, having previously worked on Windows. I want to do very basic image editing, like cutting a portion of an image, adding a text label to the image, circling some part of the image, etc. I downloaded acornfree from the URL below, but when executing it, it says the version of Mac OS X I am using is not supported. Any ideas how to fix this issue, or any other free image editing software to recommend that would work on my version of Mac OS X? http://flyingmeat.com/acorn/acornfree.html Thanks in advance


  • Command line import of database using latin1 encoding

    - by chrisjlee
    I'm using a particular cloud hosting solution (one which I won't name) and they don't provide SSH access, so I'm at the mercy of however the database is dumped. I downloaded the dump, which is packed into a tar.gz file, and discovered that it uses latin1 encoding. I can't specify the encoding on the host because I don't have SSH or DB access. I tried to import it via the command line into my local development environment (mysql -uroot foodb < file.db) like I usually do with other databases, but am having problems. Is it possible to import a database via the command line by specifying the encoding (preferably latin1) before importing it? Or do I have to convert it to UTF-8?
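
    If the dump really is latin1, both routes work from the command line; this is a small sketch (the archive name is an assumption, the database and file names are taken from the question):

        # Unpack the dump, then force the client session to latin1 for the import:
        tar -xzf dump.tar.gz
        mysql --default-character-set=latin1 -uroot foodb < file.db

        # Or convert the dump to UTF-8 first and import it the usual way:
        iconv -f LATIN1 -t UTF-8 file.db > file.utf8.db
        mysql --default-character-set=utf8 -uroot foodb < file.utf8.db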


  • What's the best way to mitigate NFS and sudo?

    - by user225874
    Quick background: We have 40 workstations running Linux. NFS is used extensively for bulk data storage and home directories. This allows users to roam freely with relatively transparent file systems. This is an educational environment where postdocs and students have successfully pulled off a coup of sorts. All have gained root on their individual workstations by grooming a technophobic PI who thinks IT people are evil. If I so much as suggest chroot or sudo restrictions, I'll find myself working out of a broom closet. With that in mind, what's the best way to mitigate something like this?

        $ hostname
        workstation1
        $ whoami
        john
        $ sudo su jane
        $ whoami
        jane
        $ cp -R /home/nfs/jane /mnt/thumbdrive/
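
    Not a policy fix, but on the NFS server side the relevant knobs are root squashing and Kerberized NFS. A hedged sketch of an export line (the path and network are made up for illustration):

        # /etc/exports on the NFS server. root_squash (the default) keeps workstation
        # root from acting as root on the export, but it does NOT stop the trick shown
        # above, because "sudo su jane" makes the access arrive as jane's UID anyway.
        # sec=krb5p does help: access then requires jane's Kerberos ticket rather than
        # just a matching UID, and the traffic is encrypted on the wire.
        /export/home  192.168.10.0/24(rw,root_squash,sec=krb5p)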


  • Public folder emails not being delivered

    - by Rob
    Hello, We have just introduced an Exchange 2010 installation into our existing Exchange 2003 (all standard) environment. We make a lot of use of our Public Folders in 2003, so I am wanting to make a small PF tree in the 2010 system to test some applications against. I have created a few public folders in the 2010 public folder management tool, mail-enabled them, gotten email addresses, etc. However, mail will not be delivered; it queues in my existing 2003 Exchange server's 'Local Delivery' queue, and eventually times out and bounces. I guess the Exchange 'system', including the new 2010 server, thinks that all public folder email needs to be delivered to the old 2003 server. Is it possible for me to have two public folder databases that each receive mail? If so, is there something I am missing to enable this? Thanks -R


  • I want to install an MSI twice

    - by don.vince
    I have a peculiar wish to install an MSI twice on a machine. The purpose of the double install is to first install under the pre-production folder and run the deployment in a safe environment prior to deploying in the production folder. We typically use separate machines to represent these different environments, however in this case I need to use the same box. The two scenarios I get are as follows: (1) I've installed pre-production, I'm happy, and I want to install production; I run the MSI and it asks whether I want to repair or remove the installation. (2) I have production installed and want to install a new version of the MSI; it tells me I already have a version of the product installed and must first un-install the current version. The first scenario isn't too bad, as we can at that point sensibly un-install and re-install under the production folder, but the second scenario is a pain, as we don't want to un-install the live production deployment. Is there a setting I can give to msiexec that will allow this? Is there a more suitable different approach I could use?
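
    Windows Installer can do this, but only if the package is authored for it: with instance transforms embedded in the MSI, the MSINEWINSTANCE property makes msiexec install an additional instance side by side instead of hitting the repair/remove or "already installed" paths. A sketch (the transform name and the directory property are placeholders that depend on how the package was authored):

        :: Requires embedded instance transforms in the MSI.
        msiexec /i product.msi MSINEWINSTANCE=1 TRANSFORMS=":InstanceId1.mst" TARGETDIR="D:\PreProduction"

    If re-authoring the package isn't an option, the fallback is the first scenario described above: un-install the pre-production copy before the production install, or stage the pre-production run in a VM snapshot instead of on the same machine.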


  • Remote Desktop app can't connect through VPN or through RDP load balancer

    - by nhinkle
    Using the regular Remote Desktop Client (in the desktop environment) I can connect just fine to remote servers when connected through Cisco VPN or when accessing a server behind a load balancer. When using the Remote Desktop app in the Modern UI, I can't do either of these things. Trying to connect to a remote server that's on a private network fails with: Can't find server, make sure the name and domain are correct and try again And connecting to a server that's behind an RDP load balancer fails with the following error, after accepting credentials: Because of a protocol error, this session will be disconnected. Please try connecting to the remote PC again Is there some way to use the Remote Desktop app in these situations, or am I just out of luck?


  • In *nix, how to determine which filesystem a particular file is on?

    - by smokris
    In a generic, modern unix environment (say, GNU/Linux, GNU/Solaris, or Mac OS X), is there a good way to determine which mountpoint and filesystem-type a particular absolute file path is on? I suppose I could execute the mount command and manually parse the output of that and string-compare it with my file path, but before I do that I'm wondering if there's a more elegant way. I'm developing a BASH script that makes use of extended attributes, and want to make it Do The Right Thing (to the small extent that it is possible) for a variety of filesystems and host environments.
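
    For the mount point alone, a POSIX df already does the resolution, which avoids parsing the whole mount table by hand; only the filesystem type still has to come from mount (or /proc/mounts on Linux). A small sketch, with the parsing written for Linux's "device on dir type fs (opts)" mount format (macOS prints mounts differently, so that line would need its own branch):

        #!/usr/bin/env bash
        # Print the mount point and filesystem type that a given path lives on.
        path=$1
        # POSIX df maps a path to its filesystem; line 2 of the output is the data
        # row, and column 6 is "Mounted on".
        mountpoint=$(df -P -- "$path" | awk 'NR==2 {print $6}')
        echo "mount point: $mountpoint"
        # Linux mount output: "<device> on <dir> type <fstype> (options)"
        fstype=$(mount | awk -v mp="$mountpoint" '$3 == mp && $4 == "type" {print $5}')
        echo "filesystem:  $fstype"

    On GNU systems df -T or stat -f -c %T "$path" gives the type directly, but neither is portable to OS X or Solaris, which is why the sketch falls back to the mount table.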


  • How to upgrade PHP to 5.3 on latest MAMP

    - by Ken
    Hi all, I've been struggling with this for quite a while... Googling all sorts of things hasn't turned up anything useful so far. I have MAMP 1.8.4 installed on my MBP running Snow Leopard - I want to upgrade PHP to 5.3 to fit my new job's working environment, however I can't seem to get it to work. I've tried downloading the 5.3 source and compiling it using MAMP's ./configure statement, but I always get an error regarding apxs and, from what I understand, a possibly missing config_vars.mk file. Has anyone been able to do this successfully? If so, how? What would happen if I were to drop the --with-apxs from the configure line? Would it break the Apache/PHP setup? Thanks in advance for any help.


  • Need IPSec help on Windows 2003

    - by user37456
    Hey guys, I am trying to configure IPSec between a web and an app server in our environment. I want all traffic between these two servers to use IPSec and be encrypted. The servers are on the same domain, so I am currently using Kerberos for security; I have also tried pre-shared keys and nothing changed. When I try to ping between the servers I get "Negotiating IP Security" every time. I have also confirmed that when I change "Require Security" to "Permit" everything works, so IPSec itself is functioning; I believe it's something with my security setup. Under the security tab both servers have the default 3DES methods first and then DES. I have also specified tunnel endpoints (the other server's IP). What am I missing? Thanks for any help.


  • Exchange 2010 domainprep messing up mailbox permissions on existing Exchange 2003 server

    - by tearman
    Our environment is basically this: we have an Exchange 2003 server, and we're attempting to move to Exchange 2010 gradually, moving to new hardware while we're at it. So our first step was obviously to get Exchange 2010 installed on the new box. However, after running the domainprep steps listed in http://technet.microsoft.com/en-us/library/bb125224.aspx (including PrepareLegacyExchangePermissions) our mailbox permissions get messed up. Normally, we have an AD security group for Exchange Administrators that allows anyone in that group to view all folders inside any user's mailbox. However, now this functionality is gone and our Exchange admins can't access anyone's mailboxes. We'd like to get this functionality back if we could. Thanks


  • Debugging problems in Visual Studio 2005 - No source code available for the current location

    - by espais
    Hi all I've searched up and down Google for others with a similar problem, and while I can find the error I don't think that other people have the same base problem that I do. Basically, I had to create a project for a unit-testing environment in order to run this test suite. First, I add my original C file, compile, and then a test file (C++) is generated. I then exclude my original source from the project, include this test script (which includes the original source at the top), and then run. I can debug the test file fine, but when it jumps to the original C file I get the dreaded 'no source code available for the current location' error. Both files are located in the same place, and I compiled the original file without any issue. Anybody have any thoughts about this? It's driving me crazy!


  • dhclient configures /etc/resolv.conf with invalid entry

    - by kubal5003
    I'm trying to figure out why running dhclient on my interface sets /etc/resolv.conf to the IP address of my gateway (router). This entry is invalid and every time makes it impossible to resolve any address. I would like to either stop dhclient from overwriting /etc/resolv.conf, or make dhclient write the valid DNS IP from my router there instead. More on the environment: I'm running a virtual Debian Wheezy as a client system on Windows 7 x64. It runs under VirtualBox with networking mode set to bridged (all packets from Debian are injected into my network interface on Windows). If I manually configure /etc/resolv.conf then everything works fine, but doing this on every boot is quite annoying. PS: I know I can write a script to do it for me, but this is not the solution I want. //edit router IP: 192.168.1.100. /etc/resolv.conf AFTER running dhclient eth0: "nameserver 192.168.1.100". What I would like /etc/resolv.conf to look like: "nameserver 89.202.xxxx" (I don't have to provide the real IP, do I?)
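
    If the underlying issue is simply that the router advertises itself as the DNS server in its DHCP replies, dhclient can be told to override that single option rather than being blocked from touching resolv.conf at all. A minimal sketch, assuming Debian's dhclient reads /etc/dhcp/dhclient.conf, with a placeholder where the real (redacted) nameserver address would go:

        # "supersede" replaces whatever the DHCP server sends for that option.
        # 203.0.113.53 is a documentation placeholder, not a real DNS server.
        cat >> /etc/dhcp/dhclient.conf <<'EOF'
        supersede domain-name-servers 203.0.113.53;
        EOF
        # Renew the lease so the override takes effect:
        dhclient -r eth0 && dhclient eth0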


  • Can't make updates with LDAP from Linux box to Windows AD

    - by amburnside
    I have a webapp (built using Zend Framework - PHP) that runs in a Linux environment and needs to authenticate against Active Directory on a Windows server. So far my webapp can authenticate with LDAPS, but cannot perform any kind of write operation (add/update/delete); it can only read. I have configured my server as follows: I have exported the CA certificate from my Windows AD server to /etc/openldap/certs, created a pem file based on this certificate using openssl, and updated /etc/openldap/ldap.conf so that it knows where to look for the pem certificate: TLS_CACERT /etc/openldap/certs/xyz.internal.pem. When I run my script, I get the following error: 0x35 (Server is unwilling to perform; 0000209A: SvcErr: DSID-031A1021, problem 5003 (WILL_NOT_PERFORM), data 0). Have I missed something in my configuration that is causing the server to reject updates to AD?
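
    One way to narrow this down from the Linux box before touching the PHP side (offered as a diagnostic sketch, not a confirmed cause): WILL_NOT_PERFORM from AD usually means the directory is refusing the operation itself - classically a password attribute written over a connection AD does not consider secure, or a bind account without write rights - rather than complaining about the certificate. Reproducing the same modification with the OpenLDAP command-line tools over ldaps:// shows whether the directory or the application is at fault. The host, bind DN and LDIF below are placeholders:

        cat > test.ldif <<'EOF'
        dn: CN=Test User,OU=Staff,DC=xyz,DC=internal
        changetype: modify
        replace: description
        description: ldaps write test
        EOF
        ldapmodify -H ldaps://dc1.xyz.internal -D "CN=svc-webapp,OU=Service,DC=xyz,DC=internal" -W -f test.ldif

    If that succeeds, the problem is in how the webapp binds; if it fails the same way, the operation or the account is being rejected by AD itself.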


  • Using mod_wsgi with mpm_itk: socket permission issue

    - by djechelon
    I'm using mpm-itk as the MPM for increased security in a shared environment. I also have a Firefox Sync Server within one of the vhosts I host. That vhost is restricted to a certain user via AssignUserId user group. The problem is that the socket /var/run/wsgi...whatever.sock is chmodded srwx------ and owned by Apache's wwwrun. Although I configured the vhost with WSGIProcessGroup sync and WSGIDaemonProcess sync user=djechelon group=djechelon processes=1 threads=5, Apache still fails because it tries to access a socket it cannot reach. Is it possible to configure mod_wsgi to create different sockets with different owners for different applications, or to chmod its socket in a different (less secure) way? Currently, I'm running Firefox Sync as the only WSGI application. Moving it to a vhost that doesn't use AssignUserId could solve this problem, but it would force me to change the URL (and buy an additional SSL certificate), so I wouldn't consider this.
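
    For what it's worth, newer mod_wsgi releases grew an option aimed at exactly this situation: WSGIDaemonProcess accepts socket-user, so the listener socket is owned by the per-request user instead of wwwrun. A sketch, assuming the installed mod_wsgi is recent enough (3.4 or later, if memory serves) to support it:

        WSGIProcessGroup sync
        WSGIDaemonProcess sync user=djechelon group=djechelon processes=1 threads=5 socket-user=djechelon
        # On older mod_wsgi builds, WSGISocketPrefix at least relocates the sockets to a
        # directory whose ownership and permissions you control:
        WSGISocketPrefix /var/run/wsgi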


  • What are the strategies available to minimise badblocks on an encrypted partition?

    - by David Andreoletti
    Let me explain my backup strategy and the problem I am facing. My current backup strategy: open the encrypted container and run Carbon Copy Cloner on it at least once a week, and rotate backup disks. Problem: I have a TrueCrypt partition on my first external hard disk. I recently found out that some files on this encrypted partition cannot be read due to bad blocks (reported by Antonio Diaz's GNU ddrescue). My backup strategy is ineffective in this scenario because bad blocks are only discovered during backup. Possible strategies: Strategy #0: put the encrypted partition on a RAID 1 of two disks. Is this a suitable strategy? Strategy #1: can you think of any other? Environment: Mac OS X 10.8, external 2.5" hard disk (SATA), no RAID.
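
    Whatever redundancy option is chosen, a cheap addition to the existing weekly routine is a checksum pass over the mounted container, so a read error on either copy surfaces while the other copy is still good rather than at restore time. A rough sketch for OS X (the mount point is an assumption):

        # After the weekly Carbon Copy Cloner run, checksum everything in the container...
        find /Volumes/Secure -type f -print0 | xargs -0 shasum -a 256 > ~/backup-$(date +%F).sha256
        # ...and verify against the previous run: unreadable files and unexpected hash
        # changes show up here before the older backup disk gets rotated away.
        shasum -c ~/backup-previous.sha256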


  • Does kern.hz still have any relevance in FreeBSD if "dynamic tick mode" is enabled?

    - by Frerich Raabe
    I'm running FreeBSD 9.0 as a virtual machine under KVM. In previous versions of FreeBSD it was common to force the kern.hz setting to a lower value so that the virtual machine does not keep the host busy handling timer interrupts when it has no work to do - the FreeBSD Handbook explains: "The most important step is to reduce the kern.hz tunable to reduce the CPU utilization of FreeBSD under the Parallels environment. This is accomplished by adding the following line to /boot/loader.conf: kern.hz=100. Without this setting, an idle FreeBSD Parallels guest OS will use roughly 15% of the CPU of a single processor iMac®. After this change the usage will be closer to a mere 5%." However, in FreeBSD 9, "dynamic tick mode" (aka "tickless mode") is the default, controlled by the kern.eventtimer.periodic setting, which defaults to 0 (read: tickless mode). This makes me wonder - does the tip of lowering kern.hz still have any relevance for making FreeBSD 9 play nicely in a virtual machine setup?
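
    One way to answer this empirically rather than by rule of thumb: the timer mode is visible through sysctl, and kern.hz remains a loader tunable if it turns out to matter. A small sketch:

        # 1 = periodic ticks, 0 = one-shot / tickless (the FreeBSD 9 default)
        sysctl kern.eventtimer.periodic
        sysctl kern.hz
        # Interrupt rates before/after a change show whether it made any difference:
        vmstat -i
        # Lowering hz is still done as a loader tunable if needed:
        echo 'kern.hz=100' >> /boot/loader.conf

    With the one-shot mode active, an idle guest's wakeup rate mostly follows what is actually scheduled, so measuring is likely more informative than carrying over the old Parallels advice unchanged.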


  • Sharepoint 2010 Restore .bak Error

    - by Quiel
    I'm having issues with restoring my .bak files to my own SharePoint environment due to a version difference. My version is currently: SharePoint Server with Enterprise Client Access License; Microsoft SharePoint Foundation 2010 Core: 14.0.4763.1000; Security Update for Microsoft SharePoint Foundation 2010 (KB2494001): 14.0.6106.5008. Server version (where I'm creating my backups): SharePoint Server with Enterprise Client Access License; Microsoft SharePoint Foundation 2010 Core: 14.0.4763.1000; Security Update for Office SharePoint Foundation 2010 (KB2345322): 14.0.5123.5002. I would like to know if there is a way to make my backups from the server compatible with my version. My only option is to downgrade my version, since I'm not an admin on the server. Is my Client Access License an issue? Can I just downgrade my security update to match the one on the server? If so, would you please be kind enough to tell me how?


  • What ports does Advantage Database Server need?

    - by asherber
    I have an application which uses ADS and I am attempting to deploy it in a Windows network environment with a rather restrictive firewall. I am having a problem configuring firewall ports appropriately. ADS lives on \\server, and it's listening on port 1234. When \\client tries to connect to \\server\tables, I get Error 6420 (Discovery process failed). When \\client tries to connect to \\server:1234\tables, I get error 6097, bad IP address specified in the connection path. \\server is pingable from \\client, and I can telnet to \\server:1234. If I try to connect from a client machine inside the firewall, either connection path works fine. It seems there must be something else I need to open in the firewall. Any ideas? Thanks, Aaron. Edit: I should have specified that the firewall is open to \\server:1234 specifically for TCP traffic. Is UDP involved here in some way?
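
    One detail that matches these symptoms (offered as a guess, not something verified against the ADS documentation): the Advantage client's server discovery step is UDP-based, so a firewall that only passes TCP to the port can yield exactly this pair of errors even though telnet to the TCP port succeeds. If that is the case, allowing UDP to the same port through the Windows firewall in front of the server is the test; a sketch for a recent Windows Firewall:

        netsh advfirewall firewall add rule name="ADS discovery (UDP)" dir=in action=allow protocol=UDP localport=1234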

