Search Results

Search found 42629 results on 1706 pages for 'dry run'.


  • Specifying prerequisites for Puppet custom facts?

    - by larsks
    I have written a custom Puppet fact that requires the biosdevname tool to be installed. I'm not sure how to set things up correctly such that this tool will be installed before facter tries to instantiate the custom fact. Facts are loaded early on in the process, so I can't simply put a package { biosdevname: ensure => installed } in the manifest, since by the time Puppet gets this far the custom fact has already failed. I was curious if I could resolve this through Puppet's run stages. I tried:

        stage { pre: before => Stage[main] }
        class { biosdevname: stage => pre }

    And:

        class biosdevname {
          package { biosdevname: ensure => installed }
        }

    But this doesn't work... Puppet loads facts before entering the pre stage:

        info: Loading facts in physical_network_config
        ./physical_network_config.rb:33: command not found: biosdevname -i eth0
        info: Applying configuration version '1320248045'
        notice: /Stage[pre]/Biosdevname/Package[biosdevname]/ensure: created

    Etc. Is there any way to make this work?

    EDIT: I should make it clear that I understand, given a suitable package declaration, that the fact will run correctly on subsequent runs. The difficulty here is that this is part of our initial configuration process. We're running Puppet out of kickstart and want the network configuration to be in place before the first reboot. It sounds like the only workable solution is to simply run Puppet twice during the initial system configuration, which will ensure that the necessary packages are in place. Also, for Zoredache:

        # This produces a fact called physical_network_config that describes
        # the number of NICs available on the motherboard, on PCI bus 1, and on
        # PCI bus 2. The fact value is of the form <x>-<y>-<z>, where <x>
        # is the number of embedded interfaces, <y> is the number of interfaces
        # on PCI bus 1, and <z> is the number of interfaces on PCI bus 2.
        em = 0
        pci1 = 0
        pci2 = 0
        Dir['/sys/class/net/*'].each { |file|
          devname = File.basename(file)
          biosname = %x[biosdevname -i #{devname}]
          case
          when biosname.match('^pci1')
            pci1 += 1
          when biosname.match('^pci2')
            pci2 += 1
          when biosname.match('^em[0-9]')
            em += 1
          end
        }
        Facter.add(:physical_network_config) do
          setcode do
            "#{em}-#{pci1}-#{pci2}"
          end
        end
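
    A way to keep that first kickstart run from aborting is to make the fact resolve only when the tool is present. This is a minimal sketch of that idea, not the original fact as posted; the which-based guard and the nil return are assumptions about acceptable behaviour:

        # Hypothetical guarded variant: resolve only if biosdevname exists. On the
        # first (kickstart) run the fact is simply undefined instead of crashing;
        # once the package is installed, the next run computes it normally.
        Facter.add(:physical_network_config) do
          setcode do
            if system('which biosdevname >/dev/null 2>&1')
              em = pci1 = pci2 = 0
              Dir['/sys/class/net/*'].each do |file|
                biosname = %x[biosdevname -i #{File.basename(file)}]
                case
                when biosname.match('^pci1') then pci1 += 1
                when biosname.match('^pci2') then pci2 += 1
                when biosname.match('^em[0-9]') then em += 1
                end
              end
              "#{em}-#{pci1}-#{pci2}"
            end # nil when the tool is absent, so Facter just skips the fact
          end
        end

    This doesn't remove the need for a second run to pick up the real value, but it keeps fact loading from failing before the package resource is applied.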

    Read the article

  • AWS EC2 - How to specify an IAM role for an instance being launched via awscli

    - by Skaperen
    I am using the "aws ec2 run-instances" command (from the awscli package) to launch an instance in AWS EC2, and I want to set an IAM role on the instance I am launching. The IAM role is configured, and I can use it successfully when launching an instance from the AWS web UI. But when I try to do this using that command and its "--iam-instance-profile" option, it fails. "aws ec2 run-instances help" shows Arn= and Name= subfields for the value. When I try to look up the Arn using "aws iam list-instance-profiles", it gives this error message:

        A client error (AccessDenied) occurred: User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/shell/i-15c2766d is not authorized to perform: iam:ListInstanceProfiles on resource: arn:aws:iam::xxxxxxxxxxxx:instance-profile/

    (where xxxxxxxxxxxx is my AWS 12-digit account number). I looked up the Arn string via the web UI and used it via "--iam-instance-profile Arn=arn:aws:iam::xxxxxxxxxxxx:instance-profile/shell" on the run-instances command, and that failed with:

        A client error (UnauthorizedOperation) occurred: You are not authorized to perform this operation.

    If I leave off the "--iam-instance-profile" option entirely, the instance will launch, but it will not have the IAM role setting I need. So the permission issue seems to have something to do with using "--iam-instance-profile" or accessing IAM data. I repeated the attempts several times in case of AWS glitches (they happen sometimes), with no success. I suspected that perhaps there is a restriction that an instance with an IAM role is not allowed to launch an instance with a more powerful IAM role. But in this case, the instance I am running the command on has the same IAM role that I am trying to use, named "shell" (though I also tried another one; no luck). Is setting an IAM role not even permitted from an instance (via its IAM role credentials)? Is there some higher IAM permission needed to use IAM roles than is needed for just launching a plain instance? Is "--iam-instance-profile" the appropriate way to specify an IAM role? Do I need to use a subset of the Arn string, or format it in some other way? Is it possible to set up an IAM role that can do any IAM role accesses (maybe a "Super Root IAM"... making up this name)? FYI, everything involves Linux running on the instances. Also, I am running all this from an instance because I could not get these tools installed on my desktop. That, and I do not want to put my IAM user credentials on any AWS storage, as advised by AWS here.

    Added after the question was answered: I did not mention the launching instance's permission level of "PowerUserAccess" (vs. "AdministratorAccess") because I did not realize additional access was needed at the time the question was asked. I assumed that the IAM role was "information" attached to the launch. But it really is more than that: it is a granting of permission.
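
    As the postscript hints, attaching a role at launch is itself a guarded action: the caller needs iam:PassRole on the role, in addition to being able to run instances (PowerUserAccess excludes IAM actions, which is why this fails). A hedged sketch of the usual shape of the call and the extra policy statement - the AMI ID and names here are illustrative placeholders:

        # Launch with an instance profile by name (awscli syntax):
        aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t1.micro \
            --iam-instance-profile Name=shell

        # The calling identity additionally needs a policy statement like:
        # {
        #   "Effect": "Allow",
        #   "Action": "iam:PassRole",
        #   "Resource": "arn:aws:iam::xxxxxxxxxxxx:role/shell"
        # }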

    Read the article

  • Vim lint check - only show message if there's an error

    - by GorillaSandwich
    I have this line in my .vimrc, which means "when I save a .rb file, run it through ruby -c" (the Ruby interpreter's error checking):

        autocmd BufWritePost *.rb !ruby -c <afile>

    When I save such a file, I always see output at the bottom of the screen, so I get used to it and start ignoring it. What I want is to only see output if there are errors. I can see that when there are errors, after it says what they are, it says "shell returned 1" at the bottom. How can I modify this line so that it only shows a message if the shell returns 1? Is there a way to conditionally suppress output from a shell command run in vim?
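
    One common way to get this behaviour is to run the check with system() instead of :!, and only echo the captured output when v:shell_error is non-zero. A minimal sketch (the function name is made up):

        function! s:RubyLint()
          let l:out = system('ruby -c ' . shellescape(expand('%')))
          if v:shell_error
            echohl ErrorMsg | echo l:out | echohl None
          endif
        endfunction
        autocmd BufWritePost *.rb call s:RubyLint()

    system() runs silently, so a clean save produces no output at all.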

    Read the article

  • Different files on shared partition?

    - by Matt Robertson
    I am dual-booting Windows 8 and Ubuntu 12.04. My partition scheme looks like this:

        /dev/sda1 - Windows 8 (ntfs)
        /dev/sda2 - Ubuntu / (ext4)
        /dev/sda3 - Ubuntu home (ext4)
        /dev/sda5 - swap
        /dev/sda6 - Shared data partition (exfat)

    (First off, yes, I do have the exfat libraries installed on Ubuntu.) I created some PNG images in Windows and saved them on my shared partition. From Ubuntu, I edited the images in GIMP and saved them (replacing the ones on the shared partition). When I boot into Windows, the files appear unchanged - exactly like they did before I edited them from Ubuntu. I even added a folder and deleted some other files, but none of these changes exist in Windows. When I boot into Ubuntu, all of the changes are still there. It is as if Windows is caching the old file structure... How is this possible? Thanks in advance.

    Edit -- commands output:

        $ lsblk
        NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
        sda        8:0    0 465.8G  0 disk
        +-sda1     8:1    0 165.1G  0 part
        +-sda2     8:2    0  21.3G  0 part /
        +-sda3     8:3    0  98.9G  0 part /home
        +-sda4     8:4    0     1K  0 part
        +-sda5     8:5    0   7.8G  0 part [SWAP]
        +-sda6     8:6    0 172.7G  0 part /mnt/shared_data

        $ cat /etc/fstab
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # /dev/sda2
        UUID=8f700f65-b5c7-4afc-a6fb-8f9271e0fb5e / ext4 errors=remount-ro 0 1
        # /dev/sda3
        UUID=f0d688b7-22bd-4fa7-bc1b-a594af2933fa /home ext4 defaults 0 2
        # /dev/sda5
        UUID=3bc2399b-5deb-4f04-924b-d4fc77491997 none swap sw 0 0
        # /dev/sda6
        UUID=F2DE-BC47 /mnt/shared_data exfat defaults 0 3

        $ cat /etc/mtab
        /dev/sda2 / ext4 rw,errors=remount-ro 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        /dev/sda3 /home ext4 rw 0 0
        /dev/sda6 /mnt/shared_data fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/matt/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matt 0 0
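
    A well-known culprit for exactly this symptom is Windows 8's Fast Startup (hybrid shutdown): "shutting down" actually hibernates the kernel session, so Windows resumes with its cached view of the volumes instead of re-mounting them, and changes made from Linux in between are invisible (and risk corruption). If that is what's happening here, disabling hibernation - which disables Fast Startup with it - from an elevated Windows command prompt avoids it:

        REM Run in an elevated Command Prompt in Windows 8; disables Fast Startup
        REM along with hibernation, so volumes are freshly mounted on every boot.
        powercfg /h off

    After a real shutdown, Windows should then mount /dev/sda6 fresh and see the edits.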

    Read the article

  • django fcgi - call a management command with subprocess.Popen

    - by user41855
    Hi, I'm using an app called django-chronograph. It has a line of code which works in my dev environment but does not work in production:

        p = subprocess.Popen(['python', get_manage_py(), 'run_job', str(self.pk)])

    This line crashes in production with:

        unknown command run_job

    whereas when I run it directly from the command line:

        manage.py run_job

    it works fine. Interestingly, it worked once when we exchanged 'python' for '/usr/bin/python'; then we restarted the server once more and it was back to the old behaviour. Thus it seems as if we have a Python path issue. I'm not the guy who is running the server; it's my app that should run, and it would be great to get some help here. Attention: I'm a total noob regarding server administration.

    Server environment: NGINX with FCGI daemon, FCGI in prefork mode.
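
    A common fix for this class of problem is to stop relying on whatever bare 'python' resolves to in the FCGI daemon's PATH and reuse the interpreter that is already running the code. A small sketch of that change (a suggestion, not the django-chronograph author's fix):

        import subprocess
        import sys

        # sys.executable is the absolute path of the interpreter running this
        # code, so the child 'manage.py run_job' uses the same Python (and the
        # same virtualenv) regardless of the daemon's environment.
        p = subprocess.Popen([sys.executable, get_manage_py(), 'run_job', str(self.pk)])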

    Read the article

  • Easy Transfer from a dead computer

    - by Nathan DeWitt
    I had a computer that electrocuted me and the company sent me a new one. The hard drive from the old computer works fine and is in my new computer. I would like to transfer my files from the old drive to the new one, preferably using Easy Transfer (old & new computers were Win7). When I go through the Easy Transfer wizard, it assumes my old computer is running and that I can run a process to backup all my data to a single file. However, in my case I have the system drive in my new computer and want to pull the data off it. I would like to avoid rebooting the old computer, to avoid damage to myself or my data. I would like to avoid booting into the old system drive, as my new hardware is significantly different and I imagine I'll run into some missing hardware issues. What's the easiest way to get my data off this drive?

    Read the article

  • Web Service becomes unavailable after several concurrent calls

    - by Roman
    We are testing the GoDaddy Virtual Data Center and came across a very strange issue where our web site becomes unavailable. GoDaddy Support keeps saying the issue is in our web server settings, but looking at the results of our tests, I doubt it.

    TEST ENVIRONMENT
    Virtual Data Center with Windows hosted at GoDaddy.com. All servers have Windows Server 2008 R2 Datacenter, IIS 7. Server One with IP address 10.1.0.4; Server Two with IP address 10.1.0.3. Both servers are in a private network not visible from outside. Port Forward with IP address 50.62.13.174; Port Forward is assigned to Server One.

    TEST DESCRIPTION
    JMeter is used as a client app to simulate 30 concurrent users sending 100 SOAP requests each. The interval between requests is 1 second. Http link used for testing: http://50.62.13.174/v2/webservices.asmx

    TEST ONE
    The test is run from a computer in our office. After JMeter starts running the test, almost immediately, the link above becomes unavailable in a browser. After test completion, the link is not available in a browser for about 5 more minutes. Remote Desktop is working well, so we can connect to Server One remotely. About 5 minutes after test completion, the link becomes available in a browser again.

    TEST TWO
    The test is run from Server Two (which is part of our virtual data center). The test works very well, with no visible delays in processing. The link is available in a browser the whole time.

    TEST THREE
    The test is run from Server One using localhost. The result is the same as in TEST TWO - no issues.

    TEST FOUR
    We repeated TEST ONE from other computers that we have located in different countries, all with the same result as TEST ONE.

    CONCLUSION
    As the test works well from Server Two but does not work from outside our virtual data center, we feel there are issues with the network or its capacity. The whole behaviour looks like our requests from outside get stuck somewhere before reaching our virtual data center. Has anybody had similar issues in the past? Is there a chance that something is wrong with our server settings?

    Read the article

  • Running PHP scripts as the owner of the PHP file: security issues

    - by thomasrutter
    I'm using suexec to ensure that PHP scripts (and other CGI/FastCGI apps) are run as the account holder associated with the relevant virtual host. This allows each user's scripts to be secured against reading/writing by other users. However, it occurs to me that this opens up a different security hole. Previously, the web server ran as an unprivileged user with read-only access to users' files (unless a user changed the file permissions for some reason). Now, the web user can also write to users' files. So while I've prevented different users from taking advantage of each other's scripts, I've made it so that in the event that some application has a remote code injection vulnerability, it now has not only read access but also write access to all that user's scripts and website. How can I deal with this? One idea I've had is to create a second user account for each user account in the system, so that each user has their own account to log in with while all their scripts run under the other account. But that seems cumbersome.
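
    For what it's worth, the idea at the end of the question is the usual way out of this bind: separate the "owns and edits the code" user from the "executes the code" user, so a compromised script can't rewrite itself. A hedged sketch of that layout for one site, with all names hypothetical - note it fits a FastCGI process manager whose pool user can differ from the file owner; stock Apache suexec insists the script be owned by the execution user, so it would need that check relaxed:

        # 'alice' owns and edits the code; 'alice-web' is the user the
        # scripts actually run as.
        useradd alice
        useradd --no-create-home alice-web
        mkdir -p /home/alice/public_html
        chown -R alice:alice-web /home/alice/public_html
        chmod -R 750 /home/alice/public_html    # alice-web: read+traverse, no write
        # Grant writable areas explicitly where the app genuinely needs them:
        mkdir /home/alice/public_html/uploads
        chown alice:alice-web /home/alice/public_html/uploads
        chmod 770 /home/alice/public_html/uploads

    It is cumbersome, as the question says, but the pair-of-accounts pattern is what keeps "run as" and "write to" from being the same identity.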

    Read the article

  • Nginx/FPM/PHP all php files say 'File not found.'

    - by Boon
    I just installed nginx 1.1.13 and PHP 5.4.0 on a CentOS 5.8 final 64-bit machine. Nginx and PHP/FPM are running, and I can run PHP scripts via the ssh command line, but in the browser I keep getting 'File not found.' errors on all my PHP files. This is how I have my nginx.conf handle PHP scripts:

        location ~ \.php$ {
            root /opt/nginx/html;
            fastcgi_pass unix:/tmp/fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /opt/nginx/html$fastcgi_script_name;
            include fastcgi_params;
        }

    This is a direct copy/paste from my other servers, where it works fine with this setup (but they run older versions of PHP/FPM). Why am I getting those errors?
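
    'File not found.' is the message PHP-FPM sends back when the SCRIPT_FILENAME it receives doesn't resolve to a file the pool's worker can read, so the two usual suspects are the path being sent and the pool user's permissions. A hedged checklist - the pool user and the log paths are assumptions that depend on your php-fpm.conf and build:

        # 1. Let nginx compute the path from its own root rather than hardcoding it:
        #      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # 2. Check that the FPM pool user (the 'user =' line in php-fpm.conf;
        #    'nobody' here is a placeholder) can traverse and read the docroot:
        sudo -u nobody ls -l /opt/nginx/html
        # 3. Watch both logs while making a request; FPM records what it tried:
        tail -f /opt/nginx/logs/error.log /var/log/php-fpm.log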

    Read the article

  • Finding text in Ubuntu (gnome) terminal output

    - by Rickson
    Imagine this scenario: you run a command in a gnome terminal, and the command produces a bunch of output. After some time, you realize you need the value of a variable (let's say variable_needed) that was printed by the command somewhere in the terminal. How do you find it? The KDE terminal used to have a shortcut, ctrl+shift+f, which searched the terminal output. It seems that gnome-terminal doesn't have it (at least on Ubuntu 10.04.2 LTS). Is there any way of adding it? Is there any other good terminal I could use that has it? Notice that the output has already been written, so I don't want to (and cannot) run the command again combined with grep, |, vim, emacs, etc.
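
    This doesn't recover output that is already on screen, but for next time: running the session under script(1) keeps a greppable transcript of everything printed - a hedged workaround rather than a real in-terminal search:

        script -f ~/session.log   # start a logged session (-f flushes as it goes)
        # ... run the long command ...
        exit                      # end the logged session
        grep variable_needed ~/session.log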

    Read the article

  • Public Facing Recursive DNS Servers - iptables rules

    - by David Schwartz
    We run public-facing recursive DNS servers on Linux machines, and they have been used in DNS amplification attacks. Are there any recommended iptables rules that would help mitigate these attacks? The obvious solution is just to limit outbound DNS packets to a certain traffic level. But I was hoping to find something a little more clever, so that an attack just blocks off traffic to the victim IP address. I've searched for advice and suggestions, but they all seem to be "don't run public-facing recursive name servers". Unfortunately, we are backed into a situation where things that are not easy to change will break if we don't do so, and this is due to decisions made more than a decade ago, before these attacks were an issue.
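
    One approach that matches the "just block traffic to the victim" idea is per-source-IP rate limiting with the hashlimit match: in an amplification attack the spoofed source address is the victim, so a per-srcip cap throttles exactly that address while other clients stay unaffected. A hedged sketch - the thresholds are placeholders to tune, and very old iptables versions spell --hashlimit-upto as plain --hashlimit:

        # Allow each source IP a sustained 10 queries/second (burst 50) to port 53;
        # anything above that from the same address is dropped.
        iptables -A INPUT -p udp --dport 53 -m hashlimit \
            --hashlimit-name dns-clients --hashlimit-mode srcip \
            --hashlimit-upto 10/second --hashlimit-burst 50 -j ACCEPT
        iptables -A INPUT -p udp --dport 53 -j DROP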

    Read the article

  • Adding multiple Exchange accounts under one profile in Outlook 2010 or 2013

    - by Karel Smutný
    I work for two companies, each with its own Exchange server, and I want to configure my Outlook for both email accounts. However, I am not able to add both accounts under the same profile in Outlook 2010 or 2013. I found some how-to articles, each involving the Office Configuration Tool, but my installation does not support this tool: I have Office 2010 Home and Business, and the Office 2013 Preview click-to-run. Is there another way? P.S. Having two different profiles, one for each Exchange account, is inconvenient: I cannot run two instances of Outlook at the same time, and switching all the time is tedious.

    Read the article

  • Redirect output of Python program to /dev/null

    - by STM
    I have a Python executable, written and compiled by somebody else, that I simply need to run once halfway through my own bash script. The program uses a text-based UI and therefore waits for input before proceeding, but the key operations it performs when starting are required in my bash script. A messy (and strange) procedure, I know, but unfortunately I haven't got any other options. I've gotten around the exit problem by forcefully closing the program with a kill signal, but the program's TUI insists on outputting to wherever it's run. I've tried redirecting both stdout and stderr to /dev/null and running the program in the background by suffixing an ampersand, but I simply can't get it to play ball. I believe the cause is that the program spawns other processes, and the output redirection of the parent process doesn't affect them. Is there any trick I can utilise to redirect all output from child processes too?
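
    For what it's worth, child processes normally inherit the parent's redirected stdout/stderr, so when redirection "doesn't take", the program (or its children) is often writing straight to the controlling terminal via /dev/tty. A trick that covers that case is to hand the program a throwaway pseudo-terminal with script(1); a hedged sketch, where './the_program' is a placeholder:

        # Run under a disposable pty; the typescript goes to /dev/null and
        # script's own output is discarded too.
        script -q -c './the_program' /dev/null > /dev/null 2>&1 &
        PROG_PID=$!
        sleep 5            # give it time to do its startup work (adjust)
        kill "$PROG_PID"   # then kill it, as before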

    Read the article

  • Why no multiple instances of Firefox on Linux as on Windows?

    - by Jack
    On Windows, if I run Firefox as user jack and then try to start another instance of Firefox, I will be unable to, as one is already running. If I choose to run Firefox as administrator, then I can have two instances of Firefox side by side, separate from each other, because they are under different user accounts. This does not seem to be true on Linux. As user jack, if I start Firefox, then like on Windows I am unable to start a new instance. But if I open a terminal, change to root, set XAUTHORITY to jack's .Xauthority and try to start Firefox as root... I get the error that Firefox is already running. Why is this? Please don't spare any technical details in your answers... thank you.
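
    On X11 the check is tied to the display, not to the user account: the new firefox process finds the running instance's remote-control window properties on the shared DISPLAY and hands off to it, whichever user it runs as - which is why the root trick above fails. Firefox's own escape hatch is a separate profile plus -no-remote; a short sketch (the profile name is made up):

        # Create a second profile once (interactive dialog):
        firefox -no-remote -ProfileManager
        # Then start a fully independent instance bound to it:
        firefox -no-remote -P second-profile &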

    Read the article

  • How do I enable InnoDB on Ubuntu Server 10.04?

    - by Matt
    Here is my entire my.cnf:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        key_buffer = 224M
        sort_buffer_size = 4M
        read_buffer_size = 4M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 12M
        query_cache_size = 44M
        #
        # * Basic Settings
        #
        # * IMPORTANT
        # If you make changes to these settings and your system uses apparmor, you may
        # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 127.0.0.1
        #
        # * Fine Tuning
        #
        #key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #max_connections = 100
        #table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        #query_cache_size = 16M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file = /var/log/mysql/mysql.log
        #general_log = 1
        log_error = /var/log/mysql/error.log
        # Here you can see queries with especially long duration
        #log_slow_queries = /var/log/mysql/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    And here is my "show engines" call... I have no idea what I need to do to enable InnoDB:

        mysql> show engines;
        +------------+---------+----------------------------------------------------------------+--------------+------+------------+
        | Engine     | Support | Comment                                                        | Transactions | XA   | Savepoints |
        +------------+---------+----------------------------------------------------------------+--------------+------+------------+
        | MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance         | NO           | NO   | NO         |
        | MRG_MYISAM | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         |
        | BLACKHOLE  | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         |
        | CSV        | YES     | CSV storage engine                                             | NO           | NO   | NO         |
        | MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         |
        | FEDERATED  | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       |
        | ARCHIVE    | YES     | Archive storage engine                                         | NO           | NO   | NO         |
        +------------+---------+----------------------------------------------------------------+--------------+------+------------+
        7 rows in set (0.00 sec)
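
    Nothing in this my.cnf disables InnoDB explicitly, and on MySQL 5.1 it should be enabled by default, so the error log is usually the place that says why the engine failed to initialise. A hedged diagnostic sketch - the redo-log shuffle applies only if the log actually reports a size mismatch:

        # Why did InnoDB fail to start? (path comes from log_error above)
        grep -i innodb /var/log/mysql/error.log

        # Make sure nothing in an included config disables it:
        grep -ri skip-innodb /etc/mysql/

        # A frequent cause: innodb_log_file_size was changed at some point, so the
        # existing redo logs no longer match and InnoDB refuses to start. Moving
        # them aside (with MySQL stopped!) lets it recreate them.
        sudo service mysql stop
        sudo mkdir -p /root/innodb-logs-backup
        sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /root/innodb-logs-backup/
        sudo service mysql start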

    Read the article

  • How can I start nginx via upstart?

    - by Chiggsy
    Background:

        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04 LTS"

    I've built nginx, and I'd like to use upstart to start it. The nginx upstart script from the site:

        description "nginx http daemon"

        start on runlevel 2
        stop on runlevel 0
        stop on runlevel 1
        stop on runlevel 6

        console owner

        exec /usr/sbin/nginx -c /etc/nginx/nginx.conf -g "daemon off;"

        respawn

    I get "unknown job" when I try to use initctl to run it, which I just learned apparently means there is an error (what's wrong with "Error" to describe errors?). Can someone point me in the right direction? I've read the documentation, as it is, and it seems kind of sparse for a SysV init replacement... but whatever, I just need to add this job to the list, run it, and get on with what's left of my life... Any tips?

    EDIT: initctl version reports: init (upstart 0.6.5)
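
    "unknown job" from initctl generally just means upstart cannot see the job at all: job definitions must live at /etc/init/<name>.conf, be readable by root, and parse cleanly - a syntax error makes the job silently invisible. A hedged sketch of registering the job above (the source file name is a placeholder):

        # Upstart only reads job definitions from /etc/init/*.conf
        sudo cp nginx.upstart /etc/init/nginx.conf
        sudo initctl reload-configuration
        # Parse problems show up in the system log:
        grep init: /var/log/syslog | tail
        # If it parsed, the job name is 'nginx':
        sudo initctl start nginx
        sudo initctl status nginx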

    Read the article

  • OpenLDAP RHEL 6

    - by AndyM
    Hi all. I've been configuring OpenLDAP on RHEL 6, and it seems you have to run the following to rebuild the config dirs. I'm OK with that, but my issue is: say I want to change the server password - do I have to go through the whole process every time I change the config? Is there a way of changing the slapd config after it's been built using the RHEL 6 method? Below is the advice I've found on the net, from http://www.linuxtopia.org/online_books/rhel6/rhel_6_migration_guide/rhel_6_migration_ch07s03.html

    This example assumes that the file to convert from the old slapd configuration is located at /etc/openldap/slapd.conf and the new directory for OpenLDAP configuration is located at /etc/openldap/slapd.d/.

    Remove the contents of the new /etc/openldap/slapd.d/ directory:

        rm -rf /etc/openldap/slapd.d/*

    Run slaptest to check the validity of the configuration file and specify the new configuration directory:

        slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d

    Configure permissions on the new directory:

        chown -R ldap:ldap /etc/openldap/slapd.d
        chmod -R 000 /etc/openldap/slapd.d
        chmod -R u+rwX /etc/openldap/slapd.d
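
    For what it's worth, the slapd.d directory is itself a live LDAP tree (cn=config), so individual settings can usually be changed online with ldapmodify instead of rebuilding from slapd.conf. A hedged sketch of changing the rootdn password that way - it assumes the setup grants the local root user SASL EXTERNAL access to cn=config, and the {2}bdb entry name varies per install (check with the ldapsearch first):

        # Find the database entry that carries olcRootPW:
        ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config olcDatabase=* olcRootPW

        # Replace it online, with a hash generated by slappasswd:
        ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
        dn: olcDatabase={2}bdb,cn=config
        changetype: modify
        replace: olcRootPW
        olcRootPW: {SSHA}paste-slappasswd-output-here
        EOF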

    Read the article

  • Is there a way to make AutoGK resize the video?

    - by Jian Lin
    AutoGK has so far been a good tool for converting video clips on a DVD-R disc into .avi files to post to YouTube. But I captured some Wii gameplay, which has a 16:9 aspect ratio, and AutoGK only outputs the .avi in 4:3. Is there actually a way to tweak it to output the avi as 16:9? Otherwise, I need to run VirtualDub and download the DivX codec to run the file through again and resize it to 16:9. Any way this can be done, or any other recommendations? Thanks for helping.

    Read the article

  • Outlook 2010 - Export of an Exchange OST to PST creates files with different sizes each time

    - by Jiri Pik
    This is a most weird issue. I have a couple of Exchange OST mailboxes, and just for security, I am exporting them using File / Import / Export to a file / Export to PST file. If I run the export consecutively, it always creates files with different sizes, WITH NO ERROR OR WARNING that something went wrong. The files should be the same size, since each export runs right after the previous one finished. I found out that if the file size is substantially lower, a reboot and a fresh export can fix it. What's your insight into this problem? What could cause the files to have different sizes, and why is there no warning? I suspect some Windows Search issue, as the export sometimes fails with a dialog error stating that Windows Search terminated it.

    Read the article

  • JBoss throws error looking up local address when starting

    - by Jeeba
    Hello, I'm installing a fresh JBoss 5.1 on a CentOS 5.5 machine. I don't have Apache installed. When I try to start JBoss using the command ./run.sh, I get the following error:

        15:13:57,414 INFO  [JMXKernel] Legacy JMX core initialized
        15:14:03,856 ERROR [ServerInfo] Error looking up local address
        java.net.UnknownHostException: dhcppc1: dhcppc1
                at java.net.InetAddress.getLocalHost(InetAddress.java:1354)
                at org.jboss.system.server.ServerInfo.getHostAddress(ServerInfo.java:364)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                ....

    After that, I can reach JBoss only at 127.0.0.1:8080; using localhost:8080 doesn't work. I think it's a CentOS configuration problem, but I'm a total newbie at managing ports and maybe firewalls, so what do you think could be the problem?
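
    The UnknownHostException suggests the DHCP-assigned hostname (dhcppc1, from the stack trace) doesn't resolve to any address, and JBoss looks itself up at startup. The usual fix is simply to make the name resolvable in /etc/hosts; a hedged sketch:

        # /etc/hosts - make the local hostname resolvable
        127.0.0.1   localhost localhost.localdomain dhcppc1

    Separately, JBoss 5 binds only to 127.0.0.1 by default; starting it with ./run.sh -b 0.0.0.0 makes it listen on all interfaces, if remote access is what you're after.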

    Read the article

  • Start kippo on Ubuntu startup

    - by Cory Gagliardi
    I'm setting up a new Ubuntu 14.04 server and followed these instructions to install kippo (the SSH honeypot). To run kippo, I do:

        su kippo
        ~/kippo/start.sh

    The contents of start.sh are simply:

        #!/bin/sh
        echo -n "Starting kippo in background..."
        authbind --deep twistd -y kippo.tac -l log/kippo.log --pidfile kippo.pid

    which starts up a background process for kippo. What can I do to make this automatically run on startup? Do I need to add a script that calls this in /etc/init.d?
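
    A low-ceremony option on 14.04 is launching it from /etc/rc.local, which runs as root at the end of boot, and dropping to the kippo user there. A hedged sketch - the home-directory path is assumed from the commands above, and the cd matters because start.sh uses relative paths (kippo.tac, log/):

        # in /etc/rc.local, before the final 'exit 0':
        su kippo -c 'cd /home/kippo/kippo && ./start.sh'

    A proper init.d or upstart job would also give you stop/restart handling; rc.local is just the smallest change that works.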

    Read the article

  • pip install very slow through VirtualBox

    - by AJP
    pip install --exists-action=w -r requirements.txt is very, very slow through VirtualBox. Any suggestions on how to diagnose and fix this? Would seeing the Vagrantfile be useful?

    VirtualBox 4.2.12 (can't upgrade to .14 as it doesn't work), Vagrant 1.0.7. Host machine:

        ProductName:    Mac OS X
        ProductVersion: 10.7.5
        BuildVersion:   11G63b

    The Vagrantfile contains:

        Vagrant::Config.run do |config|
          config.vm.box = "precise64"
          config.vm.customize ["modifyvm", :id, "--memory", 2048]
          config.vm.box_url = "http://files.vagrantup.com/precise64.box"
          config.vm.network :hostonly, "33.33.33.21"
          config.vm.forward_port 5000, 5000
          config.vm.forward_port 5555, 5555
          config.vm.share_folder "v-root", "/vagrant", "./"
          Vagrant::Config.run do |config|
            config.vm.provision :shell, :inline => "VENV=/usr/local/venv bash /vagrant/setup_env.sh"
          end
        end

    Normal download speed is only about 5 times slower, at 0.8 MB per second versus 4 MB per second (as judged by curling a 50 MB file from S3). But pip install is taking about 20 times longer from the Mac, i.e. about 40 minutes versus 2.
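
    A frequent cause of "big downloads are fine but pip crawls" in VirtualBox is slow DNS resolution through the NAT interface: pip makes many small requests to PyPI, and each one pays the lookup cost. Worth trying VirtualBox's NAT DNS options, expressed here in the same Vagrant 1.0-style customize calls as the file above (a hedged suggestion, not a guaranteed fix):

        Vagrant::Config.run do |config|
          # Resolve DNS via the host's resolver instead of VirtualBox's
          # built-in NAT nameserver.
          config.vm.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
          config.vm.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
        end

    Timing a single pip download inside the VM before and after should show quickly whether DNS was the bottleneck.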

    Read the article

  • sysprep'd Server 2008 R2 VMware autocheck BSODs

    - by slag
    I'm trying to set up a template Server 2k8 R2 (Standard) VM in our VMware ESXi environment; we'll need to deploy a whole lot of them, and I want to (a) streamline and (b) standardise the server build process. Originally I built a sysprep answer file using WAIK and the Windows System Image Manager - once I'd run sysprep using this, restarting the machine resulted in an "autochk.exe not found - skipping autocheck" message, followed almost immediately by a BSOD. Rebooting just resulted in the same thing. I first assumed it was my answer file, but have now checked this on a physical machine and it works fine. I've reinstalled the OS on the machine (and NOTHING else) and run sysprep.exe (ticking Generalize) - this produces exactly the same result. I've also attempted to use the customisation wizard built in to VMware - again, the same autocheck message and BSOD is the result. Any ideas? Let me know any further information I need to provide...

    Read the article

  • Using a script that uses Duplicity + S3 excluding large files

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error; however, if I copy and paste the same command generated by the script, everything works... Here is the script:

        #!/bin/bash

        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="gpgkey"

        # exclude files over 100MB
        exclude () {
          find /home/jason -size +100M \
          | while read FILE; do
            echo -n " --exclude "
            echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g' # Replace whitespace with "\ "
          done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When the script is run, I get the error:

        Command line error: Expected 2 args, got 6

    Where am I going wrong?
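
    The likely culprit is that the quotes and backslashes emitted by exclude() are never re-parsed: command substitution only word-splits, so each quoted --exclude pattern reaches duplicity as several literal arguments (hence "got 6"). Pasting the echoed command works because the shell then parses the quoting itself. A hedged sketch of the usual fix, building the arguments as a bash array instead:

        # Build the exclude list as an array: each element stays one argument,
        # no matter what whitespace the filenames contain.
        EXCLUDES=()
        while IFS= read -r FILE; do
            EXCLUDES+=( --exclude "**${FILE##/*/}" )
        done < <(find /home/jason -size +100M)

        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" \
            "${EXCLUDES[@]}" "$SOURCE" "$DEST"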

    Read the article
