Search Results

Search found 7238 results on 290 pages for 'step into'.


  • Routing subnet over GRE tunnel

    - by eMgz
    Hi, I'm trying to configure GRE over an IPSec connection between two subnets. The IPSec tunnel is up, and now I want to add a GRE tunnel over it:

        ip tunnel add GRE01 mode gre remote 10.244.0.1 local 10.244.245.32 ttl 255
        ip link set GRE01 up
        ip addr add 10.244.248.126 dev GRE01
        ip route add 10.244.248.125 dev GRE01

    Now I have an interface GRE01 (ifconfig):

        GRE10     Link encap:UNSPEC  HWaddr <h_addr>
                  inet addr:10.244.248.126  P-t-P:10.244.248.126  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP  MTU:1476  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    and the following routes (ip route list):

        10.244.248.125 dev GRE10  scope link
        <pub_subnet> dev eth0  proto kernel  scope link  src <pub_ip>
        default via <pub_gw> dev eth0  metric 100

    As a last step, I need to route my subnet over the tunnel:

        ip route add 10.245.1.224/28 10.244.248.125

    However, I am getting the error: Error: either "to" is duplicate, or "10.244.248.125" is a garbage. What I don't understand is why I can't route my subnet over the tunnel, given that the only route I have there already sends the tunnel IP over the GRE01 interface. Any hint? Thanks.
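
    One likely culprit: iproute2 expects the keyword via (or a device) before a gateway address, so the last command is parsed as a second destination, hence "either 'to' is duplicate". A minimal sketch of the corrected final step, assuming the tunnel addressing above:

        ip route add 10.245.1.224/28 via 10.244.248.125 dev GRE01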

    Read the article

  • Bridging my laptop's wireless and wired adaptors

    - by stacey.richards
    I would like to connect a desktop computer that has no wireless adapter to my wireless network. I could just run a network cable from my ADSL/wireless router to the desktop computer, but sometimes this is not practical. What I would really like to do is bridge my laptop's wireless and wired adapters in such a way that I can run a network cable from my laptop to a switch and another network cable from the switch to a desktop computer, so that the desktop computer can access the Internet through my ADSL/wireless router via my laptop:

        +--------------------+
        |ADSL/wireless router|
        +--------------------+
                  |
        +-------------------------+
        |laptop's wireless adaptor|
        |                         |
        |laptop's wired adaptor   |
        +-------------------------+
                  |
              +------+
              |switch|
              +------+
                  |
        +-----------------------+
        |desktop's wired adapter|
        +-----------------------+

    A bit of Googling suggests that I can do this by bridging my laptop's wireless and wired adapters. In Windows XP's Network Connections I select both the Local Area Connection and the Wireless Network Connection, right-click and select Bridge Connections. From what I gather, this (layer 2?) bridge examines the MAC address of traffic coming from the wireless network and passes it through to the wired network if it suspects that a network adapter with that MAC address may be on the wired side, and vice versa. If this is the case, I would expect that when the desktop computer requests an IP address from the DHCP server (which runs on the ADSL/wireless router), its DHCP broadcast packet would pass through the laptop's bridge to the router, and the reply would return through the laptop's bridge back to the desktop. This doesn't happen. With some more Googling I found instructions for doing this with Linux. I reboot into Ubuntu 9.10 and type the following:

        sudo apt-get install bridge-utils
        sudo brctl addbr br0
        sudo brctl addif br0 wlan0
        sudo brctl addif br0 eth0
        sudo ipconfig wlan0 0.0.0.0
        sudo ipconfig eth0 0.0.0.0

    Once again, the desktop cannot reach the ADSL/wireless router. I suspect that I'm missing some simple but important step. Can anyone shed some light on this for me?
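
    Two hedged observations on the Linux attempt: ipconfig is presumably a typo for ifconfig (the commands would otherwise fail outright), and the bridge itself is left without an address. Beyond that, most wireless drivers refuse to forward frames bearing other machines' MAC addresses while in ordinary client mode, which is the classic reason a wlan0+eth0 bridge silently drops the desktop's DHCP traffic. A sketch of the corrected sequence, assuming the driver cooperates:

        sudo ifconfig wlan0 0.0.0.0
        sudo ifconfig eth0 0.0.0.0
        sudo dhclient br0                 # give the bridge (and the laptop) an address
        # if the AP drops bridged frames, 4-address (WDS) mode is the usual requirement:
        sudo iw dev wlan0 set 4addr on

    If the driver or access point won't allow 4addr mode, routing with NAT (or proxy ARP) on the laptop is the workable fallback.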

    Read the article

  • How to extend a Linux PV partition online after virtual disk growth

    - by Yves Martin
    VMware allows you to extend the size of a virtual disk online, while the VM is running. The next expected steps on the Linux side are:

    1. extend the partition: delete it and create a larger one with fdisk
    2. extend the PV size with pvresize
    3. use the free extents for lvresize operations
    4. and then resize2fs for the file system

    But I am stuck on the first step: fdisk and sfdisk still display the old size for the disk. My disk is a SCSI virtual disk attached to the virtual LSI Logic controller. How do I refresh the virtual disk size and partition table information available to the Linux kernel without a reboot? As far as I know, all these steps are possible on a running Windows system, without a reboot and even without any user action, thanks to the VMware Tools. On Linux, I expect to do all the steps online too, and I already know steps 2, 3 and 4 work online. But the first one - changing the partition size declared in the partition table - (still) seems to require a reboot. Update: my system is Debian Lenny with kernel 2.6.26, and the disk I extended is the main disk, with a large PV containing the "root" LV for "/".
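
    For the stuck first step, a hedged sketch of the usual no-reboot rescan, assuming the grown disk is /dev/sda at SCSI address 0:0:0:0 (adjust names to your setup):

        # ask the kernel to re-read the device's capacity
        echo 1 > /sys/class/scsi_device/0:0:0:0/device/rescan
        # confirm the new size is now visible
        cat /proc/partitions
        # after recreating the partition with fdisk, re-read the partition table
        partprobe /dev/sda

    On a 2.6.26 kernel, partprobe may still refuse to re-read the table of a disk whose partitions are in use; if the PV sits on the root disk, that re-read is the one part that may genuinely need the reboot.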

    Read the article

  • How to grant read/write to specific user in any existent or future subdirectory of a given directory? [migrated]

    - by Samuel Rossille
    I'm a complete newbie at system administration, doing this as a hobby. I host my own git repository on a VPS. Let's say my user is john. I use the ssh protocol to access my git repository, so my URL is something like ssh://[email protected]/path/to/git/myrepo/. Root owns everything under /path/to/git. I'm attempting to give john read/write access to everything under /path/to/git/myrepo. I've tried both chmod and setfacl to control access, but both fail the same way: they apply rights recursively (with the right options) to all currently existing subdirectories of /path/to/git/myrepo, but as soon as a new directory is created, my user cannot write in the new directory. I know git has hooks that would let me reapply the rights after each commit, but I'm starting to think I'm going the wrong way, because this seems too complicated for a very basic purpose. Q: How should I set up my rights to give john rw access to anything under /path/to/git/myrepo, in a way that is resilient to changes in the tree structure? Q2: If I should take a step back and change the general approach, please tell me.
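
    A hedged sketch of the usual answer - POSIX default ACLs, which newly created files and directories inherit automatically (unlike a recursive setfacl run, which only touches what already exists):

        # grant on everything that exists now
        setfacl -R -m u:john:rwX /path/to/git/myrepo
        # and make it the default for anything created later
        setfacl -R -d -m u:john:rwX /path/to/git/myrepo

    The filesystem must be mounted with ACL support (the acl mount option) for this to stick.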

    Read the article

  • Is a timeout in tracert output an indication of an error?

    - by nitramk
    TCP/IP packets sent from my computer to a remote server do not always reach the destination, and sometimes end up being retransmitted several times before they succeed. To troubleshoot this, I'm running a tracert to the server:

        Tracing route to <site> [<address>]
        over a maximum of 30 hops:

          1    <1 ms    <1 ms    <1 ms  mymachine
          2    <1 ms    <1 ms    <1 ms  gw.levonline.com [217.70.32.30]
          3    <1 ms    <1 ms    <1 ms  81.201.213.218
          4    <1 ms    <1 ms    <1 ms  bmf1-hmf1.driften.net [81.201.213.12]
          5    <1 ms    <1 ms    <1 ms  10ge-2-4-cr2.a1.sth.ownit.se [84.246.88.157]
          6    <1 ms     *       <1 ms  netnod-ix-ge-b-sth-4470.microsoft.com [195.69.11.181]
          7    26 ms     *        *     ge-3-0-0-0.ams-64cb-1a.ntwk.msn.net [207.46.42.1]
          8    48 ms    57 ms    56 ms  ten9-1.lts-76e-1.ntwk.msn.net [207.46.42.133]
          9     *        *        *     Request timed out.

    At hops 6 and 7 I see timeouts while waiting for the reply from the server (as shown above). Running the same tracert many times gives varying output; sometimes the response is fine, but sometimes I get this timeout for 1, 2 or even all 3 packets. The timeouts always start at the same server, netnod-ix-ge-b-sth-4470.microsoft.com. I've tried raising the tracert timeout to 10 seconds, but I'm still getting the timeouts. Running tracert towards other servers does not produce the same timeouts. Microsoft's network technicians tell me the problem is not on "their" side. Are these timeouts an indicator of a lost packet on the specific node that did not respond? Are they an indication of a problem at all, or is this normal?
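
    A hedged aside on interpretation: a * at an intermediate hop only means that router did not return an ICMP time-exceeded reply in time; many routers rate-limit or de-prioritize those replies, so by itself it is not evidence of loss. Loss matters when it persists to the final hop. A sketch of tools that show per-hop loss at a glance (mtr on Linux, pathping on Windows):

        mtr --report --report-cycles 100 <site>
        pathping <site>

    If the last responding hop shows ~0% loss while middle hops show some, the path is healthy and the middle-hop numbers are ICMP throttling rather than dropped traffic.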

    Read the article

  • Hugepages not utilized by MySQL 5.0, CentOS 5

    - by TechZilla
    I've set up hugepages, but I'm not seeing any of them reserved. Have I missed a step, or is MySQL for some particular reason unable to utilize the hugepages? I have not created a hugetlbfs mount, although from what I've read, MySQL would not request pages in that manner. If I'm wrong, please let me know, as that would be a trivial solution. Almost all my MySQL tables are using InnoDB. NOTE: I created a hugetlbfs anyway; no change, as expected. Is it possible that rebooting would rectify this situation? I would rather not go through the procedure, as this machine is high availability, but would do so if necessary. These are the configurations I believe are relevant:

        /etc/sysctl.conf
        ...
        ## Huge Pages
        vm.nr_hugepages = 4096
        vm.hugetlb_shm_group = 27
        ## SHM
        kernel.shmmax = 34359738368
        kernel.shmall = 8589934592
        ...

        /etc/security/limits.conf
        ...
        mysql   soft  nofile   12888
        mysql   hard  nofile   51552
        @mysql  soft  memlock  unlimited
        @mysql  hard  memlock  unlimited

        /etc/my.cnf
        [mysqld]
        large-pages
        ...

        grep Huge /proc/meminfo
        HugePages_Total:  4096
        HugePages_Free:   4096
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB

        id mysql
        uid=27(mysql) gid=27(mysql) groups=27(mysql) context=root:system_r:unconfined_t:SystemLow-SystemHigh

        tail -6 /var/log/mysqld.log
        InnoDB: HugeTLB: Warning: Failed to allocate 1342193664 bytes. errno 12
        InnoDB HugeTLB: Warning: Using conventional memory pool
        120808 15:49:25  InnoDB: Started; log sequence number 0 1729804158
        120808 15:49:25 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution

    I would really appreciate any help; I'm completely out of ideas. If I missed any relevant configs or diagnostics, please comment and I'll add them to the question.
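
    Since HugePages_Total shows the pool exists but mysqld falls back with errno 12 (ENOMEM), the usual suspect is the memlock limit: pam_limits applies limits.conf at login, so a daemon started by init may never receive the new memlock setting, which is also why a reboot (or restarting mysqld from a fresh root session) tends to "fix" it. A hedged check:

        # what limit would the mysql account actually get?
        su -s /bin/bash mysql -c 'ulimit -l'
        # on kernels >= 2.6.24 the live process can be checked directly
        # (CentOS 5's 2.6.18 kernel predates /proc/<pid>/limits):
        grep locked /proc/$(pidof mysqld)/limits

    If the limit isn't unlimited, adding ulimit -l unlimited to the mysqld init script before mysqld_safe starts is the reboot-free fix.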

    Read the article

  • Both SSL and non-SSL on a single port

    - by Zulakis
    I would like to make my apache2 webserver serve both http and https on the same port. With the different methods I tried, it was always either http or https that stopped working. How can I do this? Update: if I enable SSL and then visit the port with http, I get a page like this:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://server/"><b>https://server/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny16 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g Server at server Port 443</address>
        </body></html>

    Since Apache can evidently tell plain HTTP from SSL on the same port, it seems very much possible to serve both. A first step would be to replace this default page with a redirect. Update 2: according to this, it is possible. Now the question is just how to configure apache to do it.
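
    A hedged sketch of the redirect idea with stock Apache: ErrorDocument accepts an external URL, which Apache serves as a redirect, so the SSL vhost can bounce plain-HTTP callers instead of showing the 400 page (hostname below is a placeholder):

        <VirtualHost *:443>
            SSLEngine on
            ...
            # triggered when plain HTTP hits the TLS port and raises a 400
            ErrorDocument 400 https://server/
        </VirtualHost>

    It is a blunt instrument - every 400 on that vhost redirects, and the response is a 302 rather than a 301 - but it covers the typed-http://server:443/ case without extra software.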

    Read the article

  • gcc built by crosstool-ng gives undefined reference

    - by netvope
    I've successfully built a toolchain using crosstool-ng with the default configuration named x86_64-unknown-linux-gnu. The documentation says:

        Using the toolchain is as simple as adding the toolchain's bin directory in your PATH, such as:
            export PATH="${PATH}:/your/toolchain/path/bin"
        and then using the target tuple to tell the build systems to use your toolchain:
            ./configure --target=your-target-tuple
        or
            make CC=your-target-tuple-gcc
        or
            make CROSS_COMPILE=your-target-tuple-
        and so on...

    I followed the instructions and attempted to build GNU tar (tar-1.25.tar.bz2) with the toolchain. The commands ./configure --target=x86_64-unknown-linux-gnu and make CROSS_COMPILE=x86_64-unknown-linux-gnu- do not work (the build succeeds, but it uses the host system's gcc). The command make CC=x86_64-unknown-linux-gnu-gcc works, but in the very last step, when it tries to link, it returns errors like this:

        compare.o: In function `openat':
        /dev/shm/x-tools/x86_64-unknown-linux-gnu/x86_64-unknown-linux-gnu/sys-root/usr/include/bits/fcntl2.h:134: undefined reference to `__openat_2'

    What could be the problem? Was the toolchain not properly set up? Perhaps x86_64-unknown-linux-gnu-gcc is using the header files from the host system but cannot find the libraries in the target's sys-root?
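
    A hedged note on the flags: for autoconf packages, --target only matters when building tools that themselves emit code (compilers, binutils); to compile an ordinary program like tar with a cross toolchain, you tell configure which tuple the result should run on, via --host. A sketch:

        export PATH="$PATH:/your/toolchain/path/bin"
        ./configure --host=x86_64-unknown-linux-gnu
        make

    That way CC, the headers, and the sys-root libraries are all taken consistently from the toolchain, which would also avoid the mixed-header link error above.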

    Read the article

  • Adding Netem Filter Rules

    - by fontsix
    I am new to programming and to Linux. My question: is it possible to add netem filter rules later? I want to create a PHP interface for netem, and I don't know in advance how many filters will be required; this should be somewhat dynamic. For example: a user with a static IP starts a netem command (latency) from the PHP interface, which means these five commands are executed by PHP in the first step:

        $classid = 11;
        $handle = 10;
        "sudo tc qdisc add dev eth0 handle 1: root htb";
        "sudo tc class add dev eth0 parent 1: classid 1:1 htb rate 100Mbps";
        "sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps";
        "sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms";
        "sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid";

    Now, if a second user wants to use netem independently of the first user, I only want to execute the last three commands:

        "sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps";
        "sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms";
        "sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid";

    There is an algorithm for incrementing the variables $classid and $handle, so that part should work. Now my question: is it possible to add only these three commands to create a new class with a new qdisc and a new filter rule? Or how can I realize this? The Apache error_log tells me "sh: line 1: flowid: command not found", but I can't find any mistake. I hope you can help. Best regards, fontsix
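
    Adding classes, qdiscs and filters incrementally like this is exactly how tc is meant to be used, so the per-user three-command approach is sound. The "flowid: command not found" strongly suggests $dest contains a trailing newline, so the shell splits the filter command in two; a hedged illustration with a placeholder address:

        # with dest='10.0.0.5\n' the shell actually runs two commands:
        tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst 10.0.0.5
        flowid 1:11          # -> sh: line 1: flowid: command not found

    Trimming the variable in PHP (e.g. with trim()) before assembling the command line would be the first thing to try.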

    Read the article

  • How to Enable IPtables TRACE Target on Debian Squeeze (6)

    - by bernie
    I am trying to use the TRACE target of iptables, but I can't seem to get any trace information logged. I want to use what is described here: Debugger for Iptables. From the iptables man page for TRACE:

        This target marks packets so that the kernel will log every rule which match
        the packets as those traverse the tables, chains, rules. (The ipt_LOG or
        ip6t_LOG module is required for the logging.) The packets are logged with
        the string prefix: "TRACE: tablename:chainname:type:rulenum " where type can
        be "rule" for plain rule, "return" for implicit rule at the end of a user
        defined chain and "policy" for the policy of the built in chains. It can
        only be used in the raw table.

    I use the following rule:

        iptables -A PREROUTING -t raw -p tcp -j TRACE

    but nothing is appended to either /var/log/syslog or /var/log/kern.log! Is there another step missing? Am I looking in the wrong place? edit: Even though I can't find log entries, the TRACE target seems to be set up correctly, since the packet counters get incremented:

        # iptables -L -v -t raw
        Chain PREROUTING (policy ACCEPT 193 packets, 63701 bytes)
         pkts bytes target  prot opt in   out  source    destination
          193 63701 TRACE   tcp  --  any  any  anywhere  anywhere

        Chain OUTPUT (policy ACCEPT 178 packets, 65277 bytes)
         pkts bytes target  prot opt in   out  source    destination

    edit 2: The rule iptables -A PREROUTING -t raw -p tcp -j LOG does print packet information to /var/log/syslog... Why doesn't TRACE work?
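
    A hedged pointer, following the man page's own hint: TRACE only produces output once the logging backend module is loaded, and using -j LOG loads it for you as a side effect while -j TRACE does not. Worth trying:

        modprobe ipt_LOG            # IPv4 backend (ip6t_LOG for IPv6)
        dmesg | grep TRACE          # trace lines arrive via the kernel log

    With the module in place, the "TRACE: raw:PREROUTING:..." lines should start showing up in kern.log/syslog.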

    Read the article

  • Adding a Printer to my Print Server Failing

    - by Rudi Kershaw
    So, on the Windows Server page I read the following:

        Step 4: Add Network Printers Automatically
        Print Management (Printmanagement.msc) can automatically detect all the printers that are
        located on the same subnet as the computer on which you are running Print Management,
        install the appropriate printer drivers, set up the queues, and share the printers.

        To automatically add network printers to a printer server:
        1. Open the Administrative Tools folder, and then double-click Print Management.
        2. In the Printer Management tree, right-click the appropriate server, and then click Add Printer.
        3. On the Printer Installation page of the Network Printer Installation Wizard, click
           Search the network for printers, and then click Next.
        4. If prompted, specify which driver to install for the printer.

    So, I have got to this point, and made sure the printer (a Canon MP620) is on and correctly plugged into the network. However, when I click "Search the network for printers", the wizard doesn't find it, and I can't get any further. Is there anything I could be doing wrong? How should I proceed?

    Read the article

  • Copy files from subdirectories into one directory

    - by Derek Organ
    OK, I have a bunch of files in this file-structure format:

        /backup/daily/database1/database1-2011-01-01.sql
        /backup/daily/database1/database1-2011-01-02.sql
        /backup/daily/database1/database1-2011-01-03.sql
        /backup/daily/database1/database1-2011-01-04.sql
        /backup/daily/database1/database1-2011-01-05.sql
        /backup/daily/database1/database1-2011-01-06.sql
        /backup/daily/database1/database1-2011-01-07.sql
        /backup/daily/anotherdb/anotherdb-2011-01-01.sql
        /backup/daily/anotherdb/anotherdb-2011-01-02.sql
        /backup/daily/anotherdb/anotherdb-2011-01-03.sql
        /backup/daily/anotherdb/anotherdb-2011-01-04.sql
        /backup/daily/anotherdb/anotherdb-2011-01-05.sql
        /backup/daily/anotherdb/anotherdb-2011-01-06.sql
        /backup/daily/anotherdb/anotherdb-2011-01-07.sql
        /backup/daily/stuff/stuff-2011-01-01.sql
        /backup/daily/stuff/stuff-2011-01-02.sql
        /backup/daily/stuff/stuff-2011-01-03.sql
        /backup/daily/stuff/stuff-2011-01-04.sql
        /backup/daily/stuff/stuff-2011-01-05.sql
        /backup/daily/stuff/stuff-2011-01-06.sql
        /backup/daily/stuff/stuff-2011-01-07.sql

    And there are lots, lots more. Ultimately I want to import all the 2011-01-07.sql files into my mysql database. This works for one:

        mysql -u root -ppassword < /backup/daily/database1/database1-2011-01-07.sql

    That nicely restores that database from the backup file. I want to run a process that does this for all databases. So my plan is to first cp all the 2011-01-07 sql files into a tmp dir, e.g.:

        cp /backup/daily/*/*2011-01-07*.sql /tmp/all

    The command above unfortunately isn't working; I get an error:

        cp: cannot stat ..... No such file or directory

    So can you guys help me out with this? For bonus points, if you can tell me how to do the next step - importing all the databases in one command, one at a time - that would be great too. I really want to do these as two separate steps, because I need to delete a few sql files manually from the tmp dir before I run the restore command. So I need: 1) a command to copy all 2011-01-07 sql files to a tmp dir, and 2) a command to import all the files in that dir into mysql. I know it's possible to do it in one, but for lots of reasons I'd really prefer to do it in two steps.
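
    A hedged sketch of both steps with find, which sidesteps whatever the glob is tripping over (and also works if /tmp/all doesn't exist yet):

        # step 1: collect the files
        mkdir -p /tmp/all
        find /backup/daily -name '*2011-01-07*.sql' -exec cp {} /tmp/all \;

        # step 2: after pruning /tmp/all by hand, import one file at a time
        for f in /tmp/all/*.sql; do
            mysql -u root -ppassword < "$f"
        done

    One common cause of the "cannot stat" error is running the cp through something that doesn't expand the * wildcards (cron, sudo with quoting, a script with globbing disabled); find avoids the question entirely.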

    Read the article

  • Install and enforce a scheduled task across a Windows domain

    - by Ricket
    We have a small domain of about 70 Windows computers (XP and 7). We want to schedule a command (an update mechanism) to run on all computers periodically, and we want the task to run regardless of the computer's connection to our network (i.e. the task should run even on a laptop that isn't connected to our VPN). We have a Microsoft System Center Essentials 2010 server, so that might come in handy. The options I see are these:

    1. Do it completely manually: install the scheduled task by hand or remotely using psexec (and the at command?) for each computer in our network, and enforce that newly imaged computers have the task installed before being deployed to an employee - or bake the task into the image. High initial cost (having to do this for each of 70 computers), though building it into the image might work. But there is ongoing maintenance in making sure the task is added to everything, and I fear that a year or two down the road we will have forgotten about it, or gotten sloppy, or a new IT employee will miss this step and some computers won't have the task.

    2. Have one of our servers run a script that loops through all computers and psexec's the command on each one - but that only reaches machines that are powered on and connected, so this solution wouldn't work. I suspect SCE could do something like this too, but again, it's not a good solution.

    Neither of these is ideal, and I'm certain there is a better way to do it - right? What is the best way to accomplish this task?
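
    A hedged sketch of the usual middle ground: push the task once with schtasks (which, unlike at, persists on the machine and fires offline once created), looped over a machine list in a batch file; the file, task name and script path below are placeholders:

        for /f %%c in (computers.txt) do (
            schtasks /Create /S %%c /RU SYSTEM /SC DAILY /ST 03:00 ^
                     /TN "OrgUpdateTask" /TR "C:\scripts\update.cmd" /F
        )

    Once created, the task runs locally on a laptop whether or not it can reach the network. On a domain, Group Policy Preferences (Scheduled Tasks under Computer Configuration) achieves the same thing declaratively, which also self-heals the forgotten-machine problem for newly imaged computers.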

    Read the article

  • Torrent upload ratio not updated on Synology DS212+

    - by user179271
    I have a Synology DS212+ NAS running DSM 4.2-3211 (the current version). I use it for several purposes, including torrent downloads via Download Station on a tracker that requires authentication. My problem is that my upload is not being counted, so my download/upload ratio constantly falls. The NAS is behind a router, and I configured NAT to forward ports 6890 to 6999 to the NAS's internal IP address. Here are the Download Station settings:

        TCP port: 6990
        Sharing ratio: 900%
        Sharing time: infinite
        Max download speed: 0 (no limit)
        Max upload speed: 0 (no limit)
        BT protocol encryption: checked
        Max number of peers allowed per torrent file: 4000
        DHT: checked, with port 6889

    When the DHT option is not checked, the NAS doesn't upload any files; I don't know what this option is for. Can someone help me solve this? Did I miss a step, or does it come from the NAT? How is the authentication managed by Download Station? (Sorry for my English.) Thanks.

    Read the article

  • Juniper SSG 5 VPN

    - by Ethabelle
    I have a host who set up our Juniper SSG 5 VPN with firmware version 6.2.0r5.0. I've been trying to set up VPN on it using this guide: http://kb.juniper.net/InfoCenter/index?page=content&id=KB4094. Summary of the steps: create a user (give them L2TP auth ability), create a group, place the user in the group, create a VPN gateway, create the VPN, create an IP pool, change the default L2TP settings, and create an untrust/trust policy. I've followed the steps, but on my Mac, whenever I try to connect using L2TP over IPSec, I get the following error:

        The L2TP-VPN server did not respond. Try reconnecting. If the problem continues,
        verify your settings and contact your Administrator.

    I looked in my firewall's logs, but I don't even see anything under Reports > Logs > Events. I'm... obviously missing something, I just don't know what at this point. I'm just starting networking and this is sort of Step 101, and I'm getting annoyed and just want to throw up OpenVPN, but I've read that has problems with Juniper firewalls. Hooray.

    Read the article

  • Local references to old server name remain after Windows 2003 server rename

    - by imagodei
    I have a standalone Windows 2003 server with Windows SharePoint Services (WSS3) running on it. I had to rename the server, and I've had a bunch of problems resulting from this. Note that the server is not in an AD environment. The most obvious problems were with SharePoint, which didn't work; I was somewhat naive to think it would in the first place, but OK - I've solved that using steps 1 & 3 from this site (TNX). Other curious behaviors/problems remain. The most disturbing is that SharePoint is unable to send email notifications to participants. I've noticed several references to the old server name everywhere I look: in the registry, and in the Windows Internal Database (MICROSOFT##SSEE). I also see instances of the old server name in SharePoint Central Administration > Operations > Servers in farm, where there are references to the servers:

        oldname.domain.local
        oldname.local

    On one of those servers there is also a Windows SharePoint Services Outgoing E-Mail Service (Stopped). Also, when I telnet locally to the mail server (the Simple Mail Transfer Protocol (SMTP) service), I get this response:

        220 oldname.domain.local Microsoft ESMTP MAIL Service, Version: 6.0.3790.4675
        ready at Tue, 15 Jun 2010 13:56:19 +0200

    IMO these strange naming leftovers are also the reason the email notifications from within SharePoint don't work. Can anyone tell me how to correct/replace those references to the old server name? Why does the email service insist on the old name? Of course, I would like to try it without reinstalling the server. TNX!
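
    Two hedged leads: WSS 3.0 ships an stsadm operation for exactly this rename, and the SMTP banner comes from the IIS SMTP virtual server's fully-qualified domain name, which is stored independently of the machine name. A sketch (OLDNAME/NEWNAME are placeholders):

        stsadm -o renameserver -oldservername OLDNAME -newservername NEWNAME

    For the banner, IIS 6 Manager > SMTP Virtual Server > Properties > Delivery > Advanced > Fully-qualified domain name is where the old value would typically linger.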

    Read the article

  • Share Firefox/Thunderbird data between W7 and Linux Mint 12 on a dual-boot computer

    - by Albert
    I've just set up my laptop (which previously ran only W7) to dual-boot Linux Mint 12 as well. I have a "Data" partition (apart from the required partitions for W7 and Linux) where I store pretty much everything that isn't a software installation (music, videos, project files, etc.). I can access that NTFS partition from Mint totally fine (as I've always done from W7), which is cool because I can reach all that stuff regardless of which OS I'm using. I would like to know if it's possible (and how) to go one step further and share application data between the two OSes. One example would be my Firefox and Thunderbird data. For Firefox, share my bookmarks (and if I could share history, autocomplete and all that stuff, that would be awesome). For Thunderbird, share my mail and configuration, seeing the same inbox, folders, message rules, etc. - so if I receive or send an email from W7 and later switch to Mint, I see that email as if it had been received or sent from Mint, and vice versa. Is this even possible, or am I asking for too much convenience? If it's possible, any clues on how to set it all up?
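
    A hedged sketch of the usual trick: both Firefox and Thunderbird locate their data through a profiles.ini, so each OS can point at a single profile directory kept on the shared NTFS partition (the path below is a placeholder; profiles.ini lives in ~/.mozilla/firefox/ on Mint and %APPDATA%\Mozilla\Firefox\ on W7, with Thunderbird keeping its own equivalent):

        [Profile0]
        Name=shared
        IsRelative=0
        Path=/media/Data/profiles/firefox      # on the Windows side: D:\profiles\firefox

    Two caveats worth knowing: the two OSes must never have the profile open at the same time (hibernating one OS counts as open), and the Firefox/Thunderbird versions should be kept in step on both sides so neither migrates the profile format ahead of the other.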

    Read the article

  • All commands stopped working in CentOS 6.5

    - by Michael
    I made a big mistake while removing some duplicate packages (reported by yum), and the system appears to be broken:

        1036  rpm -e --nodeps glibc-2.12-1.132.el6_5.2.x86_64
        1037  rpm -e --nodeps nscd-2.12-1.132.el6_5.2.x86_64
        1038  rpm -e --nodeps glibc-common-2.12-1.132.el6_5.2.x86_64
        1040  rpm -e --nodeps glibc-common-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6.x86_64 glibc-headers-2.12-1.132.el6.x86_64
        1041  rpm -e glibc.x86_64
        1042  rpm -e --nodeps glibc.x86_64

    The issue appeared after step 1042. No commands work (including yum, rpm, ls, cp, etc.) and I get the error:

        /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory

    I thought that installing glibc after removing all the current ones would help resolve the duplicate-package error :( Now I realise that it is the C library used by the GNU system and most systems with the Linux kernel; it provides the "system calls" and other basic facilities such as open, malloc, printf, exit, etc. Are there any possible solutions other than a reinstall? I have lost ssh access. Maybe something can be done from a rescue CD? Thanks
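
    A hedged rescue-mode sketch: boot the CentOS install media with linux rescue, let it mount the installed system under /mnt/sysimage, and reinstall glibc into that root using the rescue environment's own rpm (which does not depend on the broken system's loader). The package paths are placeholders for wherever the matching el6_5.2 rpms can be staged (USB stick, network mount):

        rpm --root /mnt/sysimage -Uvh --force \
            glibc-2.12-1.132.el6_5.2.x86_64.rpm \
            glibc-common-2.12-1.132.el6_5.2.x86_64.rpm

    With ld-linux-x86-64.so.2 restored, the system's own rpm and yum come back to life and the other removed packages (nscd, glibc-devel, glibc-headers) can be reinstalled normally.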

    Read the article

  • What are the steps needed to set up and use security for AWS command line tools?

    - by chris
    I've been trying to set up the AWS command-line tools following Eric's most useful guide at http://alestic.com/2012/09/aws-command-line-tools. I can't seem to find a good how-to for generating the X.509 certificate and private key, and how that relates to the various security files the guide creates. Update: I have found a couple of links that describe the steps below. They seem to work, but I'm not sure whether this is secure and the best way to do it:

    1) Create a private key:

        openssl genrsa -out my-private-key.pem 2048

    2) Create the X.509 cert:

        openssl req -new -x509 -key my-private-key.pem -out my-x509-cert.pem -days 365

    Hit enter to accept all of the defaults. Then, from the IAM Dashboard, Users, select a user and click on the "Security Credentials" tab. Click on "Manage Signing Certificates", then "Upload Signing Certificate", paste in the contents of my-x509-cert.pem, click OK, and it should be accepted. One step that is discussed, but was not required for me, was the addition and subsequent removal of a pass phrase on the private key. Should I have been prompted for one, and is my cert potentially unsafe because of this?
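
    On the pass phrase question, a hedged note: openssl genrsa only asks for a pass phrase when told to encrypt the key (e.g. with -aes128), so not being prompted is normal, and the certificate itself is public either way; what a pass phrase protects is the private key file at rest. A sketch of adding and then removing one:

        # write an encrypted copy (prompts for a pass phrase)
        openssl rsa -aes128 -in my-private-key.pem -out my-private-key-enc.pem
        # strip the pass phrase again (prompts once to decrypt)
        openssl rsa -in my-private-key-enc.pem -out my-private-key.pem

    Since the AWS tools read the key non-interactively, an unencrypted key with tight permissions (chmod 600) is the usual compromise.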

    Read the article

  • Clarification for setting up SSH terminal access on Cisco IOS

    - by Matt Malesky
    I'm attempting to set up SSH on a Cisco 2811 and having some difficulties. The first step should be running crypto key generate rsa, but I seem to be missing it:

        better#crypto key generate rsa
                           ^
        % Invalid input detected at '^' marker.
        better#

    Furthermore, the only available commands I have in the crypto key namespace are lock and unlock, which seem to indicate a locked keypair (for which I don't know the password):

        better#crypto key ?
          lock    Lock a keypair.
          unlock  Unlock a keypair.

        better#crypto key unlock ?
          rsa  RSA keys

        better#crypto key unlock rsa
        %% Please enter the passphrase:
        %% Unlocking failed.

        better#

    More or less, I'm asking what exactly this might mean, and whether I actually do have certificates already here (it's a used router). Otherwise, how can I solve this? It's my first time configuring this feature, but I definitely believe it's part of my IOS. Speaking of my IOS, I'm running the image c2800nm-advsecurityk9-mz.124-24.T6.bin. I'll also note that I have my hostname and ip domain-name configured. I'll give you a dir flash: below, in case it's of use:

        better#dir flash:
        Directory of flash:/

            2  -rw-        2748  Jul 27 2009 14:03:52 +00:00  sdmconfig-2811.cfg
            3  -rw-      931840  Jul 27 2009 14:04:10 +00:00  es.tar
            4  -rw-     1505280  Jul 27 2009 14:04:32 +00:00  common.tar
            5  -rw-        1038  Jul 27 2009 14:04:46 +00:00  home.shtml
            6  -rw-      112640  Jul 27 2009 14:05:00 +00:00  home.tar
            7  -rw-     1697952  Jul 27 2009 14:05:26 +00:00  securedesktop-ios-3.1.1.45-k9.pkg
            8  -rw-      415956  Jul 27 2009 14:05:46 +00:00  sslclient-win-1.1.4.176.pkg
            9  -rw-    38732900  Dec  8 2011 06:28:56 +00:00  c2800nm-advsecurityk9-mz.124-24.T6.bin

        64016384 bytes total (20598784 bytes free)
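
    A hedged first check: the k9 image sitting in flash only matters if it is the image actually booted; if the router came up on a different, non-crypto image (from ROMMON or an old boot system line), crypto key generate rsa simply won't parse. Worth confirming:

        better#show version | include image
        better#show running-config | include boot system

    If the running image isn't the advsecurityk9 one, pointing boot system flash at c2800nm-advsecurityk9-mz.124-24.T6.bin and reloading would be the likely fix.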

    Read the article

  • Route through site-to-site VPN not working

    - by Jonathan
    I've been trying since yesterday to set up a site-to-site VPN using RRAS on two 2K8r2 servers. The connection is working at this point, but I can't get it to send traffic from one site to the other. Setup (the same on both sites): the server is connected to a router that's connected to a modem. The routers act as DHCP servers and assign IP addresses from the range subnet.21 to subnet.100. Both servers use a static IP address, subnet.11, and are set up as DMZ hosts. Configuration: the servers are configured using the wizard to set up a site-to-site connection. This works with a demand-dial interface and a PPTP VPN connection. As mentioned, the VPN connection works properly. Problem: I can't get the servers to send the traffic for the other site through the VPN connection. I added a static route on both servers (home, office 1), and I can see the result in the IP routing table (home, office 1). I did this because the route didn't show up automatically. My guess is that this last step isn't right - for example, because the routing table states "non demand-dial", which seems incorrect.

        Home:
          Subnet:    10.0.1.0/24
          Router:    10.0.1.1
          Server:    10.0.1.11 (DMZ)
          DHCP:      10.0.1.21-10.0.1.100
          RRAS DHCP: 10.0.1.101-10.0.1.150

        Office 1:
          Subnet:    10.0.2.0/24
          Router:    10.0.2.1
          Server:    10.0.2.11 (DMZ)
          DHCP:      10.0.2.21-10.0.2.100
          RRAS DHCP: 10.0.2.101-10.0.2.150

    I hope someone has an idea to get this route working!

    Read the article

  • Service nginx reload: unexpected error

    - by Anna
    I'm trying to install WordPress on my nginx server by following this tutorial: http://premium.wpmudev.org/blog/how-to-setup-your-own-nginx-powered-wordpress-server/. However, the last command at step 7, service nginx reload, gave me a strange error. A copy-paste from my terminal:

        root@server:~# service nginx reload
        Reloading nginx configuration: nginx: [emerg] unexpected "o" in /etc/nginx/sites-enabled/wordpress:7
        nginx: configuration file /etc/nginx/nginx.conf test failed

    When I nano into sites-enabled/wordpress, I can't find anything strange on the 7th line:

        <!DOCTYPE html>
        <html class=" ">
        <head prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb# object: http://ogp.me/ns/object# article: http://ogp.me/ns/article# profile: http://ogp.me/ns/profile#">
        <meta charset='utf-8'>
        <meta http-equiv="X-UA-Compatible" content="IE=edge">

    Also, I don't see any obvious errors in my nginx.conf file, but maybe I'm not checking something? The first couple of lines of the nginx config file:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
                worker_connections 768;
                # multi_accept on;
        }

    Any help is appreciated. Thanks a lot in advance!
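
    A hedged reading of the error: the content shown above is an HTML page (it looks like a saved copy of the tutorial page itself), not an nginx configuration, which is exactly why the parser chokes on line 7. sites-enabled/wordpress would normally contain a server block along these lines (domain, root and PHP socket are placeholders):

        server {
            listen 80;
            server_name example.com;
            root /var/www/wordpress;
            index index.php;

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;   # or 127.0.0.1:9000
            }
        }

    Re-copying the configuration block from the tutorial into that file (rather than saving the web page) should let the reload pass.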

    Read the article

  • Issue installing PostgreSQL 9.3.4 on Windows Server 2003 x64

    - by randydom
    Hello, I really did everything I know to install PostgreSQL 9.3.4 on my Windows 2003 Server x64, but I'm always stopped by this error (screenshot): http://oi57.tinypic.com/s4tb8i.jpg. If I click OK, then when I go to the Windows services list I don't find the PostgreSQL service, so I can't start it. Can anyone please help me install it correctly? PS: I've followed all the steps in wiki.postgresql.org/wiki/Troubleshooting_Installation. Many thanks. Here's the installer log, where I get "Failed to initialise the database cluster with initdb":

        Called IsVistaOrNewer()...
        'winmgmts' object initialized...
        Version: 5.2  MajorVersion: 5
        Ensuring we can write to the data directory (using cacls):
        Executing batch file 'rad22ADE.bat'...
        processed dir: C:\Program Files\PostgreSQL\9.2\data
        Executing batch file 'rad22ADE.bat'...
        The files belonging to this database system will be owned by user "Administrator".
        This user must also own the server process.
        The database cluster will be initialized with locale "English_United States.1252".
        The default text search configuration will be set to "english".
        fixing permissions on existing directory C:/Program Files/PostgreSQL/9.2/data ...
        initdb: could not change permissions of directory "C:/Program Files/PostgreSQL/9.2/data": Permission denied
        Called Die(Failed to initialise the database cluster with initdb)...
        Failed to initialise the database cluster with initdb
        Script stderr: Program ended with an error exit code

        Error running cscript //NoLogo "C:\Program Files\PostgreSQL\9.2/installer/server/initcluster.vbs"
          "NT AUTHORITY\NetworkService" "postgres" "****" "C:\Program Files\PostgreSQL\9.2"
          "C:\Program Files\PostgreSQL\9.2\data" 5432 "DEFAULT" 0 : Program ended with an error exit code
        Problem running post-install step. Installation may not complete correctly
        The database cluster initialisation failed.

        Creating Uninstaller
        Creating uninstaller 25%
        Creating uninstaller 50%
        Creating uninstaller 75%
        Creating uninstaller 100%
        Installation completed
        Log finished 05/02/2014 at 04:04:04
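
    A hedged lead from the log: initdb runs as NT AUTHORITY\NetworkService and fails to change permissions on an existing data directory, so the usual fix is to remove the leftover data directory from the failed attempt and grant that account full control on the install path before re-running the installer:

        rd /s /q "C:\Program Files\PostgreSQL\9.2\data"
        cacls "C:\Program Files\PostgreSQL\9.2" /E /T /G "NETWORK SERVICE":F

    (cacls is the same tool the installer itself uses on 2003; on newer Windows versions icacls replaces it.)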

    Read the article

  • When modern computers boot, what initial setup of RAM do they execute, and how exactly does it work?

    - by user272840
    I know the title reeks of confusion, and some of you might assume I am just wondering how the computer boots in general, but I'm not. Let me sort this out:

    1. Onboard firmware is how mostly all modern computer devices work, with or without EFI/UEFI (even without "onboard firmware", older computers still employed bank switching, or similar methods with snap-in firmware, cartridges, etc.).

    2. At startup there are no "programs" running in the traditional sense yet - no kernel, OS, or user applications; every instruction, especially the very first one, is specified by the Instruction Pointer, I am guessing. How is the IP/PC/etc. initially set to point at an address holding a BIOS/firmware instruction, and how do the BIOS instructions map themselves into memory prior to startup?

    3. Aside from MMIO, the BIOS uses certain RAM addresses to hold instructions. The big ? comes in when I ask: how does the BIOS do this?

    Conclusion: I am assuming that with the very first instruction there is an initial hardware setup for the BIOS prior to complete OS boot-up. What I want to know is whether it's hardware-engineered to always work this way, whether there's a step in this boot method I am missing or a gap in my information, and how this all works from the very first instruction onward, including the RAM data itself.

    Read the article

  • White Screen, No Errors.

    - by GruffTech
    So... an interesting problem for you guys, as I'm completely lost as to what to do or where to take the next step. Server & application environment:

        CentOS release 5.3 (Final)
        Apache 2.2.3-22
            EnableSendfile off
            EnableMMAP off
            ErrorLog logs/error_log
            LogLevel debug
        PHP-5.2.6-2
            error_reporting = E_ALL
            display_errors = on
            log_errors = on
            max_execution_time=300
            max_input_time=60
            memory_limit=512mb
        Kohana 2.3 PHP environment
        HAProxy 1.3.15.6-2
        MemCacheD 1.2.6-1

    Our application is split between 3 web servers mounting an NFS storage server, with sticky load balancing between the 3 web servers. The application seemingly runs great, but every so often, instead of loading, it just shows a pure white page - not a 404 error or a 500 server error, a clean white page - and it returns instantly, so it's not an execution-time error. Nothing appears in the error log or server-error log, the proxy log shows a standard proxied connection, and the access log shows just a standard 200 status with 256 bytes transferred. To me, this says the application itself is having a problem: a rare, unexplainable, seemingly random problem that causes what we've now called the "White Screen of Death." Our developers all say that since nothing reaches our error logs, it must be a server problem. But I say the same thing: nothing (relevant) reaches ANY of our logs, and from what I can tell we're not having httpd children crash. Any ideas on how I can increase my logging, or somehow prove that it's not a bug in PHP, Apache, CentOS, etc.? Or, if it is somehow a bug, identify it?
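
    A hedged place to start: reproduce a blank response with curl and capture exactly what those 256 bytes are, then give PHP an error log of its own, since display_errors cannot help once output has been sent or when the request dies in a fatal before rendering. URL and log path below are placeholders:

        # capture headers and body of a white-screen hit
        curl -sv http://app.example.com/problem-page -o /tmp/wsod.body
        od -c /tmp/wsod.body | head
        # make sure PHP fatals land somewhere regardless of Apache's ErrorLog
        grep -n 'error_log' /etc/php.ini      # set e.g. error_log = /var/log/php_errors.log

    As an aside, PHP's size shorthand recognizes the suffixes K, M and G; 512M is the spelling the documentation uses, so the memory_limit=512mb line is worth double-checking while in php.ini.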

    Read the article
