Search Results

Search found 14546 results on 582 pages for 'mod authz host'.


  • Log transport and aggregation at scale

    - by markdrayton
    How're you analysing log files from UNIX/Linux machines? We run several hundred servers which all generate their own log files, either directly or through syslog. I'm looking for a decent solution to aggregate these and pick out important events. This problem breaks down into three components:

    1) Message transport. The classic way is to use syslog to log messages to a remote host. This works fine for applications that log into syslog but is less useful for apps that write to a local file. Solutions for this might include having the application log into a FIFO connected to a program that sends the message using syslog, or writing something that will grep the local files and send the output to the central syslog host. However, if we go to the trouble of writing tools to get messages into syslog, would we be better off replacing the whole lot with something like Facebook's Scribe, which offers more flexibility and reliability than syslog?

    2) Message aggregation. Log entries seem to fall into one of two types: per-host and per-service. Per-host messages are those which occur on one machine; think disk failures or suspicious logins. Per-service messages occur on most or all of the hosts running a service. For instance, we want to know when Apache finds an SSI error but we don't want the same error from 100 machines. In all cases we only want to see one of each type of message: we don't want 10 messages saying the same disk has failed, and we don't want a message each time a broken SSI is hit. One approach to solving this is to aggregate multiple messages of the same type into one on each host, send the messages to a central server and then aggregate messages of the same kind into one overall event. SER can do this but it's awkward to use. Even after a couple of days of fiddling I had only rudimentary aggregations working and had to constantly look up the logic SER uses to correlate events. It's powerful but tricky stuff: I need something which my colleagues can pick up and use in the shortest possible time. SER rules don't meet that requirement.

    3) Generating alerts. How do we tell our admins when something interesting happens? Mail the group inbox? Inject into Nagios?

    So, how're you solving this problem? I don't expect an answer on a plate; I can work out the details myself, but some high-level discussion on what is surely a common problem would be great. At the moment we're using a mishmash of cron jobs, syslog and who knows what else to find events. This isn't extensible, maintainable or flexible, and as such we miss a lot of stuff we shouldn't.

    Updated: we're already using Nagios for monitoring, which is great for detecting down hosts/testing services/etc. but less useful for scraping log files. I know there are log plugins for Nagios but I'm interested in something more scalable and hierarchical than per-host alerts.
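
    A minimal sketch of the "grep the local files into syslog" idea mentioned above, assuming a reasonably recent util-linux logger that supports remote logging; the host name, tag and facility are placeholders, not part of the original question:

      # follow an application log and relay each line to the central syslog host
      tail -F /var/log/myapp/app.log | logger -t myapp -p local3.info -n loghost.example.com -P 514

      # or, if the local syslog daemon does the relaying, an rsyslog-style forwarding rule per client
      local3.*    @loghost.example.com:514

    Whether this or something like Scribe is the better fit depends mostly on how much delivery reliability the transport needs to guarantee.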

    Read the article

  • SSH connection times out

    - by mark
    Given: vm - a WinXPsp3 virtual machine hosted by a Win7sp1 physical machine alice is the user on vm srv - a Win2008R2sp1 server bob is the user on srv quake - a linux server mark is the user on quake Both vm and srv have the same new installation of cygwin (1.7.9) and openssh. Firewall service is disabled on vm (and its host) and on srv All the machines can be pinged from all the machines. ssh mark@quake works OK from both vm and srv. ssh bob@srv works OK from both quake and vm. ssh alice@vm works on the vm itself only, but it fails on the other two machines: alice@vm ~ $ ssh alice@vm alice@vm's password: Last login: Tue Oct 25 23:42:09 2011 from vm.shunra.net [mark@Quake ~]$ ssh -vvv alice@vm OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to vm [172.30.2.60] port 22. debug1: connect to address 172.30.2.60 port 22: Connection timed out ssh: connect to host vm port 22: Connection timed out bob@Srv ~ $ ssh -vvv alice@vm OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /etc/ssh_config debug2: ssh_connect: needpriv 0 debug1: Connecting to vm [172.30.2.60] port 22. debug1: connect to address 172.30.2.60 port 22: Connection timed out ssh: connect to host vm port 22: Connection timed out I used ssh-host-config both on vm and srv to configure the ssh to run as a windows service. Besides that I did nothing else. Can anyone help me troubleshoot this issue? Thank you very much. EDIT The virtual machine software is VMWare Workstation 7.1.4. I think the problem is in its settings, but I have no idea where exactly. The Network Adapter is set to Bridged. EDIT2 All the machines are located in the company lab, I think all of them are on the same segment, but I may be wrong. Below is the ipconfig /all output for each machine (skipping the linux server). I have deleted the Tunnel adapters to keep the output minimal. If anyone thinks they matter, do tell so and I will post them as well. In addition ping output is given to show that DNS is correct. Something else, may be relevant, may be not. Doing psexec to srv works OK, whereas to vm failes with Access Denied. srv: C:\Windows\system32>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : srv Primary Dns Suffix . . . . . . . : shunra.net Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : shunra.net Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) Physical Address. . . . . . . . . : E4-1F-13-6D-F3-00 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes IPv4 Address. . . . . . . . . . . : 172.30.6.9(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.248.0 Default Gateway . . . . . . . . . : 172.30.0.254 DNS Servers . . . . . . . . . . . : 172.30.1.1 172.30.1.2 NetBIOS over Tcpip. . . . . . . . 
: Enabled C:\Windows\system32>ping vm Pinging vm.shunra.net [172.30.2.60] with 32 bytes of data: Reply from 172.30.2.60: bytes=32 time=1ms TTL=128 Reply from 172.30.2.60: bytes=32 time=4ms TTL=128 Reply from 172.30.2.60: bytes=32 time<1ms TTL=128 Reply from 172.30.2.60: bytes=32 time<1ms TTL=128 Ping statistics for 172.30.2.60: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 4ms, Average = 1ms C:\Windows\system32> vm: C:\>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : vm Primary Dns Suffix . . . . . . . : shunra.net Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : shunra.net shunranet Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : shunranet Description . . . . . . . . . . . : VMware Accelerated AMD PCNet Adapter Physical Address. . . . . . . . . : 00-0C-29-8F-A0-0B Dhcp Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IP Address. . . . . . . . . . . . : 172.30.2.60 Subnet Mask . . . . . . . . . . . : 255.255.248.0 Default Gateway . . . . . . . . . : 172.30.0.254 DHCP Server . . . . . . . . . . . : 172.30.1.1 DNS Servers . . . . . . . . . . . : 172.30.1.1 172.30.1.2 Lease Obtained. . . . . . . . . . : Tuesday, October 25, 2011 18:16:34 Lease Expires . . . . . . . . . . : Wednesday, November 02, 2011 18:16:34 C:\>ping srv Pinging srv.shunra.net [172.30.6.9] with 32 bytes of data: Reply from 172.30.6.9: bytes=32 time=1ms TTL=128 Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Ping statistics for 172.30.6.9: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 1ms, Average = 0ms C:\> vm-host (the host machine of the vm): C:\>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : vm-host Primary Dns Suffix . . . . . . . : shunra.net Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : shunra.net Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Realtek RTL8168D/8111D Family PCI-E Gigabit Ethernet NIC (NDIS 6.20) Physical Address. . . . . . . . . : 6C-F0-49-E7-E9-30 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::f59d:7f6e:1510:6f%10(Preferred) IPv4 Address. . . . . . . . . . . : 172.30.6.7(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.248.0 Default Gateway . . . . . . . . . : 172.30.0.254 DHCPv6 IAID . . . . . . . . . . . : 242020425 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-13-CC-39-80-6C-F0-49-E7-E9-30 DNS Servers . . . . . . . . . . . : 172.30.1.1 194.90.1.5 NetBIOS over Tcpip. . . . . . . . : Enabled Ethernet adapter VMware Network Adapter VMnet1: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet1 Physical Address. . . . . . . . . : 00-50-56-C0-00-01 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::cd92:38c0:9a6d:c008%16(Preferred) Autoconfiguration IPv4 Address. . : 169.254.192.8(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . 
. . . . . . : DHCPv6 IAID . . . . . . . . . . . : 352342102 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-13-CC-39-80-6C-F0-49-E7-E9-30 DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1 fec0:0:0:ffff::2%1 fec0:0:0:ffff::3%1 NetBIOS over Tcpip. . . . . . . . : Enabled Ethernet adapter VMware Network Adapter VMnet8: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet8 Physical Address. . . . . . . . . : 00-50-56-C0-00-08 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::edb9:b78c:a504:593b%17(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.5.1(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : DHCPv6 IAID . . . . . . . . . . . : 369119318 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-13-CC-39-80-6C-F0-49-E7-E9-30 DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1 fec0:0:0:ffff::2%1 fec0:0:0:ffff::3%1 NetBIOS over Tcpip. . . . . . . . : Enabled C:\>ping srv Pinging srv.shunra.net [172.30.6.9] with 32 bytes of data: Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Ping statistics for 172.30.6.9: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms C:\>ping vm Pinging vm.shunra.net [172.30.2.60] with 32 bytes of data: Reply from 172.30.2.60: bytes=32 time<1ms TTL=128 Reply from 172.30.2.60: bytes=32 time<1ms TTL=128 Reply from 172.30.2.60: bytes=32 time<1ms TTL=128 Reply from 172.30.2.60: bytes=32 time<1ms TTL=128 Ping statistics for 172.30.2.60: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms C:\> EDIT3 I have just checked - the vm-host is able to ssh to the vm machine! I still do not know how to leverage this discovery to solve the problem.
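
    Since ICMP works in every direction but TCP/22 times out only when the target is the XP guest, a quick way to narrow it down is to confirm sshd is really listening on an external interface and that the port is reachable at the TCP level. A rough sketch, assuming the nc (netcat) package is installed in Cygwin; hostnames are the ones from the question:

      # on vm: is sshd bound to 0.0.0.0:22 (not only 127.0.0.1), and is the service running?
      netstat -an | grep ":22"
      cygrunsrv -Q sshd

      # from srv or quake: raw TCP reachability test, bypassing ssh itself
      nc -w 3 -vz vm 22

    If the netstat output looks right but nc still times out from outside, something between the guest and the network (a Windows Firewall profile, the VMware bridged adapter, or a host-side filter) is dropping inbound port 22 before it ever reaches sshd.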

    Read the article

  • Update php 5.2.0 to 5.2.4 with aptitude

    - by Kiva
    Hi guy, I would like to update my php 5 in my server. At this moment, I use php 5.2.0 so I want to update it to php 5.2.4 (not php 5.3). I tried to do this: aptitude update aptitude upgrade 63 packets were updated but not php which is always in 5.0 How can I update my php please ? Here is the output of commands asked by David in another post: aptitude search php5 p libapache-mod-php5 - server-side, HTML-embedded scripting langu i A libapache2-mod-php5 - server-side, HTML-embedded scripting langu i php5 - server-side, HTML-embedded scripting langu p php5-apache2-mod-bt - PHP bindings for mod_bt p php5-auth-pam - A PHP5 extension for PAM authentication i php5-cgi - server-side, HTML-embedded scripting langu p php5-clamavlib - PHP ClamAV Lib - ClamAV Interface for PHP5 p php5-cli - command-line interpreter for the php5 scri i A php5-common - Common files for packages built from the p i php5-curl - CURL module for php5 p php5-dev - Files for PHP5 module development i A php5-gd - GD module for php5 p php5-idn - PHP api for the IDNA library p php5-imagick - ImageMagick module for php5 p php5-imap - IMAP module for php5 p php5-interbase - interbase/firebird module for php5 p php5-json - JSON serialiser for PHP5 p php5-ldap - LDAP module for php5 p php5-mapscript - module for php5-cgi to use mapserver p php5-maxdb - PHP extension to access MaxDB databases fo i A php5-mcrypt - MCrypt module for php5 p php5-memcache - memcache extension module for PHP5 p php5-mhash - MHASH module for php5 p php5-ming - Ming module for php5 i A php5-mysql - MySQL module for php5 p php5-odbc - ODBC module for php5 p php5-pgsql - PostgreSQL module for php5 p php5-ps - ps module for PHP 5 p php5-pspell - pspell module for php5 p php5-radius - PECL radius module for PHP 5 p php5-recode - recode module for php5 p php5-snmp - SNMP module for php5 p php5-sqlite - SQLite module for php5 p php5-sqlite3 - SQLite3 module for php5 p php5-sqlrelay - SQL Relay PHP API p php5-suhosin - advanced protection module for php5 p php5-sybase - Sybase / MS SQL Server module for php5 p php5-tidy - tidy module for php5 p php5-uuid - OSSP uuid module for php5 p php5-xapian - Xapian search engine interface for PHP5 p php5-xcache - Fast, stable PHP opcode cacher p php5-xmlrpc - XML-RPC module for php5 p php5-xsl - XSL module for php5 aptitude show php5 | grep Version Version : 5.2.0-8+etch13 aptitude show php5-cgi | grep Version Version : 5.2.0-8+etch13 php5 --version -bash: php5: command not found php-cgi --version PHP 5.2.0-8+etch13 (cgi-fcgi) (built: Oct 2 2008 08:21:17) Copyright (c) 1997-2006 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2006 Zend Technologies
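
    A hedged sketch for checking what the configured Debian repositories actually offer before expecting aptitude to move off 5.2.0-8+etch13; the 5.2.4 version string below is purely illustrative and only works if a configured source really carries such a build:

      # show installed vs. candidate versions and which repository each comes from
      apt-cache policy php5 php5-cgi libapache2-mod-php5

      # list every version visible to apt
      apt-cache madison php5

      # install an explicit version once one is listed (version string is a placeholder)
      aptitude install php5=5.2.4-2+etch1 php5-cgi=5.2.4-2+etch1 libapache2-mod-php5=5.2.4-2+etch1

    If apt-cache only ever shows the etch 5.2.0 build, no amount of aptitude upgrade will produce 5.2.4; the sources.list has to point at a repository that ships it first.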

    Read the article

  • Adding a CLI for PHP5 on live server

    - by Josua Pedersen
    I want to add command-line support for PHP5 on my server. When I run aptitude install php5-cli I get a message saying that my PHP modules/packages have unmet dependencies. Here is a list of packages that suffer from these "unmet dependencies" and needs and upgrade: php5-gd php5-curl php5-mysql php5-cgi They all depend on php5-common. Can I upgrade the packages just like aptitude suggests without causing any disruptions to the live site? Output from aptitude Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initialising package states... Done The following packages are BROKEN: libapache2-mod-php5 php5-cgi php5-curl php5-gd php5-mysql The following NEW packages will be installed: php5-cli The following packages will be upgraded: php5-common 1 packages upgraded, 1 newly installed, 0 to remove and 123 not upgraded. Need to get 3,511kB of archives. After unpacking 7,803kB will be used. The following packages have unmet dependencies: php5-gd: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed. php5-curl: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed. php5-mysql: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed. php5-cgi: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed. libapache2-mod-php5: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed. The following actions will resolve these dependencies: Upgrade the following packages: libapache2-mod-php5 [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)] php5-cgi [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)] php5-curl [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)] php5-gd [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)] php5-mysql [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)] Score is 340
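
    The breakage above is php5-common being pulled to the newer PPA build while the already-installed modules stay at the stock lucid build. One hedged way through it, as aptitude itself suggests, is to upgrade the whole PHP stack in a single transaction so every package lands on the same version; package names are the ones from the question, and on a live site this belongs in a maintenance window:

      sudo aptitude install php5-cli php5-common libapache2-mod-php5 php5-cgi php5-curl php5-gd php5-mysql
      sudo /etc/init.d/apache2 restart
      php -v    # confirm the CLI binary now exists and matches the module version

    A brief Apache restart is the only expected disruption, provided nothing on the site depends on 5.3.3-specific behaviour.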

    Read the article

  • Solaris 10 branded zone VM Templates for Solaris 11 on OTN

    - by jsavit
    Early this year I wrote the article Ours Goes To 11 which describes the ability to import Solaris 10 systems into a "Solaris 10 branded zone" under Oracle Solaris 11. I did this using Solaris 11 Express, and the capability remains in Solaris 11 with only slight changes. This important tool lets you painlessly inhaling a Solaris Container from Solaris 10 or entire Solaris 10 systems ("the global zone") into virtualized environments on a Solaris 11 OS. Just recently, Oracle provided Oracle VM Templates for Oracle Solaris 10 Zones to let you create Solaris 10 branded zones for Solaris 11 even if you don't currently have access to install media or a running Solaris 10 system. To use this, just download the Oracle VM Template for Oracle Solaris Zone 10 from OTN at http://www.oracle.com/technetwork/server-storage/solaris11/downloads/virtual-machines-1355605.html. This page contains images of Oracle Solaris 10 8/11 (the recent update to Solaris 10) in SPARC and x86 formats suitable for creating branded zones. The same page also has a VirtualBox image you can download for a complete Solaris 10 install in a guest virtual machine you can run on any host OS that supports VirtualBox. Both sets of downloads provide a quick - and extremely easy - way to set up a virtual Solaris 10 environment. In the case of the Oracle VM Templates, they illustrate several advanced features of Solaris 11. To start, just go to the above link, download the template for the hardware platform (SPARC or x86) you want, and download the README file also linked from that page. Install prerequisites The README file tells you to install the prerequisite Solaris 11 package that implements the Solaris 10 brand. Then you can install instances of zones with that brand. # pkg install pkg:/system/zones/brand/brand-solaris10 Packages to install: 1 Create boot environment: No Create backup boot environment: Yes DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 44/44 0.4/0.4 PHASE ACTIONS Install Phase 74/74 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 That took only a few minutes, and didn't require a reboot. Install the Solaris 10 zone Now it's time to run the downloaded template file. First make it executable via the chmod command, of course. I found that (unlike stated in the README) there was no need to rename the downloaded file to remove the .bin. When you run it you provide several parameters to describe the zone configuration: -a IP address - the IP address and optional netmask for the zone. This is the only mandatory parameter. -z zonename - the name of the zone you would like to create. -i interface - the package will create an exclusive-IP zone using a virtual NIC (vnic) based on this physical interface. In my case, I have a NIC called rge0. -p PATH - specifies the path in which you want the zoneroot to be placed. In my case, I have a ZFS dataset mounted at /zones, and this will create a zoneroot at /zones/s10u10. Kicking it off, you will see a copyright message, and then messages showing progress building the zone, which only takes a few minutes. # ./solaris-10u10-x86.bin -p /zones -a 192.168.1.100 -i rge0 -z s10u10 ... ... Checking disk-space for extraction Ok Extracting in /export/home/CDimages/s10zone/bootimage.ihaqvh ... 100% [===============================] Checking data integrity Ok Checking platform compatibility The host and the image do not have the same Solaris release: host Solaris release: 5.11 image Solaris release: 5.10 Will create a Solaris 10 branded zone. 
Warning: could not find a defaultrouter Zone won't have any defaultrouter configured IMAGE: ./solaris-10u10-x86.bin ZONE: s10u10 ZONEPATH: /zones/s10u10 INTERFACE: rge0 VNIC: vnicZBI13379 MAC ADDR: 2:8:20:5c:1a:cc IP ADDR: 192.168.1.100 NETMASK: 255.255.255.0 DEFROUTER: NONE TIMEZONE: US/Arizona Checking disk-space for installation Ok Installing in /zones/s10u10 ... 100% [===============================] Using a static exclusive-IP Attaching s10u10 Booting s10u10 Waiting for boot to complete booting... booting... booting... Zone s10u10 booted The zone's root password has been set using the root password of the local host. You can change the zone's root password to further harden the security of the zone: being root, log into the zone from the local host with the command 'zlogin s10u10'. Once logged in, change the root password with the command 'passwd'. The nifty part in my opinion (besides being so easy), is that the zone was created as an exclusive-IP zone on a virtual NIC. This network configuration lets you enforce traffic isolation from other zones, enforce network Quality of Service, and even let the zone set its own characteristics like IP address and packet size. Independence of the zone's network characteristics from the global zone is one of the enhancements in Solaris 10 that make it easier to consolidate zones while preserving their autonomy, yet provide control in a consolidated environment. Let's see what the virtual network environment looks like by issuing commands from the Solaris 11 global zone. First I'll use Old School ifconfig, and then I'll use the new ipadm and dladm commands. # ifconfig -a4 lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 rge0: flags=1004943<UP,BROADCAST,RUNNING,PROMISC,MULTICAST,DHCP,IPv4> mtu 1500 index 2 inet 192.168.1.3 netmask ffffff00 broadcast 192.168.1.255 ether 0:14:d1:18:ac:bc vboxnet0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3 inet 192.168.56.1 netmask ffffff00 broadcast 192.168.56.255 ether 8:0:27:f8:62:1c # dladm show-phys LINK MEDIA STATE SPEED DUPLEX DEVICE yge0 Ethernet unknown 0 unknown yge0 yge1 Ethernet unknown 0 unknown yge1 rge0 Ethernet up 1000 full rge0 vboxnet0 Ethernet up 1000 full vboxnet0 # dladm show-link LINK CLASS MTU STATE OVER yge0 phys 1500 unknown -- yge1 phys 1500 unknown -- rge0 phys 1500 up -- vboxnet0 phys 1500 up -- vnicZBI13379 vnic 1500 up rge0 s10u10/vnicZBI13379 vnic 1500 up rge0 s10u10/net0 vnic 1500 up rge0 # dladm show-vnic LINK OVER SPEED MACADDRESS MACADDRTYPE VID vnicZBI13379 rge0 1000 2:8:20:5c:1a:cc random 0 s10u10/vnicZBI13379 rge0 1000 2:8:20:5c:1a:cc random 0 s10u10/net0 rge0 1000 2:8:20:9d:d0:79 random 0 # ipadm show-addr ADDROBJ TYPE STATE ADDR lo0/v4 static ok 127.0.0.1/8 rge0/_a dhcp ok 192.168.1.3/24 vboxnet0/_a static ok 192.168.56.1/24 lo0/v6 static ok ::1/128 Log into the zone The install step already booted the zone, so lets log into it. Notice how you have to be appropriately privileged to log into a zone. This is my home system so I'm being a bit cavalier, but in a production environment you can give granular control of who can login to which zones. Voila! a Solaris 10 environment under a Solaris 11 kernel. Notice the output from the uname -a and ifconfig commands, and output from a ping to a nearby host. 
$ zlogin s10u10 zlogin: You lack sufficient privilege to run this command (all privs required) savit@home:~$ sudo zlogin s10u10 Password: [Connected to zone 's10u10' pts/5] Oracle Corporation SunOS 5.10 Generic Patch January 2005 # uname -a SunOS s10u10 5.10 Generic_Virtual i86pc i386 i86pc # ifconfig -a4 lo0: flags=2001000849 mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 vnicZBI13379: flags=1000843 mtu 1500 index 2 inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255 ether 2:8:20:5c:1a:cc # bash bash-3.2# ifconfig -a lo0: flags=2001000849 mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 vnicZBI13379: flags=1000843 mtu 1500 index 2 inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255 ether 2:8:20:5c:1a:cc bash-3.2# ping 192.168.1.2 192.168.1.2 is alive For fun, I configured Apache (setting its configuration file in /etc/apache2) and brought it up. Easy - took just a few minutes. bash-3.2# svcs apache2 STATE STIME FMRI disabled 12:38:46 svc:/network/http:apache2 bash-3.2# svcadm enable apache2 Summary In just a few minutes, I built a functioning virtual Solaris 10 environment under by Solaris 11 system. It was... easy! While I can still do it the manual way (creating and using a system archive), this is a low-effort way to create a Solaris 10 zone on Solaris 11.

    Read the article

  • ASP.NET MVC Postbacks and HtmlHelper Controls ignoring Model Changes

    - by Rick Strahl
    So here's a binding behavior in ASP.NET MVC that I didn't really get until today: HtmlHelpers controls (like .TextBoxFor() etc.) don't bind to model values on Postback, but rather get their value directly out of the POST buffer from ModelState. Effectively it looks like you can't change the display value of a control via model value updates on a Postback operation. To demonstrate here's an example. I have a small section in a document where I display an editable email address: This is what the form displays on a GET operation and as expected I get the email value displayed in both the textbox and plain value display below, which reflects the value in the mode. I added a plain text value to demonstrate the model value compared to what's rendered in the textbox. The relevant markup is the email address which needs to be manipulated via the model in the Controller code. Here's the Razor markup: <div class="fieldcontainer"> <label> Email: &nbsp; <small>(username and <a href="http://gravatar.com">Gravatar</a> image)</small> </label> <div> @Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield"}) @Model.User.Email </div> </div>   So, I have this form and the user can change their email address. On postback the Post controller code then asks the business layer whether the change is allowed. If it's not I want to reset the email address back to the old value which exists in the database and was previously store. The obvious thing to do would be to modify the model. Here's the Controller logic block that deals with that:// did user change email? if (!string.IsNullOrEmpty(oldEmail) && user.Email != oldEmail) { if (userBus.DoesEmailExist(user.Email)) { userBus.ValidationErrors.Add("New email address exists already. Please…"); user.Email = oldEmail; } else // allow email change but require verification by forcing a login user.IsVerified = false; }… model.user = user; return View(model); The logic is straight forward - if the new email address is not valid because it already exists I don't want to display the new email address the user entered, but rather the old one. To do this I change the value on the model which effectively does this:model.user.Email = oldEmail; return View(model); So when I press the Save button after entering in my new email address ([email protected]) here's what comes back in the rendered view: Notice that the textbox value and the raw displayed model value are different. The TextBox displays the POST value, the raw value displays the actual model value which are different. This means that MVC renders the textbox value from the POST data rather than from the view data when an Http POST is active. Now I don't know about you but this is not the behavior I expected - initially. This behavior effectively means that I cannot modify the contents of the textbox from the Controller code if using HtmlHelpers for binding. Updating the model for display purposes in a POST has in effect - no effect. (Apr. 25, 2012 - edited the post heavily based on comments and more experimentation) What should the behavior be? After getting quite a few comments on this post I quickly realized that the behavior I described above is actually the behavior you'd want in 99% of the binding scenarios. You do want to get the POST values back into your input controls at all times, so that the data displayed on a form for the user matches what they typed. 
So if an error occurs, the error doesn't mysteriously disappear getting replaced either with a default value or some value that you changed on the model on your own. Makes sense. Still it is a little non-obvious because the way you create the UI elements with MVC, it certainly looks like your are binding to the model value:@Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield",required="required" }) and so unless one understands a little bit about how the model binder works this is easy to trip up. At least it was for me. Even though I'm telling the control which model value to bind to, that model value is only used initially on GET operations. After that ModelState/POST values provide the display value. Workarounds The default behavior should be fine for 99% of binding scenarios. But if you do need fix up values based on your model rather than the default POST values, there are a number of ways that you can work around this. Initially when I ran into this, I couldn't figure out how to set the value using code and so the simplest solution to me was simply to not use the MVC Html Helper for the specific control and explicitly bind the model via HTML markup and @Razor expression: <input type="text" name="User.Email" id="User_Email" value="@Model.User.Email" /> And this produces the right result. This is easy enough to create, but feels a little out of place when using the @Html helpers for everything else. As you can see by the difference in the name and id values, you also are forced to remember the naming conventions that MVC imposes in order for ModelBinding to work properly which is a pain to remember and set manually (name is the same as the property with . syntax, id replaces dots with underlines). Use the ModelState Some of my original confusion came because I didn't understand how the model binder works. The model binder basically maintains ModelState on a postback, which holds a value and binding errors for each of the Post back value submitted on the page that can be mapped to the model. In other words there's one ModelState entry for each bound property of the model. Each ModelState entry contains a value property that holds AttemptedValue and RawValue properties. The AttemptedValue is essentially the POST value retrieved from the form. The RawValue is the value that the model holds. When MVC binds controls like @Html.TextBoxFor() or @Html.TextBox(), it always binds values on a GET operation. On a POST operation however, it'll always used the AttemptedValue to display the control. MVC binds using the ModelState on a POST operation, not the model's value. So, if you want the behavior that I was expecting originally you can actually get it by clearing the ModelState in the controller code:ModelState.Clear(); This clears out all the captured ModelState values, and effectively binds to the model. Note this will produce very similar results - in fact if there are no binding errors you see exactly the same behavior as if binding from ModelState, because the model has been updated from the ModelState already and binding to the updated values most likely produces the same values you would get with POST back values. The big difference though is that any values that couldn't bind - like say putting a string into a numeric field - will now not display back the value the user typed, but the default field value or whatever you changed the model value to. This is the behavior I was actually expecting previously. But - clearing out all values might be a bit heavy handed. 
You might want to fix up one or two values in a model but rarely would you want the entire model to update from the model. So, you can also clear out individual values on an as needed basis:if (userBus.DoesEmailExist(user.Email)) { userBus.ValidationErrors.Add("New email address exists already. Please…"); user.Email = oldEmail; ModelState.Remove("User.Email"); } This allows you to remove a single value from the ModelState and effectively allows you to replace that value for display from the model. Why? While researching this I came across a post from Microsoft's Brad Wilson who describes the default binding behavior best in a forum post: The reason we use the posted value for editors rather than the model value is that the model may not be able to contain the value that the user typed. Imagine in your "int" editor the user had typed "dog". You want to display an error message which says "dog is not valid", and leave "dog" in the editor field. However, your model is an int: there's no way it can store "dog". So we keep the old value. If you don't want the old values in the editor, clear out the Model State. That's where the old value is stored and pulled from the HTML helpers. There you have it. It's not the most intuitive behavior, but in hindsight this behavior does make some sense even if at first glance it looks like you should be able to update values from the model. The solution of clearing ModelState works and is a reasonable one but you have to know about some of the innards of ModelState and how it actually works to figure that out.© Rick Strahl, West Wind Technologies, 2005-2012Posted in ASP.NET  MVC   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • Downgrade from php5 5.3.10 to php5 5.3.2 in Ubuntu 12.04

    - by iori
    I wanted to install php5 5.3.2, so I first deleted all the php5 files:

      sudo apt-get purge php5 php5-cli php5-common php5-mysql

    and also deleted the deb files from /var/cache/apt/archives, so now there is no deb file on the system. Then I added this person's repository, because he added php 5.3.2:

      sudo apt-add-repository ppa:sushkov/personal

    and then I updated and upgraded:

      sudo apt-get update && sudo apt-get upgrade

    then I installed php5:

      sudo apt-get install php5 php5-cli php5-common php5-mysql

    Now when I check the php version it says php5.3.10, and when I run sudo apt-cache show php5 it says:

      Package: php5
      Version: 5.3.15-1~dotdeb.0
      Architecture: all
      Maintainer: Guillaume Plessis <[email protected]>
      Installed-Size: 0
      Depends: libapache2-mod-php5 (>= 5.3.15-1~dotdeb.0) | libapache2-mod-php5filter (>= 5.3.15-1~dotdeb.0) | php5-cgi (>= 5.3.15-1~dotdeb.0) | php5-fpm (>= 5.3.15-1~dotdeb.0), php5-common (>= 5.3.15-1~dotdeb.0)
      Filename: dists/squeeze/php5/binary-i386/php5_5.3.15-1~dotdeb.0_all.deb

    Now I don't know how to downgrade. Is there any way to change something in the repository so that running sudo apt-get install php5 will install php 5.3.2, which I want, instead of php 5.3.10? Thanks
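
    The apt-cache output shows the added PPA is actually serving a dotdeb 5.3.15 build, not 5.3.2, and precise's own archive only carries 5.3.10, which is why plain apt-get keeps landing on those versions. A hedged sketch of how to see where each candidate comes from and how to back the PPA out again; ppa-purge must be installed, and any 5.3.2 version string you pin to is illustrative and needs a repository that really ships it:

      # list every candidate version of php5 and the repository providing it
      apt-cache policy php5

      # remove the PPA and revert its packages to the archive versions
      sudo apt-get install ppa-purge
      sudo ppa-purge ppa:sushkov/personal

      # only once a source offering 5.3.2 is configured can an explicit pin work (placeholder version)
      sudo apt-get install php5=5.3.2-1ubuntu4 php5-cli=5.3.2-1ubuntu4 php5-common=5.3.2-1ubuntu4

    Bear in mind that 5.3.2 predates 12.04, so running it there means taking packages built for an older release, with the security and dependency caveats that implies.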

    Read the article

  • Partitioning recommendations for a Proxmox VM Server (OpenVZ)

    - by luison
    We are new to virtualization and we are planning to turn our online server into a virtualized one, mainly for maintenance, backup and recovery improvements. Initially we would only have one real virtual system with load, plus 1-3 copies for testing and recovery, and maybe a small centralized syslog virtual machine. We would like, if possible, the host machine to include iptables plus rsync to back up to other machines, and some other global security systems. Due to this and the offerings of our hosting supplier we are mainly considering Proxmox for its simplicity (we like the idea of its web admin panel), and as I understand it the container approach of OpenVZ systems may fit well resource-wise with our setup. The base system comes with Debian so we can personalise it to our requirements. Proxmox installs an LVM partition for the VMs by default. Our doubts are about what the best partition structure for this would be, considering that:

    - we would like to have a mirror of the root partition we could boot from if required (our provider supports booting the system from another partition via control panel)
    - we ideally would like to have a partition that could be shared among the VM systems. We still don't know if this is possible directly with OpenVZ containers; otherwise we are considering doing this by sharing it via NFS on the host machine (see the sketch below).
    - we want to use the backup system available on the Proxmox host administrator to schedule VM backups and then rsync them to another machine.

    With this, based on a Linux RAID of approx. 750Gb, we are considering something like:

      ext3_1/            - (20Gb)
      ext3_2/bak_root    - (20Gb) mostly unmounted, root partition sync
      LVM_1 /var/lib/vz  - (390Gb) partition for virtual images
      LVM_2 /shared_data - (30Gb)
      LVM_3 /backups     - (300Gb) where all backups would be allocated

    Our initial tests with Proxmox seem to have issues with snapshot backups like this, perhaps caused by the fact that they cannot be done to another LVM partition (error: command 'lvcreate --size 1024M --snapshot --name vzsnap-ns204084.XXX.net-0 /dev/pve/LV' failed with exit code 5), in which case we might have to use a standard ext3 partition (but we're unsure if we can do this with the 4-primary-partition limitation). Does this make more or less sense? Would it be mad to, for example, write the VMs' /var/logs to an NFS-mounted partition (on the host system)? Are there any other easier ways to mount host system partitions (or folders) in the VMs?
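
    On the "partition shared among the VM systems" point: OpenVZ containers can see a host directory directly through a bind mount, which avoids NFS on the host entirely. A minimal sketch using a per-container mount script; the container ID 101 and the paths are placeholders, not taken from the setup above:

      # /etc/vz/conf/101.mount  -- executed by vzctl each time CT 101 starts
      #!/bin/bash
      . /etc/vz/vz.conf
      . ${VE_CONFFILE}
      # expose the host's /shared_data inside the container at /mnt/shared
      mount -n --bind /shared_data ${VE_ROOT}/mnt/shared

    The target directory has to exist inside the container's filesystem and the script must be executable; Proxmox manages its OpenVZ containers with vzctl, so the same mount-script mechanism should apply.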

    Read the article

  • Send email from postfix server to outside email client

    - by Russ
    I have set up an email server and can send/receive email on localhost, and I can receive mail from outside sources, but I cannot send emails to outside sources. I get this error when I try to send to an outside source such as live.com or gmail.com:

      Nov 8 22:15:13 server2 postfix/smtp[7598]: 699D480A64: to=, relay=none, delay=122043, delays=122022/0.01/20/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=live.com type=MX: Host not found, try again)

    Any ideas where I could look to resolve this?
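
    dsn=4.4.3 with "Name service error ... type=MX" means the box cannot resolve MX records at all, so the first things to check are outbound DNS from that host and, failing that, whether mail has to go out through the provider's relay. A short diagnostic sketch; the smarthost name is a placeholder:

      dig MX gmail.com                 # does MX resolution work from this host at all?
      cat /etc/resolv.conf             # which resolvers is the server actually using?
      postconf relayhost               # currently configured relay, if any

      # if the hosting provider blocks direct DNS/port-25 traffic, relay via their smarthost instead
      sudo postconf -e 'relayhost = [smtp.example-isp.com]'
      sudo postfix reload
      postqueue -f                     # re-attempt the deferred messages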

    Read the article

  • VPN iptables Forwarding: Net-to-net

    - by Mike Holler
    I've tried to look elsewhere on this site but I couldn't find anything matching this problem. Right now I have an ipsec tunnel open between our local network and a remote network. Currently, the local box running Openswan ipsec with the tunnel open can ping the remote ipsec box and any of the other computers in the remote network. When logged into on of the remote computers, I can ping any box in our local network. That's what works, this is what doesn't: I can't ping any of the remote computers via a local machine that is not the ipsec box. Here's a diagram of our network: [local ipsec box] ----------\ \ [arbitrary local computer] --[local gateway/router] -- [internet] -- [remote ipsec box] -- [arbitrary remote computer] The local ipsec box and the arbitrary local computer have no direct contact, instead they communicate through the gateway/router. The router has been set up to forward requests from local computers for the remote subnet to the ipsec box. This works. The problem is the ipsec box doesn't forward anything. Whenever an arbitrary local computer pings something on the remote subnet, this is the response: [user@localhost ~]# ping 172.16.53.12 PING 172.16.53.12 (172.16.53.12) 56(84) bytes of data. From 10.31.14.16 icmp_seq=1 Destination Host Prohibited From 10.31.14.16 icmp_seq=2 Destination Host Prohibited From 10.31.14.16 icmp_seq=3 Destination Host Prohibited Here's the traceroute: [root@localhost ~]# traceroute 172.16.53.12 traceroute to 172.16.53.12 (172.16.53.12), 30 hops max, 60 byte packets 1 router.address.net (10.31.14.1) 0.374 ms 0.566 ms 0.651 ms 2 10.31.14.16 (10.31.14.16) 2.068 ms 2.081 ms 2.100 ms 3 10.31.14.16 (10.31.14.16) 2.132 ms !X 2.272 ms !X 2.312 ms !X That's the IP for our ipsec box it's reaching, but it's not being forwarded. On the IPSec box I have enabled IP Forwarding in /etc/sysctl.conf net.ipv4.ip_forward = 1 And I have tried to set up IPTables to forward: *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [759:71213] -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 25 -j ACCEPT -A INPUT -p udp -m state --state NEW -m udp --dport 500 -j ACCEPT -A INPUT -p udp -m state --state NEW -m udp --dport 4500 -j ACCEPT -A INPUT -m policy --dir in --pol ipsec -j ACCEPT -A INPUT -p esp -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -s 10.31.14.0/24 -d 172.16.53.0/24 -j ACCEPT -A FORWARD -m policy --dir in --pol ipsec -j ACCEPT -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT Am I missing a rule in IPTables? Is there something I forgot? NOTE: All the machines are running CentOS 6.x Edit: Note 2: eth1 is the only network interface on the local ipsec box.
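
    The "Destination Host Prohibited" replies come from the ipsec box's own REJECT rule, so a useful next step is to see which FORWARD rule the test packets actually match and whether return traffic from the remote subnet is covered at all. A hedged sketch; rule positions and subnets follow the question's layout:

      # live counters show which rule increments while a ping is running
      iptables -L FORWARD -v -n --line-numbers

      # make sure both directions of subnet-to-subnet traffic are accepted ahead of the REJECT
      iptables -I FORWARD 1 -s 10.31.14.0/24 -d 172.16.53.0/24 -j ACCEPT
      iptables -I FORWARD 2 -s 172.16.53.0/24 -d 10.31.14.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT

    If the counters show the policy-match rule never firing for forwarded packets, the traffic is being treated as plain routed traffic rather than matching the ipsec policy, and the explicit subnet rules above are what has to carry it.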

    Read the article

  • iSCSI timeouts under high load

    - by Antonio
    I have two servers connected via Gigabit Ethernet. One is iSCSI target, the second one is initiator. When I run mkfs.ext4 at initiator, after a while disk IO slows down critically. In the target host I can see the following in syslog: Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668c 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668c 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668d 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668d 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668e 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668e 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 1196696 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 1196696 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119669e 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119669e 6 Sep 14 09:40:04 sh11 tgtd: abort_task_set(1139) found 119669f 0 Sep 14 09:40:04 sh11 tgtd: abort_cmd(1115) found 119669f 6 And load average grows to 12 or even more: # uptime 12:37:00 up 23 days, 13:25, 1 user, load average: 12.00, 7.00, 4.00 CentOS 6.3 tgtd 1.0.24 Intel Pentium 4 2.4GHz 1Gb RAM 2Tb WD Cavlar Green SATA 2.0 #lspci 00:00.0 Host bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE DRAM Controller/Host-Hub Interface (rev 02) 00:01.0 PCI bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE Host-to-AGP Bridge (rev 02) 00:1d.0 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel Corporation 82801DB/DBM (ICH4/ICH4-M) USB2 EHCI Controller (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 82) 00:1f.0 ISA bridge: Intel Corporation 82801DB/DBL (ICH4/ICH4-L) LPC Interface Bridge (rev 02) 00:1f.1 IDE interface: Intel Corporation 82801DB (ICH4) IDE Controller (rev 02) 00:1f.3 SMBus: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) SMBus Controller (rev 02) 00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 02) 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV200 QW [Radeon 7500] 02:01.0 Ethernet controller: D-Link System Inc DGE-530T Gigabit Ethernet Adapter (rev 11) (rev 11) 02:02.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE/SATA Controller (rev 50) 02:03.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE/SATA Controller (rev 50) 02:04.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02) 02:08.0 Ethernet controller: Intel Corporation 82801DB PRO/100 VE (CNR) Ethernet Controller (rev 82) Is there a way to tune target host to avoid these timeouts?
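
    The abort_task messages mean the initiator is timing out SCSI commands and asking the target to cancel them, which with a single SATA disk behind tgtd usually points to the target simply being saturated by the mkfs write burst. One hedged mitigation is to give commands more headroom and reduce how many the initiator keeps in flight; these open-iscsi settings live in /etc/iscsi/iscsid.conf on the initiator, and the values are illustrative starting points rather than tuned recommendations:

      node.session.timeo.replacement_timeout = 180
      node.conn[0].timeo.noop_out_interval = 10
      node.conn[0].timeo.noop_out_timeout = 30
      node.session.cmds_max = 64
      node.session.queue_depth = 16

    A logout/login of the session (or a restart of iscsid) is needed for the new values to take effect; with a shallower queue the mkfs run should stop tripping the abort path even if it finishes a little more slowly.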

    Read the article

  • PHP Error / Mk-livestatus in Nagvis

    - by tod
    I have Nagios and Nagvis installed via Debian packages, but when I run Nagvis and try to get into the "General Configuration" menu I get this error Error: (0) Array to string conversion (/usr/share/nagvis/share/server/core/classes/WuiViewEditMainCfg.php:126) #0 /usr/share/nagvis/share/server/core/classes/WuiViewEditMainCfg.php(126): nagvisExceptionErrorHandler(8, 'Array to string...', '/usr/share/nagv...', 126, Array) #1 /usr/share/nagvis/share/server/core/classes/WuiViewEditMainCfg.php(44): WuiViewEditMainCfg->getFields() #2 /usr/share/nagvis/share/server/core/classes/CoreModMainCfg.php(56): WuiViewEditMainCfg->parse() #3 /usr/share/nagvis/share/server/core/functions/index.php(120): CoreModMainCfg->handleAction() #4 /usr/share/nagvis/share/server/core/ajax_handler.php(63): require('/usr/share/nagv...') #5 {main} I'm also having an issue with backends in Nagvis. check-mk-livestatus is installed, but I get this error when hovering over items: Problem (backend: live_1): Unable to connect to the /var/lib/nagios3/rw/live in backend live_1: Connection refused Or when trying to add things: Unable to fetch data from backend - falling back to input field. /var/lib/nagios3/rw/ exists, but there is no "live" file. I'm really not sure what is going on, especially since these were all Debian packages... Here is the most relevant part of the nagvis.ini.php: ; ---------------------------- ; Backend definitions ; ---------------------------- ; Example definition of a livestatus backend. ; In this case the backend_id is live_1 ; The path /usr/local/nagios/var/rw has to exist [backend_live_1] backendtype="mklivestatus" ; The status host can be used to prevent annoying timeouts when a backend is not ; reachable. This is only useful in multi backend setups. ; ; It works as follows: The assumption is that there is a "local" backend which ; monitors the host of the "remote" backend. When the remote backend host is ; reported as UP the backend is queried as normal. ; When the remote backend host is reported as "DOWN" or "UNREACHABLE" NagVis won't ; try to connect to the backend anymore until the backend host gets available again. ; ; The statushost needs to be given in the following format: ; "<backend_id>:<hostname>" -> e.g. "live_2:nagios" ;statushost="" socket="unix:/var/lib/nagios3/rw/live" There is nothing relating to 'backends' or 'mklivestatus' in /var/log/nagios3/nagios.log Any help would be much appreciated
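
    The missing /var/lib/nagios3/rw/live socket is created by the Livestatus broker module when Nagios loads it, so if the file isn't there the module almost certainly isn't configured in nagios.cfg. A hedged sketch for the Debian packages; the module path is the usual one for check-mk-livestatus but is worth verifying with dpkg -L check-mk-livestatus:

      # /etc/nagios3/nagios.cfg
      event_broker_options=-1
      broker_module=/usr/lib/check_mk/livestatus.o /var/lib/nagios3/rw/live

      # then restart and confirm the socket appears and answers queries
      sudo service nagios3 restart
      ls -l /var/lib/nagios3/rw/live
      echo "GET status" | unixcat /var/lib/nagios3/rw/live

    Once Livestatus answers, the live_1 backend errors in NagVis should clear; the PHP "Array to string conversion" error in WuiViewEditMainCfg is a separate NagVis/PHP compatibility problem and is unrelated to the backend socket.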

    Read the article

  • Setting up a transparent SSL proxy

    - by badunk
    I've got a linux box set up with 2 network cards to inspect traffic going through port 80. One card is used to go out to the internet, the other one is hooked up to a networking switch. The point is to be able to inspect all HTTP and HTTPS traffic on devices hooked up to that switch for debugging purposes. I've written the following rules for iptables: nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337 -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337 -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE On 192.168.2.1:1337, I've got a transparent http proxy using Charles (http://www.charlesproxy.com/) for recording. Everything's fine for port 80, but when I add similar rules for port 443 (SSL) pointing to port 1337, I get an error about invalid message through Charles. I've used SSL proxying on the same computer before with Charles (http://www.charlesproxy.com/documentation/proxying/ssl-proxying/), but have been unsuccessful with doing it transparently for some reason. Some resources I've googled say its not possible - I'm willing to accept that as an answer if someone can explain why. As a note, I have full access to the described set up including all the clients hooked up to the subnet - so I can accept self-signed certs by Charles. The solution doesn't have to be Charles-specific since in theory, any transparent proxy will do. Thanks! Edit: After playing with it a little, I was able to get it working for a specific host. When I modify my iptables to the following (and open 1338 in charles for reverse proxy): nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337 -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337 -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.2.1:1338 -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 1338 -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE I am able to get a response, but with no destination host. In the reverse proxy, if I just specify that everything from 1338 goes to a specific host that I wanted to hit, it performs the hand shake properly and I can turn on SSL proxying to inspect the communication. The setup is less than ideal because I don't want to assume everything from 1338 goes to that host - any idea why the destination host is being stripped? Thanks again
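
    When a connection is REDIRECTed, the original destination is no longer in the packet; a transparent proxy has to recover it from the kernel via the SO_ORIGINAL_DST socket option (or, for TLS, read the SNI from the ClientHello). If the proxy listening on 1338 never asks for it, every connection looks destination-less, which matches the behaviour described. For debugging, the pre-NAT destination can still be read straight from the connection-tracking table; the path varies by kernel, with older ones using /proc/net/ip_conntrack:

      # show tracked HTTPS connections with their original (pre-REDIRECT) destination
      grep 'dport=443' /proc/net/nf_conntrack

    So the options are roughly: a proxy that supports true transparent interception (SO_ORIGINAL_DST/TPROXY), or per-destination DNAT rules like the working single-host setup, one per host you care about.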

    Read the article

  • How to tell if a freebsd jail is up to date?

    - by Martin Torhage
    I've set up a "Service Jail" in FreeBSD 8.0 according to the FreeBSD Handbook (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html). After upgrading the host to the latest patch level and then performed a jail-upgrade, freebsd-fetch still reports that there are files in need of an update in the jail. Is this expected? Then how do I know if a jail is up to date? This is what I've done in more detail: After the initial setup of the jail freebsd-update fetch reported that there were no updates available neither in the host system nor in the jail. This was expected. A while later freebsd-update fetch reported that the following files where in need of an update both in the host and in the jail. /usr/lib/libssl.a /usr/lib/libssl_p.a /usr/lib/libzpool.a /usr/lib32/libssl.a /usr/lib32/libssl_p.a /usr/lib32/libzpool.a I updated the host and followed the upgrade guide for the jail (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html#JAILS-SERVICE-JAILS-UPGRADING). freebsd-update fetch now reports that there are no updates available in the host but the following is the output from freebsd-update fetch in the jail: [root@bb /]# freebsd-update fetch Looking up update.FreeBSD.org mirrors... 3 mirrors found. Fetching metadata signature for 8.0-RELEASE from update5.FreeBSD.org... done. Fetching metadata index... done. Inspecting system... done. Preparing to download files... done. The following files are affected by updates, but no changes have been downloaded because the files have been modified locally: /var/db/mergemaster.mtree The following files will be updated as part of updating to 8.0-RELEASE-p2: /usr/lib/libssl.a /usr/lib/libssl_p.a /usr/lib/libzpool.a /usr/lib32/libssl.a /usr/lib32/libssl_p.a /usr/lib32/libzpool.a Shouldn't freebsd-update know that the jail is up to date or have I failed upgrading it? How am I supposed to know if a jail is up to date if freebsd-update can't tell? I'm sure I ran make cleandir twice before make buildworld. TIA

    Read the article

  • Cygwin and sshd. Accepts authentication, but won't connect

    - by timramich
    Everything I find relating to this is the "ssh-exchange-identification:" error. This doesn't happen for me. I get two lines: Connection to localhost closed by remote host. Connection to localhost closed. ssh -v localhost spits out: OpenSSH_5.8p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to localhost [::1] port 22. debug1: Connection established. debug1: identity file /home/tim/.ssh/id_rsa type -1 debug1: identity file /home/tim/.ssh/id_rsa-cert type -1 debug1: identity file /home/tim/.ssh/id_dsa type -1 debug1: identity file /home/tim/.ssh/id_dsa-cert type -1 debug1: identity file /home/tim/.ssh/id_ecdsa type -1 debug1: identity file /home/tim/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8 debug1: match: OpenSSH_5.8 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA 64:e3:27:90:ef:48:93:21:38:ea:9b:0e:0b:07:b0:2a debug1: Host 'localhost' is known and matches the ECDSA host key. debug1: Found key in /home/tim/.ssh/known_hosts:1 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: publickey debug1: Trying private key: /home/tim/.ssh/id_rsa debug1: Trying private key: /home/tim/.ssh/id_dsa debug1: Trying private key: /home/tim/.ssh/id_ecdsa debug1: Next authentication method: keyboard-interactive debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: password tim@localhost's password: debug1: Authentication succeeded (password). Authenticated to localhost ([::1]:22). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. debug1: channel 0: free: client-session, nchannels 1 Connection to localhost closed by remote host. Connection to localhost closed. Transferred: sent 2008, received 1376 bytes, in 0.0 seconds Bytes per second: sent 64774.0, received 44387.0 debug1: Exit status -1 I'm really at wit's end here because I couldn't get Windows' remote shell to even work. I'm so sick of using VNC just to get to a shell. Plus Windows' shell sucks because there is nothing like screen. Thanks
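
    Authentication succeeds and then the session closes immediately, which with Cygwin sshd usually means the post-auth step (creating the user token, chdir to the home directory, or launching the shell) is failing rather than anything network-related. Running a second sshd in the foreground shows exactly where it dies; a rough sketch, with port 2222 chosen arbitrarily:

      # in an elevated Cygwin shell on the server side
      /usr/sbin/sshd -d -d -d -p 2222

      # from another terminal
      ssh -v -p 2222 tim@localhost

      # also sanity-check the account's home directory and shell as Cygwin sees them
      grep '^tim:' /etc/passwd
      ls -ld /home/tim

    The server-side -d output typically names the failing step (a missing /home/tim, an invalid shell, or sshd being unable to switch to the user), which the client-side -v log never shows.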

    Read the article

  • Proxying webmin with nginx

    - by TheLQ
    I am attempting to proxy webmin behind nginx for various reasons that are outside the scope of this question. However I've been trying for a while now and can't seem to figure it out and think I'm to the point where I've exhausted all the permutations of the config file I can think of. What I have now: relevant nginx config (commented out options removed, I tried many) # Proxy for webmin location /admin/quackwall-webmin { proxy_pass http://127.0.0.1:10000; # Also tried ending with /admin/quackwall-webmin proxy_set_header Host $host; } /etc/webmin/config - Relevant parts webprefix=/admin/quackwall-webmin webprefixnoredir=1 referer=(nginx domain name) Webmin itself is on the standard ports, listening on all addresses temporarily for debugging. SSL has been disabled for right now. So I make a standard request for the login page. However all the CSS and images are broken, with the standard login page returned for all of the resources. In the webmin miniserv logs I see 127.0.0.1 - - [29/Oct/2012:12:29:00 -0400] "GET /admin/quackwall-webmin/session_login.cgi HTTP/1.0" 401 2453 127.0.0.1 - - [29/Oct/2012:12:29:01 -0400] "GET /admin/quackwall-webmin/unauthenticated/style.css HTTP/1.0" 401 2453 127.0.0.1 - - [29/Oct/2012:12:29:01 -0400] "GET /admin/quackwall-webmin/unauthenticated/sorttable.js HTTP/1.0" 401 2453 127.0.0.1 - - [29/Oct/2012:12:29:01 -0400] "GET /admin/quackwall-webmin/unauthenticated/toggleview.js HTTP/1.0" 401 2453 So all the URL's are returning 401s. Interestingly ngrep seems to show that the requests suceeded on the backend communication between nginx and webmin T 127.0.0.1:58908 -> 127.0.0.1:10000 [AP] POST /admin/quackwall-webmin/session_login.cgi HTTP/1.0..Host: (host)..Connection: close..User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW 64; rv:16.0) Gecko/20100101 Firefox/16.0..Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8..Accept-Language: en-US,en;q=0.5. .Accept-Encoding: gzip, deflate..Referer: http://(host)/admin/quackwall-webmin/session_login.cgi..Cookie: testing=1..Cache-Control: ma x-age=0..Content-Type: application/x-www-form-urlencoded..Content-Length: 41....page=%2F&user=(user)&pass=(pass) T 127.0.0.1:10000 -> 127.0.0.1:58908 [AP] HTTP/1.0 200 Document follows.. Various other permutations of these config options and others show similar results, with the URL sent to webmin by nginx either being /admin/quackwall-webmin/session_login.cgi, /admin/quackwall-webmin//session_login.cgi, and just /session_login.cgi. All give 201 Unauthenticated responses. All requests, even those that somewhat succeed (as in I can actually load the resources of the page) Is changing the webprefix in webmin even supported? What am I doing wrong? What else can I try?
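
    One variant of the nginx block worth trying keeps the trailing slashes consistent, so the URI webmin receives matches its webprefix exactly, and passes the forwarding headers along with Host. This is a sketch, assuming webmin is configured with webprefix=/admin/quackwall-webmin and webprefixnoredir=1 as above:

      location /admin/quackwall-webmin/ {
          proxy_pass http://127.0.0.1:10000/admin/quackwall-webmin/;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
      }

    The 401s on session_login.cgi and the unauthenticated/ resources are miniserv refusing the request before any page is served, so if this still fails the next place to look is miniserv's own log and the referer/trust settings in /etc/webmin/config and miniserv.conf rather than nginx.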

    Read the article

  • Using Diskpart in a PowerShell script won't allow script to reuse drive letter

    - by Kyle
    I built a script that mounts (attach) a VHD using Diskpart, cleans out some system files and then unmounts (detach) it. It uses a foreach loop and is suppose to clean multiple VHD using the same drive letter. However, after the 1st VHD it fails. I also noticed that when I try to manually attach a VHD with diskpart, diskpart succeeds, the Disk Manager shows the disk with the correct drive letter, but within the same PoSH instance I can not connect (set-location) to that drive. If I do a manual diskpart when I 1st open PoSH I can attach and detach all I want and I get the drive letter every time. Is there something I need to do to reset diskpart in the script? Here's a snippet of the script I'm using. function Mount-VHD { [CmdletBinding()] param ( [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)] [string]$Path, [Parameter(Position=1,Mandatory=$false,ValueFromPipeline=$false)] [string]$DL, [string]$DiskpartScript = "$env:SystemDrive\DiskpartScript.txt", [switch]$Rescan ) begin { function InvokeDiskpart { Diskpart.exe /s $DiskpartScript } ## Validate Operating System Version ## if (Get-WmiObject win32_OperatingSystem -Filter "Version < '6.1'") {throw "The script operation requires at least Windows 7 or Windows Server 2008 R2."} } process{ ## Diskpart Script Content ## Here-String statement purposefully not indented ## @" $(if ($Rescan) {'Rescan'}) Select VDisk File="$Path" `nAttach VDisk Exit "@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force InvokeDiskpart Start-Sleep -Seconds 3 @" Select VDisk File="$Path"`nSelect partition 1 `nAssign Letter="$DL" Exit "@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force InvokeDiskpart } end { Remove-Item -Path $DiskpartScript -Force ; "" Write-Host "The VHD ""$Path"" has been successfully mounted." ; "" } } function Dismount-VHD { [CmdletBinding()] param ( [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)] [string]$Path, [switch]$Remove, [switch]$NoConfirm, [string]$DiskpartScript = "$env:SystemDrive\DiskpartScript.txt", [switch]$Rescan ) begin { function InvokeDiskpart { Diskpart.exe /s $DiskpartScript } function RemoveVHD { switch ($NoConfirm) { $false { ## Prompt for confirmation to delete the VHD file ## "" ; Write-Warning "Are you sure you want to delete the file ""$Path""?" $Prompt = Read-Host "Type ""YES"" to continue or anything else to break" if ($Prompt -ceq 'YES') { Remove-Item -Path $Path -Force "" ; Write-Host "VHD ""$Path"" deleted!" ; "" } else { "" ; Write-Host "Script terminated without deleting the VHD file." ; "" } } $true { ## Confirmation prompt suppressed ## Remove-Item -Path $Path -Force "" ; Write-Host "VHD ""$Path"" deleted!" ; "" } } } ## Validate Operating System Version ## if (Get-WmiObject win32_OperatingSystem -Filter "Version < '6.1'") {throw "The script operation requires at least Windows 7 or Windows Server 2008 R2."} } process{ ## DiskPart Script Content ## Here-String statement purposefully not indented ## @" $(if ($Rescan) {'Rescan'}) Select VDisk File="$Path"`nDetach VDisk Exit "@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force InvokeDiskpart Start-Sleep -Seconds 10 } end { if ($Remove) {RemoveVHD} Remove-Item -Path $DiskpartScript -Force ; "" } }

    Read the article

  • Configuring VLAN's on two HP procurve switches

    - by pan
    Trying to route a new ISP (Microwave link) from one of my out buildings to my computer room and hence my firewall. Old ISP came direct into firewall. In the outbuilding the Microwave modem connects with cat5 to HP Procurve 2524 switch. Because this ISP is coming through my internal network, I plan on using a new vlan called "airspeed" only for this ISP traffic. Up until now I've just been using the Default_vlan on both HP switches (4108 + 2524). So far I've been unable to ping from my laptop to the ISP modem both of which are on the new vlan 2 ("Airspeed"). No traffic needs to cross from vlan 2 to vlan 1 so I've left the ports as untagged. I've used the subnet provide from my ISP as the new vlan 2 subnet. Can anybody see what I'm doing wrong here? I've added the configuration of both switch below. Rough diagram: Microwave modem (Gateway IP 77.75.00.49) | HP 2524 switch (port 24) | HP 2524 switch fibre link | HP 4108GL switch fibre link | HP 4108GL switch (port D1) | Laptop configured with IP 77.75.00.50 (for testing but will be connected to firewall) And my 4108GL config: ; J4865A Configuration Editor; Created on release #G.07.21 hostname "HP ProCurve Switch 4108GL" cdp run module 1 type J4864A module 2 type J4862B module 3 type J4862B module 4 type J4862B ip default-gateway 128.1.146.50 snmp-server community "public" Unrestricted snmp-server host 128.1.146.51 "public" Not-INFO snmp-server host 128.1.146.38 "public" vlan 1 name "DEFAULT_VLAN" untagged A1-A3,B1-B24,C1-C24,D2-D24 ip address 128.1.146.203 255.255.0.0 no untagged D1 exit vlan 2 name "Airspeed" untagged D1 ip address 77.75.00.51 255.255.255.248 exit Finally my 2524 config: ; J4813A Configuration Editor; Created on release #F.04.08 hostname "HP ProCurve Switch 2524" cdp run ip default-gateway 0.0.0.0 snmp-server community "public" Unrestricted snmp-server host 128.1.146.51 "public" Not-INFO snmp-server host 128.1.146.51 "public" snmp-server host 128.1.146.38 "public" vlan 1 name "DEFAULT_VLAN" untagged 1-23,25-26 no untagged 24 ip address 128.1.146.204 255.255.0.0 exit vlan 2 name "Airspeed" untagged 24 ip address 77.75.00.51 255.255.255.248 exit no aaa port-access authenticator active

    Read the article

  • Graphics driver for ubuntu on dell latitude XT

    - by marc.riera
    Hi, we have a laptop (Dell Latitude XT) at our company, and we would like to install Ubuntu on it. Windows 7 works fine out of the box, so the hardware is fine. Since this laptop has a touchscreen, we installed Ubuntu 10.10 Netbook Edition 32-bit. However, we have not managed to enable the touchscreen or the VGA graphics driver. This is the output from lspci, in case it helps: 00:00.0 Host bridge: ATI Technologies Inc Radeon Xpress 7930 Host Bridge 00:01.0 PCI bridge: ATI Technologies Inc RS7932 PCI Bridge 00:04.0 PCI bridge: ATI Technologies Inc Device 7934 00:06.0 PCI bridge: ATI Technologies Inc RS7936 PCI Bridge 00:07.0 PCI bridge: ATI Technologies Inc Device 7937 00:13.0 USB Controller: ATI Technologies Inc SB600 USB (OHCI0) 00:13.1 USB Controller: ATI Technologies Inc SB600 USB (OHCI1) 00:13.2 USB Controller: ATI Technologies Inc SB600 USB (OHCI2) 00:13.3 USB Controller: ATI Technologies Inc SB600 USB (OHCI3) 00:13.4 USB Controller: ATI Technologies Inc SB600 USB (OHCI4) 00:13.5 USB Controller: ATI Technologies Inc SB600 USB Controller (EHCI) 00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 14) 00:14.1 IDE interface: ATI Technologies Inc SB600 IDE 00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA) 00:14.3 ISA bridge: ATI Technologies Inc SB600 PCI to LPC Bridge 00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge 01:05.0 VGA compatible controller: ATI Technologies Inc Radeon Xpress 1250 03:01.0 CardBus bridge: Texas Instruments PCIxx12 Cardbus Controller 03:01.1 FireWire (IEEE 1394): Texas Instruments PCIxx12 OHCI Compliant IEEE 1394 Host Controller 03:01.3 SD Host controller: Texas Instruments PCIxx12 SDA Standard Compliant SD Host Controller 09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5756ME Gigabit Ethernet PCI Express 0b:00.0 Network controller: Broadcom Corporation BCM4321 802.11a/b/g/n (rev 03) I've tried to install the ATI 9.3 drivers, which I downloaded and installed, unpacked and installed, built and installed, but nothing worked. It looks like that latest version is only supported on Jaunty 9.04, so the drivers are quite old. What else can I do? Thanks, Marc. Information added: lsusb and lspci -n |grep 01:05.0 sysop@wl083517:~$ lspci -n |grep 01:05.0 01:05.0 0300: 1002:7942 sysop@wl083517:~$ lsusb Bus 006 Device 002: ID 413c:8138 Dell Computer Corp. Wireless 5520 Voda I Mobile Broadband (3G HSDPA) Minicard EAP-SIM Port Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 002: ID 413c:8140 Dell Computer Corp. Wireless 360 Bluetooth Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 002: ID 0483:2016 SGS Thomson Microelectronics Fingerprint Reader Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 002: ID 1b96:0001 N-Trig Duosense Transparent Electromagnetic Digitizer Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 002: ID 03f0:1807 Hewlett-Packard Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub sysop@wl083517:~$
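    The lspci -n output identifies the GPU as PCI ID 1002:7942 (Radeon Xpress 1250), an older chip that the proprietary fglrx 9.3 driver only targets on earlier releases; on Ubuntu 10.10 the open-source radeon driver is the usual route. A sketch of how one might check what is actually in use, assuming standard Ubuntu package names:

        # Show which kernel driver is bound to the VGA device:
        lspci -nnk -s 01:05.0

        # Make sure the open-source driver stack is installed (package names
        # are the usual Ubuntu ones; adjust if they differ on this release):
        sudo apt-get install xserver-xorg-video-ati libgl1-mesa-dri

        # After restarting X, check which driver the server loaded:
        grep -Ei 'radeon|fglrx' /var/log/Xorg.0.log | head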

    Read the article

  • freebsd-update reports an upgraded jail as not upgraded

    - by Martin Torhage
    I've set up a "Service Jail" in FreeBSD 8.0 according to the FreeBSD Handbook (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html). After upgrading the host to the latest patch level and then performing a jail upgrade, freebsd-update fetch still reports that there are files in need of an update in the jail. Is this expected? If so, how do I know whether a jail is up to date? This is what I've done in more detail: after the initial setup of the jail, freebsd-update fetch reported that there were no updates available in either the host system or the jail. This was expected. A while later, freebsd-update fetch reported that the following files were in need of an update, both on the host and in the jail: /usr/lib/libssl.a /usr/lib/libssl_p.a /usr/lib/libzpool.a /usr/lib32/libssl.a /usr/lib32/libssl_p.a /usr/lib32/libzpool.a I updated the host and followed the upgrade guide for the jail (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html#JAILS-SERVICE-JAILS-UPGRADING). freebsd-update fetch now reports that there are no updates available on the host, but the following is the output from freebsd-update fetch in the jail: [root@bb /]# freebsd-update fetch Looking up update.FreeBSD.org mirrors... 3 mirrors found. Fetching metadata signature for 8.0-RELEASE from update5.FreeBSD.org... done. Fetching metadata index... done. Inspecting system... done. Preparing to download files... done. The following files are affected by updates, but no changes have been downloaded because the files have been modified locally: /var/db/mergemaster.mtree The following files will be updated as part of updating to 8.0-RELEASE-p2: /usr/lib/libssl.a /usr/lib/libssl_p.a /usr/lib/libzpool.a /usr/lib32/libssl.a /usr/lib32/libssl_p.a /usr/lib32/libzpool.a Shouldn't freebsd-update know that the jail is up to date, or have I failed to upgrade it? How am I supposed to know whether a jail is up to date if freebsd-update can't tell? I'm sure I ran make cleandir twice before make buildworld. TIA
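    freebsd-update only inspects the tree named by its -b option (the default is /), so a jail's root has to be passed explicitly for the check and the install to apply to that jail rather than the host. A sketch of how it can be pointed at a jail's tree from the host, assuming the jail has its own writable userland under /usr/jails/bb (an illustrative path); with the handbook's read-only nullfs layout the shared base would instead need updating in its source location:

        # Fetch and install patches for the jail's userland by pointing
        # freebsd-update at the jail's root directory:
        freebsd-update -b /usr/jails/bb fetch
        freebsd-update -b /usr/jails/bb install

        # Re-running the check against the same basedir should then report
        # no pending updates if the jail tree is current:
        freebsd-update -b /usr/jails/bb fetch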

    Read the article

  • Working with Visual Studio Web Development Server and IE6 in XP Mode on Windows 7

    - by Igor Milovanovic
    (Brian Reiter from  thoughtful computing has described this setup in this StackOverflow thread. The credit for the idea is entirely his, I have just extended it with some step by step descriptions and added some links and screenhots.)   If you are forced  to still support Internet Explorer 6, you can setup following combination on your machine to make the development for it less painful. A common problem if you are developing on Windows 7 is that you can’t install IE6 on your machine. (Not that you want that anyway). So you will probably end up working locally with IE8 and FF, and test your IE6 compatibility on a separate machine. This can get quite annoying, because you will have to maintain two different development environments, not have all the tools available, etc.   You can help yourself by installing IE6 in a Windows 7 XP Mode, which is basically just an Windows XP running in a virtual machine.   [1] Windows XP Mode installation   After you have installed and configured your XP mode (remember the security settings like Windows Update and antivirus software), you can add the shortcut to the IE6 in the virtual machine to the “all users” start menu. This shortcut will be replicated to your windows 7 XP mode start menu, and you will be able to seamlessly start your IE 6 as a normal window on your Windows 7 desktop.   [2] Configure IE6 for the Windows 7 installation   If you configure your XP – Mode to use (Shared Networking)  NAT, you can now use IE6 to browse the sites in the internet. (add proxy settings to IE6 if necessary)                       The problem now is that you can’t connect to the webdev server which is running on your local machine. This is because web development server is crippled to allow only local connections for security reasons.   In order to trick webdev in believing that the requests are coming from local machine itself you can use a light weight proxy like privoxy on your host (windows 7) machine and configure the IE6 running in the virtual host.   The first step is to make the host machine (running windows 7) reachable from the virtual machine (running XP). In order to do that, you can install the loopback adapter, and configure it to use an IP which is routable from the virtual machine. In example screenshot (192.168.1.66).   [3] How to install loopback adapter in Windows 7   After installation you can assign a static IP which is routable from the virtual machine (in example 192.168.1.66)                     The next step is to configure privoxy to listen on that IP address (using some not used port, in example, the default port 8118)   Change following line in config.txt:   # #      Suppose you are running Privoxy on an IPv6-capable machine and #      you want it to listen on the IPv6 address of the loopback device: # #        listen-address [::1]:8118 # # listen-address  192.168.1.66:8118   The last step is to configure the IE6 to use Privoxy which is running on your Windows 7 host machine as proxy for all addresses (including localhost)                             And now you can use your Windows7 XP Mode IE6 to connect to your Visual Studio’s webdev web server.                         [4] http://stackoverflow.com/questions/683151/connect-remotely-to-webdev-webserver-exe

    Read the article

  • How to prevent ‘Select *’ : The elegant way

    - by Dave Ballantyne
    I’ve been doing a lot of work with the “Microsoft SQL Server 2012 Transact-SQL Language Service” recently, see my post here and article here for more details on its use and some uses. An obvious use is to interrogate sql scripts to enforce our coding standards.  In the SQL world a no-brainer is SELECT *,  all apologies must now be given to Jorge Segarra and his post “How To Prevent SELECT * The Evil Way” as this is a blatant rip-off IMO, the only true way to check for this particular evilness is to parse the SQL as if we were SQL Server itself.  The parser mentioned above is ,pretty much, the best tool for doing this.  So without further ado lets have a look at a powershell script that does exactly that : cls #Load the assembly [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.SqlParser") | Out-Null $ParseOptions = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.ParseOptions $ParseOptions.BatchSeparator = 'GO' #Create the object $Parser = new-object Microsoft.SqlServer.Management.SqlParser.Parser.Scanner($ParseOptions) $SqlArr = Get-Content "C:\scripts\myscript.sql" $Sql = "" foreach($Line in $SqlArr){ $Sql+=$Line $Sql+="`r`n" } $Parser.SetSource($Sql,0) $Token=[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SET $IsEndOfBatch = $false $IsMatched = $false $IsExecAutoParamHelp = $false $Batch = "" $BatchStart =0 $Start=0 $End=0 $State=0 $SelectColumns=@(); $InSelect = $false $InWith = $false; while(($Token = $Parser.GetNext([ref]$State ,[ref]$Start, [ref]$End, [ref]$IsMatched, [ref]$IsExecAutoParamHelp ))-ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::EOF) { $Str = $Sql.Substring($Start,($End-$Start)+1) try{ ($TokenPrs =[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]$Token) | Out-Null #Write-Host $TokenPrs if($TokenPrs -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SELECT){ $InSelect =$true $SelectColumns+="" } if($TokenPrs -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_FROM){ $InSelect =$false #Write-Host $SelectColumns -BackgroundColor Red foreach($Col in $SelectColumns){ if($Col.EndsWith("*")){ Write-Host "select * is not allowed" exit } } $SelectColumns =@() } }catch{ #$Error $TokenPrs = $null } if($InSelect -and $TokenPrs -ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SELECT){ if($Str -eq ","){ $SelectColumns+="" }else{ $SelectColumns[$SelectColumns.Length-1]+=$Str } } } OK, im not going to pretend that its the prettiest of powershell scripts,  but if our parsed script file “C:\Scripts\MyScript.SQL” contains SELECT * then “select * is not allowed” will be written to the host.  So, where can this go wrong ?  It cant ,or at least shouldn’t , go wrong, but it is lacking in functionality.  IMO, Select * should be allowed in CTEs, views and Inline table valued functions at least and as it stands they will be reported upon. Anyway, it is a start and is more reliable that other methods.

    Read the article

  • Is VBoxManage guestcontrol passing parameters incorrectly?

    - by Dan Jones
    I had an idea of using my Windows VM (on an Ubuntu host) to open itms:// links (for iTunes) from the host. So, I'm using vboxmanage guestcontrol to make this happen. I have a script (win_vm_launcher.sh) that takes a link as the argument and passes it to the guest like this: vboxmanage guestcontrol "$VM" exec --image 'C:\Windows\System32\cmd.exe' --username "$USER" --password "$PASSWORD" -- /c start "$@" This works if I copy a link from my browser, and change http to itms. E.g., for https://itunes.apple.com/us/album/new-york-city/id3202598, I can do win_vm_launcher.sh itmss://itunes.apple.com/us/album/new-york-city/id3202598 and it works fine. The album opens up in iTunes on my VM. However, when I click a "View in iTunes" link from the iTunes site, it adds an extra parameter to the URI (specifically, the referrer), so it looks something like itmss://itunes.apple.com/us/album/new-york-city/id3202598?ign-msr=https%3A%2F%2Fitunes.apple.com%2Fus%2Falbum%2Fit-came-upon-midnight-clear%2Fid578946739 Unfortunately, if I try to run win_vm_launcher.sh itmss://itunes.apple.com/us/album/new-york-city/id3202598?ign-msr=https%3A%2F%2Fitunes.apple.com%2Fus%2Falbum%2Fit-came-upon-midnight-clear%2Fid578946739 it instead opens up a regular Command Prompt window with the title "itmss://itunes.apple.com/us/album/new-york-city/id3202598?ign-msr=https%3A%2F%2Fitunes.apple.com%2Fus%2Falbum%2Fit-came-upon-midnight-clear%2Fid578946739". I don't even know how to set the command prompt window title, so I'm not sure how that's happening. If I run the command in the guest, it works fine, opening the album in iTunes: cmd /c start itmss://itunes.apple.com/us/album/new-york-city/id3202598?ign-msr=https%3A%2F%2Fitunes.apple.com%2Fus%2Falbum%2Fit-came-upon-midnight-clear%2Fid578946739 I found a VirtualBox bug that seems somewhat related, but not exactly. It probably doesn't matter, but my host is Ubuntu 12.04, and my guest is Windows 7. So, any idea if vboxmanage is incorrectly passing the arguments, and if so, is there a way around it? If I can't figure out the right way to do it, I'll end up having to process each argument and strip out any parameters on any URIs. P.S. I tried creating a batch script (out.bat) like this: echo %1 > %TEMP%/testing.txt and then running it from the host like this: vboxmanage guestcontrol "$VM" exec --image 'C:\Windows\System32\cmd.exe' --username "$USER" --password "$PASSWORD" -- /c "C:\path\to\out.bat" "itmss://itunes.apple.com/us/album/new-york-city/id3202598?ign-msr=https%3A%2F%2Fitunes.apple.com%2Fus%2Falbum%2Fit-came-upon-midnight-clear%2Fid578946739" It ran as expected, and when I opened %TEMP%/testing.txt, it contained: "itmss://itunes.apple.com/us/album/new-york-city/id3202598?ign-msr=https%3A%2F%2Fitunes.apple.com%2Fus%2Falbum%2Fit-came-upon-midnight-clear%2Fid578946739" including the quotes. So, it sort of passed the parameter correctly (not sure why it still had quotes), so maybe the problem is with cmd.exe, or even the start command. I'm stymied.
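    Since cmd.exe's start command treats its first quoted argument as a window title, one thing to try from the host side is to hand start an explicit empty title and pass the URI as a single argument instead of expanding "$@". A sketch under those assumptions; whether the extra quotes survive guestcontrol's argument passing is exactly the open question here, and the VM name and credentials below are placeholders:

        #!/bin/bash
        # Hypothetical variant of win_vm_launcher.sh.
        VM="Windows7"            # placeholder VM name
        USER="vmuser"            # placeholder guest username
        PASSWORD="vmpassword"    # placeholder guest password
        URI="$1"

        # Give "start" an empty window title so a quoted URI is not consumed
        # as the title, and pass the URI as one argument:
        vboxmanage guestcontrol "$VM" exec \
            --image 'C:\Windows\System32\cmd.exe' \
            --username "$USER" --password "$PASSWORD" \
            -- /c start '""' "$URI"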

    Read the article

  • Total newb having SSH tunnel and remote MySQL access problems

    - by kscott
    I don't often work with linux or need to SSH tunnel into remote MySQL databases, so pardon my ignorance. I'm using Windows 7 and am needing to connect to a remote MySQL instance on a Linux server. For months I had been using the HeidiSQL client application successfully. Today two things happened: the DB moved to a new server and I updated HeidiSQL, now I cannot log in to the MySQL server, when attempting I get this message from Heidi: SQL Error (2003) in statement #0: Can't connect to MySQL server on 'localhost' (10061) If I use Putty, I can connect to the server and get MySQL access through command line, including fetching data from the DB. I assume this means my credentials and address are correct, but do not understand why putting those same details into HeidiSQL's SSH tunnel info won't work. I also downloaded the MySQL Workbench and attempted to set up a connection through that client and got this message: Cannot Connect to Database Server Your connection attempt failed for user 'myusername' from your host to server at localhost:3306: Lost connection to MySQL server at 'reading initial communication packet', system error: 0 Please: 1 Check that mysql is running on server localhost 2 Check that mysql is running on port 3306 (note: 3306 is the default, but this can be changed) 3 Check the myusername has rights to connect to localhost from your address (mysql rights define what clients can connect to the server and from which machines) 4 Make sure you are both providing a password if needed and using the correct password for localhost connecting from the host address you're connecting from From Googling around I see that it could be related to the MySQL bind-address, but I am a third party sub-contractor with no access to the MySQL settings of this box and the system admin is assuring me that I'm an idiot and need to figure it out on my end. This is completely possible but I don't know what else to try. Edit 1 - The client settings I am using In Heidi and MySQL Workbench I am using the following: SSH host + port: theHostnameOfTheRemoteServer.com:22 {this is the same host I can Putty to} SSH Username: mySSHusername {the same user name I use for my Putty connection} SSH Password: mySSHpassword {the same password for the Putty connection} Local port: 3307 {this is on the SSH settings tab and was defaulted to 3307 by Heidi, changing it to 3306 gives me a different error: SQL Error (1045) in statement #0: Access denied for user 'mySQLusername'@'localhost' (using password: YES)"} MySQL host: theHostnameOfTheRemoteServer.com {consensus seems to be I should use 'localhost' here} MySQL User: mySQLusername {which I can connect with once in with Putty} MySQL Password: mySQLpassword {which works once in with Putty} Port: 3306
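    What the HeidiSQL SSH tunnel tab does can be reproduced by hand, which makes it easier to see which hop is failing. A sketch, assuming an OpenSSH client is available on the Windows 7 machine (for example via Cygwin); the hostnames and usernames below are the same placeholders used above:

        # Forward local port 3307 to MySQL's port 3306 on the remote server,
        # which is what the GUI tunnel settings describe:
        ssh -L 3307:127.0.0.1:3306 mySSHusername@theHostnameOfTheRemoteServer.com

        # In a second terminal, point a MySQL client at the local end of the
        # tunnel; using 127.0.0.1 forces TCP instead of a local socket:
        mysql -h 127.0.0.1 -P 3307 -u mySQLusername -p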

    Read the article

  • Built-in network card not working?

    - by Zeeshan
    Hi, I am new to Ubuntu. I have installed Ubuntu 9.04 (Jaunty). After installation I found that the network card is not working, and it does not show up in "System → Preferences → Network Connections". So I got another card from my friend and tried searching the internet for my problem, but I still can't find a solution. Some command output that may help solve the problem is below: root@mzeeshan-desktop:/home/mzeeshan# uname -r 2.6.28-11-generic root@mzeeshan-desktop:/home/mzeeshan# ifconfig -a eth0 Link encap:Ethernet HWaddr 00:02:44:4a:45:12 inet addr:192.168.5.37 Bcast:192.168.5.255 Mask:255.255.255.0 inet6 addr: fe80::202:44ff:fe4a:4512/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3774 errors:0 dropped:0 overruns:0 frame:0 TX packets:3611 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4307045 (4.3 MB) TX bytes:583067 (583.0 KB) Interrupt:22 Base address:0x1000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:4 errors:0 dropped:0 overruns:0 frame:0 TX packets:4 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:240 (240.0 B) TX bytes:240 (240.0 B) pan0 Link encap:Ethernet HWaddr 5e:25:17:a1:18:ac BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) root@mzeeshan-desktop:/home/mzeeshan# lspci 00:00.0 Host bridge: Intel Corporation Device 0069 (rev 12) 00:01.0 PCI bridge: Intel Corporation Auburndale/Havendale PCI Express x16 Root Port (rev 12) 00:19.0 Ethernet controller: Intel Corporation Device 10f0 (rev 05) 00:1a.0 USB Controller: Intel Corporation Ibex Peak USB2 Enhanced Host Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation Ibex Peak PCI Express Root Port 1 (rev 05) 00:1c.4 PCI bridge: Intel Corporation Ibex Peak PCI Express Root Port 5 (rev 05) 00:1c.6 PCI bridge: Intel Corporation Ibex Peak PCI Express Root Port 7 (rev 05) 00:1c.7 PCI bridge: Intel Corporation Ibex Peak PCI Express Root Port 8 (rev 05) 00:1d.0 USB Controller: Intel Corporation Ibex Peak USB2 Enhanced Host Controller (rev 05) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5) 00:1f.0 ISA bridge: Intel Corporation Ibex Peak LPC Interface Controller (rev 05) 00:1f.2 IDE interface: Intel Corporation Ibex Peak 4 port SATA IDE Controller (rev 05) 00:1f.3 SMBus: Intel Corporation Ibex Peak SMBus Controller (rev 05) 00:1f.5 IDE interface: Intel Corporation Ibex Peak 2 port SATA IDE Controller (rev 05) 01:00.0 VGA compatible controller: nVidia Corporation GeForce 8400 GS (rev a1) 06:00.0 Multimedia audio controller: Creative Labs SB Live! EMU10k1 (rev 07) 06:00.1 Input device controller: Creative Labs SB Live! Game Port (rev 07) 06:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) 06:03.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link) root@mzeeshan-desktop:/home/mzeeshan# The motherboard is an Intel DP55WG. I don't know what to do next. Any help will be greatly appreciated. Thanks
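    The line "00:19.0 Ethernet controller: Intel Corporation Device 10f0" suggests the onboard NIC is newer than the PCI ID table and possibly newer than the e1000e driver shipped with Jaunty's 2.6.28 kernel. A few shell checks that would help confirm that, offered as a sketch (nothing here changes the system except the module load attempt):

        # Show vendor:device IDs and the kernel driver (if any) bound to
        # each Ethernet controller; the onboard Intel one is 8086:10f0:
        lspci -nnk | grep -A3 -i ethernet

        # See whether the e1000e module exists and loads, and what the
        # kernel reported when it probed the device:
        modinfo e1000e | head
        sudo modprobe e1000e
        dmesg | grep -iE 'e1000e|eth'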

    Read the article

< Previous Page | 156 157 158 159 160 161 162 163 164 165 166 167  | Next Page >