Search Results

Search found 12924 results on 517 pages for 'module pattern'.


  • Hungry hungry BIOS: why do I have less than 4 GiB of memory?

    - by Rhymoid
    I thought I had 4 GiB of memory, but just to be sure, let's ask the BIOS about that:

        ?: sudo dmidecode --type 20
        # dmidecode 2.12
        SMBIOS 2.6 present.

        Handle 0x000B, DMI type 20, 19 bytes
        Memory Device Mapped Address
                Starting Address: 0x00000000000
                Ending Address: 0x0007FFFFFFF
                Range Size: 2 GB
                Physical Device Handle: 0x000A
                Memory Array Mapped Address Handle: 0x000E
                Partition Row Position: Unknown
                Interleave Position: Unknown
                Interleaved Data Depth: Unknown

        Handle 0x000D, DMI type 20, 19 bytes
        Memory Device Mapped Address
                Starting Address: 0x00080000000
                Ending Address: 0x000FFFFFFFF
                Range Size: 2 GB
                Physical Device Handle: 0x000C
                Memory Array Mapped Address Handle: 0x000E
                Partition Row Position: Unknown
                Interleave Position: Unknown
                Interleaved Data Depth: Unknown

    Alright, 4 GiB it is. But I can't use all of it:

        ?: cat /proc/meminfo | head -n 1
        MemTotal: 3913452 kB

    Somehow, somewhere, I lost 274 MiB. Where did 6% of my memory go? Now I know the address ranges in DMI are incorrect, because the ACPI memory map reports usable ranges well beyond the ending address of the second memory module:

        ?: dmesg | grep -E "BIOS-e820: .* usable"
        [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009e7ff] usable
        [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000dee7bfff] usable
        [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000117ffffff] usable

    I get pretty much the same info from /proc/iomem (except for the 4 KiB hole 0x000-0xFFF), which also shows that the kernel only accounts for less than 8 MiB. I guess 0x00000000-0x7FFFFFFF is indeed mapped to the first memory module, and 0x80000000-0xDFFFFFFF to part of the second memory module (a bunch of ACPI NVS things live between 0xDEE7C000 and 0xDEF30FFF, and the remaining 16-something MiB of that range are just 'reserved'). I guess the highest 0x18000000 bytes of the second memory module are mapped above the 4 GiB mark. But even then, there are two problems: 128 MiB (0x08000000 bytes, living somewhere between 0xE0000000 and 0xFFFFFFFF) are still completely unaccounted for. To note, my graphics card is on PCI Express and (allegedly) has 1 GiB of dedicated memory, so that shouldn't be the culprit. Did the BIOS screw up in moving the memory, leaving it partially shadowed by MMIO? Even with this mediocre explanation, I only 'found' 128 MiB. But /proc/meminfo is reporting a much larger deficit; where's the other 146 MiB? How does Linux count MemTotal?
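    A rough way to cross-check the numbers is to sum the usable e820 ranges and compare them against MemTotal. This is only a sketch and assumes the dmesg format shown above:

        # Sum all "usable" e820 ranges (sizes in KiB) and compare with MemTotal
        dmesg | grep -E "BIOS-e820: .* usable" \
          | sed -E 's/.*mem (0x[0-9a-f]+)-(0x[0-9a-f]+).*/\1 \2/' \
          | while read start end; do echo $(( (end + 1 - start) / 1024 )); done \
          | awk '{s += $1} END {printf "usable e820 total: %d kB\n", s}'
        grep MemTotal /proc/meminfo

    Note that MemTotal is not raw physical RAM: the kernel reports usable RAM minus its own reserved regions (kernel image, early boot allocations, firmware holes), so it will always sit somewhat below the e820 total.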


  • nvidia driver problem after update

    - by baltasar
    I know there are a lot of posts about the nvidia driver, but I am not able to solve this. I updated my configuration yesterday, September 27th, and it seems that a new nvidia driver was involved. The update completed with an error and invited me to send a bug report automatically. I sent it, but it finally said that there was no place to send the bug report to. Today I updated again, and a new kernel was there. Then I rebooted. I found a desktop in 640x480 with a horrible, unreadable font. I ran jockey and tried to go back to another version of the nvidia driver, but it seems that none of the drivers listed there are valid any more. I have the x-swat repository enabled, because I remember that I had a similar issue in the past that could only be solved with the newest driver. jockey log:

        2012-09-28 11:22:24,747 DEBUG: Selecting previously unselected package nvidia-173.
        (Reading database ... 536311 files and directories currently installed.)
        Unpacking nvidia-173 (from .../nvidia-173_173.14.35-0ubuntu0.2_i386.deb) ...
        Processing triggers for man-db ...
        Setting up nvidia-173 (173.14.35-0ubuntu0.2) ...
        Loading new nvidia-173-173.14.35 DKMS files...
        Building only for 3.2.0-32-generic-pae
        Building for architecture i686
        Building initial module for 3.2.0-32-generic-pae
        Error! Bad return status for module build on kernel: 3.2.0-32-generic-pae (i686)
        Consult /var/lib/dkms/nvidia-173/173.14.35/build/make.log for more information.
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        2012-09-28 11:22:25,143 WARNING: modinfo for module nvidia_173 failed: ERROR: modinfo: could not find module nvidia_173
        2012-09-28 11:22:25,143 ERROR: XorgDriverHandler.enable(): package or module not installed, aborting
        2012-09-28 11:22:53,613 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:22:53,613 DEBUG: KMH enabled: False
        2012-09-28 11:22:53,629 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:22:53,629 DEBUG: KMH enabled: False
        2012-09-28 11:23:01,943 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:01,943 DEBUG: KMH enabled: False
        2012-09-28 11:23:01,962 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:01,963 DEBUG: KMH enabled: False
        2012-09-28 11:23:01,998 DEBUG: NVidia(nvidia_173_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-173/ld.so.conf other target alt None other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:01,998 DEBUG: nvidia_173_updates is not the alternative in use
        2012-09-28 11:23:02,044 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:02,044 DEBUG: nvidia_current is not the alternative in use
        2012-09-28 11:23:02,066 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:02,066 DEBUG: nvidia_current is not the alternative in use
        2012-09-28 11:23:02,106 DEBUG: NVidia(nvidia_current_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-173/ld.so.conf other target alt None other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:02,106 DEBUG: nvidia_current_updates is not the alternative in use
        2012-09-28 11:23:02,157 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:02,157 DEBUG: KMH enabled: False
        2012-09-28 11:23:02,177 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:02,178 DEBUG: KMH enabled: False
        2012-09-28 11:23:02,245 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:02,245 DEBUG: KMH enabled: False
        2012-09-28 11:23:02,272 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:02,273 DEBUG: KMH enabled: False
        2012-09-28 11:23:02,303 DEBUG: NVidia(nvidia_173_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-173/ld.so.conf other target alt None other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:02,303 DEBUG: nvidia_173_updates is not the alternative in use
        2012-09-28 11:23:02,330 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:02,410 DEBUG: nvidia_current is not the alternative in use
        2012-09-28 11:23:02,427 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:02,428 DEBUG: nvidia_current is not the alternative in use
        2012-09-28 11:23:02,457 DEBUG: NVidia(nvidia_current_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-173/ld.so.conf other target alt None other current alt /usr/lib/nvidia-173/alt_ld.so.conf
        2012-09-28 11:23:02,458 DEBUG: nvidia_current_updates is not the alternative in use
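    The DKMS failure near the top of the log is the root cause: the nvidia-173 kernel module never built against kernel 3.2.0-32, so jockey cannot activate any of the listed drivers. A hedged first step (the log path comes from the output above; the package swap is one commonly tried recovery, not a guaranteed fix):

        # Inspect why the DKMS build failed
        sudo tail -n 50 /var/lib/dkms/nvidia-173/173.14.35/build/make.log

        # If the legacy 173 driver simply does not build on this kernel,
        # try dropping it in favour of the current driver
        sudo apt-get purge nvidia-173
        sudo apt-get install nvidia-current
        sudo reboot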


  • Unable to fix broken packages with sudo apt-get install -f

    - by Bob
    Here's the result of sudo apt-get install -f. I have run it twice and got a negative result both times. I believe the key problem is the error in the version string '0:3.6.1-dates for language English Translation data updates for all supported packages for: English'. This same "error in Version string" cost me three days of attempting to download version 12.04. There is a bug report concerning the quoted text as well. Is there any way to download the version without the language packs, and why would it corrupt version 11.10? Also, when attempting to install Synaptic using sudo apt-get install synaptic, I get the same error message. Again I point out the initial download problems and the same error message. Thanks

        b0b@b0b-IC780M-A:~$ sudo apt-get install -f
        [sudo] password for b0b:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 298 not upgraded.
        b0b@b0b-IC780M-A:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 298 not upgraded.
        b0b@b0b-IC780M-A:~$ sudo apt-get upgrade install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          linux-headers-generic software-center
        The following packages will be upgraded:
          accountsservice acpi-support acpid aisleriot alsa-utils app-install-data-partner appmenu-qt apport
          apport-gtk apt-transport-https apt-utils aptdaemon aptdaemon-data apturl apturl-common banshee
          banshee-extension-soundmenu banshee-extension-ubuntuonemusicstore baobab bind9-host binutils bluez-alsa
          bluez-cups bluez-gstreamer brasero brasero-cdrkit brasero-common checkbox checkbox-gtk command-not-found
          command-not-found-data compiz compiz-core compiz-gnome compiz-plugins-default compiz-plugins-main-default
          cups cups-bsd cups-client cups-common cups-ppdc deja-dup desktop-file-utils dnsutils empathy empathy-common
          eog evince evince-common evolution-data-server evolution-data-server-common file-roller firefox
          firefox-globalmenu firefox-gnome-support gbrainy gcalctool gconf2 gconf2-common gedit gedit-common
          ghostscript ghostscript-cups ghostscript-x gir1.2-atspi-2.0 gir1.2-gconf-2.0 gir1.2-gnomebluetooth-1.0
          gir1.2-gtk-3.0 gir1.2-gtksource-3.0 gir1.2-totem-1.0 gir1.2-unity-4.0 gir1.2-webkit-3.0
          gnome-accessibility-themes gnome-bluetooth gnome-control-center gnome-control-center-data
          gnome-desktop3-data gnome-font-viewer gnome-games-common gnome-icon-theme gnome-mahjongg
          gnome-online-accounts gnome-orca gnome-power-manager gnome-screenshot gnome-search-tool gnome-session
          gnome-session-bin gnome-session-canberra gnome-session-common gnome-settings-daemon gnome-sudoku
          gnome-system-log gnome-system-monitor gnome-utils-common gnomine gstreamer0.10-gconf
          gstreamer0.10-plugins-good gstreamer0.10-pulseaudio gvfs gvfs-backends gvfs-bin gvfs-fuse gwibber
          gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter hpijs hplip
          hplip-cups hplip-data indicator-datetime indicator-session indicator-sound isc-dhcp-client isc-dhcp-common
          jockey-common jockey-gtk language-selector-common language-selector-gnome libaccountsservice0
          libapt-inst1.3 libarchive1 libasound2-plugins libatk-adaptor libbind9-60 libbrasero-media3-1
          libcamel-1.2-29 libcanberra-gtk-module libcanberra-gtk0 libcanberra-gtk3-0 libcanberra-gtk3-module
          libcanberra-pulse libcanberra0 libdecoration0 libdns69 libebackend-1.2-1 libebook1.2-12 libecal1.2-10
          libedata-book-1.2-11 libedata-cal-1.2-13 libedataserver1.2-15 libedataserverui-3.0-1 libevince3-3
          libgconf2-4 libgnome-bluetooth8 libgnome-control-center1 libgnome-desktop-3-2 libgoa-1.0-0 libgrip0 libgs9
          libgs9-common libgtk-3-bin libgtksourceview-3.0-0 libgtksourceview-3.0-common libgweather-3-0
          libgweather-common libgwibber-gtk2 libgwibber2 libhpmud0 libimobiledevice2 libisc62 libisccc60 libisccfg62
          libjasper1 liblightdm-gobject-1-0 liblwres60 libmetacity-private0 libmission-control-plugins0
          libmono-zeroconf1.0-cil libnautilus-extension1 libnm-glib-vpn1 libnm-glib4 libnm-util2 libnotify0.4-cil
          libnux-1.0-0 libnux-1.0-common libpam-gnome-keyring libreoffice-emailmerge libreoffice-style-human
          libsane-hpaio libsmbclient libsnmp-base libsnmp15 libsyncdaemon-1.0-1 libt1-5 libtotem0 libubuntuone-1.0-1
          libubuntuone1.0-cil libunity-2d-private0 libunity-core-4.0-4 libunity6 libusbmuxd1 libwbclient0
          libwebkitgtk-1.0-0 libwebkitgtk-1.0-common libwebkitgtk-3.0-0 libwebkitgtk-3.0-common libxml2 linux-generic
          linux-image-generic metacity metacity-common mobile-broadband-provider-info modemmanager mousetweaks
          multiarch-support nautilus nautilus-data nautilus-sendto-empathy network-manager nux-tools onboard openssl
          pulseaudio pulseaudio-esound-compat pulseaudio-module-bluetooth pulseaudio-module-gconf
          pulseaudio-module-x11 pulseaudio-utils python-apport python-aptdaemon python-aptdaemon-gtk
          python-aptdaemon.gtk3widgets python-aptdaemon.gtkwidgets python-brlapi python-cups python-cupshelpers
          python-gobject-cairo python-httplib2 python-launchpadlib python-libxml2 python-pam python-papyon
          python-pkg-resources python-problem-report python-pyatspi2 python-software-properties
          python-ubuntuone-client python-ubuntuone-storageprotocol samba-common samba-common-bin seahorse shotwell
          simple-scan smbclient sni-qt software-properties-common software-properties-gtk sudo
          system-config-printer-common system-config-printer-gnome system-config-printer-udev telepathy-indicator
          telepathy-mission-control-5 thunderbird thunderbird-globalmenu thunderbird-gnome-support tomboy totem
          totem-common totem-mozilla totem-plugins ttf-opensymbol ubuntu-desktop ubuntu-minimal ubuntu-standard
          ubuntuone-client ubuntuone-client-gnome ubuntuone-couch unity unity-2d unity-2d-launcher unity-2d-panel
          unity-2d-places unity-2d-spread unity-common unity-lens-applications unity-services update-manager
          update-manager-core update-notifier update-notifier-common usbmuxd vim-common vim-tiny vinagre vino xorg
          xserver-xorg xserver-xorg-input-all xserver-xorg-video-all xserver-xorg-video-intel
          xserver-xorg-video-openchrome xul-ext-ubufox
        296 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
        Need to get 0 B/159 MB of archives.
        After this operation, 10.1 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        dpkg: error: parsing file '/var/lib/dpkg/available' near line 4131 package 'python-zope.interface':
         error in Version string '0:3.6.1-dates for language English Translation data updates for all supported packages for: English . language-pack-en-base provides the bulk of translation data and is updated only seldom. This package provides frequent translation updates.': version string has embedded spaces
        E: Sub-process /usr/bin/dpkg returned an error code (2)
        b0b@b0b-IC780M-A:~$
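    The dpkg error points at a corrupted record in /var/lib/dpkg/available: a package description was spliced into a version field near line 4131. A hedged sketch of how to confirm and rebuild that file - dpkg can regenerate it, so clearing it is normally safe:

        # Look at the mangled record dpkg is complaining about
        sed -n '4125,4140p' /var/lib/dpkg/available

        # Clear the available database and let apt repopulate it
        sudo dpkg --clear-avail
        sudo apt-get update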


  • Silverlight Reporting Application Part 3.5 - Prism Background and WCF RIA [Series Intermission]

    Taking a step back before I dive into the details and full-on coding fun, I wanted to once again respond to a comment on my last post to clear up some things in regards to how I'm setting up my project and some of the choices I've made. Aka, thanks Ben. :)

    Prism Project Setup

    For starters, I'm not the ideal use case for a Prism application. In most cases where you've got a one-man team, Prism can be overkill, as it is more intended for large teams who are geographically dispersed or for applications of a larger scale than my Recruiting application, where you'd greatly benefit from modularity, delayed loading of xaps, etc. What Prism offers, though, is a manner for handling UI, commands, and events with the idea that, through a modular approach in which no parts really need to know about one another, I can update this application bit by bit as hiring needs change or requirements differ between offices, without having to worry that changing something in the Jobs module will break something in, say, the Scheduling module. All that being said, here's how the project breakdown for Recruit (MVVM/Prism implementation) looks. This could be a little misleading, though, as each of those modules is actually another project in the overall Recruit solution. As far as what the projects actually are, that looks a bit like this:

        Recruiting Solution
        - Recruit (the Shell) - Main Silverlight Application
        - .Web - Default .Web application to host the Silverlight app
        - Infrastructure - Silverlight Class Library Project
        - Modules - Silverlight Class Library Projects

    Infrastructure & Modules

    The Infrastructure project is probably something you'll see to some degree in any composite application. In this application, it is going to contain custom commands (you'll see the joy of these in a post or two down the road), events, helper classes, and any custom classes I need to share between different modules. Think of this as a handy little crossroad between any parts of your application. Modules, on the other hand, are the bread and butter of this application. Besides the shell, which holds the UI skeleton, and the infrastructure, which holds all those shared goodies, the modules are self-contained bundles of functionality to handle different concerns. In my scenario, I need a way to look up and edit Jobs and Applicants and to Schedule interviews, a Notification module to handle telling the user when different things are happening (i.e., loading from the database), and a Menu to control interaction and moving between different views. All modules are going to follow the same pattern: the module class will inherit from IModule and handle initialization and loading the correct view into the correct region, whereas the Views and ViewModels folders will contain paired Silverlight user controls and ViewModel class backings (a sketch of such a module class follows below).

    WCF RIA Services

    Since we've got all the projects in a single solution, we did not have to go the route of creating a WCF RIA Services Class Library. Every module has its WCF RIA link back to the main .Web project, so the single Linq-2-SQL (yes, I said Linq-2-SQL, but I'll soon be switching to OpenAccess due to the new visual designer) context I'm using there works nicely with the scope of my project. If I were going for completely separating this project out and doing different, dynamically loaded elements, I'd probably go for the separate class library. Hope that clears that up. In the future, though, I will be using that in a project that I've got in the "when I've got enough time to work on this" pipeline, so we'll get into that eventually - and hopefully when WCF RIA is in full release!

    Why Not Use the Silverlight Navigation/Business Template?

    The short answer: I'm a creature of habit, and having used Silverlight for a few years now, I'm used to doing lots of things manually. :) Plus, starting with a blank slate of a project, I'm able to set things up exactly as I want them to be. In this case, rather than the navigation frame we would see in one of the templates, the MainRegion/ContentControl is working as our main navigation window. In many cases I will use the Silverlight navigation template to start things off; however, in this case I did not need those features, so I opted out of using it. Next time, when I actually hit post #4, we're going to get into the modules and start getting functionality into this application. Next week is also release week for the Q1 2010 release, so be sure to check out our annual Webinar Week (I might be biased, but Wednesday is my favorite out of the group).
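    To make the module pattern above concrete, here is a minimal sketch of what one of those module classes could look like. It is illustrative only - the class, view, and region names are placeholders, and it assumes Prism's IModule/IRegionManager as used in this series:

        // Hypothetical Jobs module: registers its default view into the shell's main region.
        public class JobsModule : IModule
        {
            private readonly IRegionManager regionManager;

            public JobsModule(IRegionManager regionManager)
            {
                this.regionManager = regionManager;
            }

            public void Initialize()
            {
                // The shell defines "MainRegion" (the ContentControl mentioned above);
                // the module only knows the region name, never the shell itself.
                regionManager.RegisterViewWithRegion("MainRegion", typeof(JobsView));
            }
        }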


  • Cannot determine ethernet address for proxy ARP on PPTP

    - by Linux Intel
    I installed a pptp server on a CentOS 6 64-bit server.

        PPTP Server IP : 55.66.77.10
        PPTP Local IP  : 10.0.0.1
        Client1 IP     : 10.0.0.60 (CentOS 5 64-bit)
        Client2 IP     : 10.0.0.61 (CentOS 5 64-bit)

    The PPTP server can ping Client1, and Client1 can ping the PPTP server. The PPTP server can ping Client2, and Client2 can ping the PPTP server. The problem is that Client1 can not ping Client2, and I also get this error in the PPTP server's error log:

        Cannot determine ethernet address for proxy ARP

    Ping from Client2 to Client1:

        PING 10.0.0.60 (10.0.0.60) 56(84) bytes of data.
        --- 10.0.0.60 ping statistics ---
        6 packets transmitted, 0 received, 100% packet loss, time 5000ms

    route -n on the PPTP server:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.0.0.60       0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
        10.0.0.61       0.0.0.0         255.255.255.255 UH    0      0        0 ppp1
        55.66.77.10     0.0.0.0         255.255.255.248 U     0      0        0 eth0
        10.0.0.0        0.0.0.0         255.0.0.0       U     0      0        0 eth0
        0.0.0.0         55.66.77.19     0.0.0.0         UG    0      0        0 eth0

    route -n on Client 1:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.0.0.1        0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
        55.66.77.10     70.14.13.19     255.255.255.255 UGH   0      0        0 eth0
        10.0.0.0        0.0.0.0         255.0.0.0       U     0      0        0 eth1
        0.0.0.0         70.14.13.19     0.0.0.0         UG    0      0        0 eth0

    route -n on Client 2:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.0.0.1        0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
        55.66.77.10     84.56.120.60    255.255.255.255 UGH   0      0        0 eth1
        10.0.0.0        0.0.0.0         255.0.0.0       U     0      0        0 eth0
        0.0.0.0         84.56.120.60    0.0.0.0         UG    0      0        0 eth1

    cat /etc/ppp/options.pptpd on the PPTP server:

        ###############################################################################
        # $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $
        #
        # Sample Poptop PPP options file /etc/ppp/options.pptpd
        # Options used by PPP when a connection arrives from a client.
        # This file is pointed to by /etc/pptpd.conf option keyword.
        # Changes are effective on the next connection. See "man pppd".
        #
        # You are expected to change this file to suit your system. As
        # packaged, it requires PPP 2.4.2 and the kernel MPPE module.
        ###############################################################################

        # Authentication

        # Name of the local system for authentication purposes
        # (must match the second field in /etc/ppp/chap-secrets entries)
        name pptpd

        # Strip the domain prefix from the username before authentication.
        # (applies if you use pppd with chapms-strip-domain patch)
        #chapms-strip-domain

        # Encryption
        # (There have been multiple versions of PPP with encryption support,
        # choose with of the following sections you will use.)

        # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
        # {{{
        refuse-pap
        refuse-chap
        refuse-mschap
        # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
        # Challenge Handshake Authentication Protocol, Version 2] authentication.
        require-mschap-v2
        # Require MPPE 128-bit encryption
        # (note that MPPE requires the use of MSCHAP-V2 during authentication)
        require-mppe-128
        # }}}

        # OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o
        # {{{
        #-chap
        #-chapms
        # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
        # Challenge Handshake Authentication Protocol, Version 2] authentication.
        #+chapms-v2
        # Require MPPE encryption
        # (note that MPPE requires the use of MSCHAP-V2 during authentication)
        #mppe-40     # enable either 40-bit or 128-bit, not both
        #mppe-128
        #mppe-stateless
        # }}}

        # Network and Routing

        # If pppd is acting as a server for Microsoft Windows clients, this
        # option allows pppd to supply one or two DNS (Domain Name Server)
        # addresses to the clients. The first instance of this option
        # specifies the primary DNS address; the second instance (if given)
        # specifies the secondary DNS address.
        #ms-dns 10.0.0.1
        #ms-dns 10.0.0.2

        # If pppd is acting as a server for Microsoft Windows or "Samba"
        # clients, this option allows pppd to supply one or two WINS (Windows
        # Internet Name Services) server addresses to the clients. The first
        # instance of this option specifies the primary WINS address; the
        # second instance (if given) specifies the secondary WINS address.
        #ms-wins 10.0.0.3
        #ms-wins 10.0.0.4

        # Add an entry to this system's ARP [Address Resolution Protocol]
        # table with the IP address of the peer and the Ethernet address of this
        # system. This will have the effect of making the peer appear to other
        # systems to be on the local ethernet.
        # (you do not need this if your PPTP server is responsible for routing
        # packets to the clients -- James Cameron)
        proxyarp

        # Normally pptpd passes the IP address to pppd, but if pptpd has been
        # given the delegate option in pptpd.conf or the --delegate command line
        # option, then pppd will use chap-secrets or radius to allocate the
        # client IP address. The default local IP address used at the server
        # end is often the same as the address of the server. To override this,
        # specify the local IP address here.
        # (you must not use this unless you have used the delegate option)
        #10.8.0.100

        # Logging

        # Enable connection debugging facilities.
        # (see your syslog configuration for where pppd sends to)
        debug

        # Print out all the option values which have been set.
        # (often requested by mailing list to verify options)
        #dump

        # Miscellaneous

        # Create a UUCP-style lock file for the pseudo-tty to ensure exclusive
        # access.
        lock

        # Disable BSD-Compress compression
        nobsdcomp

        # Disable Van Jacobson compression
        # (needed on some networks with Windows 9x/ME/XP clients, see posting to
        # poptop-server on 14th April 2005 by Pawel Pokrywka and followups,
        # http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 )
        novj
        novjccomp

        # turn off logging to stderr, since this may be redirected to pptpd,
        # which may trigger a loopback
        nologfd

        # put plugins here
        # (putting them higher up may cause them to sent messages to the pty)

    cat /etc/ppp/options.pptp on Client1 and Client2:

        ###############################################################################
        # $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $
        #
        # Sample PPTP PPP options file /etc/ppp/options.pptp
        # Options used by PPP when a connection is made by a PPTP client.
        # This file can be referred to by an /etc/ppp/peers file for the tunnel.
        # Changes are effective on the next connection. See "man pppd".
        #
        # You are expected to change this file to suit your system. As
        # packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/
        # and the kernel MPPE module available from the CVS repository also on
        # http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.
        ###############################################################################

        # Lock the port
        lock

        # Authentication
        # We don't need the tunnel server to authenticate itself
        noauth

        # We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2
        # (you may need to remove these refusals if the server is not using MPPE)
        refuse-pap
        refuse-eap
        refuse-chap
        refuse-mschap

        # Compression
        # Turn off compression protocols we know won't be used
        nobsdcomp
        nodeflate

        # Encryption
        # (There have been multiple versions of PPP with encryption support,
        # choose which of the following sections you will use. Note that MPPE
        # requires the use of MSCHAP-V2 during authentication)
        #
        # Note that using PPTP with MPPE and MSCHAP-V2 should be considered
        # insecure:
        # http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2
        # https://github.com/moxie0/chapcrack/blob/master/README.md
        # http://technet.microsoft.com/en-us/security/advisory/2743314

        # http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras
        # ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o
        # If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module
        # is not allowed and PPTP-MPPE is not available.
        # {{{
        # Require MPPE 128-bit encryption
        #require-mppe-128
        # }}}

        # http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec
        # ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o
        # {{{
        # Require MPPE 128-bit encryption
        #mppe required,stateless
        # }}}

    iptables is stopped on the clients and the server, and net.ipv4.ip_forward = 1 is enabled on the PPTP server. How can I solve this problem?
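    A few hedged diagnostics that follow from the setup above (the commands are standard; which one exposes the fault is an assumption). The pppd message "Cannot determine ethernet address for proxy ARP" generally means pppd could not find a LAN interface whose subnet covers the client's tunnel address, and client-to-client traffic is plain forwarding between ppp0 and ppp1, which proxy ARP does not handle at all:

        # Is proxy ARP actually enabled on the LAN-facing interface?
        cat /proc/sys/net/ipv4/conf/eth0/proxy_arp

        # Which neighbour entries did the server publish?
        ip neigh show

        # Confirm forwarding is really on, then watch whether Client2's echo
        # requests arrive on ppp1 and leave again on ppp0
        sysctl net.ipv4.ip_forward
        tcpdump -ni ppp0 icmp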


  • Alpha plugin in GStreamer not working

    - by Miguel Escriva
    Hi! I'm trying to compose two videos, and I'm using the alpha plug-in to make the white color transparent. To test the alpha plug-in, I'm creating the pipeline with gst-launch. The first test I did was:

        gst-launch videotestsrc pattern=smpte75 \
            ! alpha method=custom target-r=255 target-g=255 target-b=255 angle=10 \
            ! videomixer name=mixer ! ffmpegcolorspace ! autovideosink \
            videotestsrc pattern=snow ! mixer.

    and it works great! Then I created two videos with these lines:

        gst-launch videotestsrc pattern=snow ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=snow.ogv
        gst-launch videotestsrc pattern=smpte75 ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=bars75.ogv

    I changed the videotestsrc to a filesrc, and it keeps working:

        gst-launch filesrc location=bars75.ogv ! decodebin2 \
            ! alpha method=custom target-r=255 target-g=255 target-b=255 angle=10 \
            ! videomixer name=mixer ! ffmpegcolorspace ! autovideosink \
            filesrc location=snow.ogv ! decodebin2 ! alpha ! mixer.

    But when I use the video I want to compose, I'm not able to make the white color transparent:

        gst-launch filesrc location=video.ogv ! decodebin2 \
            ! alpha method=custom target-r=255 target-g=255 target-b=255 angle=10 \
            ! videomixer name=mixer ! ffmpegcolorspace ! autovideosink \
            filesrc location=snow.ogv ! decodebin2 ! alpha ! mixer.

    Can you help me? Any idea what is happening? I'm using GStreamer 0.10.28. You can download the test videos from here: http://polimedia.upv.es/pub/gst/gst.zip

    Thanks in advance, Miguel Escriva
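    One plausible explanation (an assumption, not verified against the sample files): after Theora encoding and chroma subsampling, the "white" in a real-world clip is no longer exactly (255,255,255), so a tight key with angle=10 misses it, while the synthetic test patterns keep their exact colors. A sketch worth trying is to convert the colorspace before keying and widen the tolerance angle:

        gst-launch filesrc location=video.ogv ! decodebin2 ! ffmpegcolorspace \
            ! alpha method=custom target-r=255 target-g=255 target-b=255 angle=40 \
            ! videomixer name=mixer ! ffmpegcolorspace ! autovideosink \
            filesrc location=snow.ogv ! decodebin2 ! alpha ! mixer.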


  • wget: Turn Off Forced .html Retrieval

    - by Mike B
    When performing a recursive download, I specify a pattern via the -R parameter for wget to reject, but if the file is an HTML file, wget downloads it regardless of whether or not it matches the pattern. E.g.:

        wget -r -R "*dynamicfile*" example.com

    still retrieves files such as example.com/dynamicfile1.html. Is there a way to prevent this?
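    Context for why this happens: in recursive mode, wget has to fetch HTML pages to discover the links inside them, so -R is only applied after the download for HTML files. Two hedged workarounds (the second assumes wget 1.14 or newer, which added --reject-regex and filters the URL before download):

        # Sweep the rejected HTML out afterwards
        wget -r -R "*dynamicfile*" example.com
        find example.com -name "*dynamicfile*" -delete

        # Or, with wget >= 1.14, reject by URL so the files are never fetched
        wget -r --reject-regex 'dynamicfile' example.com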


  • IIS URL Rewrite - Redirect any HTTPS traffic to sub-domain

    - by uniquelau
    We have an interesting hosting environment that dictates that all secure traffic must travel over a specific sub-domain, e.g. http://secure.domain.com/my-page. I'd like to handle this switch using URL Rewrite, i.e. at server level, rather than application level. My cases are:

        https://secure.domain.com/page = NO CHANGE, remains the same
        https://domain.com/page        = sub-domain inserted: https://secure.domain.com/page
        https://www.domain.com/page    = remove 'www', insert sub-domain

    In my mind the logic is: INPUT = Full URL, e.g. http://www.domain.com/page. If INPUT contains HTTPS, then check the full URL: does it contain 'secure'? If yes, do nothing; if no, add 'secure'. If INPUT contains 'www', remove 'www'. The certificate is not a wildcard (e.g. top-level domain) and is issued to https://secure.domain.com/. The website could also be hosted in a staging environment, e.g. https://secure.environment.domain.com/. I do not have control over 'environment', 'domain' or the TLD. Laurence

    Update 1, 19th August: So as mentioned below, the trick here is to avoid a redirect loop that could drive anyone well loopy. This is what I propose. One rule to force certain traffic to the secure domain:

        <rule name="Force 'Umbraco' to secure" stopProcessing="true">
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_URI}" pattern="^/umbraco/(.+)$" ignoreCase="true" />
            <add input="{HTTP_HOST}" negate="true" pattern="^secure\.(.+)$" />
          </conditions>
          <action type="Redirect" url="https://secure.{HTTP_HOST}/{R:0}" redirectType="Permanent" />
        </rule>

    Another rule that then removes the secure domain, except for traffic on the secure domain:

        <rule name="Remove secure, except for Umbraco" stopProcessing="true">
          <match url="(.*)" ignoreCase="true" />
          <conditions logicalGrouping="MatchAll">
            <add input="{HTTP_HOST}" pattern="^secure\.(.+)$" />
            <add input="{REQUEST_URI}" negate="true" pattern="^/umbraco/(.+)$" ignoreCase="true" />
          </conditions>
          <!-- Set Domain to match environment -->
          <action type="Redirect" url="http://staging.domain.com/{R:0}" appendQueryString="true" redirectType="Permanent" />
        </rule>

    This works for a single directory or group of files; however, I've been unable to add additional logic to those two rules. For example, you might have 3 folders that need to be secure. I tried adding these as negated records, but then no redirection happens at all. Hmmm! L
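    A hedged suggestion on the multi-folder problem (the extra folder names below are placeholders): with logicalGrouping="MatchAll", several positive {REQUEST_URI} conditions can never all match one URL at the same time, so rule 1 stops firing the moment a second folder pattern is added. Collapsing the folders into one alternation keeps rule 1 on MatchAll, while rule 2 can keep one negated condition per folder, since "NOT a AND NOT b" is exactly what MatchAll evaluates:

        <!-- Rule 1: one positive condition covering every secure folder -->
        <add input="{REQUEST_URI}" pattern="^/(umbraco|folder2|folder3)/(.+)$" ignoreCase="true" />

        <!-- Rule 2: one negated condition per folder works under MatchAll -->
        <add input="{REQUEST_URI}" negate="true" pattern="^/umbraco/(.+)$" ignoreCase="true" />
        <add input="{REQUEST_URI}" negate="true" pattern="^/folder2/(.+)$" ignoreCase="true" />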


  • Apache RedirectMatch exception regex

    - by Arash Mousavi
    I want to redirect any URL that is HTTPS and doesn't start with "system_" to the same URL over HTTP. For example, this URL:

        https://exsite.tld/some/thing/that/not/start/with/pattern

    should redirect to:

        http://exsite.tld/some/thing/that/not/start/with/pattern

    but this URL shouldn't redirect:

        https://exsite.tld/system_aas3f4

    I tried:

        RedirectMatch ^/?((?!(system_)).*) http://exsite.tld/$1

    but it doesn't work. I don't know what the problem is.
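    Two hedged observations about that line. First, the optional /? lets the regex engine backtrack: for /system_aas3f4 it first tries the lookahead after the slash (which correctly fails), then retries with /? matching nothing, where the lookahead sees the leading "/" instead of "system_" and succeeds, so the exception never takes effect. Making the slash mandatory closes that hole. Second, RedirectMatch cannot test the scheme, so the directive must live only in the HTTPS (port 443) virtual host; otherwise plain-HTTP requests redirect to themselves in a loop. A sketch, assuming it is placed in the SSL vhost:

        RedirectMatch ^/(?!system_)(.*) http://exsite.tld/$1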



  • Problem deploying GWT application on apache and tomcat using mod_jk

    - by Colin
    I'm trying to deploy a GWT app on Apache using the mod_jk connector. I have compiled the application and tested it on Tomcat at localhost:8080/loginapp, and it works OK. However, when I deploy it to Apache using mod_jk, I get the starter page, which gives me a login form, but trying to log in I get this error:

        404 Not Found
        The requested URL /loginapp/loginapp/login was not found on this server

    Looking at the Apache log files I see this:

        [Thu Jan 13 13:43:17 2011] [error] [client 127.0.0.1] client denied by server configuration: /usr/local/tomcat/webapps/loginapp/WEB-INF/
        [Thu Jan 13 13:43:26 2011] [error] [client 127.0.0.1] File does not exist: /usr/local/tomcat/webapps/loginapp/loginapp/login, referer: http://localhost/loginapp/LoginApp.html

    The mod_jk configuration in my apache2.conf file is as follows:

        LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile /var/log/apache2/mod_jk.log
        JkLogLevel info
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
        JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
        JkRequestLogFormat "%w %V %T"

        <IfModule mod_jk.c>
          Alias /loginapp "/usr/local/tomcat/webapps/loginapp/"
          <Directory "/usr/local/tomcat/webapps/loginapp/">
            Options Indexes +FollowSymLinks
            AllowOverride None
            Allow from all
          </Directory>
          <Location /*/WEB-INF/*>
            AllowOverride None
            deny from all
          </Location>
          JkMount /loginapp/*.html loginapp
        </IfModule>

    My workers.properties file is as follows:

        workers.tomcat_home=/usr/local/tomcat
        workers.java_home=/usr/lib/jvm/java-6-sun
        ps=/
        worker.list=loginapp
        worker.loginapp.type=ajp13
        worker.loginapp.host=localhost
        worker.loginapp.port=8009
        worker.loginapp.cachesize=10
        worker.loginapp.cache_timeout=600
        worker.loginapp.socket_keepalive=1
        worker.loginapp.recycle_timeout=300
        worker.loginapp.lbfactor=1

    And these are the servlet mappings for my app in the application's web.xml:

        <servlet>
          <servlet-name>loginServlet</servlet-name>
          <servlet-class>com.example.loginapp.server.LoginServiceImpl</servlet-class>
        </servlet>
        <servlet-mapping>
          <servlet-name>loginServlet</servlet-name>
          <url-pattern>/loginapp/login</url-pattern>
        </servlet-mapping>
        <servlet>
          <servlet-name>myAppServlet</servlet-name>
          <servlet-class>com.example.loginapp.server.MyAppServiceImpl</servlet-class>
        </servlet>
        <servlet-mapping>
          <servlet-name>myAppServlet</servlet-name>
          <url-pattern>/loginapp/mapdata</url-pattern>
        </servlet-mapping>

    I've tried everything and it still eludes me. I even tried changing the "deny from all" directive on the WEB-INF folder to "allow from all", and still it doesn't work. Maybe I'm missing something. Any help will be highly appreciated.
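    A hedged diagnosis from the two configs above: JkMount only forwards URLs matching /loginapp/*.html to Tomcat, so the GWT RPC endpoint /loginapp/loginapp/login never reaches the worker - Apache tries to serve it from the aliased directory instead, which explains both log lines. A sketch of the likely fix is to forward the servlet paths as well:

        JkMount /loginapp/*.html loginapp
        JkMount /loginapp/loginapp/* loginapp

    If Apache does not need to serve the static files directly, the single mapping "JkMount /loginapp/* loginapp" is the simpler variant.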


  • How to copy with cp to include hidden files and hidden directories and their contents?

    - by eleven81
    How can I make cp -r copy absolutely all of the files and directories in a directory? Requirements:

        - Include hidden files and hidden directories.
        - Be one single command with a flag to include the above.
        - Not need to rely on pattern matching at all.

    My ugly, but working, hack is:

        cp -r /etc/skel/* /home/user
        cp -r /etc/skel/.[^.]* /home/user

    How can I do this all in one command without the pattern matching? What flag do I need to use?
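    A sketch of the usual answer (standard cp semantics, no assumptions beyond the paths already used above): copying the source directory's "." entry takes everything, dotfiles included, because no shell globbing is involved:

        cp -r /etc/skel/. /home/user

        # or, also preserving modes, ownership and timestamps:
        cp -a /etc/skel/. /home/user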


  • taskkill - end tasks with window titles ending with a specific string

    - by DBZ_A
    I need to write a batch program to end all MS Office Communicator tasks with window titles usually ending with the pattern "- Conversation". I tried:

        taskkill /FI "WINDOWTITLE eq *Conversation" /IM communicator.exe

    but a wildcard pattern starting with '*' does not seem to work. It gives the following error:

        ERROR: The search filter cannot be recognized.

    Any suggestions for a workaround would be greatly appreciated!
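    For context: taskkill's WINDOWTITLE filter only accepts a trailing wildcard (a prefix match), so a leading '*' is rejected. A hedged batch sketch that works around this by matching the title in tasklist's verbose CSV output instead (it assumes the titles really contain the literal "- Conversation"):

        @echo off
        rem Kill every communicator.exe whose window title mentions "- Conversation"
        for /f "tokens=2 delims=," %%P in (
            'tasklist /v /fi "imagename eq communicator.exe" /fo csv /nh ^| findstr /i /c:"- Conversation"'
        ) do taskkill /pid %%~P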


  • Renaming dates under windows cmd

    - by ldigas
    I have a bunch of directories (folders, if you like) that follow this pattern:

        \20121022 Description of the directory's contents goes here\

    (some don't have the description, just the date) and I need to rename them to follow this pattern:

        \2012-10-22 Description of the directory's contents ...\

    Is there a way to do it using Windows cmd and the tools that come with it (namely, ren)? If not, what would be the most portable way to do it? I have a very restricted set of privileges on the machine I'm doing this on.
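    A hedged batch sketch using only cmd built-ins (for /d, substring expansion, and ren). It assumes every matched folder really starts with an 8-digit date; run it from the parent directory and test on a copy first:

        @echo off
        setlocal enabledelayedexpansion
        rem Match folders whose names start with a year beginning in 2,
        rem e.g. "20121022 Description" or just "20121022"
        for /d %%D in (2???????*) do (
            set "name=%%D"
            ren "%%D" "!name:~0,4!-!name:~4,2!-!name:~6,2!!name:~8!"
        )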


  • Brother ink printer: vertical adjustment of alignment does not work

    - by Ralf
    I have a Brother MFC-5440CN and tried to adjust the vertical alignment, with the result that the printing quality is now very poor. The printer prints a small shadow next to each letter :-( I tried to re-adjust the alignment, but for the 600 dpi settings there is no test pattern that matches the 0 pattern (you know what I mean if you've ever adjusted a Brother printer :-). Is there a way to do a factory reset?


  • mod_rewrite all but two files causing loop

    - by mpounsett
    I'm trying to set up a web site to allow the creation of a semaphore file to close the site. The logic I want to follow is:

        1. when the semaphore file exists, and
        2. the request is not for /style.css or /favicon.ico,
        3. show the content of /closed.html.

    I have 1 and 3 working, but my exceptions for 2 result in a processing loop when style.css or favicon.ico are requested. This is my most recent attempt:

        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/style.css
        RewriteCond %{REQUEST_URI} !^/favicon.ico
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]

    This is in a VirtualHost block, not in a Directory. There is no .htaccess file in play. I have also recently tried this, based on an answer I found elsewhere, but with the same (looping) result:

        RewriteCond %{REQUEST_URI} ^/style.css [OR]
        RewriteCond %{REQUEST_URI} ^/favicon.ico
        RewriteRule ^.*$ - [L]
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]

    I expect a request for /style.css or /favicon.ico to fail to match one of the first two rewrite conditions, which should prevent the URI from being rewritten, which should stop the mod_rewrite iteration. However, mod_rewrite seems to think the URI has been rewritten in those cases, and iterates over the rules again (and again, and again). The above works properly in all cases except for style.css or favicon.ico. In those cases I exceed the loop limits. What am I missing here that would make the rewrite iteration stop when someone requests style.css or favicon.ico?

    EDIT: Here's a loglevel 9 example of what happens using the first ruleset when a request arrives for /style.css. This is just the first two iterations; it continues to loop identically until the limit is reached.

        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (2) init rewrite engine with requested uri /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (1) pass through /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (2) init rewrite engine with requested uri /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (1) pass through /style.css
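    A hedged sketch of a tighter ruleset (reasoning, not a verified fix): in VirtualHost context, [L] only ends the current pass, and mod_rewrite re-runs the whole ruleset on the rewritten URI. That means the target /closed.html itself must also be excluded, or it keeps re-matching; the unanchored patterns are worth tightening at the same time, since !^/style.css also matches /style.css.anything:

        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/(style\.css|favicon\.ico|closed\.html)$
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^ /closed.html [L]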


  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge, and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So... let's get down and dirty and start setting up the Test Rig.

    What Components of Azure will I be using for building the Test Rig in the Cloud?

    To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.

    Azure Connect

    The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure Connect, and a very useful video explaining everything you wanted to know about Windows Azure Connect.

    Azure Compute

    Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles.' Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

        Virtual Machine Size   CPU Cores   Memory     Cost Per Hour
        Extra Small            Shared      768 MB     $0.04
        Small                  1           1.75 GB    $0.12
        Medium                 2           3.50 GB    $0.24
        Large                  4           7.00 GB    $0.48
        Extra Large            8           14.00 GB   $0.96

    You might want to review the Windows Azure Pricing FAQ.

    Let's Get Started building the Test Rig...

        Configuration   Machine Role                         Comments
        VM – 1          Domain Controller for Playpit.com    On Premise
        VM – 2          TFS, Test Controller                 On Premise
        VM – 3          Test Agent                           Cloud

    In this blog post I assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

    I. Let's start building VM – 3: The Test Agent

    Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud template. Choose the Worker Role, for reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
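    For reference, the default Worker Role skeleton the template generates looks roughly like this (a sketch from the SDK template of that era; installing the actual test agent comes later in the series):

        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                // The template's default loop; the role just needs to stay alive.
                while (true)
                {
                    Thread.Sleep(10000);
                    Trace.WriteLine("Working", "Information");
                }
            }

            public override bool OnStart()
            {
                // Set the maximum number of concurrent connections.
                ServicePointManager.DefaultConnectionLimit = 12;
                return base.OnStart();
            }
        }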
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I'll be showing you how to install Connect on the on premise machine.

EnableDomainJoin – Set the value to true; of course we want to join the Windows Azure Worker Role VM to the domain.

DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I extracted it from Server Manager and 'Active Directory Users and Computers'. I also created a new domain OU, 'CloudInstances', so that all the cloud instances joined to my domain show up in one place; this is optional. You can encrypt the DomainPassword – refer to the instructions here, or hold fire: I'll be covering that when I come to certificates and encryption in the coming section.

Once you have filled all this information in, the configuration file should look something like this:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Next we will enable the Remote Desktop module in ServiceDefinition.csdef. We could make the changes manually, or let a beautiful wizard make them for us; I prefer the second option, so right click the Windows Azure project and choose Publish.

On the publish wizard, if you haven't already, you will be asked to import your Windows Azure subscription – this is simply the MSDN subscription activation key XML. Once you have done that, click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'.

As soon as you do that you get another pop up asking for the details of the user you will be logging in with (make sure you enter a reasonable expiry date – you do not want the user account to expire today). Notice the 'more information' link at the bottom; click it to get access to the certificate section. See the screen shot below.
From the drop down select the option to create a new certificate. In the pop up window enter a friendly name for your certificate – in my case I entered 'WAC – Test Rig' – and click OK. This creates a new certificate for you. Click the View button to see the certificate details. Do you see the Thumbprint? This is the value that will go into the config file (very important). Now click the Copy to File button to export the certificate; we will need to import it into the Windows Azure Management Portal later, so make sure you save it in a safe location.

Click Finish and enter the details of the user you would like to create with permissions for remote desktop access. Once you have entered the details on the 'Remote Desktop Configuration' screen, click OK. From the Publish Windows Azure wizard screen press Cancel – Cancel because we don't want to publish the role just yet – and Yes, because we want to save all the changes to the config file.

Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder modules have been imported for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="Connect" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Now go to the ServiceConfiguration.Cloud.cscfg file and you will see a whole bunch of 'Microsoft.WindowsAzure.Plugins.RemoteAccess' settings added for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4aeADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWSAKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccgvzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaMCp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>

Okay, let's look at these one at a time.

Enabled – Yes, we would like to enable Remote Access.

AccountUsername – This is the user name you entered while you were on the publish Windows Azure role screen, as detailed above.

AccountEncryptedPassword – Try and decode that! The certificate is used to encrypt the password you specified for the user account. Remember, earlier I said: either use the instructions, or wait and I'll be showing you encryption. The user account I am using for RDP has the same password as my domain account, so I can simply copy the value of AccountEncryptedPassword to DomainPassword as well.

AccountExpiration – This is the expiration you specified in the wizard earlier; make sure your account does not expire today.

RemoteForwarder – Check out the documentation; below is how I understand it. One role in an application that implements a remote desktop connection must import the RemoteForwarder module; the two modules work together to enable remote desktop connections to role instances. If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.

Certificate – Remember the certificate thumbprint from the wizard: the on premise machine and the Windows Azure role machine that need to speak to each other must have the same thumbprint. More on that when we install the Windows Azure Connect endpoints on the on premise machine.
As I said earlier, in this blog post I'll be showing you the manual process, so I won't be scripting any startup tasks to install the Test Agent or register it with the TFS Server – I'll be showing you all that cool stuff in the next blog post. It's important to understand the manual side of it first: it becomes easier to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it join the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM, we need to set up Windows Azure Connect on the TFS Server.

II. Windows Azure Connect: Setting up Connect on VM – 2, i.e. TFS & Test Controller
Glad you made it so far! We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on premise machines. Let's have a look at how to do this.

Log on to VM – 2, which runs the TFS Server and Test Controller. Log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the screen shot below; if not, you will be asked to complete the subscription first.

Click on Install Local Endpoints at the top left of the panel and you get a URL with a token appended to it – remember the token I showed you earlier? In theory, the token you get here should match the token you added to the Test Agent config file. Copy the URL to the clipboard and paste it into Internet Explorer (important: at present the installation only works from IE, and you need to have cookies enabled to complete it). As stated in the pop up, you can NOT download the software and run it later; you need to run it as is, since it contains the token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.

Right click the Azure Connect icon, choose Diagnostics and refer to this link for the diagnostic terminology.

NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray. A bit of binging with Google revealed that the icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of its dependent services was disabled – look at the service dependencies, start those, and then start Windows Azure Connect. Bottom line: you need the Windows Azure Connect service running before you can proceed. Please refer to MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)

Now go back to the Windows Azure Management Portal and, from Groups and Roles, create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint). If you now go back to the Azure Connect icon in the system tray and click 'Refresh Policy', you will notice that the icon's status changes from disconnected to ready for connection.

III. Importing the Certificate into the Windows Azure Management Portal
Before deploying, you need to import the certificate you created in Step I into the Windows Azure Management Portal. Log on to the portal and click on 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by Add Certificates, as shown in the screen shot below. Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot). You should now be able to see the imported certificate; make sure its thumbprint matches the one you inserted in the config files.

IV. Publish the Windows Azure Worker Role aka Test Agent
Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
From the View menu select the Windows Azure Activity Log window; you should now be able to see the deployment progress in real time. In the Windows Azure Management Portal you should also be able to see the progress of the creation of the new Worker Role.

Once the deployment is complete you should be able to RDP into the Test Agent Worker Role VM from the Playpit network using the domain admin account (go to the run prompt, type mstsc, and enter the machine name in the pop up). If you are unable to log in to the Test Agent using the domain admin account, it means the process of joining the Test Agent to the domain has failed! The good news is that because you imported the Connect module, you can still connect to the Test Agent machine via the Windows Azure Management Portal and troubleshoot the failure: log in with the user name and password you specified in the config file for the keys RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword (just enter the password unencrypted), then fix the problem or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

So, log in to the Test Agent Worker Role VM with the Playpit domain administrator and verify that you can log in, that the machine is joined to the domain and that the Connect service is running. If yes, give yourself a pat on the back – you are 80% mission accomplished!

Go to the Windows Azure Management Portal, click on Virtual Network, then Groups and Roles, select the 'Test Rig' group you created earlier and click Edit Group. In the 'Connect to' section, click Add to select the Worker Role you have just deployed. Also check 'Allow connections between endpoints in the group'; this enables communication between the Test Controller and the Test Agents, and between Test Agents themselves. Click Save.

Now you are ready to install the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

V. Configuring VM – 3: Installing the Test Agent and Associating it with the Controller
Log in to the Worker Role Test Agent VM that you have just deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso') and extract the ISO – in my case to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

Once the Test Agent software is installed you will need to configure it. Right click the Test Agent configuration tool and run it as a different user, i.e. an administrator. This is really just to run the configuration wizard with elevated privileges (UAC might block a few things otherwise).

In the run options you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose.
But for this blog post we are using the most powerful user, so that no policies or restrictions get in your way.

Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you are most likely to run into either a permissions issue or a firewall blocking communication.

And now the moment of truth! Go to VM – 2, open Visual Studio and from the Test menu select Manage Test Controllers. Mission accomplished – you should be able to see the Test Agent that you have just configured!

VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig
I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.

Blog posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate

Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate

Once you have created your load tests, there is one last change to make before you can run them on your Azure Test Rig: create a new test settings file, change the test execution method to 'Remote Execution' and select the Test Controller you configured the Worker Role Test Agent against – in our case VM – 2. So go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig.

Review and What's Next?
A quick recap of the benefits of running the Test Rig in the cloud, and what I will be covering in the next blog post. And I would love to hear your feedback!

Advantages
- Utilize the power of Azure compute to run a heavy virtual user load.
- Benefit from Azure's flexibility: destroy Test Agents when not in use; it takes less than 25 minutes to spin up a new one.
- Most importantly, test network latency (network latency and speed of connection are two different things – network latency is usually very hard to test). By placing Test Agents in Microsoft data centres around the globe, you can measure the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.

Next Steps
The process of spinning up the Test Agents in Windows Azure is not yet 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the Test Agent software and the registration of the Test Agent with the Test Controller. In the next blog post I will show you how to make the complete process unattended and automated.

Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

Share this post : CodeProject

    Read the article

  • Cannot import resource "app/config/security.yml" from "/app/config/config.yml"

    - by tirengarfio
    I'm getting this error: FileLoaderLoadException: Cannot import resource "app/config/security.yml" from "/app/config/config.yml". The file security.yml is at the right path. This is my app/config/security.yml file:

    jms_security_extra:
        secure_all_services: false
        expressions: true

    security:
        encoders:
            Symfony\Component\Security\Core\User\User: plaintext

        role_hierarchy:
            ROLE_ADMIN:       ROLE_USER
            ROLE_SUPER_ADMIN: [ROLE_USER, ROLE_ADMIN, ROLE_ALLOWED_TO_SWITCH]

        providers:
            in_memory:
                memory:
                    users:
                        user:  { password: userpass, roles: [ 'ROLE_USER' ] }
                        admin: { password: adminpass, roles: [ 'ROLE_ADMIN' ] }

        firewalls:
            dev:
                pattern: ^/(_(profiler|wdt)|css|images|js)/
                security: false

            login:
                pattern: ^/demo/secured/login$
                security: false

            secured_area:
                pattern: ^/demo/secured/
                form_login:
                    check_path: /demo/secured/login_check
                    login_path: /demo/secured/login
                logout:
                    path: /demo/secured/logout
                    target: /demo/
                #anonymous: ~
                #http_basic:
                #    realm: "Secured Demo Area"

        access_control:
            #- { path: ^/login, roles: IS_AUTHENTICATED_ANONYMOUSLY, requires_channel: https }
            #- { path: ^/_internal/secure, roles: IS_AUTHENTICATED_ANONYMOUSLY, ip: 127.0.0.1 }

    Read the article

  • Editing Data in Child Window with RIA Services and Silverlight 4

    - by Rick Arthur
    Is it possible to edit data in a Silverlight ChildWindow when using RIA Services and Silverlight 4? It sounds like a simple enough question, but I have not been able to get any combination of scenarios to work. Simply put, I am viewing data in a grid that was populated through a DomainDataSource. Instead of editing the data on the same screen (this is the pattern that ALL of the Microsoft samples seem to use), I want to open a child window, edit the data and return. Surely this is a common design pattern. If anyone knows of a sample out there that uses this pattern, a link would be much appreciated. Thanks, Rick Arthur

    Read the article

  • Resolving SNAPSHOT dependencies with timestamps from Ivy

    - by bradhouse
    I am attempting to resolve timestamped SNAPSHOT dependencies with Ivy. The environment is Ant + Ivy 1.2.0 + Archiva. Archiva itself is populated from Maven2 builds. Ivy is only used to resolve dependencies (from a single, non-Maven2 project). How can Ivy be configured to correctly resolve timestamped artifacts from an Archiva or m2 repository? For reference, my current ivysettings.xml looks similar to:

    <ivysettings>
        <settings defaultResolver="archiva-chain"/>
        <resolvers>
            <chain name="archiva-chain" changingPattern=".*SNAPSHOT" checkmodified="true">
                <ibiblio name="archiva-internal" m2compatible="true" usepoms="true"
                         pattern="[organization]/[module]/[revision]/[artifact]-[revision].[ext]"
                         root="http://host:port/archiva/repository/internal"/>
                <ibiblio name="archiva-deploy" m2compatible="true" usepoms="true"
                         pattern="[organization]/[module]/[revision]/[artifact]-[revision].[ext]"
                         root="http://host:port/archiva/repository/deploy"/>
                <ibiblio name="archiva-snapshots" m2compatible="true" usepoms="true"
                         pattern="[organization]/[module]/[revision]/[artifact]-[revision].[ext]"
                         root="http://host:port/archiva/repository/snapshots"/>
            </chain>
        </resolvers>
    </ivysettings>

    The ivy.xml dependencies are simple:

    <ivy-module version="2.0">
        <info organisation="com.myorg" module="myapp"/>
        <dependencies>
            <dependency org="com.myorg" name="myartifact" rev="1.8.0-SNAPSHOT" changing="true"/>
        </dependencies>
    </ivy-module>

    Ivy does not attempt to resolve a timestamped artifact. E.g.:

    [ivy:retrieve] :: problems summary ::
    [ivy:retrieve] :::: WARNINGS
    [ivy:retrieve] module not found: com.myorg#myartifact;1.8.0-SNAPSHOT
    [ivy:retrieve] ==== archiva-internal: tried
    [ivy:retrieve] -- artifact com.myorg#myartifact;1.8.0-SNAPSHOT!myartifact.jar:
    [ivy:retrieve] http://host:port/archiva/repository/internal/com.myorg/myartifact/1.8.0-SNAPSHOT/myartifact-1.8.0-SNAPSHOT.jar
    [ivy:retrieve] ==== archiva-deploy: tried
    [ivy:retrieve] -- artifact com.myorg#myartifact;1.8.0-SNAPSHOT!myartifact.jar:
    [ivy:retrieve] http://host:port/archiva/repository/deploy/com.myorg/myartifact/1.8.0-SNAPSHOT/myartifact-1.8.0-SNAPSHOT.jar
    [ivy:retrieve] ==== archiva-snapshots: tried
    [ivy:retrieve] -- artifact com.myorg#myartifact;1.8.0-SNAPSHOT!myartifact.jar:
    [ivy:retrieve] http://host:port/archiva/repository/snapshots/com.myorg/myartifact/1.8.0-SNAPSHOT/myartifact-1.8.0-SNAPSHOT.jar
    [ivy:retrieve] ::::::::::::::::::::::::::::::::::::::::::::::
    [ivy:retrieve] :: UNRESOLVED DEPENDENCIES ::
    [ivy:retrieve] ::::::::::::::::::::::::::::::::::::::::::::::
    [ivy:retrieve] :: com.myorg#myartifact;1.8.0-SNAPSHOT: not found
    [ivy:retrieve] ::::::::::::::::::::::::::::::::::::::::::::::
    [ivy:retrieve]
    [ivy:retrieve]
    [ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

    There is a maven-metadata.xml in snapshots/com/myorg/myartifact:

    <?xml version="1.0" encoding="UTF-8"?>
    <metadata>
        <groupId>com.myorg</groupId>
        <artifactId>myartifact</artifactId>
        <versioning>
            <latest>1.8.0-SNAPSHOT</latest>
            <versions>
                <version>1.3.0-SNAPSHOT</version>
                <version>1.4.2-SNAPSHOT</version>
                <version>1.6.1-SNAPSHOT</version>
                <version>1.8.0-SNAPSHOT</version>
            </versions>
            <lastUpdated>20100303003206</lastUpdated>
        </versioning>
    </metadata>

    The maven-metadata.xml in snapshots/com/myorg/myartifact/1.8.0-SNAPSHOT:

    <?xml version="1.0" encoding="UTF-8"?>
    <metadata>
        <groupId>com.myorg</groupId>
        <artifactId>myartifact</artifactId>
        <version>1.8.0-SNAPSHOT</version>
        <versioning>
            <snapshot>
                <buildNumber>7</buildNumber>
                <timestamp>20100303.003206</timestamp>
            </snapshot>
            <lastUpdated>20100303003206</lastUpdated>
        </versioning>
    </metadata>

    Not all that useful, but for completeness, the files in the directory snapshots/com/myorg/myartifact/1.8.0-SNAPSHOT for the referenced snapshot:

    -rw-r--r-- 1 archiva archiva 240670 Mar 3 10:32 myartifact-1.8.0-20100303.003206-7.jar
    -rw-r--r-- 1 archiva archiva     32 Mar 3 10:32 myartifact-1.8.0-20100303.003206-7.jar.md5
    -rw-r--r-- 1 archiva archiva     40 Mar 3 10:32 myartifact-1.8.0-20100303.003206-7.jar.sha1
    -rw-r--r-- 1 archiva archiva   4068 Mar 3 10:32 myartifact-1.8.0-20100303.003206-7.pom
    -rw-r--r-- 1 archiva archiva     32 Mar 3 10:32 myartifact-1.8.0-20100303.003206-7.pom.md5
    -rw-r--r-- 1 archiva archiva     40 Mar 3 10:32 myartifact-1.8.0-20100303.003206-7.pom.sha1
    -rw-r--r-- 1 archiva archiva 180821 Mar 3 10:32 myartifact-1.8.0-20100303.003206-7-sources.jar
    -rw-r--r-- 1 archiva archiva     32 Mar 3 10:32 myartifact-1.8.0-20100303.003206-7-sources.jar.md5
    -rw-r--r-- 1 archiva archiva     40 Mar 3 10:32 myartifact-1.8.0-20100303.003206-7-sources.jar.sha1

    Read the article

  • GWT Query fails the second time only

    - by Koran
    Hi, I have a visualization function in GWT which calls for two instances of the same panel – two queries. Suppose one URL is A and the other URL is B. I am facing an issue where, if A is called first, then both A and B work; if B is called first, then only B works and A times out. If I call A both times, only the first call works and the second times out. If I call B twice, it works both times without a hitch. Even though the error says it timed out, it is not really timing out: the Firefox status bar shows 'transferring data from A' and then gets stuck. This does not happen on the first query. The only difference between A and B is that B returns very fast, while A returns comparatively slowly. The sample code is given below:

    public Panel() {
        Runnable onLoadCallback = new Runnable() {
            public void run() {
                Query query = Query.create(dataUrl);
                query.setTimeout(60);
                query.send(new Callback() {
                    public void onResponse(QueryResponse response) {
                        if (response.isError()) {
                            Window.alert(response.getMessage());
                        }
                    }
                });
            }
        };
        VisualizationUtils.loadVisualizationApi(onLoadCallback, PieChart.PACKAGE);
    }

    What could be the reason for this? I cannot think of any reason why this should happen. Why is it happening only for A and not for B?

    EDIT: More research. The query that works every time (i.e. B) is the example URL given on the GWT visualization site (see comment [1]). So I tried to reproduce it in my App Engine app, the following way:

    s = "google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});"
    response = HttpResponse(s, content_type="text/plain; charset=utf-8")
    response['Expires'] = time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime())
    return response

    where s is the data returned when the query for B is run. I tried to add Expires etc. too, since that seemed to be the only header with a difference, but now the query fails all the time. For more info, here is the difference between my server's response and the working server's response; they seem pretty similar.

    HTTP/1.0 200 OK
    Content-Type: text/plain
    Date: Wed, 16 Jun 2010 11:07:12 GMT
    Server: Google Frontend
    Cache-Control: private, x-gzip-ok=""

    google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});
    Connection closed by foreign host.

    Mac$ telnet spreadsheets.google.com 80
    Trying 209.85.231.100...
    Connected to spreadsheets.l.google.com.
    Escape character is '^]'.
    GET http://spreadsheets.google.com/tq?key=pWiorx-0l9mwIuwX5CbEALA&range=A1:B12&gid=0&headers=-1

    HTTP/1.0 200 OK
    Content-Type: text/plain; charset=UTF-8
    Date: Wed, 16 Jun 2010 11:07:58 GMT
    Expires: Wed, 16 Jun 2010 11:07:58 GMT
    Cache-Control: private, max-age=0
    X-Content-Type-Options: nosniff
    X-XSS-Protection: 1; mode=block
    Server: GSE

    google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});
    Connection closed by foreign host.

    Also, please note that App Engine did not allow the Expires header to go through – can that be the reason? But if that were the reason, it should not fail when B is sent first and then A.

    Comment [1]: http://spreadsheets.google.com/tq?key=pWiorx-0l9mwIuwX5CbEALA&range=A1:B12&gid=0&headers=-1

    Read the article

  • Memory leak in WPF app due to DelegateCommand.

    - by Abdullah BaMusa
    I just finished a desktop app written in WPF and C# using the MVVM pattern. In this app I used a DelegateCommand implementation to wrap the ICommand properties exposed by my ViewModel. The problem is that these DelegateCommands prevent my ViewModel and View from being garbage collected after closing the view, so they stay lurking until I terminate the whole application. When I profile the application, I find that it is the DelegateCommand that is keeping the ViewModel in memory. How can I avoid this situation? Is this in the nature of the MVVM pattern, or is it about my implementation of the pattern? Thanks
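    For context, a common culprit with home-grown DelegateCommands is the CanExecuteChanged event holding strong references between command and subscribers. This is only a hedged guess at the cause, not confirmed by the question, but a typical leak-averse implementation forwards the event to CommandManager.RequerySuggested, which WPF holds through weak references. An illustrative sketch (names and approach assumed, not the poster's code):

    using System;
    using System.Windows.Input;

    public class DelegateCommand : ICommand
    {
        private readonly Action<object> _execute;
        private readonly Predicate<object> _canExecute;

        public DelegateCommand(Action<object> execute, Predicate<object> canExecute = null)
        {
            if (execute == null) throw new ArgumentNullException("execute");
            _execute = execute;
            _canExecute = canExecute;
        }

        public event EventHandler CanExecuteChanged
        {
            // CommandManager.RequerySuggested stores handlers via weak references,
            // so neither the command nor the view roots the other through this event.
            add { CommandManager.RequerySuggested += value; }
            remove { CommandManager.RequerySuggested -= value; }
        }

        public bool CanExecute(object parameter)
        {
            return _canExecute == null || _canExecute(parameter);
        }

        public void Execute(object parameter)
        {
            _execute(parameter);
        }
    }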

    Read the article

  • Problem with Regex in .NET (C#)

    - by Craig Bovis
    I'm trying to write a regex to validate a string against the following rules:
    - Must start with a-z (case insensitive)
    - Must only contain a-z, A-Z, 0-9, . and -
    I've put something together based on my limited knowledge and ran it through an online testing tool for a whole bunch of situations, and the results were as I had hoped; however, when I place the pattern into my .NET code it doesn't match correctly. The pattern I am using is:
    [a-zA-Z][a-zA-Z0-9.\-]*
    Is this the correct pattern or am I barking up the wrong tree? Some examples of what I'm expecting:
    craig.bovis - VALID
    24craig - INVALID
    craig@bovis - INVALID
    craig24 - VALID
    -craig24 - INVALID
    craig24.bovis-test - VALID
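    One common gotcha worth checking (a hedged guess, not confirmed in the thread): Regex.IsMatch succeeds if the pattern matches anywhere inside the input, so the unanchored pattern above accepts "24craig" by matching the "craig" substring. Anchoring with ^ and $ makes it validate the whole string; a small sketch:

    using System;
    using System.Text.RegularExpressions;

    class Program
    {
        static void Main()
        {
            // ^ and $ force the pattern to cover the whole input; without them,
            // "24craig" passes because the engine finds "craig" inside it.
            Regex pattern = new Regex(@"^[a-zA-Z][a-zA-Z0-9.\-]*$");

            string[] inputs = { "craig.bovis", "24craig", "craig@bovis",
                                "craig24", "-craig24", "craig24.bovis-test" };
            foreach (string input in inputs)
            {
                Console.WriteLine("{0} -> {1}", input,
                    pattern.IsMatch(input) ? "VALID" : "INVALID");
            }
        }
    }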

    Read the article

  • Regex matching very slow

    - by Ali Lown
    I am trying to parse a PDF to extract the text from it (please don't suggest any libraries to do this, as this is part of learning the format). I have already handled deflating it to put it into an alphanumeric format. I now need to extract the text from the text blocks. So my current pattern is BT.*?\((.*?)\).*?ET (with DOTMATCHALL set) to match something like:
    BT
    /F13 12 Tf
    288 720 Td
    (ABC) Tj
    ET
    The only bit I want is the text ABC in the brackets. The above pattern works but is really slow; I assume it is because the regex library fails many times while matching the part of the pattern between BT and the (ABC). The regex is pre-compiled in an attempt to speed it up, but that seems to make a negligible difference. How may I speed this up?
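    As a hedged illustration (in C#, since the poster's language isn't stated): the two lazy dot-alls force the engine to retry the parenthesised match at nearly every position between BT and ET, which is where the time goes. A bounded character class cannot backtrack past the closing bracket, so matching stays roughly linear. Note this simple sketch ignores escaped parentheses inside PDF strings:

    using System;
    using System.Text.RegularExpressions;

    class Program
    {
        static void Main()
        {
            string stream = "BT\n/F13 12 Tf\n288 720 Td\n(ABC) Tj\nET";

            // Instead of BT.*?\((.*?)\).*?ET, match the string operand directly:
            // [^)]* never crosses a closing bracket, avoiding the backtracking.
            Regex tj = new Regex(@"\(([^)]*)\)\s*Tj", RegexOptions.Compiled);

            foreach (Match m in tj.Matches(stream))
            {
                Console.WriteLine(m.Groups[1].Value); // prints: ABC
            }
        }
    }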

    Read the article

  • double checked locking - objective c

    - by bandejapaisa
    I realise double-checked locking is flawed in Java due to the memory model, but that is usually associated with the singleton pattern and optimising the creation of the singleton. What about this case in Objective-C? I have a boolean flag to determine whether my application is streaming data or not. I have three methods – startStreaming, stopStreaming and streamingDataReceived – and I protect them from multiple threads using:

    - (void)streamingDataReceived:(StreamingData *)streamingData {
        if (self.isStreaming) {
            @synchronized(self) {
                if (self.isStreaming) {
                    ...

    - (void)stopStreaming {
        if (self.isStreaming) {
            @synchronized(self) {
                if (self.isStreaming) {
                    ...

    - (void)startStreaming:(NSArray *)watchlistInstrumentData {
        if (!self.isStreaming) {
            @synchronized(self) {
                if (!self.isStreaming) {
                    ...

    Is this double check unnecessary? Does the double check have similar problems in Objective-C as in Java? What are the alternatives to this pattern (anti-pattern)? Thanks

    Read the article

< Previous Page | 136 137 138 139 140 141 142 143 144 145 146 147  | Next Page >