Search Results

Search found 27396 results on 1096 pages for 'process template'.

  • backupexec 12.5 not following symlinks on linux agent

    - by Peter Carrero
    Ok, we are at a loss here trying to back up a linux box to a backupexec server... we have a backupexec 12.5 server and a "backupexec for windows servers linux agent" (sigh) running on one of our linux boxes. When a backup runs, we get exceptions reported for our symbolic links. It says something like: BACKUP- \\<servername>\[ROOT] File \\<servername>\[ROOT]/<foldername>/<symlink> is in the backup selection list but was not found. Looking at the selection list, the symlink shows as a 1k file in BUE. Tools > Options > Backup has "Backup files and directories by following symbolic links/junction points" selected. The same checkbox is selected under Job Setup > Job Properties > Edit Template > Advanced. Additionally, all the checkboxes are checked under Tools > Options > Linux, Unix, and Macintosh and under Job Setup > Job Properties > Edit Template > Linux, Unix, and Macintosh. These checkboxes read: "Preserve change time", "Follow local mount points", "Follow remote mount points", "Backup contents of soft-linked directories" and "Lock remote files", but changing those options apparently produces the same result. Any help on how to get BUE to make a proper backup would be greatly appreciated. Thanks.
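
    One quick sanity check from the Linux side (an editorial aside, not from the original post): GNU find can list the symlinks under the backup path, and -xtype l flags dangling ones, which an agent could plausibly also report as "not found". The path here is hypothetical:

        # list all symlinks under the backup root
        find /data -type l -ls
        # list only broken/dangling symlinks
        find /data -xtype l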

    Read the article

  • IIS7.5 Outbound Rule for lower case URLs in <a href="...">

    - by Quog
    Hi, I know how to canonicalise the case of URLs on incoming requests to IIS7.5; in fact, there's a built-in rule template to start from. But how about outbound (without changing the code)? This is where I've got to so far:

        <outboundRules>
          <rule name="Outbound lowercase" preCondition="IsHTML" enabled="true">
            <match filterByTags="A" pattern="[A-Z]" ignoreCase="false" />
            <action type="Rewrite" value="{ToLower:{R:0}}" />
          </rule>
          <preConditions>
            <preCondition name="IsHTML">
              <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
            </preCondition>
          </preConditions>
        </outboundRules>

    However, IIS barfs on the action with a 500, implying an invalid web.config, probably on the {ToLower:XXXX} which I stole from the MS-supplied inbound rule template. Anyone know how to do this? Anyone know where the options are fully documented? (My GoogleNinja skills failed me: I found this, but "Specifies value syntax for the rule. This element is available only for the Rewrite action type" is not really comprehensive.) Thanks, Damian
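
    A hedged diagnostic aside (editorial, not part of the original question): switching IIS to detailed error pages usually reveals exactly which web.config element or attribute a configuration 500 is complaining about, which narrows down whether {ToLower:} is really the culprit. Something like this in the site's web.config:

        <system.webServer>
          <httpErrors errorMode="Detailed" />
        </system.webServer>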

    Read the article

  • Netcat (nc) traditional package for RHEL 6.x?

    - by HTTP500
    I'm trying to use the Percona Apache Monitoring [Cacti] Template for Memcached. They do indeed warn that you can't use the openbsd version of the package, and provide a solution for Ubuntu/Debian users, i.e.: "You need nc on the server. Some versions of nc accept different command-line options. You can change the options used by configuring the PHP script. If you don't want to do this for some reason, then you can install a version of nc that conforms to the expectations coded in the script's default configuration instead. On Debian/Ubuntu, netcat-openbsd does not work, so you need the netcat-traditional package, and you need to switch to /bin/nc.traditional..." Since the RHEL 6.x version indeed comes from openbsd (confirmed by rpm -qi nc), how does one go about getting this installed on RHEL/CentOS? Anyone else running these Percona templates on RHEL/CentOS? What did you do? alien the Debian package?

    Update 1: FWIW, I tried to use GNU netcat by compiling it from source, but it doesn't seem to have the exact options required by the Cacti template either (i.e. there is no analog of -C or -q1, so it seems).

    Update 2: I alien[ed] the netcat-traditional_1.10-38_amd64.deb package to make a .tgz, and it does produce a binary "nc.traditional"; that version has the -q option but no -C. Cheers
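
    For anyone retracing the route in Update 2, a minimal sketch of the alien conversion (the package filename is taken from the post; the tarball name and extracted path are assumptions):

        # convert the Debian package to a tarball and inspect the binary
        alien --to-tgz netcat-traditional_1.10-38_amd64.deb
        tar xzf netcat-traditional-1.10.tgz
        ./bin/nc.traditional -h   # check which flags (-q, -C) this build actually supports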

    Read the article

  • Has anyone used tools like Chef, Salt, Puppet, or CFEngine to configure a 2008 Windows Server with SQL?

    - by Development 4.0
    I have been looking into tools to automate the creation of servers, for two different reasons: production machines and development machines. I love the idea of the immutable server. I have seen the tools demoed and used successfully on *nix boxes running Rails or LAMP etc. Has anyone found a good way to do this in the Microsoft stack? I would like to get in on the fun and create scripts that will install Windows, patch it according to specification, deploy SQL Server, create scripts to build out a database, and, just for fun, deploy SharePoint, configure it, and then deploy a SharePoint solution to it. I can get part of the way: install Windows manually, install SQL Server manually, use PowerShell to do all the configuration and setup, install SharePoint and configure part of it, then PowerShell for the rest of the configuration and deploying a solution. I would love to have the ability to run one script though, or at least one unified process. I can, and mostly have, used VM template images and then instantiated them, but the creation of the template is usually a manual step.

    Read the article

  • Invalid parameter error on Puppet node

    - by chandank
    I am getting this error: err: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter port at /etc/puppet/manifests/nodes/node.pp:652 on node test-puppet

    My node definition (line 652 of node.pp):

        node 'test-puppet' {
          class { 'syslog_ng':
            host    => "newhost",
            ip      => "192.168.1.10",
            port    => "1999",
            logfile => "/var/log/test.log",
          }
        }

    On the module side:

        class syslog_ng::config ($host, $ip, $port, $logfile) {
          file { '/etc/syslog-ng/syslog-ng.conf':
            ensure  => present,
            owner   => 'root',
            group   => 'root',
            content => template('syslog-ng/syslog-ng.conf.erb'),
            notify  => Service['syslog-ng'],
            require => Class['syslog_ng::install'],
          }
          file { "/etc/syslog-ng/conf/${host}.conf":
            ensure  => present,
            owner   => 'root',
            group   => 'root',
            notify  => Service['syslog-ng'],
            content => template("syslog-ng/${host}.conf.erb"),
            require => Class['syslog_ng::install'],
          }
        }

    I think I am doing it as per the puppet documentation.
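
    An editorial observation, hedged since the full module isn't shown: the node declares class { 'syslog_ng': ... }, but the parameters shown are declared on syslog_ng::config, so the top-level syslog_ng class presumably accepts no port parameter, which is exactly what "Invalid parameter port" means. One sketch of a fix is to parameterise the top-level class and pass the values through (class names are from the post; this body is an assumption):

        # hypothetical syslog_ng/manifests/init.pp
        class syslog_ng ($host, $ip, $port, $logfile) {
          class { 'syslog_ng::config':
            host    => $host,
            ip      => $ip,
            port    => $port,
            logfile => $logfile,
          }
        }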

    Read the article

  • Sudoers file allow sudo on specific file for active directory group

    - by tubaguy50035
    I have Active Directory sign-in working on an Ubuntu 12.04 box. When the user signs in, I have a script that runs that needs sudo permission (since it modifies the samba config file). How would I specify this in my sudoers file? I've tried:

        %DOMAIN\\AD+Programmers ALL=NOPASSWD: /usr/local/bin/createSambaShare.php

    I've found various resources on the internet stating that this is how it would be done, but I'm not sure that I have the first part right. What are they using as the DOMAIN? The workgroup or the realm? I use Samba + winbind for Active Directory integration. Here's my smb.conf:

        [global]
        security = ads
        netbios name = hostname
        realm = COMPANYNAME.COM
        password server = passwordserver
        workgroup = COMPANYNAME
        idmap uid = 1000-10000
        idmap gid = 1000-10000
        winbind separator = +
        winbind enum users = no
        winbind enum groups = no
        winbind use default domain = yes
        template homedir = /home/%D/%U
        template shell = /bin/bash
        client use spnego = yes
        domain master = no

    EDIT: The users that should have access to run that script are all part of the Programmers group, which has an Active Directory Domain Services folder of Company.com/Staff/Security Groups (not sure if that matters or not).
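
    A hedged suggestion based only on the smb.conf above: with winbind use default domain = yes, winbind reports group names without any DOMAIN prefix, so the sudoers entry would normally use the bare group name exactly as getent prints it. A sketch (the exact group name is an assumption):

        # first confirm how winbind actually names the group
        getent group | grep -i programmers

        # then use that exact name in sudoers, escaping any spaces with backslashes
        %programmers ALL=NOPASSWD: /usr/local/bin/createSambaShare.php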

    Read the article

  • samba joined to AD cannot see users in the security tab on client

    - by Jonathan
    I've got samba joined via kerberos and winbindd to our AD network, and user authentication and everything else is working great. However, when I try to add users/groups to file permissions, it tells me they are not found. All the users and groups show up fine with getent, so I'm not sure why they are not showing up. Here is my smb.conf; I would much appreciate any help with this.

        #GLOBAL PARAMETERS
        [global]
        socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=11264 SO_SNDBUF=11264
        workgroup = [hidden]
        realm = [hidden]
        preferred master = no
        server string = xerxes web/file server
        security = ADS
        encrypt passwords = yes
        log level = 3
        log file = /var/log/samba/%m
        max log size = 50
        printcap name = cups
        printing = cups
        winbind enum users = Yes
        winbind enum groups = Yes
        winbind use default domain = Yes
        winbind nested groups = Yes
        winbind separator = +
        winbind refresh tickets = yes
        idmap uid = 1600-20000
        idmap gid = 1600-20000
        template primary group = "Domain Users"
        template shell = /bin/bash
        kerberos method = system keytab
        nt acl support = yes

        [homes]
        comment = Home Directories
        valid users = %S
        read only = No
        browseable = No
        create mask = 0770
        directory mask = 0770
        force create mode = 0660
        force directory mode = 2770
        inherit owner = no

        [test]
        comment = Test
        path = /mnt/test
        writeable = yes
        valid users = %s
        create mask = 0770
        directory mask = 0770
        force create mode = 0660
        force directory mode = 2770
        inherit owner = no

        [printers]
        comment = All Printers
        path = /var/spool/cups
        browseable = no
        printable = yes
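
    One hedged avenue to investigate (an editorial addition, not from the post): editing ACLs from the Windows security tab requires the connecting domain user to hold SeDiskOperatorPrivilege on the samba server, which can be granted with samba's net tool. The group and account names here are examples:

        # grant the privilege needed to manage share permissions from Windows
        net rpc rights grant 'DOMAIN\Domain Admins' SeDiskOperatorPrivilege -U 'DOMAIN\administrator'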

    Read the article

  • SharePoint 2010 Enterprise wiki - [New page] missing

    - by icelava
    I am trying to ramp up knowledge on SharePoint deployment and usage (never did it before), due to a direction to use SharePoint 2010 as a repository platform (wiki format) for our customer's infrastructure documentation. In my test virtual server, a new site with the Enterprise wiki template was set up. I went into Site Actions > Manage Site Features to activate Wiki Page Home Page. The default sub-web then went from /Pages to /SitePages and looks like the default Team template. The odd thing is that Site Actions is missing the New Page option. My colleague does not understand why this is the case, as it ought to be there. The original /Pages sub-web does have the option. What conditions are in play that influence the appearance of that option?

    UPDATE: Another phenomenon observed: in the Site Actions > View All Site Content view, the wiki document libraries listed in the grid have their hyperlink (e.g. "Site Pages") lead straight to the default page. It does not show its own table listing of pages under that document library, unlike the original Pages document library, which expectedly shows up as a listing. I wonder if this hints at any problems.

    Read the article

  • Virtualmin: Automatically create SSL based website with a shared SSL wildcard cert?

    - by Josh
    I managed to configure this very nicely under cPanel/WHM, but I am having trouble configuring it under Virtualmin: when I create a new Virtual Server in Virtualmin, I want it to automatically create an Apache VirtualHost with a subdomain of a shared wildcard SSL domain. So for example, if I create a virtual server for some.example.com, I want two VirtualHosts:

        <VirtualHost 1.2.3.4:80>
          ServerName some.example.com
          ServerAlias www.some.example.com some_example.shared-ssl-domain.com
          ...
        </VirtualHost>

        <VirtualHost 1.2.3.4:443>
          ServerName some_example.shared-ssl-domain.com
          ...
          SSLEngine on
          SSLCertificateFile /path/to/shared-ssl-domain.com.crt
          SSLCertificateKeyFile /path/to/shared-ssl-domain.com.key
          SSLCACertificateFile /path/to/shared-ssl-domain.com.cabundle
        </VirtualHost>

    In cPanel/WHM I was able to do this easily because the template file contained the <VirtualHost> and </VirtualHost> directives, but Virtualmin's template does not. Is there any way I can set up Virtualmin to do what I want?
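
    A hedged pointer, offered with low certainty since Virtualmin versions differ: Virtualmin's Server Templates (System Settings > Server Templates, under the Apache website section) let you add extra directives to every newly created VirtualHost, with per-domain substitutions such as ${DOM} and ${IP} expanded at creation time. That could cover the extra ServerAlias on the port-80 vhost, for example:

        ServerAlias ${DOM}.shared-ssl-domain.com

    Note the post's naming scheme replaces dots with underscores, and the separate port-443 VirtualHost probably cannot be emitted from that directive template alone; both would likely need a post-creation script hooked into Virtualmin.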

    Read the article

  • How to configure my 404 response

    - by Evylent
    How would I correctly redirect a person who visits my site to my 404 page? I have already created my 404.php file as:

        <!DOCTYPE html>
        <html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title>Page not found | Twilight of Spirits</title>
        <link rel="stylesheet" href="http://forum.umbradora.net/template/default/css/404.css">
        <link rel="icon" type="image/x-icon" href="/favicon.png">
        </head>
        <body>
        <div id="error">
          <a href="http://forum.umbradora.net/">
            <img src="/forum/template/default/images/layout/404.png" alt="404 page not found" id="error404-image">
          </a>
        </div>
        <div id="mixpanel" style="visibility: hidden; "></div></body></html>

    My .htaccess file is:

        ErrorDocument 404 http://forum.umbradora.net/404.php

    Now when I go to my site and enter a false link such as mack.php or total.html, I get this error:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request.
        Please contact the server administrator, [email protected] and inform them of the time the error occurred, and anything you might have done that may have caused the error.
        More information about this error may be available in the server error log.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

    Any ideas on how to solve this? I have tried switching from the subdomain to my normal path and still get errors.
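
    An editorial note, hedged against the exact server layout: when ErrorDocument is given a full URL, Apache issues a redirect instead of serving the page internally, and the trailing "Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument" message means Apache could not find the error document itself at the path it resolved. Assuming 404.php sits in the document root of the host serving the request, a local path avoids both problems:

        # serve the error page internally instead of redirecting
        ErrorDocument 404 /404.php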

    Read the article

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated ecommerce management system on my hosting service. For that product, no support is provided anymore by the vendor. Brief explanation: the shop website, which claims to run on a LAMP stack, is built by an old Visual Basic Windows application running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc. Then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own.

    The problem: browsing the website, many products' descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price".

    The analysis:
    - MySQL contains a description truncated just before the € sign, so it's not PHP's fault.
    - The Access databases contain the full description with the € sign, so it's not the fault of the webmaster writing bad descriptions or of eDisplay cutting them.
    - The SQL that will run once the site gets uploaded, stored on my local machine before upload, contains the € sign.
    - The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign messed up like this: ^À
    - The vsftpd log reports (obfuscated for privacy): Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c — which indicates a binary transfer (and also a huge security vulnerability, because you can download the whole database over unauthenticated HTTP).
    - The eDisplay internal FTP client provides no option for ascii/binary transfer modes.
    - [Add] Trying to manually upload the SQL file via SFTP shows the messed-up euro too.
    - [Add2] Trying to manually upload using the Xftp client with explicit ASCII mode doesn't fix it either.

    It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host.

    The server: it's an Azure virtual machine running openSUSE 12.2 with both vsftpd and openSSH.

    The question: without asking the customer to manually upload files using FileZilla or replacing € with &euro; (because he refuses), what can I do on the server side to prevent vsftpd from screwing up the euro sign?
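
    A hedged server-side workaround (an editorial sketch, not from the post): if the mangled bytes are really a character-set mismatch, e.g. the script is written in a Windows codepage while the server-side toolchain expects UTF-8, converting the file after upload with iconv may recover the € sign. The source encoding here is a guess and worth verifying with file first:

        # inspect the suspect bytes, then convert (source charset is an assumption)
        file db.sql
        iconv -f WINDOWS-1252 -t UTF-8 db.sql > db.utf8.sql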

    Read the article

  • Apache on Mac OS X

    - by Michal K.
    I have problems with apache on Mac OS X Lion 10.7.5. I have these VirtualHosts:

        <VirtualHost *:80>
          ServerName devel.dev
          DocumentRoot /var/www
        </VirtualHost>

        <VirtualHost *:80>
          ServerName test.dev
          DocumentRoot /var/www
          <Directory "/var/www">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>
        </VirtualHost>

    In httpd.conf I have configured ServerRoot = /usr/htdocs (an empty directory). test.dev and devel.dev always say "It works!". Why? In /var/www I have an index.php which contains only one letter, "k" (for testing).

    Edit, more info: I have restarted apache a million times. The file with the VirtualHosts is included. error.log:

        [Tue Oct 02 20:03:55 2012] [notice] caught SIGTERM, shutting down
        [Tue Oct 02 20:03:55 2012] [warn] mod_bonjour: Cannot stat template index file '/System/Library/User Template/English.lproj/Sites/index.html'.
        [Tue Oct 02 20:03:55 2012] [notice] Digest: generating secret for digest authentication ...
        [Tue Oct 02 20:03:55 2012] [notice] Digest: done
        [Tue Oct 02 20:03:55 2012] [notice] Apache/2.2.22 (Unix) DAV/2 PHP/5.3.15 with Suhosin-Patch configured -- resuming normal operations

    When I stop apache, localhost still displays "It works!".
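
    A hedged pointer (editorial, not from the post): on Apache 2.2, name-based virtual hosts only take effect when a matching NameVirtualHost directive is present; without it, the default "It works!" site keeps winning. And since the page persists after stopping apache, it is worth checking whether a second httpd (e.g. the OS X built-in instance) is the one actually answering:

        # in httpd.conf, before the <VirtualHost *:80> blocks
        NameVirtualHost *:80

        # from a shell: which vhosts does *this* apache know, and what is listening on :80?
        apachectl -S
        sudo lsof -i :80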

    Read the article

  • Very high Magento/Apache memory usage even without visitors (are we fooled by our hosting company?)

    - by MrDobalina
    I am no server guy and we have issues with our speed, so I come here asking for advice. We have a VPS with 2 cores and 2gb of RAM at a Magento specialized hosting company. Over the course of the last weeks our site speed has gotten worse, even though our store is new, has less than 1000 SKUs, and has not even 100 visitors a day. At magespeedtest.com we only get 1.87 trans/sec @ 2.11 secs each with a mere 5 concurrent users. Our magento log files are clean, and we have no huge database tables or anything like that. When we take a look at our server's real time stats, we see that the memory usage jumped from about 34% to 71% and now 82% in just a few days while idle, with no visitors on the site. Our hosting company said that we do not need to worry about that, as it's maybe related to mysql, which creates buffers (which are maybe not even actually being used), and that what is important is CPU and swap; stats are ok there. They also said that the low benchmark scores are caused by bad extensions or template modifications on our side. We are not sure if we can trust that statement, as we only have 4 plugins installed (all from aheadworks and amasty, which are known to be among the best magento extension developers). Our template modifications are purely html and css, with no modifications to the php code. Our pagespeed is ranked 93/100 in firebug, and Magento is properly configured, so the problem really only gets obvious when there are a handful of users on the site at the same time. Can anyone confirm our hosting's statement about memory usage, and where can I start looking for a solution?
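
    A hedged first check (editorial, not from the post): Linux happily uses otherwise-idle RAM for page cache, so a rising "used" percentage in a hosting panel is not by itself alarming. The buffers/cache split from free shows how much memory applications actually hold:

        free -m
        # the "-/+ buffers/cache" line shows real application usage versus cache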

    Read the article

  • marshal data too short!!!

    - by Nitin Garg
    My application requires keeping large data objects in session. There are like 3-4 data objects, each created by parsing a csv containing 150 x 20 cells holding strings of 3-4 characters. My application shows this error: "marshal data too short". I tried this:

    1. Deleting the old session table.
    2. Deleting the old migration for the session table.
    3. Creating a new migration using rake db:sessions:create.
    4. Editing the migration manually, changing the data column from text to longtext.
    5. Running the migration using rake db:migrate.

    Now when I open my application, I see this page: link text. Please someone help me, this is getting on my nerves!!

    Other details of the application: in the view "index.html.erb" there is a link that makes an ajax call to an action in the controller; that action parses a large csv file and makes an object out of it. This object is stored in the session.

    ERROR LOG:

        ArgumentError in Scoring#index

        Showing app/views/scoring/index.html.erb where line #4 raised:
        marshal data too short

        Extracted source (around line #4):
        1:
        2:
        3:
        4: <%= link_to_remote "get csv file",
        5:     :url => { :action => 'show_static_1' },
        6:     :update => "static_score",
        7:     :complete => "$('static_score').update(request.responseText)" %>

        Application trace (the Framework and Full traces repeat the same frames
        through the rack and mongrel dispatch plumbing):

        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/session_store.rb:71:in `load'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/session_store.rb:71:in `unmarshal'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/session_store.rb:110:in `data'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/session_store.rb:292:in `get_session'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1448:in `silence'
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/session_store.rb:288:in `get_session'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:168:in `load_session'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:62:in `load!'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:70:in `stale_session_check!'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:28:in `[]'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/request_forgery_protection.rb:106:in `form_authenticity_token'
        app/views/scoring/index.html.erb:4:in `_run_erb_app47views47scoring47index46html46erb'

        Request Parameters: None
        Response Headers: {"Content-Type" => "text/html", "Cache-Control" => "no-cache"}
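
    An editorial sketch of the longtext change described above, hedged since the post's actual migration isn't shown: in Rails 2.3 with MySQL, giving the text column a 16MB limit makes the adapter emit MEDIUMTEXT, and any stale rows written under the old, shorter column should be cleared, because they hold truncated marshal blobs that will keep raising this exact error. The migration name is hypothetical:

        class WidenSessionsData < ActiveRecord::Migration
          def self.up
            # :limit => 16777215 maps to MEDIUMTEXT on MySQL
            change_column :sessions, :data, :text, :limit => 16777215
          end

          def self.down
            change_column :sessions, :data, :text
          end
        end

    followed by clearing old sessions:

        rake db:sessions:clear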

    Read the article

  • Strange WPF ListBox Behavior

    - by uncle-harvey
    I'm trying to bind a List of items to a ListBox in WPF. The items are grouped by one value, and each group is to be housed in an expander. Everything works fine when I don't use any custom styles. However, when I use custom styles (which work properly with non-grouped items and as independent controls), the binding doesn't display any items. Below is the code I'm executing. Any ideas why the items won't show up in the Expander?

    Test.xaml:

        <Window x:Class="Glossy.Test"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Test" Height="300" Width="300">
          <Window.Resources>
            <ResourceDictionary>
              <ResourceDictionary.MergedDictionaries>
                <ResourceDictionary Source="..\TestStyles.xaml"/>
                <ResourceDictionary>
                  <Style x:Key="ContainerStyle" TargetType="{x:Type GroupItem}">
                    <Setter Property="Template">
                      <Setter.Value>
                        <ControlTemplate>
                          <Expander Header="{Binding}" IsExpanded="True">
                            <ItemsPresenter />
                          </Expander>
                        </ControlTemplate>
                      </Setter.Value>
                    </Setter>
                  </Style>
                </ResourceDictionary>
              </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
          </Window.Resources>
          <Grid>
            <ListBox x:Name="TestList">
              <ListBox.GroupStyle>
                <GroupStyle ContainerStyle="{StaticResource ContainerStyle}"/>
              </ListBox.GroupStyle>
            </ListBox>
          </Grid>
        </Window>

    Test.xaml.cs:

        public partial class Test : Window
        {
            private List<Contact> _ContactItems;
            public List<Contact> ContactItems
            {
                get { return _ContactItems; }
                set { _ContactItems = value; }
            }

            public Test()
            {
                InitializeComponent();
                ContactItems = new List<Contact>
                {
                    new Contact { CompanyName = "ABC", Name = "Contact 1" },
                    new Contact { CompanyName = "ABC", Name = "Contact 2" },
                    new Contact { CompanyName = "ABC", Name = "Contact 3" },
                    new Contact { CompanyName = "ABC", Name = "Contact 10" },
                    new Contact { CompanyName = "ABC", Name = "Contact 11" },
                    new Contact { CompanyName = "ABC", Name = "Contact 12" },
                    new Contact { CompanyName = "RST", Name = "Contact 7" },
                    new Contact { CompanyName = "RST", Name = "Contact 8" },
                    new Contact { CompanyName = "RST", Name = "Contact 9" },
                    new Contact { CompanyName = "XYZ", Name = "Contact 4" },
                    new Contact { CompanyName = "XYZ", Name = "Contact 5" },
                    new Contact { CompanyName = "XYZ", Name = "Contact 6" }
                };

                ICollectionView view = CollectionViewSource.GetDefaultView(ContactItems);
                view.GroupDescriptions.Add(new PropertyGroupDescription("CompanyName"));
                view.SortDescriptions.Add(new SortDescription("Name", ListSortDirection.Ascending));
                TestList.ItemsSource = view;
            }
        }

        public class Contact
        {
            public string CompanyName { get; set; }
            public string Name { get; set; }
            public override string ToString() { return Name; }
        }

    TestStyles.xaml:

        <Style TargetType="{x:Type ListBox}">
          <Setter Property="SnapsToDevicePixels" Value="true"/>
          <Setter Property="OverridesDefaultStyle" Value="true"/>
          <Setter Property="ScrollViewer.HorizontalScrollBarVisibility" Value="Auto"/>
          <Setter Property="ScrollViewer.VerticalScrollBarVisibility" Value="Auto"/>
          <Setter Property="ScrollViewer.CanContentScroll" Value="true"/>
          <Setter Property="MinWidth" Value="120"/>
          <Setter Property="MinHeight" Value="95"/>
          <Setter Property="Template">
            <Setter.Value>
              <ControlTemplate TargetType="ListBox">
                <Grid Background="Black">
                  <Rectangle VerticalAlignment="Stretch" HorizontalAlignment="Stretch" Fill="White">
                    <Rectangle.OpacityMask>
                      <DrawingBrush>
                        <DrawingBrush.Drawing>
                          <GeometryDrawing Geometry="M65.5,33 L537.5,35 537.5,274.5 C536.5,81 119.5,177 66.5,92" Brush="#11444444">
                            <GeometryDrawing.Pen>
                              <Pen Brush="Transparent"/>
                            </GeometryDrawing.Pen>
                          </GeometryDrawing>
                        </DrawingBrush.Drawing>
                      </DrawingBrush>
                    </Rectangle.OpacityMask>
                  </Rectangle>
                  <Border Name="Border" Background="Transparent" BorderBrush="Gray" BorderThickness="1" CornerRadius="2">
                    <ScrollViewer Margin="0" Focusable="false">
                      <StackPanel Margin="2" IsItemsHost="True" />
                    </ScrollViewer>
                  </Border>
                </Grid>
                <ControlTemplate.Triggers>
                  <Trigger Property="IsEnabled" Value="false">
                    <Setter TargetName="Border" Property="Background" Value="Gray" />
                    <Setter TargetName="Border" Property="BorderBrush" Value="DimGray" />
                  </Trigger>
                  <Trigger Property="IsGrouping" Value="true">
                    <Setter Property="ScrollViewer.CanContentScroll" Value="false"/>
                  </Trigger>
                </ControlTemplate.Triggers>
              </ControlTemplate>
            </Setter.Value>
          </Setter>
        </Style>

        <Style TargetType="{x:Type ListBoxItem}">
          <Setter Property="SnapsToDevicePixels" Value="true"/>
          <Setter Property="OverridesDefaultStyle" Value="true"/>
          <Setter Property="Foreground" Value="Gray"/>
          <Setter Property="Background" Value="Transparent"/>
          <Setter Property="FontFamily" Value="Verdana"/>
          <Setter Property="HorizontalAlignment" Value="Stretch"/>
          <Setter Property="FontSize" Value="11"/>
          <Setter Property="Margin" Value="3,1,3,1"/>
          <Setter Property="Padding" Value="0"/>
          <Setter Property="FontWeight" Value="Normal"/>
          <Setter Property="VerticalAlignment" Value="Center"/>
          <Setter Property="Template">
            <Setter.Value>
              <ControlTemplate TargetType="ListBoxItem">
                <Border Name="Border" Padding="2" SnapsToDevicePixels="true">
                  <ContentPresenter />
                </Border>
                <ControlTemplate.Triggers>
                  <Trigger Property="IsSelected" Value="true">
                    <Setter TargetName="Border" Property="Background" Value="Gray"/>
                  </Trigger>
                  <Trigger Property="IsEnabled" Value="false">
                    <Setter Property="Foreground" Value="White"/>
                  </Trigger>
                </ControlTemplate.Triggers>
              </ControlTemplate>
            </Setter.Value>
          </Setter>
        </Style>

        <ControlTemplate x:Key="ExpanderToggleButton" TargetType="ToggleButton">
          <Border Name="Border" CornerRadius="2,0,0,0" Background="Transparent" BorderBrush="LightGray" BorderThickness="0,0,1,0">
            <Path Name="Arrow" Fill="Blue" HorizontalAlignment="Center" VerticalAlignment="Center" Data="M 0 0 L 4 4 L 8 0 Z"/>
          </Border>
          <ControlTemplate.Triggers>
            <Trigger Property="ToggleButton.IsMouseOver" Value="True">
              <Setter TargetName="Border" Property="Background" Value="Gray" />
            </Trigger>
            <Trigger Property="IsPressed" Value="True">
              <Setter TargetName="Border" Property="Background" Value="Black" />
            </Trigger>
            <Trigger Property="IsChecked" Value="True">
              <Setter TargetName="Arrow" Property="Data" Value="M 0 4 L 4 0 L 8 4 Z" />
            </Trigger>
            <Trigger Property="IsEnabled" Value="False">
              <Setter TargetName="Border" Property="Background" Value="DimGray" />
              <Setter TargetName="Border" Property="BorderBrush" Value="DimGray" />
              <Setter Property="Foreground" Value="LightGray"/>
              <Setter TargetName="Arrow" Property="Fill" Value="LightBlue" />
            </Trigger>
          </ControlTemplate.Triggers>
        </ControlTemplate>

        <Style TargetType="{x:Type Expander}">
          <Setter Property="Foreground" Value="White"/>
          <Setter Property="FontFamily" Value="Verdana"/>
          <Setter Property="FontSize" Value="11"/>
          <Setter Property="FontWeight" Value="Normal"/>
          <Setter Property="Template">
            <Setter.Value>
              <ControlTemplate TargetType="Expander">
                <Grid>
                  <Grid.RowDefinitions>
                    <RowDefinition Height="Auto"/>
                    <RowDefinition Name="ContentRow" Height="0"/>
                  </Grid.RowDefinitions>
                  <Border Name="Border" Grid.Row="0" Background="Black" BorderBrush="DimGray" BorderThickness="1" Cursor="Hand" CornerRadius="2,2,0,0">
                    <Grid HorizontalAlignment="Left">
                      <Grid.RowDefinitions>
                        <RowDefinition Height="23"/>
                      </Grid.RowDefinitions>
                      <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="20" />
                        <ColumnDefinition Width="*" />
                      </Grid.ColumnDefinitions>
                      <ToggleButton IsChecked="{Binding Path=IsExpanded,Mode=TwoWay,RelativeSource={RelativeSource TemplatedParent}}" Template="{StaticResource ExpanderToggleButton}" Background="Black" />
                      <Label Grid.Column="1" FontSize="14" FontWeight="Normal" Margin="0" VerticalAlignment="Top" Foreground="White" FontFamily="Verdana">
                        <ContentPresenter Grid.Column="1" Margin="4,3,0,0" HorizontalAlignment="Left" ContentSource="Header" RecognizesAccessKey="True" />
                      </Label>
                    </Grid>
                  </Border>
                  <Border Name="Content" Background="Black" BorderBrush="DimGray" BorderThickness="1,0,1,1" Grid.Row="1" CornerRadius="0,0,2,2">
                    <Grid Background="Black">
                      <Rectangle VerticalAlignment="Stretch" HorizontalAlignment="Stretch" Fill="White">
                        <Rectangle.OpacityMask>
                          <DrawingBrush>
                            <DrawingBrush.Drawing>
                              <GeometryDrawing Geometry="M65.5,33 L537.5,35 537.5,274.5 C536.5,81 119.5,177 66.5,92" Brush="#11444444">
                                <GeometryDrawing.Pen>
                                  <Pen Brush="Transparent"/>
                                </GeometryDrawing.Pen>
                              </GeometryDrawing>
                            </DrawingBrush.Drawing>
                          </DrawingBrush>
                        </Rectangle.OpacityMask>
                      </Rectangle>
                      <ContentPresenter Margin="4" />
                    </Grid>
                  </Border>
                </Grid>
                <ControlTemplate.Triggers>
                  <Trigger Property="IsExpanded" Value="True">
                    <Setter TargetName="ContentRow" Property="Height" Value="{Binding ElementName=Content,Path=DesiredHeight}" />
                  </Trigger>
                  <Trigger Property="IsEnabled" Value="False">
                    <Setter TargetName="Border" Property="Background" Value="Gray" />
                    <Setter TargetName="Border" Property="BorderBrush" Value="DimGray" />
                    <Setter Property="Foreground" Value="White"/>
                  </Trigger>
                </ControlTemplate.Triggers>
              </ControlTemplate>
            </Setter.Value>
          </Setter>
        </Style>

    Read the article

  • Need some help on how to replay the last game of a java maze game

    - by Marty
    Hello, I am working on creating a Java maze game for a project. The maze is displayed on the console as standard output not in an applet. I have created most of hte code I need, however I am stuck at one problem and that is I need a user to be able to replay the last game i.e redraw the maze with the users moves but without any input from the user. I am not sure on what course of action to take, i was thinking about copying each users move or the position of each move into another array, as you can see i have 2 variables which hold the position of the player, plyrX and plyrY do you think copying these values into a new array after each move would solve my problem and how would i go about this? I have updated my code, apologies about the textIO.java class not being present, not sure how to resolve that exept post a link to TextIO.java [TextIO.java][1] My code below is updated with a new array of type char to hold values from the original maze (read in from text file and displayed using unicode characters) and also to new variables c_plyrX and c_plyrY which I am thinking should hold the values of plyrX and plyrY and copy them into the new array. When I try to call the replayGame(); method from the menu the maze loads for a second then the console exits so im not sure what I am doing wrong Thanks public class MazeGame { //unicode characters that will define the maze walls, //pathways, and in game characters. final static char WALL = '\u2588'; //wall final static char PATH = '\u2591'; //pathway final static char PLAYER = '\u25EF'; //player final static char ENTRANCE = 'E'; //entrance final static char EXIT = '\u2716'; //exit //declaring member variables which will hold the maze co-ordinates //X = rows, Y = columns static int entX = 0; //entrance X co-ordinate static int entY = 1; //entrance y co-ordinate static int plyrX = 0; static int plyrY = 1; static int exitX = 24; //exit X co-ordinate static int exitY = 37; //exit Y co-ordinate //static member variables which hold maze values //used so values can be accessed from different methods static int rows; //rows variable static int cols; //columns variable static char[][] maze; //defines 2 dimensional array to hold the maze //variables that hold player movement values static char dir; //direction static int spaces; //amount of spaces user can travel //variable to hold amount of moves the user has taken; static int movesTaken = 0; //new array to hold player moves for replaying game static char[][] mazeCopy; static int c_plyrX; static int c_plyrY; /** userMenu method for displaying the user menu which will provide various options for * the user to choose such as play a maze game, get instructions, etc. */ public static void userMenu(){ TextIO.putln("Maze Game"); TextIO.putln("*********"); TextIO.putln("Choose an option."); TextIO.putln(""); TextIO.putln("1. Play the Maze Game."); TextIO.putln("2. View Instructions."); TextIO.putln("3. Replay the last game."); TextIO.putln("4. 
Exit the Maze Game."); TextIO.putln(""); int option; //variable for holding users option TextIO.put("Type your choice: "); option = TextIO.getlnInt(); //gets users option //switch statement for processing menu options switch(option){ case 1: playMazeGame(); case 2: instructions(); case 3: if (c_plyrX == plyrX && c_plyrY == plyrY)replayGame(); else { TextIO.putln("Option not available yet, you need to play a game first."); TextIO.putln(); userMenu(); } case 4: System.exit(0); //exits the user out of the console default: TextIO.put("Option must be 1, 2, 3 or 4"); } } //end of userMenu /**main method, will call the userMenu and get the users choice and call * the relevant method to execute the users choice. */ public static void main(String[]args){ userMenu(); //calls the userMenu method } //end of main method /**instructions method, displays instructions on how to play * the game to the user/ */ public static void instructions(){ TextIO.putln("To beat the Maze Game you have to move your character"); TextIO.putln("through the maze and reach the exit in as few moves as possible."); TextIO.putln(""); TextIO.putln("Your characer is displayed as a " + PLAYER); TextIO.putln("The maze exit is displayed as a " + EXIT); TextIO.putln("Reach the exit and you have won escaped the maze."); TextIO.putln("To control your character type the direction you want to go"); TextIO.putln("and how many spaces you want to move"); TextIO.putln("for example 'D3' will move your character"); TextIO.putln("down 3 spaces."); TextIO.putln("Remember you can't walk through walls!"); boolean insOption; //boolean variable TextIO.putln(""); TextIO.put("Do you want to play the Maze Game now? (Y or N) "); insOption = TextIO.getlnBoolean(); if (insOption == true)playMazeGame(); else userMenu(); } //end of instructions method /**playMazeGame method, calls the loadMaze method and the charMove method * to start playing the Maze Game. */ public static void playMazeGame(){ loadMaze(); plyrMoves(); } //end of playMazeGame method /**loadMaze method, loads the 39x25 maze from the MazeGame.txt text file * and inserts values from the text file into the maze array and * displays the maze on screen using the unicode block characters. * plyrX and plyrY variables are set at their staring co ordinates so that when * a game is completed and the user selects to play a new game * the player character will always be at position 01. */ public static void loadMaze(){ plyrX = 0; plyrY = 1; TextIO.readFile("MazeGame.txt"); //now reads from the external MazeGame.txt file rows = TextIO.getInt(); //gets the number of rows from text file to create X dimensions cols = TextIO.getlnInt(); //gets number of columns from text file to create Y dimensions maze = new char[rows][cols]; //creates maze array of base type char with specified dimnensions //loop to process the array and read in values from the text file. for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ maze[i][j] = TextIO.getChar(); } TextIO.getln(); } //end for loop TextIO.readStandardInput(); //closes MazeGame.txt file and reads from //standard input. 
//loop to process the array values and display as unicode characters for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ if (i == plyrX && j == plyrY){ plyrX = i; plyrY = j; TextIO.put(PLAYER); //puts the player character at player co-ords } else{ if (maze[i][j] == '0') TextIO.putf("%c",WALL); //puts wall block if (maze[i][j] == '1') TextIO.putf("%c",PATH); //puts path block if (maze[i][j] == '2') { entX = i; entY = j; TextIO.putf("%c",ENTRANCE); //puts entrance character } if (maze[i][j] == '3') { exitX = i; //holds value of exit exitY = j; //co-ordinates TextIO.putf("%c",EXIT); //puts exit character } } } TextIO.putln(); } //end for loop } //end of loadMaze method /**redrawMaze method, method for redrawing the maze after each move. * */ public static void redrawMaze(){ TextIO.readFile("MazeGame.txt"); //now reads from the external MazeGame.txt file rows = TextIO.getInt(); //gets the number of rows from text file to create X dimensions cols = TextIO.getlnInt(); //gets number of columns from text file to create Y dimensions maze = new char[rows][cols]; //creates maze array of base type char with specified dimnensions //loop to process the array and read in values from the text file. for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ maze[i][j] = TextIO.getChar(); } TextIO.getln(); } //end for loop TextIO.readStandardInput(); //closes MazeGame.txt file and reads from //standard input. //loop to process the array values and display as unicode characters for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ if (i == plyrX && j == plyrY){ plyrX = i; plyrY = j; TextIO.put(PLAYER); //puts the player character at player co-ords } else{ if (maze[i][j] == '0') TextIO.putf("%c",WALL); //puts wall block if (maze[i][j] == '1') TextIO.putf("%c",PATH); //puts path block if (maze[i][j] == '2') { entX = i; entY = j; TextIO.putf("%c",ENTRANCE); //puts entrance character } if (maze[i][j] == '3') { exitX = i; //holds value of exit exitY = j; //co-ordinates TextIO.putf("%c",EXIT); //puts exit character } } } TextIO.putln(); } //end for loop } //end redrawMaze method /**replay game method * */ public static void replayGame(){ c_plyrX = plyrX; c_plyrY = plyrY; TextIO.readFile("MazeGame.txt"); //now reads from the external MazeGame.txt file rows = TextIO.getInt(); //gets the number of rows from text file to create X dimensions cols = TextIO.getlnInt(); //gets number of columns from text file to create Y dimensions mazeCopy = new char[rows][cols]; //creates maze array of base type char with specified dimnensions //loop to process the array and read in values from the text file. for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ mazeCopy[i][j] = TextIO.getChar(); } TextIO.getln(); } //end for loop TextIO.readStandardInput(); //closes MazeGame.txt file and reads from //standard input. 
    //loop to process the array values and display them as unicode characters
    for (int i = 0; i < rows; i++){
        for (int j = 0; j < cols; j++){
            if (i == c_plyrX && j == c_plyrY){
                c_plyrX = i;
                c_plyrY = j;
                TextIO.put(PLAYER); //puts the player character at the player co-ords
            }
            else{
                if (mazeCopy[i][j] == '0') TextIO.putf("%c",WALL); //puts wall block
                if (mazeCopy[i][j] == '1') TextIO.putf("%c",PATH); //puts path block
                if (mazeCopy[i][j] == '2') {
                    entX = i;
                    entY = j;
                    TextIO.putf("%c",ENTRANCE); //puts entrance character
                }
                if (mazeCopy[i][j] == '3') {
                    exitX = i; //holds value of exit
                    exitY = j; //co-ordinates
                    TextIO.putf("%c",EXIT); //puts exit character
                }
            }
        }
        TextIO.putln();
    } //end for loop
} //end replayGame method

/**plyrMoves method, moves the player's character around the maze.
 * The probe position (nplyrX, nplyrY) walks one cell at a time in the
 * chosen direction checking for walls; the move is only applied once
 * every cell on the way, including the destination, is clear.
 */
public static void plyrMoves(){
    int nplyrX = plyrX;
    int nplyrY = plyrY;
    int pMoves;
    direction();
    //UP
    if (dir == 'U' || dir == 'u'){
        nplyrX = plyrX;
        nplyrY = plyrY;
        for (pMoves = 0; pMoves <= spaces; pMoves++){
            if (maze[nplyrX][nplyrY] == '0'){
                TextIO.putln("Invalid move, try again.");
                break;
            }
            else if (pMoves != spaces){
                nplyrX = nplyrX - 1; //up means a smaller row index
            }
            else {
                plyrX = plyrX - spaces;
                c_plyrX = plyrX;
                movesTaken++;
            }
        }
    } //end UP if
    //DOWN
    if (dir == 'D' || dir == 'd'){
        nplyrX = plyrX;
        nplyrY = plyrY;
        for (pMoves = 0; pMoves <= spaces; pMoves++){
            if (maze[nplyrX][nplyrY] == '0'){
                TextIO.putln("Invalid move, try again.");
                break;
            }
            else if (pMoves != spaces){
                nplyrX = nplyrX + 1; //down means a larger row index
            }
            else{
                plyrX = plyrX + spaces;
                c_plyrX = plyrX;
                movesTaken++;
            }
        }
    } //end DOWN if
    //LEFT
    if (dir == 'L' || dir == 'l'){
        nplyrX = plyrX;
        nplyrY = plyrY;
        for (pMoves = 0; pMoves <= spaces; pMoves++){
            if (maze[nplyrX][nplyrY] == '0'){
                TextIO.putln("Invalid move, try again.");
                break;
            }
            else if (pMoves != spaces){
                nplyrY = nplyrY - 1; //left means a smaller column index
            }
            else{
                plyrY = plyrY - spaces;
                c_plyrY = plyrY;
                movesTaken++;
            }
        }
    } //end LEFT if
    //RIGHT
    if (dir == 'R' || dir == 'r'){
        nplyrX = plyrX;
        nplyrY = plyrY;
        for (pMoves = 0; pMoves <= spaces; pMoves++){
            if (maze[nplyrX][nplyrY] == '0'){
                TextIO.putln("Invalid move, try again.");
                break;
            }
            else if (pMoves != spaces){
                nplyrY = nplyrY + 1; //right means a larger column index
            }
            else{
                plyrY = plyrY + spaces;
                c_plyrY = plyrY;
                movesTaken++;
            }
        }
    } //end RIGHT if
    //prints message if the player escapes from the maze.
    if (maze[plyrX][plyrY] == '3'){
        TextIO.putln("****Congratulations****");
        TextIO.putln();
        TextIO.putln("You have escaped from the maze.");
        TextIO.putln();
        userMenu();
    }
    else{
        movesTaken++;
        redrawMaze();
        plyrMoves();
    }
} //end of plyrMoves method

/**direction method, reads the direction the player wants to move in.
 */
public static char direction(){
    TextIO.putln("Enter the direction you wish to move in and the distance");
    TextIO.putln("i.e. D3 = move down 3 spaces");
    TextIO.putln("U - Up, D - Down, L - Left, R - Right: ");
    dir = TextIO.getChar();
    if (dir == 'U' || dir == 'D' || dir == 'L' || dir == 'R'
            || dir == 'u' || dir == 'd' || dir == 'l' || dir == 'r'){
        spacesMoved();
    }
    else{
        loadMaze();
        TextIO.putln("Invalid direction!");
        TextIO.put("Direction must be one of U, D, L or R");
        direction();
    }
    return dir; //returns the value of dir (direction)
} //end direction method

/**spacesMoved method, gets the number of spaces the user wants to move.
 */
public static int spacesMoved(){
    TextIO.putln(" ");
    spaces = TextIO.getlnInt();
    if (spaces <= 0){
        loadMaze();
        TextIO.put("Invalid number of spaces, try again");
        spacesMoved();
    }
    return spaces;
} //end spacesMoved method
} //end of MazeGame class

    Read the article

  • Complete Guide to Networking Windows 7 with XP and Vista

    - by Mysticgeek
    Since there are three versions of Windows out in the field these days, chances are you need to share data between them. Today we show how to get each version to share files and printers with one another. In a perfect world, getting your computers with different Microsoft operating systems to network would be as easy as clicking a button. With the Windows 7 Homegroup feature, it's almost that easy. However, getting all three of them to communicate with each other can be a bit of a challenge. Today we've put together a guide that will help you share files and printers in whatever scenario of the three versions you might encounter on your home network.
    Sharing Between Windows 7 and XP
    The most common scenario you're probably going to run into is sharing between Windows 7 and XP. Essentially you'll want to make sure both machines are part of the same workgroup, set up the correct sharing settings, and make sure network discovery is enabled on Windows 7. The biggest problem you may run into is finding the correct printer drivers for both versions of Windows. Share Files and Printers Between Windows 7 & XP
    Map a Network Drive
    Another method of sharing data between XP and Windows 7 is mapping a network drive. If you don't need to share a printer and only want to share a drive, then you can just map an XP drive to Windows 7. Although it might sound complicated, the process is not bad. The trickiest part is making sure you add the appropriate local user. This will allow you to share the contents of an XP drive to your Windows 7 computer. Map a Network Drive from XP to Windows 7
    Sharing between Vista and Windows 7
    Another scenario you might run into is having to share files and printers between a Vista and Windows 7 machine. The process is a bit easier than sharing between XP and Windows 7, but takes a bit of work. The Homegroup feature isn't compatible with Vista, so we need to go through a few different steps. Depending on what your printer is, sharing it should be easier as Vista and Windows 7 do a much better job of automatically locating the drivers. How to Share Files and Printers Between Windows 7 and Vista
    Sharing between Vista and XP
    When Windows Vista came out, hardware requirements were intensive, drivers weren't ready, and sharing between the two was complicated due to the new Vista structure. The sharing process is pretty straightforward if you're not using password protection, as you just need to drop what you want to share into the Vista Public folder. On the other hand, sharing with password protection becomes a bit more difficult. Basically you need to add a user and set up sharing on the XP machine. But once again, we have a complete tutorial for that situation. Share Files and Folders Between Vista and XP Machines
    Sharing Between Windows 7 Machines with Homegroup
    If you have one or more Windows 7 machines, sharing files and devices becomes extremely easy with the Homegroup feature. It's as simple as creating a Homegroup on one machine and then joining the other to it. It allows you to stream media, control what data is shared, and can also be password protected. If you don't want to make your Windows 7 machines part of the same Homegroup, you can still share files through the Public Folder, and set up a printer to be shared as well.
Use the Homegroup Feature in Windows 7 to Share Printers and Files
Create a Homegroup & Join a New Computer To It
Change which Files are Shared in a Homegroup
Windows Home Server
If you want an ultimate setup that creates a centralized location to share files between all systems on your home network, regardless of the operating system, then set up a Windows Home Server. It allows you to centralize your important documents and digital media files on one box and provides easy access to data and the ability to stream media to other machines on your network. Not only that, but it provides easy backup of all your machines to the server, in case disaster strikes. How to Install and Setup Windows Home Server How to Manage Shared Folders on Windows Home Server
Conclusion
The biggest annoyance is dealing with printers that have a different set of drivers for each OS. There is no real easy way to solve this problem. Our best advice is to try to connect it to one machine, and if the drivers won't work, hook it up to the other computer and see if that works. Each printer manufacturer is different, and Windows doesn't always automatically install the correct drivers for the device. We hope this guide helps you share your data between whichever Microsoft OS scenario you might run into! Here are some other articles that will help you accomplish your home networking needs: Share a Printer on a Home Network from Vista or XP to Windows 7 How to Share a Folder the XP Way in Windows Vista

    Read the article

  • Remove Office 2010 Beta and Reinstall Office 2007

    - by Matthew Guay
    Have you tried out the Office 2010 beta, but want to go back to Office 2007? Here's a step-by-step tutorial on how to remove your Office 2010 beta and reinstall your Office 2007.
    The Office 2010 beta will expire on October 31, 2010, at which time you may see a dialog like the one below. At that time, you will need to either upgrade to the final release of Office 2010, or reinstall your previous version of Office. Our computer was running the Office 2010 Home and Business Click to Run beta, and after uninstalling it we reinstalled Office 2007 Home and Student. This was a Windows Vista computer, but the process will be exactly the same on Windows XP, Vista, or Windows 7. Additionally, the process to reinstall Office 2007 will be exactly the same regardless of the edition of Office 2007 you're using. However, please note that if you are running a different edition of Office 2010, especially the 64 bit version, the process may be slightly different. We will cover this scenario in another article.
    Remove Office 2010 Click to Run Beta:
    To remove Office 2010 Click to Run Beta, open Control Panel and select Uninstall a Program. If your computer is running Windows 7, enter "Uninstall a program" in your Start menu search. Scroll down, select "Microsoft Office Click-to-Run 2010 (Beta)", and click the Uninstall button on the toolbar. Note that there will be two entries for Office, so make sure to select the "Click-to-Run" entry. This will automatically remove all of Office 2010 and its components. Click Yes to confirm you want to remove it. Office 2010 beta uninstalled fairly quickly, and a reboot will be required. Once your computer is rebooted, Office 2010 will be entirely removed.
    Reinstall Office 2007
    Now you're on to the easy part. Simply insert your Office 2007 CD, and it should automatically start the setup. If not, open Computer and double-click on your CD drive. Now, double-click on setup.exe to start the installation. Enter your product key, and click Continue… Click Install Now, or click Customize if you want to change the default installation settings. Wait while Office 2007 installs… it takes around 15 to 20 minutes in our experience. Once it's finished, close the installer.
    Now, open one of the Office applications. A popup will open asking you to activate Office. Make sure you're connected to the internet, and click Next; otherwise, you can select to activate over the phone if you do not have internet access. This should only take a minute, and Office 2007 will be activated and ready to run. Everything should work just as it did before you installed Office 2010. Enjoy!
    Office Updates
    Make sure to install the latest updates for Office 2007, as these are not included on your disk. Check Windows Update (search for Windows Update in the Start menu search), and install all of the available updates for Office 2007, including Service Pack 2.
    Conclusion
    This is a great way to keep using Office even if you don't decide to purchase Office 2010 after it is released. Additionally, if you were using another version of Office, such as Office 2003, then reinstall it as normal after following the steps to remove Office 2010. 

    Read the article

  • Connecting to DB2 from SSIS

    - by Christopher House
    The project I'm currently working on involves moving various pieces of data from a legacy DB2 environment to some SQL Server and flat file locations. Most of the data flows are real time, so they were a natural fit for the client's MQSeries on their iSeries servers and BizTalk to handle the messaging. Some of the data flows, however, are daily batch type transmissions. For the daily batch transmissions, it was decided that we'd use SSIS to pull the data direct from DB2 to either a SQL Server or flat file. I'm not at all an SSIS guy; I've done a bit here and there, but mainly for situations where we needed to move data from a dev environment to QA, mostly informal stuff like that. And, as much as I'm not an SSIS guy, I'm even less a DB2/iSeries guy. Prior to this engagement, my knowledge of DB2 was limited to the fact that it's an IBM product and that it was probably a DBMS platform (that's what the DB in DB2 means, right?).
    One of my first goals when I came onto this project was to develop a POC SSIS package to pull some data from DB2 and dump it to a flat file. It sounded like a pretty straightforward task. As always, the devil is in the details. Configuring the DB2 connection manager took a bit of trial and error. As such, I thought I'd post my experiences here in hopes that they might save someone the efforts I went through. That being said, please keep in mind, as I pointed out, I'm not at all a DB2 guy, so my terminology and explanations may not be 100% spot on.
    Before you get started, you need to figure out how you're going to connect to DB2. From the research I did, it looks like there are a few options. IBM has both an OLE DB and .NET data provider which can be found here. I installed their client access tools and tried to use both the .NET and OLE DB providers, but I received an error message from both when attempting to connect to the iSeries that indicated I needed a license for a product called DB2 Connect. I inquired with one of my client's iSeries resources about a license for this product and it appears they didn't have one, so that meant the IBM drivers were out. The other option that I found quite a bit of discussion around was Microsoft's OLE DB Provider for DB2. This driver is part of the feature pack for SQL Server 2008 Enterprise Edition and can be downloaded here. As it turns out, I already had Microsoft's driver installed on my dev VM, which struck me as odd since I hadn't installed it. I discovered that the driver is installed with the BizTalk adapter pack for host systems, which was also installed on my VM. However, it looks like the version used by the adapter pack is newer than the version provided in the SQL Server feature pack.
    Once you get the driver installed, create a connection manager in your package just like you normally would and select the Microsoft OLE DB Provider for DB2 from the list of available drivers. After you select the driver, you'll need to enter your host name, login credentials and initial catalog. A couple of things to note here. First, the Initial catalog needs to be the same as your host name. Not sure why that is, but trust me, it just does. Second, for credentials, in my environment we're using what the client's iSeries people refer to as "profiles". I guess this is similar to SQL auth in the SQL Server world. In other words, they've given me a username and password for connecting to DB2, so I've entered it here. Next, click the Data Links button.  
On the Data Links screen, enter your package collection on the first tab. Package collection is one of those DB2 concepts I'm still trying to figure out. From the little bit I've read, packages are used to control SQL compilation and each DB2 connection needs one. The package collection, I believe, controls where your package is created. One of the iSeries folks I've been working with told me that I should always use QGPL for my package collection, as QGPL is "general purpose" and doesn't require any additional authority. Next click the ellipsis next to the Network drop-down. Here you'll want to enter your host name again. Again, not sure why you need to do this, but trust me, my connection wouldn't work until I entered my hostname here. Finally, go to the Advanced tab, select your DBMS platform and check Process binary as character. My environment is DB2 on the iSeries, and iSeries is the replacement for AS/400, so I selected DB2/AS400 for my platform. Process binary as character was necessary to handle some of the DB2 data types; I had a few columns that showed all their data as "System.Byte[]", and checking Process binary as character resolved this. At this point, you should be good to go. You can go back to the Connection tab on the Data Links dialog to perform a couple of tests to validate your configuration. The Test Connection button is obvious; this just verifies you can connect to the host using the configuration data you've entered. The Packages button will attempt to connect to the host and create the packages required to execute queries. This isn't meant to be a comprehensive look at SSIS and DB2; these are just some of the notes I've come up with since I've started working with DB2 and SSIS. I'm sure as I continue developing my packages, I'll find more quirks and will post them here.
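For anyone who wants to sanity-check connectivity outside of SSIS, a small ADO.NET console test against the same provider can be handy. This is only a sketch: the host, schema, credentials and query are placeholders, and the connection-string keywords are my best recollection of what the Data Links dialog writes out for the Microsoft OLE DB Provider for DB2, so verify them against the dialog's output on your own machine.

using System;
using System.Data.OleDb;

class Db2ConnectionSmokeTest
{
    static void Main()
    {
        // Mirrors the Data Links settings discussed above: Initial Catalog
        // equals the host name, Package Collection is QGPL, the platform is
        // DB2/AS400, and binary columns are processed as character data.
        string connectionString =
            "Provider=DB2OLEDB;" +
            "Network Address=MYISERIES;" +            // placeholder host name
            "Initial Catalog=MYISERIES;" +            // must match the host name
            "Package Collection=QGPL;" +
            "Default Schema=MYSCHEMA;" +              // placeholder schema
            "DBMS Platform=DB2/AS400;" +
            "Process Binary as Character=True;" +
            "User ID=MYPROFILE;Password=MYPASSWORD;"; // the iSeries "profile"

        using (OleDbConnection connection = new OleDbConnection(connectionString))
        using (OleDbCommand command = new OleDbCommand(
            "SELECT * FROM MYSCHEMA.MYTABLE FETCH FIRST 5 ROWS ONLY", connection))
        {
            connection.Open();
            using (OleDbDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader[0]); // dump the first column
                }
            }
        }
    }
}

If the Test Connection button in the Data Links dialog succeeds but a test like this fails, the difference is usually a missing or misspelled connection-string keyword.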

    Read the article

  • Programmatically use a server as the Build Server for multiple Project Collections

    Important: With this post you create a scenario that is unsupported by Microsoft. It will break your support for this server with Microsoft, so handle with care. I am the administrator of a TFS environment with a lot of Project Collections. In the supported configuration of TFS 2010 you need one Build Controller per Project Collection, and it is not supported to have multiple Build Controllers installed on the same server. Jim Lamb created a post on how you can modify your system to change this behaviour. But since I have so many Project Collections, I automated this with the API of TFS.
    When you install a new build server via the UI, you do the following steps:
    1. Register the build service (with this you hook the Windows server into the build server environment)
    2. Add a new build controller
    3. Add a new build agent
    So in pseudo code, the code would look like:

    foreach (projectCollection in GetAllProjectCollections)
    {
        CreateNewWindowsService();
        RegisterService();
        AddNewController();
        AddNewAgent();
    }

    The following code fragments show you the most important parts of the method implementations. Attached is the full project.
    CreateNewWindowsService
    We create a new Windows service with the SC command via the Diagnostics.Process class:

    var pi = new ProcessStartInfo("sc.exe")
    {
        Arguments = string.Format(
            "create \"{0}\" start= auto binpath= \"C:\\Program Files\\Microsoft Team Foundation Server 2010\\Tools\\TfsBuildServiceHost.exe /NamedInstance:{0}\" DisplayName= \"Visual Studio Team Foundation Build Service Host ({1})\"",
            serviceHostName, tpcName)
    };
    Process.Start(pi);
    pi.Arguments = string.Format("failure {0} reset= 86400 actions= restart/60000", serviceHostName);
    Process.Start(pi);

    RegisterService
    The trick in this method is that we set the NamedInstance static property. This property is internal, so we need to set it through reflection. To get information on these you need nice Microsoft friends and the .NET Reflector.

    // Indicate which build service host instance we are using
    typeof(BuildServiceHostUtilities).Assembly.GetType("Microsoft.TeamFoundation.Build.Config.BuildServiceHostProcess").InvokeMember("NamedInstance",
        System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.SetProperty | System.Reflection.BindingFlags.Static, null, null, new object[] { serviceName });

    // Create the build service host
    serviceHost = buildServer.CreateBuildServiceHost(serviceName, endPoint);
    serviceHost.Save();

    // Register the build service host
    BuildServiceHostUtilities.Register(serviceHost, user, password);

    AddNewController and AddNewAgent
    Once you have the BuildServiceHost, the rest is pretty straightforward. There are methods on the BuildServiceHost to modify the controllers and the agents:

    controller = serviceHost.CreateBuildController(controllerName);
    agent = controller.ServiceHost.CreateBuildAgent(agentName, buildDirectory, controller);
    controller.AddBuildAgent(agent);

    You have now seen the highlights of the application. If you need it and want to have sample information when you work in this area, download the app TFS2010_RegisterBuildServerToTPCs
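For context, here is a rough sketch of what the GetAllProjectCollections step from the pseudo code could look like against the TFS 2010 client API. The server URL is a placeholder and error handling is omitted; treat the catalog query as an illustration rather than the exact code from the attached project.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Client;
using Microsoft.TeamFoundation.Framework.Common;

class ListProjectCollections
{
    static void Main()
    {
        // Placeholder URL for the TFS 2010 application tier.
        TfsConfigurationServer server = TfsConfigurationServerFactory.GetConfigurationServer(
            new Uri("http://tfsserver:8080/tfs"));

        // Ask the configuration server's catalog for every project collection node.
        var collectionNodes = server.CatalogNode.QueryChildren(
            new[] { CatalogResourceTypes.ProjectCollection },
            false,                      // do not recurse
            CatalogQueryOptions.None);

        foreach (CatalogNode node in collectionNodes)
        {
            // The display name is what would feed the service host,
            // controller and agent naming scheme shown above.
            Console.WriteLine(node.Resource.DisplayName);
        }
    }
}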

    Read the article

  • The Complementary Roles of PLM and PIM

    - by Ulf Köster
    Oracle Product Value Chain Solutions (aka Enterprise PLM Solutions) are a comprehensive set of product management solutions that work together to provide Oracle customers with a broad array of capabilities to manage all aspects of product life: innovation, design, launch, and supply chain / commercialization processes beyond the capabilities and boundaries of traditional engineering-focused Product Lifecycle Management applications. They support companies with an integrated managed view across the product value chain: From Lab to Launch, From Farm to Fork, From Concept to Product to Customer, From Product Innovation to Product Design and Product Commercialization. Product Lifecycle Management (PLM) represents a broad suite of software solutions to improve product-oriented business processes and data. PLM success stories prove that PLM helps companies improve time to market, increase product-related revenue, reduce product costs, reduce internal costs and improve product quality. As a maturing suite of enterprise solutions, PLM is still evolving to realize the promise it can provide across all facets of a business and all phases of the product lifecycle. The vision for PLM includes everything from gathering early requirements for a product through multiple stages of the product lifecycle from product design, through commercialization and eventual product retirement or replacement. In discrete or process industries, PLM is typically more focused on Product Definition as items with respect to the technical view of a material or part, including specifications, bills of material and manufacturing data. With Agile PLM, this is specifically related to capabilities addressing Product Collaboration, Governance and Compliance, Product Quality Management, Product Cost Management and Engineering Collaboration. PLM today is mainly addressing key requirements in the early product lifecycle, in engineering changes or in the “innovation cycle”, and primarily adds value related to product design, development, launch and engineering change process. In short, PLM is the master for Product Definition, wherever manufacturing takes place. Product Information Management (PIM) is a product suite that has evolved in parallel to PLM. Product Information Management (PIM) can extend the value of PLM implementations by providing complementary tools and capabilities. More relevant in the area of Product Commercialization, the vision for PIM is to manage product information throughout an enterprise and supply chain to improve product-related knowledge management, information sharing and synchronization from multiple data sources. PIM success stories have shown the ability to provide multiple benefits, with particular emphasis on reducing information complexity and information management costs. Product Information in PIM is typically treated as the commercial view of a material or part, including sales and marketing information and categorization. PIM collects information from multiple manufacturing sites and multiple suppliers into its repository, but also provides integration tools to push the information back out to the other systems, serving as an active central repository with the aim to provide a holistic view on any product sold by a company (hence the name “Product Hub”). In short, PIM is the master of commercial Product Information. 
So PIM is quickly becoming mandatory because of its value in optimizing multichannel selling processes and relationships with customers, as you can see from the following comparison (Viewpoint / PLM Current State / PIM / Key Benefits PIM adds to PLM):

Product Lifecycle
  PLM: Primarily R&D; front-end innovation cycle; change process.
  PIM: Primarily the commercial / transactional state of the lifecycle.
  PIM adds: Provides a seamless information flow from design and manufacturing through the ultimate selling and servicing of products.

Data
  PLM: Primarily focused on "item" vs. "product" data; product structures; specifications; technical information.
  PIM: Repository for all product information; reaches out to the entire enterprise and its various silos of product information and descriptions.
  PIM adds: Provides a "trusted source" of accurate product information to the internal organization and trading partners.

Data Lifecycle
  PLM: Repository for all design iterations; historical information.
  PIM: Released, current information, with version management and time stamping.
  PIM adds: Provides a single location to track and audit historical product information.

Communication
  PLM: PLM releases the finished product to ERP; PLM is the master for Product Definition.
  PIM: Captures information from disparate sources, including in-house data stores; recognizes the reality of today's data "mess" across information silos.
  PIM adds: Provides the ability to package product information to its audience in the desired, relevant format to meet their exacting business requirements.

Departmental
  PLM: R&D; manufacturing; quality; compliance; procurement; strategic marketing.
  PIM: Focus on marketing and sales; gathering information from other departments, multiple sites, multiple suppliers.
  PIM adds: A singular enterprise solution that leverages existing information silos and data stores.

Supply Chain
  PLM: Multi-site internal collaboration; supplier collaboration; customer collaboration.
  PIM: Works with customers, exchanges / data pools, and trading partners to provide relevant product information packaged the way the customer desires.
  PIM adds: Provides the ability to give trading partners and internal customers information in the manner they desire, continuously.

Tools
  PLM: Data management; collaboration; innovation management.
  PIM: Cleansing; synchronization; hub functions.
  PIM adds: Consistent, clean and complete commercial product information.

The goals of both PLM and PIM, put simply, are to help companies make more profit from their products. PLM and PIM solutions can be easily added as they share some of the same goals, while coming from two different perspectives: the definition of the product and the commercialization of the product. Both can serve as a form of product "system of record", but take different approaches to delivering value. Oracle Product Value Chain solutions offer rich new strategies for executives to collectively leverage Agile PLM, Product Data Hub, together with Enterprise Data Quality for Products, and other industry leading Oracle applications to achieve further incremental value, like Oracle Innovation Management. This is unique on the market today.

    Read the article

  • repair broken packages-"dpkg: error: conflicting actions -f (--field) and -r (--remove)"

    - by yinon
    Ubuntu 12.04 LTS. If more information is needed, tell me and I'll provide it. The main problem is: tzach@tzach-pc:~$ sudo apt-get install docky [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done docky is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not going to be installed or java6-runtime-headless openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). tzach@tzach-pc:~$ and also: tzach@tzach-pc:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not installed or java6-runtime-headless openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not installed E: Unmet dependencies. Try using -f. So we tried the guide here in message #9: http://ubuntuforums.org/showthread.php?t=947124 We ran all of the first 4 commands, and the last one, "sudo apt-get autoremove", gave us: tzach@tzach-pc:~$ sudo apt-get autoremove Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not installed or java6-runtime-headless openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not installed E: Unmet dependencies. Try using -f. So we ran the last command twice, once per package: sudo dpkg --remove -force --force-remove-reinstreq ca-certificates-java and sudo dpkg --remove -force --force-remove-reinstreq openjdk-7-jre-lib but both of them give: tzach@tzach-pc:~$ sudo dpkg --remove -force --force-remove-reinstreq ca-certificates-java [sudo] password for tzach: dpkg: error: conflicting actions -f (--field) and -r (--remove) Type dpkg --help for help about installing and deinstalling packages [*]; Use `dselect' or `aptitude' for user-friendly package management; Type dpkg -Dhelp for a list of dpkg debug flag values; Type dpkg --force-help for a list of forcing options; Type dpkg-deb --help for help about manipulating *.deb files; Options marked [*] produce a lot of output - pipe it through `less' or `more' ! EDIT FOR green7 - output of "sudo apt-get -f install": tzach@tzach-pc:~$ sudo apt-get -f install [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: icedtea-7-jre-cacao icedtea-7-jre-jamvm java-common openjdk-7-jre-headless tzdata-java Suggested packages: default-jre equivs sun-java6-fonts ttf-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts The following packages will be REMOVED: ttf-mscorefonts-installer The following NEW packages will be installed: icedtea-7-jre-cacao icedtea-7-jre-jamvm java-common openjdk-7-jre-headless tzdata-java 0 upgraded, 5 newly installed, 1 to remove and 355 not upgraded. 
5 not fully installed or removed. Need to get 0 B/29.6 MB of archives. After this operation, 88.5 MB of additional disk space will be used. Do you want to continue [Y/n]? y debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: warning: there's no installed package matching ttf-mscorefonts-installer:amd64 Setting up tzdata (2012e-0ubuntu0.12.04) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing tzdata (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: tzdata E: Sub-process /usr/bin/dpkg returned an error code (1) EDIT2 FOR green7: tzach@tzach-pc:~$ sudo apt-get remove --purge tzdata [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not going to be installed or java6-runtime-headless libc6 : Depends: tzdata but it is not going to be installed libc6:i386 : Depends: tzdata:i386 libical0 : Depends: tzdata but it is not going to be installed openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not going to be installed python-dateutil : Depends: tzdata but it is not going to be installed ubuntu-minimal : Depends: tzdata but it is not going to be installed util-linux : Depends: tzdata (>= 2006c-2) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). EDIT3 FOR green7: tzach@tzach-pc:~$ sudo apt-get install openjdk-7-jre-headless [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: openjdk-7-jre-headless : Depends: tzdata-java but it is not going to be installed Depends: java-common (>= 0.28) but it is not going to be installed Recommends: icedtea-7-jre-cacao (= 7~u3-2.1.1~pre1-1ubuntu3) but it is not going to be installed Recommends: icedtea-7-jre-jamvm (= 7~u3-2.1.1~pre1-1ubuntu3) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). some things in the text also supposed to be bolded. but not critic (: Thanks for the editing! Thanks a lot for your assistance.

    Read the article

  • Slow boot on Ubuntu 12.04

    - by Hailwood
    My Ubuntu is booting really slow (Windows is booting faster...). I am using Ubuntu a Dell Inspiron 1545 Pentium(R) Dual-Core CPU T4300 @ 2.10GHz, 4GB Ram, 500GB HDD running Ubuntu 12.04 with gnome-shell 3.4.1. After running dmesg the culprit seems to be this section, in particular the last three lines: [26.557659] ADDRCONF(NETDEV_UP): eth0: link is not ready [26.565414] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.355355] Console: switching to colour frame buffer device 170x48 [27.362346] fb0: radeondrmfb frame buffer device [27.362347] drm: registered panic notifier [27.362357] [drm] Initialized radeon 2.12.0 20080528 for 0000:01:00.0 on minor 0 [27.617435] init: udev-fallback-graphics main process (1049) terminated with status 1 [30.064481] init: plymouth-stop pre-start process (1500) terminated with status 1 [51.708241] CE: hpet increased min_delta_ns to 20113 nsec [59.448029] eth2: no IPv6 routers present But I have no idea how to start debugging this. sudo lshw -C video $ sudo lshw -C video *-display description: VGA compatible controller product: RV710 [Mobility Radeon HD 4300 Series] vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 32 bits clock: 33MHz capabilities: pm pciexpress msi vga_controller bus_master cap_list rom configuration: driver=fglrx_pci latency=0 resources: irq:48 memory:e0000000-efffffff ioport:de00(size=256) memory:f6df0000-f6dfffff memory:f6d00000-f6d1ffff After loading the propriety driver my new dmesg log is below (starting from the first major time gap): [2.983741] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: (null) [25.094327] ADDRCONF(NETDEV_UP): eth0: link is not ready [25.119737] udevd[520]: starting version 175 [25.167086] lp: driver loaded but no devices found [25.215341] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel. [25.215345] Disabling lock debugging due to kernel taint [25.231924] wmi: Mapper loaded [25.318414] lib80211: common routines for IEEE802.11 drivers [25.318418] lib80211_crypt: registered algorithm 'NULL' [25.331631] [fglrx] Maximum main memory to use for locked dma buffers: 3789 MBytes. [25.332095] [fglrx] vendor: 1002 device: 9552 count: 1 [25.334206] [fglrx] ioport: bar 1, base 0xde00, size: 0x100 [25.334229] pci 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [25.334235] pci 0000:01:00.0: setting latency timer to 64 [25.337109] [fglrx] Kernel PAT support is enabled [25.337140] [fglrx] module loaded - fglrx 8.96.4 [Mar 12 2012] with 1 minors [25.342803] Adding 4189180k swap on /dev/sda7. 
Priority:-1 extents:1 across:4189180k [25.364031] type=1400 audit(1338241723.027:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=606 comm="apparmor_parser" [25.364491] type=1400 audit(1338241723.031:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=606 comm="apparmor_parser" [25.364760] type=1400 audit(1338241723.031:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=606 comm="apparmor_parser" [25.394328] wl 0000:0c:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [25.394343] wl 0000:0c:00.0: setting latency timer to 64 [25.415531] acpi device:36: registered as cooling_device2 [25.416688] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A03:00/device:34/LNXVIDEO:00/input/input6 [25.416795] ACPI: Video Device [VID] (multi-head: yes rom: no post: no) [25.416865] [Firmware Bug]: Duplicate ACPI video bus devices for the same VGA controller, please try module parameter "video.allow_duplicates=1"if the current driver doesn't work. [25.425133] lib80211_crypt: registered algorithm 'TKIP' [25.448058] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [25.448321] snd_hda_intel 0000:00:1b.0: irq 47 for MSI/MSI-X [25.448353] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [25.738867] eth1: Broadcom BCM4315 802.11 Hybrid Wireless Controller 5.100.82.38 [25.761213] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [25.761406] input: HDA Intel Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [25.783432] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [25.908318] EXT4-fs (sda6): re-mounted. Opts: errors=remount-ro [25.928155] input: Dell WMI hotkeys as /devices/virtual/input/input9 [25.960561] udevd[543]: renamed network interface eth1 to eth2 [26.285688] init: failsafe main process (835) killed by TERM signal [26.396426] input: PS/2 Mouse as /devices/platform/i8042/serio2/input/input10 [26.423108] input: AlpsPS/2 ALPS GlidePoint as /devices/platform/i8042/serio2/input/input11 [26.511297] Bluetooth: Core ver 2.16 [26.511383] NET: Registered protocol family 31 [26.511385] Bluetooth: HCI device and connection manager initialized [26.511388] Bluetooth: HCI socket layer initialized [26.511391] Bluetooth: L2CAP socket layer initialized [26.512079] Bluetooth: SCO socket layer initialized [26.530164] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [26.530168] Bluetooth: BNEP filters: protocol multicast [26.553893] type=1400 audit(1338241724.219:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=928 comm="apparmor_parser" [26.554860] Bluetooth: RFCOMM TTY layer initialized [26.554866] Bluetooth: RFCOMM socket layer initialized [26.554868] Bluetooth: RFCOMM ver 1.11 [26.557910] type=1400 audit(1338241724.223:6): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=927 comm="apparmor_parser" [26.559166] type=1400 audit(1338241724.223:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=928 comm="apparmor_parser" [26.559574] type=1400 audit(1338241724.223:8): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=928 comm="apparmor_parser" [26.575519] type=1400 audit(1338241724.239:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=931 comm="apparmor_parser" [26.581100] type=1400 
audit(1338241724.247:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=931 comm="apparmor_parser" [26.582794] type=1400 audit(1338241724.247:11): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince" pid=929 comm="apparmor_parser" [26.605672] ppdev: user-space parallel port driver [27.592475] sky2 0000:09:00.0: eth0: enabling interface [27.604329] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.606962] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.852509] vesafb: mode is 1024x768x32, linelength=4096, pages=0 [27.852513] vesafb: scrolling: redraw [27.852515] vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0 [27.852523] mtrr: type mismatch for e0000000,400000 old: write-back new: write-combining [27.852527] mtrr: type mismatch for e0000000,200000 old: write-back new: write-combining [27.852531] mtrr: type mismatch for e0000000,100000 old: write-back new: write-combining [27.852534] mtrr: type mismatch for e0000000,80000 old: write-back new: write-combining [27.852538] mtrr: type mismatch for e0000000,40000 old: write-back new: write-combining [27.852541] mtrr: type mismatch for e0000000,20000 old: write-back new: write-combining [27.852544] mtrr: type mismatch for e0000000,10000 old: write-back new: write-combining [27.852548] mtrr: type mismatch for e0000000,8000 old: write-back new: write-combining [27.852551] mtrr: type mismatch for e0000000,4000 old: write-back new: write-combining [27.852554] mtrr: type mismatch for e0000000,2000 old: write-back new: write-combining [27.852558] mtrr: type mismatch for e0000000,1000 old: write-back new: write-combining [27.853154] vesafb: framebuffer at 0xe0000000, mapped to 0xffffc90005580000, using 3072k, total 3072k [27.853405] Console: switching to colour frame buffer device 128x48 [27.853426] fb0: VESA VGA frame buffer device [28.539800] fglrx_pci 0000:01:00.0: irq 48 for MSI/MSI-X [28.540552] [fglrx] Firegl kernel thread PID: 1168 [28.540679] [fglrx] Firegl kernel thread PID: 1169 [28.540789] [fglrx] Firegl kernel thread PID: 1170 [28.540932] [fglrx] IRQ 48 Enabled [29.845620] [fglrx] Gart USWC size:1236 M. [29.845624] [fglrx] Gart cacheable size:489 M. [29.845629] [fglrx] Reserved FB block: Shared offset:0, size:1000000 [29.845632] [fglrx] Reserved FB block: Unshared offset:fc21000, size:3df000 [29.845635] [fglrx] Reserved FB block: Unshared offset:1fffb000, size:5000 [59.700023] eth2: no IPv6 routers present

    Read the article

  • Part 2&ndash;Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So… let's get down and dirty and start setting up the Test Rig.
    What Components of Azure will I be using for building the Test Rig in the Cloud?
    To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and Test Agents we'll make use of Windows Azure Connect.
    Azure Connect
    The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure Connect. A very useful video explaining everything you wanted to know about Windows Azure Connect.
    Azure Compute
    Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles.' Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discussing the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute.
    Each Windows Azure compute instance represents a virtual server:
    Virtual Machine Size | CPU Cores | Memory | Cost Per Hour
    Extra Small | Shared | 768 MB | $0.04
    Small | 1 | 1.75 GB | $0.12
    Medium | 2 | 3.50 GB | $0.24
    Large | 4 | 7.00 GB | $0.48
    Extra Large | 8 | 14.00 GB | $0.96
    You might want to review the Windows Azure Pricing FAQ.
    Let's Get Started building the Test Rig…
    Configuration:
    Machine | Role | Comments
    VM – 1 | Domain Controller for Playpit.com | On Premise
    VM – 2 | TFS, Test Controller | On Premise
    VM – 3 | Test Agent | Cloud
    In this blog post I would assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.
    I. Let's start building VM – 3: The Test Agent
    Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud Template. Choose the Worker Role for reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
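For reference, the worker role entry point that the Cloud Template generates looks roughly like the sketch below (reproduced from memory of the Visual Studio template, so take it as illustrative rather than a byte-for-byte copy):

using System.Diagnostics;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // The role just needs to stay alive; the test agent itself will run
        // as a Windows service on the VM once it is configured.
        Trace.WriteLine("WorkerRole1 entry point called", "Information");
        while (true)
        {
            Thread.Sleep(10000);
            Trace.WriteLine("Working", "Information");
        }
    }

    public override bool OnStart()
    {
        // Set the maximum number of concurrent connections.
        ServicePointManager.DefaultConnectionLimit = 12;
        return base.OnStart();
    }
}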
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I'll be showing you how to install Connect on the on premise machine.                       EnableDomainJoin – Set the value to true; of course we want to join the Windows Azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the 'service manager' and 'Active Directory Users and Computers'. Also, I created a new domain OU, namely 'CloudInstances', so all my cloud instances joined to the domain show up here; this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I'll be covering that when I come to certificates and encryption in the coming section.       Now once you have filled all this information in, the configuration file should look something like below: <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in the ServiceDefinition.csdef; we could make the changes manually or allow a beautiful wizard to help us make them. I prefer the second option. So right click on the Windows Azure project and choose Publish.       Now once you get the publish wizard, if you haven't already, you will be asked to import your Windows Azure subscription; this is simply the MSDN subscription activation key xml. Once you have done that, click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'.       As soon as you do that you get another pop up asking for the details of the user that you would be logging in with (make sure you enter a reasonable expiry date; you do not want the user account to expire today). Notice the 'more information' tag at the bottom; click that to get access to the certificate section. See screen shot below.       
Next we will be enabling the Remote Desktop module in the ServiceDefinition.csdef. We could make the changes manually or let a rather nice wizard make them for us; I prefer the second option. So right click the Windows Azure project and choose Publish.

Once you get the publish wizard, if you haven’t already done so you will be asked to import your Windows Azure subscription; this is simply the MSDN subscription activation key XML. Once done, click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.

As soon as you do that you get another pop up asking for the details of the user you will be logging in with (make sure you enter a reasonable expiry date; you do not want the user account to expire today). Notice the ‘More information’ link at the bottom – click it to get access to the certificates section. See the screen shot below.

From the drop down select the option to create a new certificate. In the pop up window enter a friendly name for your certificate – in my case I entered ‘WAC – Test Rig’ – and click OK. This creates a new certificate for you. Click the View button to see the certificate details. Do you see the thumbprint? That is the value that will go in the config file (very important). Now click the ‘Copy to File’ button to export the certificate; we will need to import it into the Windows Azure Management Portal later, so make sure you save it in a safe location.

Click Finish and enter the details of the user you would like to create with permissions for remote desktop access. Once you have entered the details on the ‘Remote Desktop Configuration’ screen, click OK. From the Publish Windows Azure wizard screen press Cancel – Cancel because we don’t want to publish the role just yet – and Yes because we want to save all the changes to the config file.

Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder modules have been imported for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="Connect" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Now go to the ServiceConfiguration.Cloud.cscfg file and you will see a whole bunch of “Microsoft.WindowsAzure.Plugins.RemoteAccess.*” settings added for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4aeADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWSAKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccgvzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaMCp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>

Okay, let’s look at them one at a time:

Enabled – Yes, we would like to enable remote access.

AccountUsername – This is the user name you entered on the publish wizard screen, as detailed above.

AccountEncryptedPassword – Try and decode that! The certificate is used to encrypt the password you specified for the user account. Remember, earlier I said either follow the instructions or wait and I’d be showing you encryption; see the sketch just after this list. As it happens, the user account I am using for RDP has the same password as my domain account, so I can simply copy the value of AccountEncryptedPassword into DomainPassword as well.

AccountExpiration – This is the expiration you specified in the wizard earlier; make sure your account does not expire today.

RemoteForwarder – Check out the documentation; below is how I understand it:
-- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances.
-- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.

Certificate – Remember the certificate thumbprint from the wizard? The on premise machine and the Windows Azure role machine that need to speak to each other must have the same thumbprint. More on that when we install the Windows Azure Connect endpoints on the on premise machine.
As I said earlier, in this blog post I’ll be showing you the manual process, so I won’t be scripting any startup tasks to install the test agent or register it with the TFS server; I’ll be showing you all that cool stuff in the next blog post. It is important to understand the manual side first, because it makes troubleshooting much easier when something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure worker role aka Test Agent VM, have it join the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS server.

II. Windows Azure Connect: Setting up Connect on VM – 2, i.e. TFS & Test Controller

Glad you made it this far. We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on premise machines so the on premise TFS/Test Controller and the Azure-ed Test Agent can talk to each other. Let’s have a look at how to do this.

Log on to VM – 2, the machine running the TFS server and Test Controller, then log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the screen shot below; if not, you will be asked to complete the subscription first.

Click on Install Local Endpoints from the top left of the panel and you get a URL with a token appended to it. Remember the token I showed you earlier? The token you get here should match the token you added to the Test Agent config file.

Copy the URL to the clipboard and paste it into Internet Explorer (important: at present the installation only works from IE, and you need cookies enabled in order to complete it). As stated in the pop up, you can NOT download the software and run it later; you need to run it as is, since the URL contains the token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.

Right click the Azure Connect icon, choose Diagnostics and refer to this link for the diagnostic terminology.

NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray. A bit of binging with Google revealed that the icon is only shown when the ‘Windows Azure Connect Endpoint’ service is started. So go to services.msc and make sure the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled. You can look at the service dependencies, start those, and then start Windows Azure Connect. Bottom line: the Windows Azure Connect Endpoint service must be running before you can proceed (a small PowerShell sketch for this check appears at the end of this section). Please refer to MSDN for more on troubleshooting Windows Azure Connect, then follow the next step as well.

Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group; let’s call it ‘Test Rig’. Make sure you add VM – 2 (the TFS server VM where you just installed the endpoint) to it.

Now if you go back to the Azure Connect icon in the system tray and click ‘Refresh Policy’, you will notice that the icon’s disconnected status changes to ready for connection.

III. Importing the Certificate into the Windows Azure Management Portal

Before going any further you need to import the certificate you created in Step I into the Windows Azure Management Portal. Log on to the portal, click on ‘Hosted Services, Storage Accounts & CDN’, then ‘Management Certificates’, followed by Add Certificates, as shown in the screen shot below.

Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot).

Now you should be able to see the imported certificate here; make sure its thumbprint matches the one you inserted in the config files.
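Before we move on to publishing, here is the small PowerShell sketch promised in the NOTE above. It covers two checks: that the ‘Windows Azure Connect Endpoint’ service (and anything it depends on) is running, and that the certificate is present locally so you can eyeball its thumbprint against the .cscfg value. The service display name and the certificate friendly name are taken from my machine, so treat them as assumptions and adjust to yours; the sketch also assumes the wildcard matches a single service.

# 1. Make sure the 'Windows Azure Connect Endpoint' service and its dependencies are running.
#    The display name is an assumption from my machine - verify it in services.msc.
$svc = Get-Service -DisplayName 'Windows Azure Connect*'
$svc.ServicesDependedOn | Where-Object { $_.Status -ne 'Running' } | Start-Service
# If a dependency is disabled, Start-Service will fail; enable it first, e.g.:
#   Set-Service -Name <ServiceName> -StartupType Manual
if ($svc.Status -ne 'Running') { $svc | Start-Service }

# 2. Find the 'WAC - Test Rig' certificate and print its thumbprint for comparison
#    with the thumbprint in ServiceConfiguration.Cloud.cscfg.
Get-ChildItem Cert:\CurrentUser\My |
    Where-Object { $_.FriendlyName -eq 'WAC - Test Rig' } |
    Format-List FriendlyName, Thumbprint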
IV. Publish the Windows Azure Worker Role aka Test Agent

Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!

From the View menu select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of the new worker role.

Once the deployment is complete you should be able to RDP (go to the run prompt, type mstsc and enter the machine name in the pop up) into the Test Agent worker role VM from the Playpit network using the domain admin user account. If you are unable to log in with the domain admin account, it means the process of joining the Test Agent to the domain has failed! The good news is that, because you imported the Connect module, you can still connect to the Test Agent machine via the Windows Azure Management Portal and troubleshoot the failure: log in with the user name and password you specified in the config file for the RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword keys (just enter the password unencrypted), then fix the issue or manually join the machine to the domain, as sketched below. Once you have managed to join the Test Agent VM to the domain, move to the next step.
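If you do end up joining the machine to the domain manually, you do not have to click through the System Properties UI; the sketch below does the same from an elevated PowerShell prompt on the Test Agent VM. The domain, OU and account are from my play.pit.com setup, so substitute your own.

# Manually join the worker role VM to the domain over the Connect tunnel.
# Run elevated on the Test Agent VM; domain, OU and credentials below are
# from my play.pit.com setup - substitute your own.
$cred = Get-Credential 'playpit\Administrator'
Add-Computer -DomainName 'play.pit.com' `
             -OUPath 'OU=CloudInstances,DC=Play,DC=Pit,DC=com' `
             -Credential $cred
Restart-Computer   # the domain join only takes effect after a reboot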
So, log in to the Test Agent worker role VM with the Playpit domain administrator and verify that you can log in, that the machine is joined to the domain, and that the Connect service is running successfully. If yes, give yourself a pat on the back: you are 80% mission accomplished!

Go to the Windows Azure Management Portal, click on Virtual Network, click Groups and Roles, select the ‘Test Rig’ group you created earlier and click Edit Group. In the ‘Connect to’ section, click Add to select the worker role you have just deployed. Also check ‘Allow connections between endpoints in the group’; this enables communication between the test controller and the test agents, and between test agents. Click Save.

Now you are ready to deploy the Test Agent software on the worker role Test Agent VM and configure it to work with the test controller.

V. Configuring VM – 3: Installing the Test Agent and Associating the Test Agent with the Controller

Log in to the worker role Test Agent VM that you have just deployed, making sure you log in with the domain administrator account. Download the agents software from MSDN (‘en_visual_studio_agents_2010_x86_x64_dvd_509679.iso’), extract the ISO and navigate to where you extracted it; in my case, “C:\Resources\Temp\VsAgentSetup”. Open the Test Agent folder and double click setup.exe. Once the installation finishes you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

Once the Test Agent software is installed you will need to configure it. Right click the Test Agent configuration tool and run it as a different user, i.e. an administrator. This is really just to run the configuration wizard with elevated privileges (UAC might block a few things otherwise).

In the run options you can select ‘service’; you do not need to run the agent interactively unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS test controller. In real life I would never do that; I would create a separate test service account for this purpose. But for the blog post, we are using the most powerful user so that no policies or restrictions block you.

Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you are most likely to run into either a permissions issue or a firewall blocking communication.

And now the moment of truth! Go to VM – 2, open Visual Studio and from the Test menu select Manage Test Controller.

Mission accomplished! You should be able to see the Test Agent that you have just configured here.

VI. Creating and Running Load Tests on your Brand New Azure-ed Test Rig

I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.

Blog posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate

Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate

Now that you have created your load tests, there is one last change to make before you can run them on your Azure test rig: create a new test settings file, change the test execution method to ‘Remote Execution’ and select the test controller you configured the worker role Test Agent against, in our case VM – 2. Then go on, fire off a test run and see the results of the tests being executed on the Azure-ed test rig.

Review and What’s Next?

A quick recap of the benefits of running the test rig in the cloud, and what I will be covering in the next blog post. And I would love to hear your feedback!

Advantages
- Utilise the power of Azure compute to run a heavy virtual user load.
- Benefit from Azure’s flexibility: destroy Test Agents when they are not in use; it takes less than 25 minutes to spin up a new one.
- Most importantly, test network latency (network latency and connection speed are two different things, and latency is usually very hard to test). By placing Test Agents in Microsoft data centres around the globe, you can measure the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.

Next Steps
The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to make the role deployment, the unattended install of the test agent software, and the registration of the test agent with the test controller fully automated. In the next blog post I will show you how to make the complete process unattended; a rough sketch of the moving parts follows as a postscript below.

Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.
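Postscript: the unattended install and registration mentioned under Next Steps essentially boils down to two commands wrapped in a startup task. Treat the sketch below as a direction of travel rather than a recipe; the silent install switch, the TestAgentConfig.exe arguments, and all paths and credentials are assumptions from memory that you must verify against each tool’s /? output. Tested scripts will follow in Part III.

# Hedged sketch of the unattended Test Agent install + registration.
# VERIFY every switch with 'setup.exe /?' and 'TestAgentConfig.exe /?' before relying on this.
# Paths, controller name, account and password are placeholders for your own.

# 1. Silently install the Test Agent from the extracted agents ISO.
& 'C:\Resources\Temp\VsAgentSetup\TestAgent\setup.exe' /q /norestart

# 2. Register the agent with the test controller, running as a service.
& 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\TestAgentConfig.exe' `
    ConfigureAsService /UserName:playpit\TestService /Password:P@ssw0rd `
    /TestController:WIN-KUDQMQFGQOL.play.pit.com:6901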
