Search Results

Search found 18139 results on 726 pages for 'private cloud'.


  • Windows 7 client can't connect to CentOS PPTP VPN

    - by Chris
    Have a Macintosh (10.8.2) that connects just fine to a CentOS 6.0 virtual private server (OpenVZ, with PPP added by the host) via PPTP. A Windows 7 Home Premium client (virtualized in Sun's VirtualBox), on the same computer, using the same Ethernet connection, cannot connect to the Linux VPN server. I have iptables disabled (for testing) on the Linux box. I have the Windows firewall turned off. /var/log/messages looks like this for a Windows connection:

    Oct 12 18:44:30 production pptpd[1880]: CTRL: Client 66.104.246.168 control connection started
    Oct 12 18:44:30 production pptpd[1880]: CTRL: Starting call (launching pppd, opening GRE)
    Oct 12 18:44:30 production pppd[1881]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
    Oct 12 18:44:30 production pppd[1881]: pptpd-logwtmp: $Version$
    Oct 12 18:44:30 production pppd[1881]: pppd options in effect:
    Oct 12 18:44:30 production pppd[1881]: debug#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: nologfd#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: dump#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: plugin /usr/lib/pptpd/pptpd-logwtmp.so#011#011# (from command line)
    Oct 12 18:44:30 production pppd[1881]: require-mschap-v2#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: refuse-pap#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: refuse-chap#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: refuse-mschap#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: name pptpd#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: pptpd-original-ip 66.104.246.168#011#011# (from command line)
    Oct 12 18:44:30 production pppd[1881]: 115200#011#011# (from command line)
    Oct 12 18:44:30 production pppd[1881]: lock#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: local#011#011# (from command line)
    Oct 12 18:44:30 production pppd[1881]: novj#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: novjccomp#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: ipparam 66.104.246.168#011#011# (from command line)
    Oct 12 18:44:30 production pppd[1881]: proxyarp#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: 192.168.97.1:192.168.97.10#011#011# (from command line)
    Oct 12 18:44:30 production pppd[1881]: nobsdcomp#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: require-mppe-128#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: mppe-stateful#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:44:30 production pppd[1881]: pppd 2.4.5 started by root, uid 0
    Oct 12 18:44:30 production pppd[1881]: Using interface ppp0
    Oct 12 18:44:30 production pppd[1881]: Connect: ppp0 <--> /dev/pts/1

    (At this point the Windows machine displays a dialog, reading: "Verifying user name and password...")

    Oct 12 18:45:00 production pppd[1881]: LCP: timeout sending Config-Requests
    Oct 12 18:45:00 production pppd[1881]: Connection terminated.
    Oct 12 18:45:00 production pppd[1881]: Modem hangup
    Oct 12 18:45:00 production pppd[1881]: Exit.
    Oct 12 18:45:00 production pptpd[1880]: GRE: read(fd=6,buffer=8059660,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
    Oct 12 18:45:00 production pptpd[1880]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
    Oct 12 18:45:00 production pptpd[1880]: CTRL: Client 66.104.246.168 control connection finished

    The Macintosh connecting looks like this in /var/log/messages:

    Oct 12 18:50:49 production pptpd[1920]: CTRL: Client 66.104.246.168 control connection started
    Oct 12 18:50:49 production pptpd[1920]: CTRL: Starting call (launching pppd, opening GRE)
    Oct 12 18:50:49 production pppd[1921]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
    Oct 12 18:50:49 production pppd[1921]: pptpd-logwtmp: $Version$
    Oct 12 18:50:49 production pppd[1921]: pppd options in effect:
    Oct 12 18:50:49 production pppd[1921]: debug#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: nologfd#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: dump#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: plugin /usr/lib/pptpd/pptpd-logwtmp.so#011#011# (from command line)
    Oct 12 18:50:49 production pppd[1921]: require-mschap-v2#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: refuse-pap#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: refuse-chap#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: refuse-mschap#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: name pptpd#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: pptpd-original-ip 66.104.246.168#011#011# (from command line)
    Oct 12 18:50:49 production pppd[1921]: 115200#011#011# (from command line)
    Oct 12 18:50:49 production pppd[1921]: lock#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: local#011#011# (from command line)
    Oct 12 18:50:49 production pppd[1921]: novj#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: novjccomp#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: ipparam 66.104.246.168#011#011# (from command line)
    Oct 12 18:50:49 production pppd[1921]: proxyarp#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: 192.168.97.1:192.168.97.10#011#011# (from command line)
    Oct 12 18:50:49 production pppd[1921]: nobsdcomp#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: require-mppe-128#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: mppe-stateful#011#011# (from /etc/ppp/options.pptpd)
    Oct 12 18:50:49 production pppd[1921]: pppd 2.4.5 started by root, uid 0
    Oct 12 18:50:49 production pppd[1921]: Using interface ppp0
    Oct 12 18:50:49 production pppd[1921]: Connect: ppp0 <--> /dev/pts/1
    Oct 12 18:50:52 production pppd[1921]: MPPE 128-bit stateless compression enabled
    Oct 12 18:50:52 production pppd[1921]: Unsupported protocol 'IPv6 Control Protocol' (0x8057) received
    Oct 12 18:50:52 production pppd[1921]: Unsupported protocol 'Apple Client Server Protocol Control' (0x8235) received
    Oct 12 18:50:52 production pppd[1921]: Cannot determine ethernet address for proxy ARP
    Oct 12 18:50:52 production pppd[1921]: local IP address 192.168.97.1
    Oct 12 18:50:52 production pppd[1921]: remote IP address 192.168.97.10
    Oct 12 18:50:52 production pppd[1921]: pptpd-logwtmp.so ip-up ppp0 chris 66.104.246.168
I'm baffled...
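    One way to narrow this down (a diagnostic sketch, not a confirmed fix; the interface name eth0 is an assumption): an LCP Config-Request timeout right after the tunnel opens usually means the GRE leg of PPTP (IP protocol 47) is being dropped in one direction, and a capture on the server will show whether the Windows client's GRE packets ever arrive.

    # watch the PPTP control channel (TCP/1723) and the GRE data channel
    # while the Windows client retries; if only TCP/1723 traffic appears,
    # something between client and server is eating GRE (NAT-mode
    # virtualization is a common culprit)
    tcpdump -n -i eth0 'tcp port 1723 or ip proto 47'

    # if iptables is re-enabled later, PPTP needs both rules explicitly
    iptables -A INPUT -p tcp --dport 1723 -j ACCEPT
    iptables -A INPUT -p 47 -j ACCEPT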

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #035

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.

    2007

    Row Overflow Data Explanation. In SQL Server 2005 one table row can contain more than one varchar(8000) field. One more thing: the exclusions have exclusions too; the 8,000-byte limit on each individual column's maximum width does not apply to varchar(max), nvarchar(max), varbinary(max), text, image or xml data type columns.

    Comparison: Index Fragmentation, Index De-Fragmentation, Index Rebuild – SQL SERVER 2000 and SQL SERVER 2005. An old but golden article. It talks about lots of concepts related to indexes and the differences from the earlier version to the newer version. I strongly suggest that everyone read this article just to understand how SQL Server has moved forward with the technology.

    Improvements in TempDB. SQL Server 2005 came up with quite a lot of improvements, and this blog post describes and explains them. If you ask me what my most favorite article from my early career is, I must point to this article, as I personally learned a lot of new things when I wrote it.

    Recompile All The Stored Procedures on a Specific Table. I prefer to recompile all the stored procedures on a table which has faced a mass insert or update. sp_recompile marks stored procedures to recompile the next time they execute. This blog post explains the same with the help of a script.

    2008

    SQLAuthority Download – SQL Server Cheatsheet. You can download and print this cheat sheet and use it for your personal reference. If you have any suggestions, please let me know and I will see if I can update this SQL Server cheat sheet.

    Difference Between DBMS and RDBMS. What is the difference between DBMS and RDBMS? DBMS – Database Management System; RDBMS – Relational Database Management System, or Relational DBMS.

    High Availability – Hot Add Memory. Hot Add CPU and Hot Add Memory are extremely interesting features of SQL Server; however, I personally have not witnessed them heavily used. These features also have a few restrictions. I blogged about them in detail.

    2009

    Delete Duplicate Rows. I have demonstrated in this blog post how one can identify and delete duplicate rows.

    Interesting Observation of Logon Trigger On All Servers – Solution. The question I put forth in my previous article was: in a single login, why does the trigger fire multiple times when it should fire only once? I received numerous answers in the thread as well as in my MVP private newsgroup. Now, let us discuss the answer. The answer is: it happens because multiple SQL Server services are running and IntelliSense is turned on. The blog post demonstrates how we can verify the same with the help of SQL scripts.

    Management Studio New Features. I selected my favorite 5 features and blogged about them: IntelliSense for Query Editing, Multi Server Query, Query Editor Regions, Object Explorer Enhancements, and Activity Monitor.

    Maximum Number of Indexes per Table. One of the questions I asked in my user group was: what is the maximum number of indexes per table? I received lots of answers to this question, but only two answers were correct. Let us now take a look at them in this blog post.
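    Before moving on, here is a concrete sketch of the 2009 "Delete Duplicate Rows" item above (the table and column names are hypothetical, not from the original post), using the common ROW_NUMBER() pattern:

    -- Keep one row from each duplicate group and delete the rest.
    WITH NumberedRows AS
    (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY FirstName, LastName
                                  ORDER BY (SELECT NULL)) AS RowNum
        FROM dbo.Customers
    )
    DELETE FROM NumberedRows
    WHERE RowNum > 1;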
    2010

    Default Statistics on Column – Automatic Statistics on Column. The truth is, statistics can exist on a table even when it has no index. If you have the auto-create and/or auto-update statistics features turned on for a SQL Server database, statistics will be automatically created on a column under a few conditions. Please read my previously posted article, SQL SERVER – When are Statistics Updated – What triggers Statistics to Update, for the specific conditions under which statistics are updated.

    2011

    T-SQL Scripts to Find Maximum between Two Numbers. This blog post lists two different scripts that demonstrate ways to find the maximum of two numbers. I need your help: which of the scripts do you think is the most accurate way to find the maximum number?

    Find Details for Statistics of Whole Database – DMV – T-SQL Script. I was recently asked whether there is a single script which can provide all the necessary details about statistics for any database. This question led me to write the following script. I was initially planning to use the sp_helpstats command, but I remembered that it is marked for deprecation in a future release.

    2012

    Introduction to Function SIGN. SIGN is a very fundamental function. It returns the value 1, -1 or 0: if your value is negative it returns -1, and if it is positive it returns +1. Let us start with a simple small example.

    Template Browser – A Very Important and Useful Feature of SSMS. Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are also available for Analysis Services. The template scripts contain parameters to help you customize the code. You can use the Replace Template Parameters dialog box to insert values into the script.

    An invalid floating point operation occurred. If you run any of the above functions, they will give you an error related to an invalid floating point operation. Honestly, there is no workaround except passing the functions appropriate values: SQRT of a negative number would produce a complex number, which is not supported at this point, and LOG of a negative number is not possible (because the logarithm is the inverse of the exponential function, and the exponential function is never negative).

    Validating Spatial Object with IsValidDetailed Function. SQL Server 2012 has introduced the new function IsValidDetailed(). This function has made my life very easy. In simple words, this function checks whether the spatial object passed to it is valid or not. If it is valid, it says so; if it is not valid, it returns the answer that it is not valid along with the reason, which makes it very easy to debug the issue and make the necessary correction.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
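    For quick reference, a minimal T-SQL sketch of two more items from this post, sp_recompile (2007) and SIGN (2012); the table name dbo.Orders is a hypothetical placeholder:

    -- Mark every stored procedure and trigger that references the table
    -- for recompilation the next time it executes.
    EXEC sp_recompile N'dbo.Orders';

    -- SIGN returns -1, 0, or +1 depending on the sign of its argument.
    SELECT SIGN(-42) AS negative_input,  -- -1
           SIGN(0)   AS zero_input,      --  0
           SIGN(42)  AS positive_input;  --  1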

    Read the article

  • My Code Kata–A Solution Kata

    - by Glav
    There are many developers and coders out there who like to do code katas to keep their coding ability up to scratch and to practice their skills. I think it is a good idea. While I like the concept, I find them dead boring and of minimal purpose. Yes, they serve to hone your skills, but that's about it. They are often quite abstract, in that they usually focus on a small problem set requiring specific solutions. That is fair enough, as that is how they are designed, but again, I find them quite boring. What I personally like to do is go for something a little larger and a little more fun. It takes a little more time and is not as easily executed as a kata, but it serves the same purpose from a practice perspective and allows me to continue to solve some problems that are not directly part of the initial goal. This means I can cover a broader learning range and have a bit more fun. If I am lucky, sometimes they even end up being useful tools.

    With that in mind, I thought I'd share my current 'kata'. It is not really a code kata, as it is too big; I prefer to think of it as a 'solution kata'. The code is on bitbucket here. What I wanted to do was create a kind of simplistic virtual world where I can create a player, or a class, stuff it into the world, and see if it survives and can navigate its way to the exit. Requirements were pretty simple:

    - Must be able to define a map to describe the world using simple X,Y co-ordinates (Z co-ordinates as well if you feel like getting clever).
    - Should have the concept of entrances, exits, solid blocks, and potentially other materials (again, if you want to get clever).
    - A coder should be able to easily write a class which will act as an inhabitant of the world.
    - An inhabitant will receive stimulus from the world in the form of the surrounding environment and be able to decide on an action, which it passes back to the 'world' for processing.
    - At a minimum, an inhabitant will have sight and speed characteristics which determine how far it can 'see' in the world and how fast it can move.
    - Coders who write a really bad 'inhabitant' should not adversely affect the rest of the world.
    - Should allow multiple inhabitants in the world.

    So that was the solution I set out to build as a practice exercise and a little bit of fun. It had some interesting problems to solve, and I figured that if it turned out OK I could potentially use it as a 'developer test' for interviews: ask a potential coder to write a class for an inhabitant, show them the map they will navigate, but also mention that we will use their code to navigate a map they have not yet seen that is a little more complex.

    I have been playing with the solution for a short time now and have it working in basic concepts. Below is a screenshot using a very basic console visualiser that shows the map, boundaries, blocks, entrance, exit and players/inhabitants. The yellow asterisks '*' are the players, green 'O' the entrance, purple '^' the exit, and maroon/browny '#' are solid blocks. The players can move around at different speeds, knock into each other, and make directional movement decisions based on what they see and who is around them. It has been quite fun to write, and it is also quite fun to develop different players to inject into the world. The code below shows a really simple implementation of an inhabitant that can work out what to do based on stimulus from the world. It is pretty simple and just tries to move in some direction if there is nothing blocking the path.
    public class TestPlayer : LivingEntity
    {
        public TestPlayer()
        {
            Name = "Beta Boy";
            LifeKey = Guid.NewGuid();
        }

        public override ActionResult DecideActionToPerform(EcoDev.Core.Common.Actions.ActionContext actionContext)
        {
            try
            {
                var action = new MovementAction();

                // Try each direction in a fixed order of preference and take
                // the first one whose nearest block can be entered.
                if (actionContext.Position.ForwardFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.ForwardFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Forward;
                        return action;
                    }
                }
                if (actionContext.Position.LeftFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.LeftFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Left;
                        return action;
                    }
                }
                if (actionContext.Position.RearFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.RearFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Back;
                        return action;
                    }
                }
                if (actionContext.Position.RightFacingPositions.Length > 0)
                {
                    if (CheckAccessibilityOfMapBlock(actionContext.Position.RightFacingPositions[0]))
                    {
                        action.DirectionToMove = MovementDirection.Right;
                        return action;
                    }
                }

                // Boxed in: return the action with no direction set.
                return action;
            }
            catch (Exception ex)
            {
                World.WriteDebugInformation("Player: " + Name, string.Format("Player Generated exception: {0}", ex.Message));
                throw; // rethrow without resetting the stack trace
            }
        }

        private bool CheckAccessibilityOfMapBlock(MapBlock block)
        {
            // A null block (unknown/out-of-map space) is treated as enterable.
            if (block == null ||
                block.Accessibility == MapBlockAccessibility.AllowEntry ||
                block.Accessibility == MapBlockAccessibility.AllowExit ||
                block.Accessibility == MapBlockAccessibility.AllowPotentialEntry)
            {
                return true;
            }
            return false;
        }
    }

    It is simple and it seems to work well. The world implementation itself decides the stimulus context that is passed to the inhabitant to make an action decision. All movement is carried out on separate threads and timed appropriately to be as fair as possible and to cater for additional skills such as speed, and eventually maybe stamina and strength, with actions like fighting. It is pretty fun to make up random maps and see how your inhabitant does. You can download the code from here. Along the way I have played with Parallel Extensions to spread the compute-intensive work across all cores, had to think hard about the visibility of methods and properties (so class design was paramount), and worked out movement algorithms that play fairly in the world and properly favour the players with higher abilities, as well as a host of other issues. So that is my 'solution kata'. If I keep going with it, I may develop a web interface for it where people can upload assemblies and watch their player within a web-browser visualiser, and maybe even a map designer. What do you do to keep the fires burning?
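    For contrast, here is a second, hedged sketch of an inhabitant written against only the API surface visible in the snippet above (LivingEntity, ActionResult, MovementAction, MovementDirection, MapBlock and the facing-position arrays; anything beyond that is assumed): a player that picks a random accessible direction rather than a fixed preference order. Since the world runs players on separate threads, a production version should synchronize access to the shared Random instance.

    using System;
    using System.Collections.Generic;

    public class RandomWalker : LivingEntity
    {
        private static readonly Random Rng = new Random();

        public RandomWalker()
        {
            Name = "Random Randy";
            LifeKey = Guid.NewGuid();
        }

        public override ActionResult DecideActionToPerform(EcoDev.Core.Common.Actions.ActionContext actionContext)
        {
            var action = new MovementAction();

            // Gather every direction whose nearest block can be entered.
            var candidates = new List<MovementDirection>();
            if (IsOpen(actionContext.Position.ForwardFacingPositions)) candidates.Add(MovementDirection.Forward);
            if (IsOpen(actionContext.Position.LeftFacingPositions)) candidates.Add(MovementDirection.Left);
            if (IsOpen(actionContext.Position.RearFacingPositions)) candidates.Add(MovementDirection.Back);
            if (IsOpen(actionContext.Position.RightFacingPositions)) candidates.Add(MovementDirection.Right);

            // Pick one at random; if nothing is open, return the action
            // with no direction set, exactly as TestPlayer does.
            if (candidates.Count > 0)
            {
                action.DirectionToMove = candidates[Rng.Next(candidates.Count)];
            }
            return action;
        }

        private static bool IsOpen(MapBlock[] blocksInDirection)
        {
            // Mirrors TestPlayer's accessibility test, including treating
            // a null block (unknown space) as enterable.
            if (blocksInDirection == null || blocksInDirection.Length == 0)
            {
                return false;
            }
            var block = blocksInDirection[0];
            return block == null
                || block.Accessibility == MapBlockAccessibility.AllowEntry
                || block.Accessibility == MapBlockAccessibility.AllowExit
                || block.Accessibility == MapBlockAccessibility.AllowPotentialEntry;
        }
    }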

    Read the article

  • postfix 5.7.1 Relay access denied when sending mail with cron

    - by zensys
    I am reluctant to ask because there is so much here about 'postfix relay access denied', but I cannot find my case. I use PHP (Zend Framework) to send emails outside my network via the Google mail server, because I could not send mail outside my server (user: web). However, when I send out an email via cron (user: root, I believe), still using ZF and the same mail config/credentials, I get the message '5.7.1 Relay access denied'. I guess I need to know one of two things: 1. How can I use the Google SMTP server from cron? 2. What do I need to change in my config to send mail using my own server instead of Google? Though the answer to 2 is the more structural solution, I assume, I am quite happy with an answer to 1 as well, because I think Google is better at server maintenance (security/spam) than I am. Below are my ZF application.ini mail section, main.cf and master.cf.

    application.ini:

    resources.mail.transport.type = smtp
    resources.mail.transport.auth = login
    resources.mail.transport.host = "smtp.gmail.com"
    resources.mail.transport.ssl = tls
    resources.mail.transport.port = 587
    resources.mail.transport.username = [email protected]
    resources.mail.transport.password = xxxxxxx
    resources.mail.defaultFrom.email = [email protected]
    resources.mail.defaultFrom.name = "my company"

    main.cf:

    # Debian specific: Specifying a file name will cause the first
    # line of that file to be used as the name. The Debian default
    # is /etc/mailname.
    #myorigin = /etc/mailname
    smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
    biff = no
    # appending .domain is the MUA's job.
    append_dot_mydomain = no
    # Uncomment the next line to generate "delayed mail" warnings
    #delay_warning_time = 4h
    readme_directory = /usr/share/doc/postfix
    # TLS parameters
    smtpd_tls_cert_file = /etc/postfix/smtpd.cert
    smtpd_tls_key_file = /etc/postfix/smtpd.key
    smtpd_use_tls = yes
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
    # information on enabling SSL in the smtp client.
    myhostname = mail.second-start.nl
    mydomain = second-start.nl
    alias_maps = hash:/etc/aliases
    alias_database = hash:/etc/aliases
    myorigin = /etc/mailname
    mydestination =
    relayhost =
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    mailbox_command = procmail -a "$EXTENSION"
    mailbox_size_limit = 0
    recipient_delimiter = +
    inet_interfaces = all
    html_directory = /usr/share/doc/postfix/html
    message_size_limit = 30720000
    virtual_alias_domains =
    virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, mysql:/etc/postfix/mysql-virtual_email2email.cf
    virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf
    virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf
    virtual_mailbox_base = /home/vmail
    virtual_uid_maps = static:5000
    virtual_gid_maps = static:5000
    smtpd_sasl_auth_enable = yes
    broken_sasl_auth_clients = yes
    smtpd_sasl_authenticated_header = yes
    # see under Spam
    smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
    proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps
    virtual_transport = dovecot
    dovecot_destination_recipient_limit = 1
    # Spam
    disable_vrfy_command = yes
    smtpd_delay_reject = yes
    smtpd_helo_required = yes
    smtpd_helo_restrictions = permit_mynetworks, check_helo_access hash:/etc/postfix/helo_access, reject_non_fqdn_hostname, reject_invalid_hostname, permit
    smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, permit_mynetworks, reject_non_fqdn_hostname, reject_rbl_client sbl.spamhaus.org, reject_rbl_client zen.spamhaus.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.spamcop.net, permit
    smtpd_error_sleep_time = 1s
    smtpd_soft_error_limit = 10
    smtpd_hard_error_limit = 20

    master.cf:

    # ==========================================================================
    # service type  private unpriv  chroot  wakeup  maxproc command + args
    #               (yes)   (yes)   (yes)   (never) (100)
    # ==========================================================================
    smtp      inet  n       -       -       -       -       smtpd
    #smtp     inet  n       -       -       -       1       postscreen
    #smtpd    pass  -       -       -       -       -       smtpd
    #dnsblog  unix  -       -       -       -       0       dnsblog
    #tlsproxy unix  -       -       -       -       0       tlsproxy
    #submission inet n      -       -       -       -       smtpd
    #  -o smtpd_tls_security_level=encrypt
    #  -o smtpd_sasl_auth_enable=yes
    #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
    #  -o milter_macro_daemon_name=ORIGINATING
    #smtps    inet  n       -       -       -       -       smtpd
    #  -o smtpd_tls_wrappermode=yes
    #  -o smtpd_sasl_auth_enable=yes
    #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
    #  -o milter_macro_daemon_name=ORIGINATING
    #628      inet  n       -       -       -       -       qmqpd
    pickup    fifo  n       -       -       60      1       pickup
    cleanup   unix  n       -       -       -       0       cleanup
    qmgr      fifo  n       -       n       300     1       qmgr
    #qmgr     fifo  n       -       -       300     1       oqmgr
    tlsmgr    unix  -       -       -       1000?   1       tlsmgr
    rewrite   unix  -       -       -       -       -       trivial-rewrite
    bounce    unix  -       -       -       -       0       bounce
    defer     unix  -       -       -       -       0       bounce
    trace     unix  -       -       -       -       0       bounce
    verify    unix  -       -       -       -       1       verify
    flush     unix  n       -       -       1000?   0       flush
    proxymap  unix  -       -       n       -       -       proxymap
    proxywrite unix -       -       n       -       1       proxymap
    smtp      unix  -       -       -       -       -       smtp
    # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
    relay     unix  -       -       -       -       -       smtp
            -o smtp_fallback_relay=
    #       -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
    showq     unix  n       -       -       -       -       showq
    error     unix  -       -       -       -       -       error
    retry     unix  -       -       -       -       -       error
    discard   unix  -       -       -       -       -       discard
    local     unix  -       n       n       -       -       local
    virtual   unix  -       n       n       -       -       virtual
    lmtp      unix  -       -       -       -       -       lmtp
    anvil     unix  -       -       -       -       1       anvil
    scache    unix  -       -       -       -       1       scache
    #
    # ====================================================================
    # Interfaces to non-Postfix software. Be sure to examine the manual
    # pages of the non-Postfix software to find out what options it wants.
    #
    # Many of the following services use the Postfix pipe(8) delivery
    # agent. See the pipe(8) man page for information about ${recipient}
    # and other message envelope options.
    # ====================================================================
    #
    # maildrop. See the Postfix MAILDROP_README file for details.
    # Also specify in main.cf: maildrop_destination_recipient_limit=1
    #
    maildrop  unix  -       n       n       -       -       pipe
      flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
    #
    # ====================================================================
    #
    # Recent Cyrus versions can use the existing "lmtp" master.cf entry.
    #
    # Specify in cyrus.conf:
    #   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
    #
    # Specify in main.cf one or more of the following:
    #  mailbox_transport = lmtp:inet:localhost
    #  virtual_transport = lmtp:inet:localhost
    #
    # ====================================================================
    #
    # Cyrus 2.1.5 (Amos Gouaux)
    # Also specify in main.cf: cyrus_destination_recipient_limit=1
    #
    #cyrus     unix  -       n       n       -       -       pipe
    #  user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
    #
    # ====================================================================
    # Old example of delivery via Cyrus.
    #
    #old-cyrus unix  -       n       n       -       -       pipe
    #  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
    #
    # ====================================================================
    #
    # See the Postfix UUCP_README file for configuration details.
    #
    uucp      unix  -       n       n       -       -       pipe
      flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
    #
    # Other external delivery methods.
    #
    ifmail    unix  -       n       n       -       -       pipe
      flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
    bsmtp     unix  -       n       n       -       -       pipe
      flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
    scalemail-backend unix - n      n       -       2       pipe
      flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
    mailman   unix  -       n       n       -       -       pipe
      flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
    dovecot   unix  -       n       n       -       -       pipe
      flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}
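    For question 1, a common approach (a sketch under stated assumptions: a reasonably recent Postfix and the conventional /etc/postfix/sasl_passwd path; untested against this exact setup) is to let Postfix itself relay through Gmail as an authenticated smarthost, so that anything submitted locally, including root's cron mail, goes out via Google:

    main.cf additions:

    relayhost = [smtp.gmail.com]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt

    /etc/postfix/sasl_passwd (then run postmap /etc/postfix/sasl_passwd and reload Postfix):

    [smtp.gmail.com]:587 [email protected]:xxxxxxx

    This only helps if the cron job's mail is in fact being handed to the local Postfix rather than to smtp.gmail.com directly; comparing the Received headers of a working and a failing message should confirm which path is being taken.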

    Read the article

  • Ubuntu 11.10 - Every time I try to connect to my box using SSH, it fails to connect

    - by YumYumYum
    SSH from any other PC to my Ubuntu 11.10 box fails, even though SSH is running. From the other PC, retrying over and over:

    $ ping 192.168.0.128
    PING 192.168.0.128 (192.168.0.128) 56(84) bytes of data.
    From 192.168.0.226 icmp_seq=1 Destination Host Unreachable
    From 192.168.0.226 icmp_seq=2 Destination Host Unreachable
    From 192.168.0.226 icmp_seq=3 Destination Host Unreachable
    From 192.168.0.226 icmp_seq=4 Destination Host Unreachable
    $ sudo service iptables stop
    Stopping iptables (via systemctl): [ OK ]
    $ ssh [email protected]
    ssh: connect to host 192.168.0.128 port 22: No route to host
    $ ssh [email protected]
    ssh: connect to host 192.168.0.128 port 22: No route to host
    $ ssh [email protected]
    ssh: connect to host 192.168.0.128 port 22: No route to host
    $ ssh [email protected]
    ssh: connect to host 192.168.0.128 port 22: No route to host
    $ ssh [email protected]
    Connection closed by 192.168.0.128
    $ ssh [email protected]
    [email protected]'s password:
    Connection closed by UNKNOWN
    $ ssh [email protected]
    ssh: connect to host 192.168.0.128 port 22: No route to host
    $ ssh [email protected]
    ssh: connect to host 192.168.0.128 port 22: No route to host

    Follow-up:
    - checked the cable with a cable tester and other detectors; no problem found; tried 10 random cables
    - the adapter is not broken: checked it with a circuit tester after opening the system (the card is new, so it is not a network adapter problem); the LEDs show OK
    - used a LiveCD and the same ping test had the same problem
    - disabled IPv6 100% to make sure it is not the cause
    - disabled iptables 100%, so that is also not the issue

    Some more info: $ sudo killall dnsmasq did not solve the problem (as many other Q/As were suggesting).

    $ iptables --list
    Chain INPUT (policy ACCEPT)
    target prot opt source destination
    Chain FORWARD (policy ACCEPT)
    target prot opt source destination
    Chain OUTPUT (policy ACCEPT)
    target prot opt source destination
    $ netstat -nr
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         192.168.0.1     0.0.0.0         UG        0 0          0 eth0
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
    192.168.0.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0
    $ ssh -vvv [email protected]
    OpenSSH_5.6p1, OpenSSL 1.0.0j-fips 10 May 2012
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to 192.168.0.128 [192.168.0.128] port 22.
    debug1: Connection established.
    debug3: Not a RSA1 key file /home/sun/.ssh/id_rsa.
debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/sun/.ssh/id_rsa type 1 debug1: identity file /home/sun/.ssh/id_rsa-cert type -1 debug1: identity file /home/sun/.ssh/id_dsa type -1 debug1: identity file /home/sun/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1 debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 118/256 debug2: bits set: 539/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: host 192.168.0.128 filename /home/sun/.ssh/known_hosts debug3: check_host_in_hostfile: host 192.168.0.128 filename /home/sun/.ssh/known_hosts debug3: check_host_in_hostfile: match line 139 debug1: Host '192.168.0.128' is known and matches the RSA host key. debug1: Found key in /home/sun/.ssh/known_hosts:139 debug2: bits set: 544/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/sun/.ssh/id_rsa (0x213db960) debug2: key: /home/sun/.ssh/id_dsa ((nil)) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/sun/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/sun/.ssh/id_dsa debug3: no such identity: /home/sun/.ssh/id_dsa debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password [email protected]'s password: debug3: packet_send2: adding 64 (len 60 padlen 4 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). Authenticated to 192.168.0.128 ([192.168.0.128]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. 
debug3: Ignored env ORBIT_SOCKETDIR debug3: Ignored env XDG_SESSION_ID debug3: Ignored env HOSTNAME debug3: Ignored env GIO_LAUNCHED_DESKTOP_FILE_PID debug3: Ignored env IMSETTINGS_INTEGRATE_DESKTOP debug3: Ignored env GPG_AGENT_INFO debug3: Ignored env TERM debug3: Ignored env HARDWARE_PLATFORM debug3: Ignored env SHELL debug3: Ignored env DESKTOP_STARTUP_ID debug3: Ignored env HISTSIZE debug3: Ignored env XDG_SESSION_COOKIE debug3: Ignored env GJS_DEBUG_OUTPUT debug3: Ignored env WINDOWID debug3: Ignored env GNOME_KEYRING_CONTROL debug3: Ignored env QTDIR debug3: Ignored env QTINC debug3: Ignored env GJS_DEBUG_TOPICS debug3: Ignored env IMSETTINGS_MODULE debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env USERNAME debug3: Ignored env SESSION_MANAGER debug3: Ignored env GIO_LAUNCHED_DESKTOP_FILE debug3: Ignored env PATH debug3: Ignored env MAIL debug3: Ignored env DESKTOP_SESSION debug3: Ignored env QT_IM_MODULE debug3: Ignored env PWD debug1: Sending env XMODIFIERS = @im=none debug2: channel 0: request env confirm 0 debug1: Sending env LANG = en_US.utf8 debug2: channel 0: request env confirm 0 debug3: Ignored env KDE_IS_PRELINKED debug3: Ignored env GDM_LANG debug3: Ignored env KDEDIRS debug3: Ignored env GDMSESSION debug3: Ignored env SSH_ASKPASS debug3: Ignored env HISTCONTROL debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env GDL_PATH debug3: Ignored env GNOME_DESKTOP_SESSION_ID debug3: Ignored env LOGNAME debug3: Ignored env QTLIB debug3: Ignored env CVS_RSH debug3: Ignored env DBUS_SESSION_BUS_ADDRESS debug3: Ignored env LESSOPEN debug3: Ignored env WINDOWPATH debug3: Ignored env XDG_RUNTIME_DIR debug3: Ignored env DISPLAY debug3: Ignored env G_BROKEN_FILENAMES debug3: Ignored env COLORTERM debug3: Ignored env XAUTHORITY debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: shell request accepted on channel 0 Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-12-generic x86_64) * Documentation: https://help.ubuntu.com/ 297 packages can be updated. 92 updates are security updates. New release '12.04 LTS' available. Run 'do-release-upgrade' to upgrade to it. Last login: Fri Jun 8 07:45:15 2012 from 192.168.0.226 sun@SystemAX51:~$ ping 19<--------Lost connection again-------------- Tail follow: -- dmesg is showing a very abnormal logs, like Ubuntu is automatically bringing the eth0 up, where eth0 is getting also auto down. 
    [ 2025.897511] r8169 0000:02:00.0: eth0: link up
    [ 2029.347649] r8169 0000:02:00.0: eth0: link up
    [ 2030.775556] r8169 0000:02:00.0: eth0: link up
    [ 2038.242203] r8169 0000:02:00.0: eth0: link up
    [ 2057.267801] r8169 0000:02:00.0: eth0: link up
    [ 2062.871770] r8169 0000:02:00.0: eth0: link up
    [ 2082.479712] r8169 0000:02:00.0: eth0: link up
    [ 2285.630797] r8169 0000:02:00.0: eth0: link up
    [ 2308.417640] r8169 0000:02:00.0: eth0: link up
    [ 2480.948290] r8169 0000:02:00.0: eth0: link up
    [ 2824.884798] r8169 0000:02:00.0: eth0: link up
    [ 3030.022183] r8169 0000:02:00.0: eth0: link up
    [ 3306.587353] r8169 0000:02:00.0: eth0: link up
    [ 3523.566881] r8169 0000:02:00.0: eth0: link up
    [ 3619.839585] r8169 0000:02:00.0: eth0: link up
    [ 3682.154393] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
    [ 3899.866854] r8169 0000:02:00.0: eth0: link up
    [ 4723.978269] r8169 0000:02:00.0: eth0: link up
    [ 4807.415682] r8169 0000:02:00.0: eth0: link up
    [ 5101.865686] r8169 0000:02:00.0: eth0: link up

    How do I fix it?
    -- http://ubuntuforums.org/showthread.php?t=1959794
    -- apt-get install openipml openhpi-plugin-ipml
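    Given the repeating "link up" messages, a diagnostic sketch (assuming interface eth0 and the r8169 driver shown in dmesg; these commands only observe, they change nothing): a link that flaps at the physical layer would explain both the "No route to host" errors (ARP cannot complete while the link is down) and the interactive session dropping mid-command.

    On the server:

    # watch kernel link messages in real time while reproducing the failure
    tail -f /var/log/kern.log | grep eth0

    # check negotiated speed/duplex; "Link detected" should stay "yes"
    sudo ethtool eth0

    On the client:

    # a failing ARP entry for the server shows up as incomplete
    arp -n 192.168.0.128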

    Read the article

  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer. Today’s improvements include: a new low-cost shared mode scaling option, support for custom domains with shared and reserved mode web-sites using both CNAMEs and A-records (the latter enabling naked domains), continuous deployment support using both CodePlex and GitHub, and FastCGI extensibility. All of these improvements are now live in production and available to start using immediately.

    New “Shared” Scaling Tier

    Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month). All of the capabilities we introduced in June with this free tier remain the same with today’s update.

    Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as a “reserved instance” option (which we’ve supported since June). Scaling to either of these modes is easy. Simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button. Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed.

    Below are some more details on the new “shared” option, as well as the existing “reserved” option.

    Shared Mode

    With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites. A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment. Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve. The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB.

    A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it. The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com). We will also in the future enable SNI-based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release, but will be coming later this year to both the shared and reserved tiers).

    You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled). A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month).

    Reserved Instance Mode

    In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode. When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it). You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits.
    You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium sized VMs, etc). Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want, the number of instances of it you want to run, and then click save. Changes take effect in seconds.

    Unlike shared mode, there is no per-site cost when running in reserved mode. Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost). Reserved instance VMs start at 8 cents/hr for a small reserved VM.

    Elastic Scale-up/down

    Windows Azure Web Sites allows you to scale-up or down your capacity within seconds. This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application.

    If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings. You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage).

    Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour. So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode. There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate. This makes it super flexible and cost effective.

    Improved Custom Domain Support

    Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them (e.g. www.mysitename.com). You can associate multiple custom domains to each Windows Azure Web Site. With today’s release we are introducing support for A-records (a big ask by many users).

    With the A-record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning instead of having to use www.mysitename.com you can instead just have mysitename.com (with no sub-name prefix). Because you can map multiple domains to a single site, you can optionally enable both a www and a naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems).

    We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release. Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them. As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime.
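    To make the CNAME versus A-record distinction concrete, here is a minimal DNS zone sketch (the domain, IP address, and *.azurewebsites.net hostname are hypothetical placeholders, not values from this post): the naked domain needs an A record pointing at an IP address, while the www subdomain can be a CNAME to the site's Azure hostname.

    ; hypothetical zone file fragment
    mysitename.com.        IN  A      203.0.113.10
    www.mysitename.com.    IN  CNAME  mysitename.azurewebsites.net.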
    Continuous Deployment Support with Git and CodePlex or GitHub

    One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git. This provides a really powerful way to manage your application deployments using source control. It is really easy to enable this from a website’s dashboard page.

    The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then, if they are successful, automatically publish to Azure.

    With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub. This support is enabled with all web-sites (including those using the “free” scaling mode).

    Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page you’ll see two additional options show up when Git based publishing is enabled for the web-site. You can click on either the “Deploy from my CodePlex project” link or the “Deploy from my GitHub project” link to walk through a simple workflow that configures a connection between your website and a source repository you host on CodePlex or GitHub. Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs. This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically. The two videos below walk through how easy it is to enable this workflow and deploy both an initial app and then a change to it:

    Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes)
    Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes)

    This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git. Note: today’s release supports establishing connections with public GitHub/CodePlex repositories. Support for private repositories will be enabled in a few weeks.

    Support for multiple branches

    Previously, we only supported deploying from the git ‘master’ branch. Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub. This enables a variety of useful scenarios. For example, you can now have two web-sites – a “live” and a “staging” version – both linked to the same repository on CodePlex or GitHub. You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch. This enables a really clean way to do final testing of your site before it goes live. This 1 minute video demonstrates how to configure which branch to use with a web-site.

    Summary

    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it.
    We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month). Keep an eye on my blog for details as these new features become available.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Everytime i am trying to connect to my box using SSH, its failing not connecting

    - by YumYumYum
    From any other PC doing SSH to my Ubuntu 11.10,is failing. My network setup: Telenet ISP (Belgium) Fiber cable < RJ45 cable straight to Ubuntu PC Even the SSH is running: Other PC: retrying over and over $ ping 192.168.0.128 PING 192.168.0.128 (192.168.0.128) 56(84) bytes of data. From 192.168.0.226 icmp_seq=1 Destination Host Unreachable From 192.168.0.226 icmp_seq=2 Destination Host Unreachable From 192.168.0.226 icmp_seq=3 Destination Host Unreachable From 192.168.0.226 icmp_seq=4 Destination Host Unreachable $ sudo service iptables stop Stopping iptables (via systemctl): [ OK ] $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] Connection closed by 192.168.0.128 $ ssh [email protected] [email protected]'s password: Connection closed by UNKNOWN $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host Follow up: -- checked cable -- using cable tester and other detectors -- no problem found in cable -- used random 10 cables -- adapter is not broken -- checked it using circuit tester by opening the system (card is new so its not network adapter card problem) -- leds are OK showing -- used LiveCD and did same ping test was having same problem -- disabled ipv6 100% to make sure its not the cause -- disabled iptables 100% so its also not the issue -- some more info $ nmap 192.168.0.128 Starting Nmap 5.50 ( http://nmap.org ) at 2012-06-08 19:11 CEST Nmap scan report for 192.168.0.128 Host is up (0.00045s latency). All 1000 scanned ports on 192.168.0.128 are closed (842) or filtered (158) Nmap done: 1 IP address (1 host up) scanned in 6.86 seconds ubuntu@ubuntu:~$ netstat -aunt | head Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN tcp 0 1 192.168.0.128:58616 74.125.132.99:80 FIN_WAIT1 tcp 0 0 192.168.0.128:56749 199.7.57.72:80 ESTABLISHED tcp 0 1 192.168.0.128:58614 74.125.132.99:80 FIN_WAIT1 tcp 0 0 192.168.0.128:49916 173.194.65.113:443 ESTABLISHED tcp 0 1 192.168.0.128:45699 64.34.119.101:80 SYN_SENT tcp 0 0 192.168.0.128:48404 64.34.119.12:80 ESTABLISHED tcp 0 0 192.168.0.128:54161 67.201.31.70:80 TIME_WAIT $ sudo killall dnsmasq -- did not solved the problem -- -- like many other Q/A was suggesting this same --- $ iptables --list Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination $ netstat -nr Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 $ ssh -vvv [email protected] OpenSSH_5.6p1, OpenSSL 1.0.0j-fips 10 May 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 192.168.0.128 [192.168.0.128] port 22. debug1: Connection established. debug3: Not a RSA1 key file /home/sun/.ssh/id_rsa. 
debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/sun/.ssh/id_rsa type 1 debug1: identity file /home/sun/.ssh/id_rsa-cert type -1 debug1: identity file /home/sun/.ssh/id_dsa type -1 debug1: identity file /home/sun/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1 debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 118/256 debug2: bits set: 539/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: host 192.168.0.128 filename /home/sun/.ssh/known_hosts debug3: check_host_in_hostfile: host 192.168.0.128 filename /home/sun/.ssh/known_hosts debug3: check_host_in_hostfile: match line 139 debug1: Host '192.168.0.128' is known and matches the RSA host key. debug1: Found key in /home/sun/.ssh/known_hosts:139 debug2: bits set: 544/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/sun/.ssh/id_rsa (0x213db960) debug2: key: /home/sun/.ssh/id_dsa ((nil)) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/sun/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/sun/.ssh/id_dsa debug3: no such identity: /home/sun/.ssh/id_dsa debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password [email protected]'s password: debug3: packet_send2: adding 64 (len 60 padlen 4 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). Authenticated to 192.168.0.128 ([192.168.0.128]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. 
debug3: Ignored env ORBIT_SOCKETDIR
debug3: Ignored env XDG_SESSION_ID
debug3: Ignored env HOSTNAME
debug3: Ignored env GIO_LAUNCHED_DESKTOP_FILE_PID
debug3: Ignored env IMSETTINGS_INTEGRATE_DESKTOP
debug3: Ignored env GPG_AGENT_INFO
debug3: Ignored env TERM
debug3: Ignored env HARDWARE_PLATFORM
debug3: Ignored env SHELL
debug3: Ignored env DESKTOP_STARTUP_ID
debug3: Ignored env HISTSIZE
debug3: Ignored env XDG_SESSION_COOKIE
debug3: Ignored env GJS_DEBUG_OUTPUT
debug3: Ignored env WINDOWID
debug3: Ignored env GNOME_KEYRING_CONTROL
debug3: Ignored env QTDIR
debug3: Ignored env QTINC
debug3: Ignored env GJS_DEBUG_TOPICS
debug3: Ignored env IMSETTINGS_MODULE
debug3: Ignored env USER
debug3: Ignored env LS_COLORS
debug3: Ignored env SSH_AUTH_SOCK
debug3: Ignored env USERNAME
debug3: Ignored env SESSION_MANAGER
debug3: Ignored env GIO_LAUNCHED_DESKTOP_FILE
debug3: Ignored env PATH
debug3: Ignored env MAIL
debug3: Ignored env DESKTOP_SESSION
debug3: Ignored env QT_IM_MODULE
debug3: Ignored env PWD
debug1: Sending env XMODIFIERS = @im=none
debug2: channel 0: request env confirm 0
debug1: Sending env LANG = en_US.utf8
debug2: channel 0: request env confirm 0
debug3: Ignored env KDE_IS_PRELINKED
debug3: Ignored env GDM_LANG
debug3: Ignored env KDEDIRS
debug3: Ignored env GDMSESSION
debug3: Ignored env SSH_ASKPASS
debug3: Ignored env HISTCONTROL
debug3: Ignored env HOME
debug3: Ignored env SHLVL
debug3: Ignored env GDL_PATH
debug3: Ignored env GNOME_DESKTOP_SESSION_ID
debug3: Ignored env LOGNAME
debug3: Ignored env QTLIB
debug3: Ignored env CVS_RSH
debug3: Ignored env DBUS_SESSION_BUS_ADDRESS
debug3: Ignored env LESSOPEN
debug3: Ignored env WINDOWPATH
debug3: Ignored env XDG_RUNTIME_DIR
debug3: Ignored env DISPLAY
debug3: Ignored env G_BROKEN_FILENAMES
debug3: Ignored env COLORTERM
debug3: Ignored env XAUTHORITY
debug3: Ignored env _
debug2: channel 0: request shell confirm 1
debug2: fd 3 setting TCP_NODELAY
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel_input_status_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel 0: rcvd adjust 2097152
debug2: channel_input_status_confirm: type 99 id 0
debug2: shell request accepted on channel 0
Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-12-generic x86_64)
 * Documentation: https://help.ubuntu.com/
297 packages can be updated.
92 updates are security updates.
New release '12.04 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Fri Jun 8 07:45:15 2012 from 192.168.0.226
sun@SystemAX51:~$ ping 19<--------Lost connection again--------------

Further follow-up: dmesg shows very abnormal logs; Ubuntu keeps bringing eth0 up automatically, and eth0 also keeps going down on its own.
[ 2025.897511] r8169 0000:02:00.0: eth0: link up
[ 2029.347649] r8169 0000:02:00.0: eth0: link up
[ 2030.775556] r8169 0000:02:00.0: eth0: link up
[ 2038.242203] r8169 0000:02:00.0: eth0: link up
[ 2057.267801] r8169 0000:02:00.0: eth0: link up
[ 2062.871770] r8169 0000:02:00.0: eth0: link up
[ 2082.479712] r8169 0000:02:00.0: eth0: link up
[ 2285.630797] r8169 0000:02:00.0: eth0: link up
[ 2308.417640] r8169 0000:02:00.0: eth0: link up
[ 2480.948290] r8169 0000:02:00.0: eth0: link up
[ 2824.884798] r8169 0000:02:00.0: eth0: link up
[ 3030.022183] r8169 0000:02:00.0: eth0: link up
[ 3306.587353] r8169 0000:02:00.0: eth0: link up
[ 3523.566881] r8169 0000:02:00.0: eth0: link up
[ 3619.839585] r8169 0000:02:00.0: eth0: link up
[ 3682.154393] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[ 3899.866854] r8169 0000:02:00.0: eth0: link up
[ 4723.978269] r8169 0000:02:00.0: eth0: link up
[ 4807.415682] r8169 0000:02:00.0: eth0: link up
[ 5101.865686] r8169 0000:02:00.0: eth0: link up

How do I fix it? This thread pointed me in the right direction: http://ubuntuforums.org/showthread.php?t=1959794

$ apt-get install openipml openhpi-plugin-ipml
$ openipmish
> help redisp_cmd on|off
> redisp_cmd on
redisp set

Final follow-up:

Step 1: The cause is a bug in the r8169 network card driver.

Step 2: Get the latest driver build from Realtek:
http://www.realtek.com/downloads/downloadsView.aspx?Langid=1&PNid=4&PFid=4&Level=5&Conn=4&DownTypeID=3&GetDown=false&Downloads=true#RTL8110SC(L)

Step 3: Build and install it (note that depmod should run after the new module is copied into place):

$ cd /var/tmp/driver
$ tar xvfj r8169.tar.bz2
$ make clean modules && make install
$ rmmod r8169
$ cp src/r8169.ko /lib/modules/3.xxxx/kernel/drivers/net/r8169.ko
$ depmod
$ modprobe r8169
$ update-initramfs -u
$ init 6

Voila!!

    Read the article

  • Accessing SharePoint 2010 Data with REST/OData on Windows Phone 7

    - by Jan Tielens
Consuming SharePoint 2010 data in Windows Phone 7 applications using the CTP version of the developer tools is quite a challenge. The issue is that SharePoint 2010 data is not anonymously available; users need to authenticate to be able to access the data. When I first tried to access SharePoint 2010 data from my first Hello-World-type Windows Phone 7 application I thought "Hey, this should be easy!", because Windows Phone 7 development is based on Silverlight, and SharePoint 2010 has a Client Object Model for Silverlight. Unfortunately you can't use the Client Object Model of SharePoint 2010 on the Windows Phone platform; there's a reference to an assembly that's not available (System.Windows.Browser).

My second thought was "OK, no problem!", because SharePoint 2010 also exposes a REST/OData API to access SharePoint data. Using the REST API in SharePoint 2010 is as easy as making a web request for a URL (in which you specify the data you'd like to retrieve), e.g. http://yoursiteurl/_vti_bin/listdata.svc/Announcements. This is very easy to accomplish in a Silverlight application that's running in the context of a page in a SharePoint site, because the credentials of the currently logged-on user are automatically picked up and passed to the WCF service. But a Windows Phone application is of course running outside of the SharePoint site's page, so the application has to build credentials that are then passed to SharePoint's WCF service. This turns out to be a small challenge in Silverlight 3: the WebClient doesn't support authentication. There is a Credentials property, but when you set it and make the request you get a NotImplementedException. This issue will probably be solved in the very near future, since Silverlight 4 does support authentication, and there's already a WCF Data Services download that uses this new platform feature of Silverlight 4. So when the Windows Phone platform switches to Silverlight 4, you can just use the WebClient to get the data. Even more, if the OData Client Library for Windows Phone 7 gets updated after that, things should get even easier! By the way: the things I'm writing in this paragraph are just assumptions that I make, which make a lot of sense IMHO; I don't have any information that all of this will happen, but I really hope so.

So are SharePoint developers out of the Windows Phone development game until this gets fixed? Luckily not: when the HttpWebRequest class is used instead, you can pass credentials! Using the HttpWebRequest class is slightly more complex than using the WebClient class, but the end result is that you have access to your precious SharePoint 2010 data.
The following code snippet gets all the announcements of an Announcements list in a SharePoint site:

HttpWebRequest webReq =
    (HttpWebRequest)HttpWebRequest.Create("http://yoursite/_vti_bin/listdata.svc/Announcements");
webReq.Credentials = new NetworkCredential("username", "password");

webReq.BeginGetResponse(
    (result) =>
    {
        HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;

        XDocument xdoc = XDocument.Load(
            ((HttpWebResponse)asyncReq.EndGetResponse(result)).GetResponseStream());

        XNamespace ns = "http://www.w3.org/2005/Atom";
        var items = from item in xdoc.Root.Elements(ns + "entry")
                    select new { Title = item.Element(ns + "title").Value };

        this.Dispatcher.BeginInvoke(() =>
        {
            foreach (var item in items)
                MessageBox.Show(item.Title);
        });
    }, webReq);

When you try this in a Windows Phone 7 application, make sure you add a reference to the System.Xml.Linq assembly, because the code uses LINQ to XML to parse the resulting Atom feed; the Title of every announcement is displayed in a MessageBox. Check out my previous post if you'd like to see a more polished sample Windows Phone 7 application that displays SharePoint 2010 data. When you plan to use this technique, it's of course a good idea to encapsulate the code doing the request, so it becomes really easy to get the data that you need. In the following code snippet you can find the GetAtomFeed method, which gets the contents of any Atom feed, even if you need to authenticate to get access to the feed.

delegate void GetAtomFeedCallback(Stream responseStream);

public MainPage()
{
    InitializeComponent();

    SupportedOrientations = SupportedPageOrientation.Portrait |
        SupportedPageOrientation.Landscape;

    string url = "http://yoursite/_vti_bin/listdata.svc/Announcements";
    string username = "username";
    string password = "password";
    string domain = "";

    GetAtomFeed(url, username, password, domain, (s) =>
    {
        XNamespace ns = "http://www.w3.org/2005/Atom";
        XDocument xdoc = XDocument.Load(s);

        var items = from item in xdoc.Root.Elements(ns + "entry")
                    select new { Title = item.Element(ns + "title").Value };

        this.Dispatcher.BeginInvoke(() =>
        {
            foreach (var item in items)
            {
                MessageBox.Show(item.Title);
            }
        });
    });
}

private static void GetAtomFeed(string url, string username,
    string password, string domain, GetAtomFeedCallback cb)
{
    HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
    webReq.Credentials = new NetworkCredential(username, password, domain);

    webReq.BeginGetResponse(
        (result) =>
        {
            HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;
            HttpWebResponse resp = (HttpWebResponse)asyncReq.EndGetResponse(result);
            cb(resp.GetResponseStream());
        }, webReq);
}
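One caveat worth adding to the snippets above (my addition, not from the original post): when the credentials are rejected, EndGetResponse throws a WebException rather than returning a response, so a try/catch makes authentication failures much easier to diagnose. A minimal sketch of the same callback with that handling:

// Sketch only: the same request as above, but surfacing authentication failures.
webReq.BeginGetResponse(
    (result) =>
    {
        HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;
        try
        {
            HttpWebResponse resp = (HttpWebResponse)asyncReq.EndGetResponse(result);
            // ... parse the Atom feed as shown above ...
        }
        catch (WebException ex)
        {
            // A 401/403 from SharePoint (e.g. a wrong NetworkCredential) lands here.
            this.Dispatcher.BeginInvoke(() => MessageBox.Show("Request failed: " + ex.Message));
        }
    }, webReq);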

    Read the article

  • Slick2d/Nifty-gui input

    - by eerongal
I'm trying to get input from Slick2D into Nifty GUI. I've searched online and seen a few examples, but I can't seem to get it working right. I've tried the example linked here, but I can't get everything working and I'm not entirely sure what I'm doing wrong. I've also looked at examples using JMonkeyEngine to point me in the right direction, but I'm still having issues with input. I can get everything else working as I need. Here's the code for my element controller:

package gui;

import java.util.Properties;

import de.lessvoid.nifty.Nifty;
import de.lessvoid.nifty.controls.Controller;
import de.lessvoid.nifty.elements.Element;
import de.lessvoid.nifty.input.NiftyInputEvent;
import de.lessvoid.nifty.screen.Screen;
import de.lessvoid.xml.xpp3.Attributes;

public class BaseElementController implements Controller {

    private Element element;

    public void bind(Nifty arg0, Screen arg1, Element arg2, Properties arg3, Attributes arg4) {
        // keep a reference to the element Nifty passes in
        this.element = arg2;
    }

    public void init(Properties arg0, Attributes arg1) {
        // TODO Auto-generated method stub
    }

    public boolean inputEvent(NiftyInputEvent arg0) {
        // TODO Auto-generated method stub
        return false;
    }

    public void onFocus(boolean arg0) {
        // TODO Auto-generated method stub
    }

    public void onStartScreen() {
        // TODO Auto-generated method stub
    }

    public void test() {
        System.out.println("test");
    }

    public void bam() {
        System.out.println("bam");
    }
}

Here's my XML file:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<nifty>
  <useStyles filename="nifty-default-styles.xml"/>
  <useControls filename="nifty-default-controls.xml"/>
  <screen id="screen2" controller="gui.BaseScreenController">
    <layer backgroundColor="#fff0" childLayout="absolute" id="layer4" controller="gui.BaseElementController">
      <panel childLayout="center" height="30%" id="panel1" style="nifty-panel-simple" width="50%" x="282" y="334" controller="gui.BaseElementController">
        <control id="checkbox1" name="checkbox"/>
        <control childLayout="center" id="button2" label="button2" name="button" x="381" y="224" visibleToMouse="true" controller="gui.BaseElementController">
          <interact onClick="bam()"/>
        </control>
      </panel>
      <text text="${CALL.getPlayerName()}" style="nifty-label" width="100%" height="100%" x="0" y="10" />
    </layer>
  </screen>
</nifty>

Here's how I'm trying to bind the controller:

public void init(GameContainer gc) throws SlickException {
    Input input = gc.getInput();
    inputSystem = new PlainSlickInputSystem();
    inputSystem.setInput(input);

    gui = new Gui();
    gui.init(gc, inputSystem, "gui/tset.xml", "screen2");

    input.removeListener(this);
    input.removeListener(inputSystem);
    input.addListener(inputSystem);
}

Essentially, all that happens right now is that the screen loads up and displays, and it grabs the variable correctly in the label, but none of the input seems to be forwarded to Nifty from Slick. I assume there's something I'm missing, but I can't figure out what that is. As for what I have tried: I attempted to define a custom input listener to pick up events and assign it to my game in order to pick up input, which did not work, so I dropped that implementation. At current I'm trying to take the default inputs and bind them with a PlainSlickInputSystem assigned to the input (as shown in the first example link).
On code execution all the code is hit; I've put in several System.out.println calls to get output of what is happening (the code above has been cleaned up for presentation), and I even see the elements getting bound to the controller, yet it doesn't pick up controller events. As far as EXACTLY what's wrong, that I don't know, because I've followed every implementation of this I can find and none of them seem to do anything; it's like the input is just getting thrown out. None of the objects from Nifty GUI appear to recognize any input. Here is the binding from my objects at run time:

******INITIALIZED SCREEN: de.lessvoid.nifty.screen.Screen@4a1ab1c1
******INITIALIZED ELEMENT: button2 (de.lessvoid.nifty.elements.Element@1e8c1be9)
******INITIALIZED ELEMENT: focusable => true, width => 100px {nifty-button#panel}, backgroundImage => button/button.png {nifty-button#panel}, label => button2, paddingLeft => 7px {nifty-button#panel}, imageMode => sprite-resize:100,23,0,2,96,2,2,2,96,2,19,2,96,2,2 {nifty-button#panel}, paddingRight => 7px {nifty-button#panel}, id => button2, visibleToMouse => true, height => 23px {nifty-button#panel}, style => nifty-button, name => button, inputMapping => de.lessvoid.nifty.input.mapping.MenuInputMapping, childLayout => center, controller => gui.BaseElementController, y => 224, x => 381
******INITIALIZED SCREEN: de.lessvoid.nifty.screen.Screen@4a1ab1c1
******INITIALIZED ELEMENT: panel1 (de.lessvoid.nifty.elements.Element@373ec894)
******INITIALIZED ELEMENT: id => panel1, height => 30%, style => nifty-panel-simple, width => 50%, backgroundImage => panel/nifty-panel-simple.png {nifty-panel-simple}, controller => gui.BaseElementController, childLayout => center, padding => 5px {nifty-panel-simple}, imageMode => resize:9,2,9,9,9,2,9,2,9,2,9,9 {nifty-panel-simple}, y => 334, x => 282
******INITIALIZED SCREEN: de.lessvoid.nifty.screen.Screen@4a1ab1c1
******INITIALIZED ELEMENT: layer4 (de.lessvoid.nifty.elements.Element@6427d489)
******INITIALIZED ELEMENT: id => layer4, backgroundColor => #fff0, controller => gui.BaseElementController, childLayout => absolute

The button2 object is getting bound to my BaseElementController, but I can't seem to get it into the defined "onClick" call.

    Read the article

  • CodePlex Daily Summary for Tuesday, June 18, 2013

    CodePlex Daily Summary for Tuesday, June 18, 2013Popular ReleasesCODE Framework: 4.0.30618.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.18): XrmToolbox improvement Use new connection controls (use of Microsoft.Xrm.Client.dll) New display capabilities for tools (size, image and colors) Added prerequisites check Added Most Used Tools feature Tools improvementNew toolSolution Transfer Tool (v1.0.0.0) developed by DamSim Updated toolView Layout Replicator (v1.2013.6.17) Double click on source view to display its layoutXml All tools list Access Checker (v1.2013.6.17) Attribute Bulk Updater (v1.2013.6.18) FetchXml Tester (v1.2013.6.1...Media Companion: Media Companion MC3.570b: New* Movie - using XBMC TMDB - now renames movies if option selected. * Movie - using Xbmc Tmdb - Actor images saved from TMDb if option selected. Fixed* Movie - Checks for poster.jpg against missing poster filter * Movie - Fixed continual scraping of vob movie file (not DVD structure) * Both - Correctly display audio channels * Both - Correctly populate audio info in nfo's if multiple audio tracks. * Both - added icons and checked for DTS ES and Dolby TrueHD audio tracks. * Both - Stream d...LINQ Extensions Library: 1.0.4.2: New to release 1.0.4.2 Custom sorting extensions that perform up to 50% better than LINQ OrderBy, ThenBy extensions... Extensions allow for fine tuning of the sort by controlling the algorithm each sort uses.ExtJS based ASP.NET Controls: FineUI v3.3.0: ??FineUI ?? ExtJS ??? ASP.NET ???。 FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ???????。 ?????? IE 7.0、Firefox 3.6、Chrome 3.0、Opera 10.5、Safari 3.0+ ???? Apache License v2.0 ?:ExtJS ?? GPL v3 ?????(http://www.sencha.com/license)。 ???? ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ FineUI???? ExtJS ?????????,???? ExtJS ?。 ????? FineUI ? ExtJS ?:http://fineui.com/bbs/forum.php?mod=viewthrea...BarbaTunnel: BarbaTunnel 8.0: Check Version History for more information about this release.ExpressProfiler: ExpressProfiler v1.5: [+] added Start time, End time event columns [+] added SP:StmtStarting, SP:StmtCompleted events [*] fixed bug with Audit:Logout eventpatterns & practices: Data Access Guidance: Data Access Guidance Drop4 2013.06.17: Drop 4Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.94: add dstLine and dstCol attributes to the -Analyze output in XML mode. un-combine leftover comma-separates expression statements after optimizations are complete so downstream tools don't stack-overflow on really deep comma trees. 
add support for using a single source map generator instance with multiple runs of MinifyJavaScript, assuming that the results are concatenated to the same output file.Kooboo CMS: Kooboo CMS 4.1.1: The stable release of Kooboo CMS 4.1.0 with fixed the following issues: https://github.com/Kooboo/CMS/issues/1 https://github.com/Kooboo/CMS/issues/11 https://github.com/Kooboo/CMS/issues/13 https://github.com/Kooboo/CMS/issues/15 https://github.com/Kooboo/CMS/issues/19 https://github.com/Kooboo/CMS/issues/20 https://github.com/Kooboo/CMS/issues/24 https://github.com/Kooboo/CMS/issues/43 https://github.com/Kooboo/CMS/issues/45 https://github.com/Kooboo/CMS/issues/46 https://github....VidCoder: 1.5.0 Beta: The betas have started up again! If you were previously on the beta track you will need to install this to get back on it. That's because you can now run both the Beta and Stable version of VidCoder side-by-side! Note that the OpenCL and Intel QuickSync changes being tested by HandBrake are not in the betas yet. They will appear when HandBrake integrates them into the main branch. Updated HandBrake core to SVN 5590. This adds a new FDK AAC encoder. The FAAC encoder has been removed and now...Employee Info Starter Kit: v6.0 - ASP.NET MVC Edition: Release Home - Getting Started - Hands on Coding Walkthrough – Technology Stack - Design & Architecture EISK v6.0 – ASP.NET MVC edition bundles most of the greatest and successful platforms, frameworks and technologies together, to enable web developers to learn and build manageable and high performance web applications with rich user experience effectively and quickly. User End SpecificationsCreating a new employee record Read existing employee records Update an existing employee reco...OLAP PivotTable Extensions: Release 0.8.1: Use the 32-bit download for... Excel 2007 Excel 2010 32-bit (even Excel 2010 32-bit on a 64-bit operating system) Excel 2013 32-bit (even Excel 2013 32-bit on a 64-bit operating system) Use the 64-bit download for... Excel 2010 64-bit Excel 2013 64-bit Just download and run the EXE. There is no need to uninstall the previous release. If you have problems getting the add-in to work, see the Troubleshooting Installation wiki page. The new features in this release are: View #VALUE! Err...DirectXTex texture processing library: June 2013: June 15, 2013 Custom filtering implementation for Resize & GenerateMipMaps(3D) - Point, Box, Linear, Cubic, and Triangle TEX_FILTER_TRIANGLE finite low-pass triangle filter TEX_FILTER_WRAP, TEX_FILTER_MIRROR texture semantics for custom filtering TEX_FILTER_BOX alias for TEX_FILTER_FANT WIC Ordered and error diffusion dithering for non-WIC conversion sRGB gamma correct custom filtering and conversion DDS_FLAGS_EXPAND_LUMINANCE - Reader conversion option for L8, L16, and A8L8 legacy ...WPF Application Framework (WAF): WPF Application Framework (WAF) 3.0.0.440: Version: 3.0.0.440 (Release Candidate): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Please build the whole solution before you start one of the sample applications. Requirements .NET Framework 4.5 (The package contains a solution file for Visual Studio 2012) Changelog Legend: [B] Breaking change; [O] Marked member as obsolete Samples: Use ValueConverters via StaticResource instead of x:Static. Other Downloads Downloads OverviewBlackJumboDog: Ver5.9.1: 2013.06.13 Ver5.9.1 (1) Web??????SSI?#include???、CGI?????????????????????? 
(2) ???????????????????????????Lakana - WPF Framework: Lakana V2.1 RTM: - Dynamic text localization - A new application wide message busFree language translator and file converter: Free Language Translator 3.3: some bug fixes and a new link to video tutorials on Youtube.Pokemon Battle Online: ETV: ETV???2012?12??????,????,???????$/PBO/branches/PrivateBeta??。 ???????bug???????。 ???? Server??????,?????。 ?????????,?????????????,?????????。 ????????,????,?????????,???????????(??)??。 ???? ????????????。 ???????。 ???PP????,????????????????????PP????,??3。 ?????????????,??????????。 ???????? ??? ?? ???? ??? ???? ?? ?????????? ?? ??? ??? ??? ???????? ???? ???? ???????????????、???????????,??“???????”??。 ???bug ???Modern UI for WPF: Modern UI 1.0.4: The ModernUI assembly including a demo app demonstrating the various features of Modern UI for WPF. Related downloads NuGet ModernUI for WPF is also available as NuGet package in the NuGet gallery, id: ModernUI.WPF Download Modern UI for WPF Templates A Visual Studio 2012 extension containing a collection of project and item templates for Modern UI for WPF. The extension includes the ModernUI.WPF NuGet package. DownloadNew ProjectsAux Browser: This browser is secured by system level sandbox technology, and it helps you get where you want to go in the shortest possible time. Best solution to convert Outlook OST to PST files: Convert OST to PST without fearing for data loss. An immaculate converter by Recover Data gives user a privilege to perform OST file recovery with no data loss.Bitbucket.NET: A high-performance .NET library for developing applications that use the Bitbucket service.caosu: aaaChartApp: Chart App for SharePoint 2013Custom Membership Provider SQL + LDAP with one login page: Custom membership provider to allow users to login to there portal from one login page whether its custom SQLDB or the current Active Directory.DroidBrowse: Web browser for Android 1.6 or later.haseebtestProject: This project is created to add random files and for testing purposesHiveSense: Hive monitoring system.JQuery File Upload Plugin with Backload server side component (Demo/Examples): Backload is a professional, full featured ASP.NET MVC 4 file upload controller and handler (server side). JSLocator: Locate javascript functions in the sourceMoonCMS: This is a trivial thing. It doesn't make any sense!MyLabs: MyLabs is Private Labpcvvpes: pcvvpesPrism Model Factory Extensions: Micro framework for model cloning, equality check and mergingSharePoint 2007 Solution and Packaging Guidance: WSPSolution is a standard for building SP 2007 solutions in Visual Studio, namespace planning, deployment planning, WSP creation, and build automation.Windows Phone Wi-Fi Launcher: Wi-Fi settings page launcher for Windows Phone 8.0

    Read the article

  • Do I need to store a generic rotation point/radius for rotating around a point other than the origin for object transforms?

    - by Casey
I'm having trouble implementing rotation around a point other than the origin. I have a class Transform that stores each component separately in three 3D vectors for position, scale, and rotation. This is fine for local rotations around the center of the object. The issue is how to determine and concatenate non-origin rotations in addition to origin rotations. Normally this would be achieved as a Translate-Rotate-Translate for the center rotation, followed by a Translate-Rotate-Translate for the non-origin point. The problem is that, because I am storing the individual components, the final transform matrix is not calculated until needed, by using the individual components to fill an appropriate matrix (see GetLocalTransform()). Do I need to store an additional rotation (and radius) for world rotations as well, or is there an implementation that works while only using the single rotation value?

Transform.h

#ifndef A2DE_CTRANSFORM_H
#define A2DE_CTRANSFORM_H

#include "../a2de_vals.h"
#include "CMatrix4x4.h"
#include "CVector3D.h"

#include <vector>

A2DE_BEGIN

class Transform {
public:
    Transform();
    Transform(Transform* parent);
    Transform(const Transform& other);
    Transform& operator=(const Transform& rhs);
    virtual ~Transform();

    void SetParent(Transform* parent);
    void AddChild(Transform* child);
    void RemoveChild(Transform* child);

    Transform* FirstChild();
    Transform* LastChild();
    Transform* NextChild();
    Transform* PreviousChild();
    Transform* GetChild(std::size_t index);
    std::size_t GetChildCount() const;
    std::size_t GetChildCount();

    void SetPosition(const a2de::Vector3D& position);
    const a2de::Vector3D& GetPosition() const;
    a2de::Vector3D& GetPosition();

    void SetRotation(const a2de::Vector3D& rotation);
    const a2de::Vector3D& GetRotation() const;
    a2de::Vector3D& GetRotation();

    void SetScale(const a2de::Vector3D& scale);
    const a2de::Vector3D& GetScale() const;
    a2de::Vector3D& GetScale();

    a2de::Matrix4x4 GetLocalTransform() const;
    a2de::Matrix4x4 GetLocalTransform();

protected:
private:
    a2de::Vector3D _position;
    a2de::Vector3D _scale;
    a2de::Vector3D _rotation;
    std::size_t _curChildIndex;
    Transform* _parent;
    std::vector<Transform*> _children;
};

A2DE_END

#endif

Transform.cpp

#include "CTransform.h"

#include <algorithm> // for std::remove

#include "CVector2D.h"
#include "CVector4D.h"

A2DE_BEGIN

Transform::Transform()
    : _position(), _scale(1.0, 1.0), _rotation(), _curChildIndex(0), _parent(nullptr), _children()
{
    /* DO NOTHING */
}

Transform::Transform(Transform* parent)
    : _position(), _scale(1.0, 1.0), _rotation(), _curChildIndex(0), _parent(parent), _children()
{
    /* DO NOTHING */
}

Transform::Transform(const Transform& other)
    : _position(other._position), _scale(other._scale), _rotation(other._rotation),
      _curChildIndex(0), _parent(other._parent), _children(other._children)
{
    /* DO NOTHING */
}

Transform& Transform::operator=(const Transform& rhs) {
    if(this == &rhs) return *this;

    this->_position = rhs._position;
    this->_scale = rhs._scale;
    this->_rotation = rhs._rotation;
    this->_curChildIndex = 0;
    this->_parent = rhs._parent;
    this->_children = rhs._children;

    return *this;
}

Transform::~Transform() {
    _children.clear();
    _parent = nullptr;
}

void Transform::SetParent(Transform* parent) {
    _parent = parent;
}

void Transform::AddChild(Transform* child) {
    if(child == nullptr) return;
    _children.push_back(child);
}

void Transform::RemoveChild(Transform* child) {
    if(_children.empty()) return;
    _children.erase(std::remove(_children.begin(), _children.end(), child), _children.end());
}

Transform* Transform::FirstChild() {
    if(_children.empty()) return nullptr;
    return *(_children.begin());
}

Transform* Transform::LastChild() {
    if(_children.empty()) return nullptr;
    return _children.back();
}

Transform* Transform::NextChild() {
    if(_children.empty()) return nullptr;
    std::size_t s(_children.size());
    if(_curChildIndex >= s) {
        _curChildIndex = s;
        return nullptr;
    }
    return _children[_curChildIndex++];
}

Transform* Transform::PreviousChild() {
    if(_children.empty()) return nullptr;
    if(_curChildIndex == 0) {
        return nullptr;
    }
    return _children[_curChildIndex--];
}

Transform* Transform::GetChild(std::size_t index) {
    if(_children.empty()) return nullptr;
    if(index >= _children.size()) return nullptr;
    return _children[index];
}

std::size_t Transform::GetChildCount() const {
    if(_children.empty()) return 0;
    return _children.size();
}

std::size_t Transform::GetChildCount() {
    return static_cast<const Transform&>(*this).GetChildCount();
}

void Transform::SetPosition(const a2de::Vector3D& position) {
    _position = position;
}

const a2de::Vector3D& Transform::GetPosition() const {
    return _position;
}

a2de::Vector3D& Transform::GetPosition() {
    return const_cast<a2de::Vector3D&>(static_cast<const Transform&>(*this).GetPosition());
}

void Transform::SetRotation(const a2de::Vector3D& rotation) {
    _rotation = rotation;
}

const a2de::Vector3D& Transform::GetRotation() const {
    return _rotation;
}

a2de::Vector3D& Transform::GetRotation() {
    return const_cast<a2de::Vector3D&>(static_cast<const Transform&>(*this).GetRotation());
}

void Transform::SetScale(const a2de::Vector3D& scale) {
    _scale = scale;
}

const a2de::Vector3D& Transform::GetScale() const {
    return _scale;
}

a2de::Vector3D& Transform::GetScale() {
    return const_cast<a2de::Vector3D&>(static_cast<const Transform&>(*this).GetScale());
}

a2de::Matrix4x4 Transform::GetLocalTransform() const {
    Matrix4x4 p((_parent ? _parent->GetLocalTransform() : a2de::Matrix4x4::GetIdentity()));
    Matrix4x4 t(a2de::Matrix4x4::GetTranslationMatrix(_position));
    Matrix4x4 r(a2de::Matrix4x4::GetRotationMatrix(_rotation));
    Matrix4x4 s(a2de::Matrix4x4::GetScaleMatrix(_scale));

    return (p * t * r * s);
}

a2de::Matrix4x4 Transform::GetLocalTransform() {
    return static_cast<const Transform&>(*this).GetLocalTransform();
}

A2DE_END
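For what it's worth, the usual answer to the question above is that no extra stored rotation is needed: a rotation about an arbitrary pivot is just translate-the-pivot-to-the-origin, rotate, translate-back, composed at the moment the matrix is built. A minimal sketch of that composition using .NET's System.Numerics rather than the asker's a2de types (which are assumed, not known, to compose the same way):

using System.Numerics;

static class PivotRotation
{
    // Builds a matrix that rotates by 'angle' radians about the Z axis, centered on 'pivot'.
    // System.Numerics uses row vectors (v * M), so the leftmost factor is applied first:
    // move the pivot to the origin, rotate, then move back.
    public static Matrix4x4 AboutPoint(Vector3 pivot, float angle)
    {
        return Matrix4x4.CreateTranslation(-pivot)
             * Matrix4x4.CreateRotationZ(angle)
             * Matrix4x4.CreateTranslation(pivot);
    }
}

In terms of the Transform class above, the pivot point (and implicitly the radius) then lives with whatever code builds the final matrix, not as additional state stored next to _rotation.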

    Read the article

  • xml file save/read error (making a highscore system for XNA game)

    - by Eddy
I get an error after I write the player name to the file for the second or third time: "An unhandled exception of type 'System.InvalidOperationException' occurred in System.Xml.dll. Additional information: There is an error in XML document (18, 17)." It stops in the high score load method, at data = (HighScoreData)serializer.Deserialize(stream);. The problem is that somehow an extra '>' is added at the end of my .dat file. Could anyone tell me how to fix this?

The file before saving looks like this:

<?xml version="1.0"?>
<HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <PlayerName>
    <string>neil</string>
    <string>shawn</string>
    <string>mark</string>
    <string>cindy</string>
    <string>sam</string>
  </PlayerName>
  <Score>
    <int>200</int>
    <int>180</int>
    <int>150</int>
    <int>100</int>
    <int>50</int>
  </Score>
  <Count>5</Count>
</HighScoreData>

The file after saving looks like this:

<?xml version="1.0"?>
<HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <PlayerName>
    <string>Nick</string>
    <string>Nick</string>
    <string>neil</string>
    <string>shawn</string>
    <string>mark</string>
  </PlayerName>
  <Score>
    <int>210</int>
    <int>210</int>
    <int>200</int>
    <int>180</int>
    <int>150</int>
  </Score>
  <Count>5</Count>
</HighScoreData>>

The part of my code that does all of the XML save/load follows.

Declarations:

[Serializable]
public struct HighScoreData
{
    public string[] PlayerName;
    public int[] Score;
    public int Count;

    public HighScoreData(int count)
    {
        PlayerName = new string[count];
        Score = new int[count];
        Count = count;
    }
}

IAsyncResult result = null;
bool inputName;
HighScoreData data;
int Score = 0;
public string NAME;
public string HighScoresFilename = "highscores.dat";

Game1 constructor:

public Game1()
{
    graphics = new GraphicsDeviceManager(this);
    Content.RootDirectory = "Content";
    Width = graphics.PreferredBackBufferWidth = 960;
    Height = graphics.PreferredBackBufferHeight = 640;
    GamerServicesComponent GSC = new GamerServicesComponent(this);
    Components.Add(GSC);
}

Initialize method (end of it):

protected override void Initialize()
{
    // other game code
    base.Initialize();

    string fullpath = Path.Combine(HighScoresFilename);
    if (!File.Exists(fullpath))
    {
        // If the file doesn't exist, make a fake one...
        // Create the data to save
        data = new HighScoreData(5);
        data.PlayerName[0] = "neil";
        data.Score[0] = 200;
        data.PlayerName[1] = "shawn";
        data.Score[1] = 180;
        data.PlayerName[2] = "mark";
        data.Score[2] = 150;
        data.PlayerName[3] = "cindy";
        data.Score[3] = 100;
        data.PlayerName[4] = "sam";
        data.Score[4] = 50;

        SaveHighScores(data, HighScoresFilename);
    }
}

All methods for saving, loading, and output:

public static void SaveHighScores(HighScoreData data, string filename)
{
    // Get the path of the save game
    string fullpath = Path.Combine("highscores.dat");

    // Open the file, creating it if necessary
    FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate);
    try
    {
        // Convert the object to XML data and put it in the stream
        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
        serializer.Serialize(stream, data);
    }
    finally
    {
        // Close the file
        stream.Close();
    }
}

/* Load highscores */
public static HighScoreData LoadHighScores(string filename)
{
    HighScoreData data;

    // Get the path of the save game
    string fullpath = Path.Combine("highscores.dat");

    // Open the file
    FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate, FileAccess.Read);
    try
    {
        // Read the data from the file
        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
        data = (HighScoreData)serializer.Deserialize(stream); // this is the line
                                                              // where the program throws
    }
    finally
    {
        // Close the file
        stream.Close();
    }
    return (data);
}

/* Save player highscore when game ends */
private void SaveHighScore()
{
    // Load the existing scores
    HighScoreData data = LoadHighScores(HighScoresFilename);

    int scoreIndex = -1;
    for (int i = 0; i < data.Count; i++)
    {
        if (Score > data.Score[i])
        {
            scoreIndex = i;
            break;
        }
    }

    if (scoreIndex > -1)
    {
        // New high score found ... do swaps
        for (int i = data.Count - 1; i > scoreIndex; i--)
        {
            data.PlayerName[i] = data.PlayerName[i - 1];
            data.Score[i] = data.Score[i - 1];
        }

        data.PlayerName[scoreIndex] = NAME; // Retrieve user name here
        data.Score[scoreIndex] = Score;     // Retrieve score here

        SaveHighScores(data, HighScoresFilename);
    }
}

/* Iterate through the data when highscores are requested and build the string to be shown */
public string makeHighScoreString()
{
    // Load the data to show
    HighScoreData data2 = LoadHighScores(HighScoresFilename);

    // Create scoreBoardString
    string scoreBoardString = "Highscores:\n\n";
    for (int i = 0; i < 5; i++)
    {
        scoreBoardString = scoreBoardString + data2.PlayerName[i] + "-" + data2.Score[i] + "\n";
    }
    return scoreBoardString;
}

Once this works I will call the following when the game is over (for now I trigger it with a button press, so I can test it faster):

public void InputYourName()
{
    if (result == null && !Guide.IsVisible)
    {
        string title = "Name";
        string description = "Write your name in order to save your Score";
        string defaultText = "Nick";
        PlayerIndex playerIndex = new PlayerIndex();
        result = Guide.BeginShowKeyboardInput(playerIndex, title, description, defaultText, null, null);
        // NAME = result.ToString();
    }
    if (result != null && result.IsCompleted)
    {
        NAME = Guide.EndShowKeyboardInput(result);
        result = null;
        inputName = false;
        SaveHighScore();
    }
}

This is where I draw the output to the screen (I'll call this in the highscores menu section when I am done debugging):

spriteBatch.DrawString(Font1, "" + makeHighScoreString(), new Vector2(500, 200), Color.White);
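The stray '>' in the saved file points at a classic pitfall in the code above: FileMode.OpenOrCreate opens an existing file for writing without truncating it, so when the newly serialized XML is shorter than the previous contents, the leftover tail bytes remain and the next Deserialize fails. A minimal corrected save, assuming the same HighScoreData type as above (FileMode.Create truncates; calling stream.SetLength(0) before serializing would work too):

// Sketch of a save that truncates any previous contents, so a shorter
// document can never leave stale bytes (like the trailing '>') behind.
public static void SaveHighScores(HighScoreData data, string filename)
{
    string fullpath = Path.Combine("highscores.dat");
    using (FileStream stream = File.Open(fullpath, FileMode.Create, FileAccess.Write))
    {
        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
        serializer.Serialize(stream, data);
    }
}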

    Read the article

  • The UIManager Pattern

    - by Duncan Mills
One of the most common mistakes that I see when reviewing ADF application code is the sin of storing UI component references, most commonly things like table or tree components, in session or pageFlow scope. The reasons why this is bad are simple: firstly, these UI object references are not serializable, so they would not survive a session migration between servers; and secondly, there is no guarantee that the framework will re-use the same component tree from request to request, although in practice it generally does do so. So the danger here is that at best you end up with an NPE after your session has migrated, and at worst you end up pinning old generations of the component tree, happily eating up your precious memory. So that's clear: we should never, ever be storing references to components anywhere other than request scope (or maybe backing bean scope). So double-check the scope of those binding attributes that map component references into a managed bean in your applications.

Why is it Such a Common Mistake?

At this point I want to examine why there is this urge to hold onto these references anyway. After all, JSF will obligingly populate your backing beans with the fresh and correct reference when needed. In most cases, it seems that the rationale comes down to a lack of distinction within the application between what is data and what is presentation. I think perhaps a cause of this is the logical separation between business data behind the ADF data binding (#{bindings}) façade and the UI components themselves. Developers tend to think, "OK, this is my data layer behind the bindings object, and everything else is just UI." Of course that's not the case. The UI layer itself will have state which is intrinsically linked to the UI presentation rather than the business model, but which at the same time should not be tightly bound to a specific instance of any single UI component.

So here's the problem. I think developers try to use the UI components as state-holders for this kind of data, rather than using them to represent that state. An example of this might be something like the selection state of a tabset (panelTabbed); you might be interested in knowing what the currently disclosed tab is. The temptation that leads to the component reference sin is to go and ask the tabset what the selection is. That of course is fine in context, e.g. a handler within the same request-scoped bean that has the binding to the tabset. However, it leads to problems when you subsequently want the same information outside of the immediate scope. The simple solution seems to be to chuck that component reference into session scope and then re-check it in the same way, leading of course to this mistake.

Turn it on its Head

So the correct solution is to turn the problem on its head. If you are going to be interested in the value or state of some component outside of the immediate request context, then it becomes persistent state (persistent in the sense that it extends beyond the lifespan of a single request). So you need to externalize that state outside of the component, and have the component reference and manipulate that state as needed rather than owning it. This is what I call the UIManager pattern.

Defining the Pattern

The UIManager pattern really is very simple. The premise is that every application should define a session-scoped managed bean, appropriately named UIManager, which is specifically responsible for holding this persistent UI-component-related state.
The actual makeup of the UIManager class varies depending on the needs of the application and the amount of state that needs to be stored. Generally I'll start off with a Map in which individual flags can be created as required, although you could opt for a more formal set of typed member variables with getters and setters, or indeed a mix. This UIManager class is defined as a session-scoped managed bean (#{uiManager}) in faces-config.xml. The pattern is then to inject this instance of the class into any other managed bean (usually request scope) that needs it, using a managed property. So typically you'll have something like this:

  <managed-bean>
    <managed-bean-name>uiManager</managed-bean-name>
    <managed-bean-class>oracle.demo.view.state.UIManager</managed-bean-class>
    <managed-bean-scope>session</managed-bean-scope>
  </managed-bean>

Which is then injected into any backing bean that needs it:

  <managed-bean>
    <managed-bean-name>mainPageBB</managed-bean-name>
    <managed-bean-class>oracle.demo.view.MainBacking</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
    <managed-property>
      <property-name>uiManager</property-name>
      <property-class>oracle.demo.view.state.UIManager</property-class>
      <value>#{uiManager}</value>
    </managed-property>
  </managed-bean>

In this case the backing bean in question needs a member variable to hold and reference the UIManager:

private UIManager _uiManager;

This should be exposed via a getter and setter pair with names that match the managed property name (e.g. setUiManager(UIManager _uiManager), getUiManager()). This will then give your code within the backing bean full access to the UI state. UI components in the page can, of course, directly reference the uiManager bean in their properties. For example, going back to the tabset example, you might have something like this:

<af:panelTabbed>
  <af:showDetailItem text="First"
                     disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['FIRST']}">
    ...
  </af:showDetailItem>
  <af:showDetailItem text="Second"
                     disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['SECOND']}">
    ...
  </af:showDetailItem>
  ...
</af:panelTabbed>

Where in this case the settings member within the UIManager is a Map which contains a Map of Booleans for each tab under the MAIN_TABSET_STATE key. (Just an example; you could choose to store just an identifier for the selected tab or whatever. How you choose to store the state within the UIManager is up to you.)

Get into the Habit

So we can see that the UIManager pattern is no great strain to implement for an application and can even be retrofitted to an existing application with ease. The point is, however, that you should always take this approach rather than committing the sin of persistent component references, which will bite you in the future, or shotgun-scattered UI flags on the session, which are hard to maintain. If you take the approach of always accessing all UI state via the uiManager, or perhaps a pageScope-focused variant of it, you'll find your applications much easier to understand and maintain. Do it today!

    Read the article

  • Silverlight MEF – Download On Demand

    - by PeterTweed
Take the Slalom Challenge at www.slalomchallenge.com!

A common challenge when building complex applications in Silverlight is the initial download size of the xap file. MEF enables us to build composable, complex composite applications. Wouldn't it be great if we had a mechanism to split components out into different Silverlight applications in separate xap files, and download a separate xap file only if needed? MEF gives us the ability to do this. This post covers the basics needed to build such a composite application split between different Silverlight applications, downloading the referenced Silverlight application only when needed.

Steps:

1. Create a Silverlight 4 application.

2. Add references to the following assemblies:
System.ComponentModel.Composition.dll
System.ComponentModel.Composition.Initialization.dll

3. Add a new Silverlight 4 application called ExternalSilverlightApplication to the solution that was created in step 1. Ensure the new application is hosted in the web application for the solution, and choose not to create a test page for the new application.

4. Delete the App.xaml and MainPage.xaml files; they aren't needed.

5. Add references to the following assemblies in the ExternalSilverlightApplication project:
System.ComponentModel.Composition.dll
System.ComponentModel.Composition.Initialization.dll

6. Ensure the two references above have their Copy Local values set to false. As we will have these two assemblies in the original Silverlight application, there is no need to include them in the ExternalSilverlightApplication build.

7. Add a new user control called LeftControl to the ExternalSilverlightApplication project.

8. Replace the LayoutRoot Grid with the following xaml:

    <Grid x:Name="LayoutRoot" Background="Beige" Margin="40" >
        <Button Content="Left Content" Margin="30"></Button>
    </Grid>

9. Add the following statement to the top of the LeftControl.xaml.cs file:

using System.ComponentModel.Composition;

10. Add the following attribute to the LeftControl class:

    [Export(typeof(LeftControl))]

This attribute tells MEF that the type LeftControl will be exported, i.e. made available for other applications to import and compose into the application.

11. Add a new user control called RightControl to the ExternalSilverlightApplication project.

12. Replace the LayoutRoot Grid with the following xaml:

    <Grid x:Name="LayoutRoot" Background="Green" Margin="40" >
        <TextBlock Margin="40" Foreground="White" Text="Right Control" FontSize="16" VerticalAlignment="Center" HorizontalAlignment="Center" ></TextBlock>
    </Grid>

13. Add the following statement to the top of the RightControl.xaml.cs file:

using System.ComponentModel.Composition;

14. Add the following attribute to the RightControl class:

    [Export(typeof(RightControl))]

15. In your original Silverlight project, add a reference to the ExternalSilverlightApplication project.

16. Change the reference to the ExternalSilverlightApplication project to have its Copy Local value = false. This ensures that the referenced ExternalSilverlightApplication Silverlight application is not included in the original Silverlight application package when it is built. The ExternalSilverlightApplication Silverlight application therefore has to be downloaded on demand by the original Silverlight application for its controls to be used.

17. In your original Silverlight project, add the following xaml to the LayoutRoot Grid in MainPage.xaml:

        <Grid.RowDefinitions>
            <RowDefinition Height="65*" />
            <RowDefinition Height="235*" />
        </Grid.RowDefinitions>
        <Button Name="LoaderButton" Content="Download External Controls" Click="Button_Click"></Button>
        <StackPanel Grid.Row="1" Orientation="Horizontal" HorizontalAlignment="Center" >
            <Border Name="LeftContent" Background="Red" BorderBrush="Gray" CornerRadius="20"></Border>
            <Border Name="RightContent" Background="Red" BorderBrush="Gray" CornerRadius="20"></Border>
        </StackPanel>

The borders will hold the controls that will be downloaded, imported, and composed via MEF when the button is clicked.

18. Add the following statement to the top of the MainPage.xaml.cs file:

using System.ComponentModel.Composition;

19. Add the following properties to the MainPage class:

        [Import(typeof(LeftControl))]
        public LeftControl LeftUserControl { get; set; }

        [Import(typeof(RightControl))]
        public RightControl RightUserControl { get; set; }

This defines properties accepting the LeftControl and RightControl types. The attributes tell MEF the discovered type that should be applied to the property when composition occurs.

20. Add the following event handler for the button click to the MainPage.xaml.cs file:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            DeploymentCatalog deploymentCatalog =
                new DeploymentCatalog("ExternalSilverlightApplication.xap");
            CompositionHost.Initialize(deploymentCatalog);

            deploymentCatalog.DownloadCompleted += (s, i) =>
            {
                if (i.Error == null)
                {
                    CompositionInitializer.SatisfyImports(this);

                    LeftContent.Child = LeftUserControl;
                    RightContent.Child = RightUserControl;
                    LoaderButton.IsEnabled = false;
                }
            };

            deploymentCatalog.DownloadAsync();
        }

This is where the magic happens! The deploymentCatalog object is pointed at the ExternalSilverlightApplication.xap file. It is then associated with the CompositionHost initialization. As the download will be asynchronous, an event handler is created for the DownloadCompleted event. The deploymentCatalog object is then told to start the asynchronous download. The event handler that executes when the download completes uses the CompositionInitializer.SatisfyImports() method to tell MEF to satisfy the imports for the current class. It is at this point that the LeftUserControl and RightUserControl properties are initialized with composed objects from the downloaded ExternalSilverlightApplication.xap package.

21. Run the application, click the Download External Controls button, and see the controls defined in the ExternalSilverlightApplication application loaded into the original Silverlight application.

Congratulations! You have implemented download-on-demand capabilities for composite applications using the MEF DeploymentCatalog class. You are now able to segment your applications into separate xap files for deployment.
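If the external xap is sizeable, it may also be worth showing download progress. DeploymentCatalog raises a DownloadProgressChanged event for this; a hedged sketch follows, where the ProgressBar named DownloadProgress is my own assumption and not part of the walkthrough above:

// Hypothetical progress reporting for the download started in step 20.
deploymentCatalog.DownloadProgressChanged += (s, e) =>
{
    // e.ProgressPercentage runs 0-100 while ExternalSilverlightApplication.xap downloads.
    Dispatcher.BeginInvoke(() => DownloadProgress.Value = e.ProgressPercentage);
};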

    Read the article

  • Creating vCard action result

    - by DigiMortal
I added support for vCards to one of my ASP.NET MVC applications. I worked vCard support out as a very simple solution that fits perfectly into ASP.NET MVC applications. In this posting I will show you how to send vCards out as a response to an ASP.NET MVC request. We need three things: a vCard class, a vCard action result, and a controller method to test the vCard action result. Everything is very simple; let's get hands-on.

vCard class

First we need a vCard class. Last year I introduced a vCard class that also supports images. Let's take that class, because it is easy to use and some dirty work is already done for us. NB! Take a look at the ASP.NET example in the blog posting referred to above. We need it later when we close the topic. Now think about how useful blogging and information sharing with others can be. With this class publicly available, I saved pretty much time now. :)

vCardResult

As we have a vCard, it is now time to write the action result that we can use in our controllers. Here's the code:

public class vCardResult : ActionResult
{
    private vCard _card;

    protected vCardResult() { }

    public vCardResult(vCard card)
    {
        _card = card;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "text/vcard";
        response.AddHeader("Content-Disposition", "attachment; filename=" + _card.FirstName + " " + _card.LastName + ".vcf");

        var cardString = _card.ToString();
        var inputEncoding = Encoding.Default;
        var outputEncoding = Encoding.GetEncoding("windows-1257");
        var cardBytes = inputEncoding.GetBytes(cardString);

        var outputBytes = Encoding.Convert(inputEncoding,
                                outputEncoding, cardBytes);

        response.OutputStream.Write(outputBytes, 0, outputBytes.Length);
    }
}

And we are done. Some notes:

- The vCard is sent to the browser as a downloadable file (the user can save it or open it with Outlook or any other e-mail client that supports vCards).
- The file name is made of the first and last name of the contact.
- Encoding is important, because Outlook may not understand vCards otherwise (I don't know if this problem is solved in Outlook 2010).

Using vCardResult in a controller

Now let's take a look at a simple controller method that accepts a person ID and returns a vCardResult:

public class ContactsController : Controller
{
    // ... other controller methods ...

    public vCardResult vCard(int id)
    {
        var person = _partyRepository.GetPersonById(id);
        var card = new vCard
        {
            FirstName = person.FirstName,
            LastName = person.LastName,
            StreetAddress = person.StreetAddress,
            City = person.City,
            CountryName = person.Country.Name,

            Mobile = person.Mobile,
            Phone = person.Phone,
            Email = person.Email,
        };

        return new vCardResult(card);
    }
}

Now you can run Visual Studio and check out how your vCard moves from your web application to your e-mail client.

Conclusion

We took old code that worked well with ASP.NET Forms and divided it into an action result and a controller method that uses vCard as a bridge between our controller and action result. All functionality is located where it should be, and we did nothing complex. We wrote only a couple of lines of very easy code to achieve our goal.
Do you understand now why I love ASP.NET MVC? :)
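For reference, here is a minimal sketch of what the vCard class used above is assumed to look like. The property names are taken from the controller example; the ToString() body and everything else is illustrative (this is not the original, image-capable implementation from the earlier posting), assuming System.Text.StringBuilder:

using System.Text;

public class vCard
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string CountryName { get; set; }
    public string Mobile { get; set; }
    public string Phone { get; set; }
    public string Email { get; set; }

    // Serializes the contact to vCard 2.1 text, which is what
    // vCardResult writes to the response stream.
    public override string ToString()
    {
        var sb = new StringBuilder();
        sb.AppendLine("BEGIN:VCARD");
        sb.AppendLine("VERSION:2.1");
        sb.AppendLine("N:" + LastName + ";" + FirstName);
        sb.AppendLine("FN:" + FirstName + " " + LastName);
        sb.AppendLine("ADR;HOME:;;" + StreetAddress + ";" + City + ";;;" + CountryName);
        sb.AppendLine("TEL;CELL:" + Mobile);
        sb.AppendLine("TEL;HOME:" + Phone);
        sb.AppendLine("EMAIL:" + Email);
        sb.AppendLine("END:VCARD");
        return sb.ToString();
    }
}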

    Read the article

  • Adding and accessing custom sections in your C# App.config

    - by deadlydog
So I recently thought I'd try using the app.config file to specify some data for my application (such as URLs) rather than hard-coding it into my app, which would require a recompile and redeploy of the app if one of our URLs changed. By using the app.config, a user can just open the .config file that sits beside their .exe file, edit the URLs right there, and re-run the app; no recompiling, no redeployment necessary.

I spent a good few hours fighting with the app.config and looking at examples on Google before I was able to get things to work properly. Most of the examples I found showed you how to pull a value from the app.config if you knew the specific key of the element you wanted to retrieve, but it took me a while to find a way to simply loop through all elements in a section, so I thought I would share my solutions here.

Simple and Easy

The easiest way to use the app.config is to use the built-in types, such as NameValueSectionHandler. For example, if we just wanted to add a list of database server urls to use in my app, we could do this in the app.config file like so:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="ConnectionManagerDatabaseServers" type="System.Configuration.NameValueSectionHandler" />
  </configSections>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <ConnectionManagerDatabaseServers>
    <add key="localhost" value="localhost" />
    <add key="Dev" value="Dev.MyDomain.local" />
    <add key="Test" value="Test.MyDomain.local" />
    <add key="Live" value="Prod.MyDomain.com" />
  </ConnectionManagerDatabaseServers>
</configuration>

And then you can access these values in code like so:

string devUrl = string.Empty;
var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection;
if (connectionManagerDatabaseServers != null)
{
    devUrl = connectionManagerDatabaseServers["Dev"].ToString();
}

Sometimes though you don't know what the keys are going to be and you just want to grab all of the values in that ConnectionManagerDatabaseServers section. In that case you can get them all like this:

// Grab the Environments listed in the App.config and add them to our list.
var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection;
if (connectionManagerDatabaseServers != null)
{
    foreach (var serverKey in connectionManagerDatabaseServers.AllKeys)
    {
        string serverValue = connectionManagerDatabaseServers.GetValues(serverKey).FirstOrDefault();
        AddDatabaseServer(serverValue);
    }
}

And here we just assume that the AddDatabaseServer() function adds the given string to some list of strings. So this works great, but what about when we want to bring in more values than just a single string per object in the section? (Technically you could use this to bring in two strings, where the "key" could be the other string you want to store; for example, we could have stored the value of the key as the user-friendly name of the url.)

More Advanced (and more complicated)

If you want to bring in more information than a string or two per object in the section, then you can no longer simply use the built-in System.Configuration.NameValueSectionHandler type provided for us. Instead you have to build your own types. Here let's assume that we again want to configure a set of addresses (i.e. urls), but we want to specify some extra info with them, such as the user-friendly name, whether they require SSL or not, and a list of security groups that are allowed to save changes made to these endpoints.

So let's start by looking at the app.config:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="ConnectionManagerDataSection" type="ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection, ConnectionManagerUpdater" />
  </configSections>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <ConnectionManagerDataSection>
    <ConnectionManagerEndpoints>
      <add name="Development" address="Dev.MyDomain.local" useSSL="false" />
      <add name="Test" address="Test.MyDomain.local" useSSL="true" />
      <add name="Live" address="Prod.MyDomain.com" useSSL="true" securityGroupsAllowedToSaveChanges="ConnectionManagerUsers" />
    </ConnectionManagerEndpoints>
  </ConnectionManagerDataSection>
</configuration>

The first thing to notice here is that my section is now using the type "ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection" (the fully qualified name of the new class I created) ", ConnectionManagerUpdater" (the name of the assembly my new class is in). Next, you will also notice an extra layer down in the <ConnectionManagerDataSection>, which is the <ConnectionManagerEndpoints> element. This is a new collection class that I created to hold each of the Endpoint entries that are defined. Let's look at that code now:

using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConnectionManagerUpdater.Data.Configuration
{
    public class ConnectionManagerDataSection : ConfigurationSection
    {
        /// <summary>
        /// The name of this section in the app.config.
        /// </summary>
        public const string SectionName = "ConnectionManagerDataSection";

        private const string EndpointCollectionName = "ConnectionManagerEndpoints";

        [ConfigurationProperty(EndpointCollectionName)]
        [ConfigurationCollection(typeof(ConnectionManagerEndpointsCollection), AddItemName = "add")]
        public ConnectionManagerEndpointsCollection ConnectionManagerEndpoints
        {
            get { return (ConnectionManagerEndpointsCollection)base[EndpointCollectionName]; }
        }
    }

    public class ConnectionManagerEndpointsCollection : ConfigurationElementCollection
    {
        protected override ConfigurationElement CreateNewElement()
        {
            return new ConnectionManagerEndpointElement();
        }

        protected override object GetElementKey(ConfigurationElement element)
        {
            return ((ConnectionManagerEndpointElement)element).Name;
        }
    }

    public class ConnectionManagerEndpointElement : ConfigurationElement
    {
        [ConfigurationProperty("name", IsRequired = true)]
        public string Name
        {
            get { return (string)this["name"]; }
            set { this["name"] = value; }
        }

        [ConfigurationProperty("address", IsRequired = true)]
        public string Address
        {
            get { return (string)this["address"]; }
            set { this["address"] = value; }
        }

        [ConfigurationProperty("useSSL", IsRequired = false, DefaultValue = false)]
        public bool UseSSL
        {
            get { return (bool)this["useSSL"]; }
            set { this["useSSL"] = value; }
        }

        [ConfigurationProperty("securityGroupsAllowedToSaveChanges", IsRequired = false)]
        public string SecurityGroupsAllowedToSaveChanges
        {
            get { return (string)this["securityGroupsAllowedToSaveChanges"]; }
            set { this["securityGroupsAllowedToSaveChanges"] = value; }
        }
    }
}

So here the first class we declare is the one that appears in the <configSections> element of the app.config. It is ConnectionManagerDataSection and it inherits from the necessary System.Configuration.ConfigurationSection class. This class has just one property (other than the expected section name), which basically says it holds a collection, ConnectionManagerEndpoints, of type ConnectionManagerEndpointsCollection — the next class defined.

The ConnectionManagerEndpointsCollection class inherits from ConfigurationElementCollection and overrides the required members. The first tells it what type of element to create when adding a new one (in our case a ConnectionManagerEndpointElement), and the second is a function specifying which property on our ConnectionManagerEndpointElement class is the unique key, which I've specified to be the Name field.

The last class defined is the actual meat of our elements. It inherits from ConfigurationElement and specifies the properties of the element (which can then be set in the xml of the app.config). The ConfigurationProperty attribute on each of the properties tells us what we expect the name of the property to correspond to in each element in the app.config, as well as some additional information such as whether that property is required and what its default value should be.

Finally, the code to actually access these values would look like this:

// Grab the Environments listed in the App.config and add them to our list.
var connectionManagerDataSection = ConfigurationManager.GetSection(ConnectionManagerDataSection.SectionName) as ConnectionManagerDataSection;
if (connectionManagerDataSection != null)
{
    foreach (ConnectionManagerEndpointElement endpointElement in connectionManagerDataSection.ConnectionManagerEndpoints)
    {
        var endpoint = new ConnectionManagerEndpoint()
        {
            Name = endpointElement.Name,
            ServerInfo = new ConnectionManagerServerInfo()
            {
                Address = endpointElement.Address,
                UseSSL = endpointElement.UseSSL,
                SecurityGroupsAllowedToSaveChanges = endpointElement.SecurityGroupsAllowedToSaveChanges.Split(',').Where(e => !string.IsNullOrWhiteSpace(e)).ToList()
            }
        };
        AddEndpoint(endpoint);
    }
}

This looks very similar to what we had before in the "simple" example. The main points of interest are that we cast the section as ConnectionManagerDataSection (the class we defined for our section) and then iterate over the endpoints collection using the ConnectionManagerEndpoints property we created in the ConnectionManagerDataSection class.

Also, some other helpful resources around using app.config that I found (and for parts that I didn't really explain in this article) are:

- How do you use sections in C# 4.0 app.config? (Stack Overflow) <== Shows how to use Section Groups as well, which is something that I did not cover here, but might be of interest to you.
- How to: Create Custom Configuration Sections Using ConfigurationSection (MSDN)
- ConfigurationSection Class (MSDN)
- ConfigurationCollectionAttribute Class (MSDN)
- ConfigurationElementCollection Class (MSDN)

I hope you find this helpful. Feel free to leave a comment. Happy Coding!
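One small convenience the sample leaves out (my addition, not part of the original post): ConfigurationElementCollection stores elements under the key returned from GetElementKey, so you can expose a name-based lookup on the collection via the inherited BaseGet method:

public class ConnectionManagerEndpointsCollection : ConfigurationElementCollection
{
    // ... CreateNewElement() and GetElementKey() as above ...

    // Look up an endpoint by its name attribute, e.g. endpoints["Live"].
    public ConnectionManagerEndpointElement this[string name]
    {
        get { return (ConnectionManagerEndpointElement)BaseGet(name); }
    }
}

With that in place, connectionManagerDataSection.ConnectionManagerEndpoints["Live"] returns the Live entry directly instead of requiring a foreach.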

    Read the article

  • CodePlex Daily Summary for Monday, August 04, 2014

Popular Releases

Space Engineers Server Manager: SESM V1.11: V1.11 - Added the rename option in the map manager - Added the possibility to select a map without starting the server - Upgraded the diagnosis page, now more useful, more colorful, and with a big check/cross sign to tell you if your installation is healthy. Seriously, you must check it out ^^ - Corrected issue #7, now the asteroid count is the real number of asteroids of the current map - Corrected issue #8, the CPU/RAM value of the stats page should be much more accurate now

Json.NET: Json.NET 6.0 Release 4: New feature - Added Merge to LINQ to JSON New feature - Added JValue.CreateNull and JValue.CreateUndefined New feature - Added Windows Phone 8.1 support to .NET 4.0 portable assembly New feature - Added OverrideCreator to JsonObjectContract New feature - Added support for overriding the creation of interfaces and abstract types New feature - Added support for reading UUID BSON binary values as a Guid New feature - Added MetadataPropertyHandling.Ignore New feature - Improv...

VidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.

PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.5: *Added Send-Keys function to send a sequence of keys to an application window (Thanks to mmashwani) *Added 3 optimization/stability improvements to Execute-Process following MS best practice (Thanks to mmashwani) *Fixed issue where Execute-MSI did not use value from XML file for uninstall but instead ran all uninstalls silently by default *Fixed error on 1641 exit code (should be a success like 3010) *Fixed issue with error handling in Invoke-SCCMTask *Fixed issue with deferral dates where th...

AutoUpdater.NET : Auto update library for VB.NET and C# Developer: AutoUpdater.NET 1.3: Fixed problem in DownloadUpdateDialog where download continues even if you close the dialog. Added support for new url field for 64 bit application setup. AutoUpdater.NET will decide which download url to use by looking at the value of IntPtr.Size. Added German translation provided by Rene Kannegiesser. Now developer can handle update logic herself using event suggested by ricorx7. Added Italian translation provided by Gianluca Mariani. Fixed bug that prevents Application from exiti...

HigLabo: HigLabo_20140801: ■Summary Add feature to create SmtpMessage object from System.Net.Mail.MailMessage, HigLabo.Mime.MailMessage. Add feature to get mail header only by using ImapClient. Add some Portable class library projects. Modify Async internal implementation. ■HigLabo.Converter Bug fix of QuotedPrintableConverter when last char of line is whitespace or tab. ■HigLabo.Core Modify code to add HigLabo.Core.Pcl project. ■HigLabo.Mail Add feature to create...

SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed rounding issue in raytrace code. Fixed repair issue with corrupt checkpoint file. Fixed issue with updated SE binaries 01.041.012 using new container configuration.

Dynamics CRM 2013 Global Advanced Find: Global Advanced Find Solution v1.0.0.0: Global Advanced Find Managed Solution v1.0.0.0

Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6 Breaking changes: - Changed arguments for the Map method of MagickImage. - QuantizeSettings uses Riemersma by default.

Multiple Threads TCP Server: Project: this project is based on VS 2013 and .NET Framework 4.0; you can open it with VS 2010 or later

Aricie Shared: Aricie.Shared Version 1.8.00: Version 1.8.0 - Release Notes New: Expression Builder to design Flee Expressions New: Cryptographic helpers and configuration classes Improvement: Many fixes and improvements with property editor Improvement: Token Replace Property explorer now has a restricted mode for additional security Improvement: Better variables, types and object manipulation Fixed: smart file and flee bugs Fixed: Removed exception while trying to read unsupported files Improvement: several performance twe...

DataBind: DataBind 1.0: - Efficiency Improved

Console Progress Bar: ConsoleProgressBar-0.3: - Validate ProgressInPercentage

Gum UI Tool: Gum 0.6.07: Fixed bug related to setting the parent to screen bounds.

Accesorios de sitios Torrent en Español para Synology Download Station: Pack de Torrents en Español 6.0.0: Added the DivXTotal modules; the search module depends on the hosting module to download series. It uses the rss: http://www.divxtotal.com/rss.php

DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access component for .NET 4.0+. It has a clear and easy programming interface for ORM and SQL directly, and supports Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provides a Ruby On Rails style MVC framework, an Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip includes the setup package. DbEntry.Net.v4.2.Src.zip includes source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...

Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1. This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New? Here are some important things to know about version 6: Open Source Now being run as a full open source project. Full source code on CodePlex. Collaboration encouraged! Updated Code Base Brand-new code base (WPF/C#/.NET 4.5) Visual Studio 2013 solution (previously VS2010) Uses the Task Parallel Library (TPL) for asynchronous background operat...

Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files were missing from the latest release of WPP, which led to a crash when trying to make a custom update. Added a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Added the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated may apply to all computers.)

VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: changes NEW: Added support for 'ImgMega.com' links NEW: Added support for 'ImgCandy.net' links NEW: Added support for 'ImgPit.com' links NEW: Added support for 'Img.yt' links FIXED: 'Radikal.ru' links FIXED: 'ImageTeam.org' links FIXED: 'ImgSee.com' links FIXED: 'Img.yt' links

Asp.Net MVC-4, Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4, Entity Framework and JQGrid Demo: Asp.Net MVC-4, Entity Framework and JQGrid demo with a simple Todo List web application. Overview: TodoList is a simple web application to create, store and modify Todo tasks maintained by the users, comprising the following fields (Task Name, Task Description, Severity, Target Date, Task Status). The TodoList web application is created using the MVC-4 architecture, code-first Entity Framework (ORM) and JqGrid for displaying the data.

New Projects

Android_Games: Free downloads
DailyProgrammer Challenge 173: My solution to /r/dailyprogrammer Challenge #173
EmotionPing: A little tool to ping a net IP range
EntityMapper: Library for mapping (or projecting) Entity types (Domain Model) to DTO types (Contract). Made to be compatible with the Linq Provider for Entity Framework.
kevinsheldon: private collection
Keyboard Auto-Complete for PC: Did you ever use the auto-complete in mobile phones? Why not on PC?? Now you can!
LetsPlay: What will you play today? Pick one of the games installed on your system and play as much as you like :)
Mini ORM like Enterprise library Data Block: A mini ORM like the Enterprise Library Data Block.
ProjkyAddin ssms addin script shortcut ssmsaddin: ProjkyAddin is a Management Studio add-in supporting SSMS 2005, SSMS 2008, SSMS 2008 R2, SSMS 2012 and SSMS 2014.
Reddit Downvote Bot: Reddit Downvote Bot, mass-downvote a certain reddit user.
SQL Server Dialog: This SQL Server Dialog is now released as v1.0.0
VNTalk Messenger: VNTalk Messenger
WindyPaper: Test Project

    Read the article

  • Refactoring a Single Rails Model with large methods & long join queries trying to do everything

    - by Kelseydh
    I have a working Ruby on Rails Model that I suspect is inefficient, hard to maintain, and full of unnecessary SQL join queries. I want to optimize and refactor this Model (Quiz.rb) to comply with Rails best practices, but I'm not sure how I should do it. The Rails app is a game that has Missions with many Stages. Users complete Stages by answering Questions that have correct or incorrect Answers. When a User tries to complete a stage by answering questions, the User gets a Quiz entry with many Attempts. Each Attempt records an Answer submitted for that Question within the Stage. A user completes a stage or mission by getting every Attempt correct, and their progress is tracked by adding a new entry to the UserMission & UserStage join tables. All of these features work, but unfortunately the Quiz.rb Model has been twisted to handle almost all of it exclusively. The callbacks began at 'Quiz.rb', and because I wasn't sure how to leave the Quiz Model during a multi-model update, I resorted to using Rails Console to have the @quiz instance variable via self.some_method do all the heavy lifting to retrieve every data value for the game's business logic; resulting in large extended join queries that "dance" all around the Database schema. The Quiz.rb Model that Smells: class Quiz < ActiveRecord::Base belongs_to :user has_many :attempts, dependent: :destroy before_save :check_answer before_save :update_user_mission_and_stage accepts_nested_attributes_for :attempts, :reject_if => lambda { |a| a[:answer_id].blank? }, :allow_destroy => true #Checks every answer within each quiz, adding +1 for each correct answer #within a stage quiz, and -1 for each incorrect answer def check_answer stage_score = 0 self.attempts.each do |attempt| if attempt.answer.correct? == true stage_score += 1 elsif attempt.answer.correct == false stage_score - 1 end end stage_score end def winner return true end def update_user_mission_and_stage ####### #Step 1: Checks if UserMission exists, finds or creates one. #if no UserMission for the current mission exists, creates a new UserMission if self.user_has_mission? == false @user_mission = UserMission.new(user_id: self.user.id, mission_id: self.current_stage.mission_id, available: true) @user_mission.save else @user_mission = self.find_user_mission end ####### #Step 2: Checks if current UserStage exists, stops if true to prevent duplicate entry if self.user_has_stage? @user_mission.save return true else ####### ##Step 3: if step 2 returns false: ##Initiates UserStage creation instructions #checks for winner (winner actions need to be defined) if they complete last stage of last mission for a given orientation if self.passed? && self.is_last_stage? && self.is_last_mission? create_user_stage_and_update_user_mission self.winner #NOTE: The rest are the same, but specify conditions that are available to add badges or other actions upon those conditions occurring: ##if user completes first stage of a mission elsif self.passed? && self.is_first_stage? && self.is_first_mission? create_user_stage_and_update_user_mission #creates user badge for finishing first stage of first mission self.user.add_badge(5) self.user.activity_logs.create(description: "granted first-stage badge", type_event: "badge", value: "first-stage") #If user completes last stage of a given mission, creates a new UserMission elsif self.passed? && self.is_last_stage? && self.is_first_mission? 
create_user_stage_and_update_user_mission #creates user badge for finishing first mission self.user.add_badge(6) self.user.activity_logs.create(description: "granted first-mission badge", type_event: "badge", value: "first-mission") elsif self.passed? create_user_stage_and_update_user_mission else self.passed? == false return true end end end #Creates a new UserStage record in the database for a successful Quiz question passing def create_user_stage_and_update_user_mission @nu_stage = @user_mission.user_stages.new(user_id: self.user.id, stage_id: self.current_stage.id) @nu_stage.save @user_mission.save self.user.add_points(50) end #Boolean that defines passing a stage as answering every question in that stage correct def passed? self.check_answer >= self.number_of_questions end #Returns the number of questions asked for that stage's quiz def number_of_questions self.attempts.first.answer.question.stage.questions.count end #Returns the current_stage for the Quiz, routing through 1st attempt in that Quiz def current_stage self.attempts.first.answer.question.stage end #Gives back the position of the stage relative to its mission. def stage_position self.attempts.first.answer.question.stage.position end #will find the user_mission for the current user and stage if it exists def find_user_mission self.user.user_missions.find_by_mission_id(self.current_stage.mission_id) end #Returns true if quiz was for the last stage within that mission #helpful for triggering actions related to a user completing a mission def is_last_stage? self.stage_position == self.current_stage.mission.stages.last.position end #Returns true if quiz was for the first stage within that mission #helpful for triggering actions related to a user completing a mission def is_first_stage? self.stage_position == self.current_stage.mission.stages_ordered.first.position end #Returns true if current user has a UserMission for the current stage def user_has_mission? self.user.missions.ids.include?(self.current_stage.mission.id) end #Returns true if current user has a UserStage for the current stage def user_has_stage? self.user.stages.include?(self.current_stage) end #Returns true if current user is on the last mission based on position within a given orientation def is_first_mission? self.user.missions.first.orientation.missions.by_position.first.position == self.current_stage.mission.position end #Returns true if current user is on the first stage & mission of a given orientation def is_last_mission? self.user.missions.first.orientation.missions.by_position.last.position == self.current_stage.mission.position end end My Question Currently my Rails server takes roughly 500ms to 1 sec to process single @quiz.save action. I am confident that the slowness here is due to sloppy code, not bad Database ERD design. What does a better solution look like? And specifically: Should I use join queries to retrieve values like I did here, or is it better to instantiate new objects within the model instead? Or am I missing a better solution? How should update_user_mission_and_stage be refactored to follow best practices? Relevant Code for Reference: quizzes_controller.rb w/ Controller Route Initiating Callback: class QuizzesController < ApplicationController before_action :find_stage_and_mission before_action :find_orientation before_action :find_question def show end def create @user = current_user @quiz = current_user.quizzes.new(quiz_params) if @quiz.save if @quiz.passed? if @mission.next_mission.nil? && @stage.next_stage.nil? 
redirect_to root_path, notice: "Congratulations, you have finished the last mission!" elsif @stage.next_stage.nil? redirect_to [@mission.next_mission, @mission.first_stage], notice: "Correct! Time for Mission #{@mission.next_mission.position}", info: "Starting next mission" else redirect_to [@mission, @stage.next_stage], notice: "Answer Correct! You passed the stage!" end else redirect_to [@mission, @stage], alert: "You didn't get every question right, please try again." end else redirect_to [@mission, @stage], alert: "Sorry. We were unable to save your answer. Please contact the administrator." end @questions = @stage.questions.all end private def find_stage_and_mission @stage = Stage.find(params[:stage_id]) @mission = @stage.mission end def find_question @question = @stage.questions.find_by_id params[:id] end def quiz_params params.require(:quiz).permit(:user_id, :attempt_id, {attempts_attributes: [:id, :quiz_id, :answer_id]}) end def find_orientation @orientation = @mission.orientation @missions = @orientation.missions.by_position end end Overview of Relevant ERD Database Relationships: Mission - Stage - Question - Answer - Attempt <- Quiz <- User Mission - UserMission <- User Stage - UserStage <- User Other Models: Mission.rb class Mission < ActiveRecord::Base belongs_to :orientation has_many :stages has_many :user_missions, dependent: :destroy has_many :users, through: :user_missions #SCOPES scope :by_position, -> {order(position: :asc)} def stages_ordered stages.order(:position) end def next_mission self.orientation.missions.find_by_position(self.position.next) end def first_stage next_mission.stages_ordered.first end end Stage.rb: class Stage < ActiveRecord::Base belongs_to :mission has_many :questions, dependent: :destroy has_many :user_stages, dependent: :destroy has_many :users, through: :user_stages accepts_nested_attributes_for :questions, reject_if: :all_blank, allow_destroy: true def next_stage self.mission.stages.find_by_position(self.position.next) end end Question.rb class Question < ActiveRecord::Base belongs_to :stage has_many :answers, dependent: :destroy accepts_nested_attributes_for :answers, :reject_if => lambda { |a| a[:body].blank? }, :allow_destroy => true end Answer.rb: class Answer < ActiveRecord::Base belongs_to :question has_many :attempts, dependent: :destroy end Attempt.rb: class Attempt < ActiveRecord::Base belongs_to :answer belongs_to :quiz end User.rb: class User < ActiveRecord::Base belongs_to :school has_many :activity_logs has_many :user_missions, dependent: :destroy has_many :missions, through: :user_missions has_many :user_stages, dependent: :destroy has_many :stages, through: :user_stages has_many :orientations, through: :school has_many :quizzes, dependent: :destroy has_many :attempts, through: :quizzes def latest_stage_position self.user_missions.last.user_stages.last.stage.position end end UserMission.rb class UserMission < ActiveRecord::Base belongs_to :user belongs_to :mission has_many :user_stages, dependent: :destroy end UserStage.rb class UserStage < ActiveRecord::Base belongs_to :user belongs_to :stage belongs_to :user_mission end

    Read the article

  • ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages

    - by DigiMortal
If you are using AppFabric Access Control Services to authenticate users when they log in to your community site using Live ID, Google or some other popular identity provider, you need more than AuthorizeAttribute to make sure that users can access the content that is there for authenticated users only. In this posting I will show you how to extend the AuthorizeAttribute so users must also have their user profile filled in.

Semi-authorized users

When a user is authenticated through an external identity provider, not all identity providers give us the user name or the other information we ask users for when they join our site. What all identity providers have in common is a unique ID that helps you identify the user. For example, users authenticated through Windows Live ID by AppFabric ACS have no name specified. Google's identity provider is able to provide you with the user name and e-mail address if the user agrees to publish this information to you. Both give you the unique ID of the user when the user is successfully authenticated in their service.

There is a logical shift between ASP.NET and my site when considering a user as authorized. For ASP.NET MVC, a user is authorized when the user has an identity. For my site, a user is authorized when the user has a profile and a row in my users table. Having a profile means that the user has a unique username in my system and he or she is always identified by this username by other users. My solution is simple: I created my own action filter attribute that makes sure the user has a profile before accessing a given method; if the user has no profile, the browser is redirected to the join page.

Illustrating the problem

Usually we restrict access to a page using AuthorizeAttribute. The code is something like this.

[Authorize]
public ActionResult Details(string id)
{
    var profile = _userRepository.GetUserByUserName(id);
    return View(profile);
}

If this page is only for site users and we have user profiles, then all users – the ones that have a profile and all the others that are just authenticated – can access the information. That is okay because all these users have successfully logged in to some service that is supported by AppFabric ACS. On my site the users with no profile are in a grey spot: they are halfway to being users because they have no username and profile on my site yet. So we need something that adds a profile existence condition to user-only content.

[ProfileRequired]
public ActionResult Details(string id)
{
    var profile = _userRepository.GetUserByUserName(id);
    return View(profile);
}

Now, this attribute will solve our problem as soon as we implement it.

ProfileRequiredAttribute: profiles are required to be fully authorized

Here is my implementation of ProfileRequiredAttribute. It is pretty new and right now it is more of a working draft, but you can already play with it.

public class ProfileRequiredAttribute : AuthorizeAttribute
{
    private readonly string _redirectUrl;

    public ProfileRequiredAttribute()
    {
        _redirectUrl = ConfigurationManager.AppSettings["JoinUrl"];
        if (string.IsNullOrWhiteSpace(_redirectUrl))
            _redirectUrl = "~/";
    }

    public override void OnAuthorization(AuthorizationContext filterContext)
    {
        base.OnAuthorization(filterContext);

        var httpContext = filterContext.HttpContext;
        var identity = httpContext.User.Identity;

        if (!identity.IsAuthenticated || identity.GetProfile() == null)
            if (filterContext.Result == null)
                httpContext.Response.Redirect(_redirectUrl);
    }
}

All methods with this attribute work as follows: if the user is not authenticated then he or she is redirected to the AppFabric ACS identity provider selection page; if the user is authenticated but has no profile then the user is by default redirected to the main page of the site, but if you have an application setting with the name JoinUrl then the user is redirected to this URL. The first case is handled by AuthorizeAttribute and the second one is handled by the custom logic in the ProfileRequiredAttribute class.

GetProfile() extension method

To get the user profile with less code in the places where profiles are needed, I wrote a GetProfile() extension method for the IIdentity interface. There are some more extension methods that read the user and identity provider identifiers out of the claims, and based on this information the user profile is read from the database. If you just copy and paste this code it probably won't work for you, but you get the idea.

public static User GetProfile(this IIdentity identity)
{
    if (identity == null)
        return null;

    var context = HttpContext.Current;
    if (context.Items["UserProfile"] != null)
        return context.Items["UserProfile"] as User;

    var provider = identity.GetIdentityProvider();
    var nameId = identity.GetNameIdentifier();

    var rep = ObjectFactory.GetInstance<IUserRepository>();
    var profile = rep.GetUserByProviderAndNameId(provider, nameId);

    context.Items["UserProfile"] = profile;

    return profile;
}

To avoid round trips to the database I cache the user profile for the current request, because the chance that the profile gets changed meanwhile is minimal. The other reason is maybe more tricky – profile objects come from an Entity Framework context, and the context also has the HTTP request as its lifecycle.

Conclusion

This posting gave you some ideas on how to finish the user profile story when you use AppFabric ACS as an external authentication provider. Although there was a little shift between us and ASP.NET MVC in the interpretation of "authorized", we were easily able to solve the problem by extending AuthorizeAttribute to get all our requirements fulfilled. We also wrote an extension method for IIdentity that returns the user profile based on the identity claims and caches it in HTTP request scope.
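The GetIdentityProvider() and GetNameIdentifier() helpers used above are not shown in the post. Here is a minimal sketch of what they presumably look like, assuming the claims are exposed through System.Security.Claims and the standard ACS identity-provider claim type (the method bodies are my assumption, not the author's code):

using System.Security.Claims;
using System.Security.Principal;

public static class IdentityClaimsExtensions
{
    // Claim type ACS uses to report which identity provider authenticated the user.
    private const string IdentityProviderClaimType =
        "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider";

    public static string GetIdentityProvider(this IIdentity identity)
    {
        var claimsIdentity = identity as ClaimsIdentity;
        if (claimsIdentity == null)
            return null;

        var claim = claimsIdentity.FindFirst(IdentityProviderClaimType);
        return claim == null ? null : claim.Value;
    }

    public static string GetNameIdentifier(this IIdentity identity)
    {
        var claimsIdentity = identity as ClaimsIdentity;
        if (claimsIdentity == null)
            return null;

        // The unique, provider-scoped user ID the posting talks about.
        var claim = claimsIdentity.FindFirst(ClaimTypes.NameIdentifier);
        return claim == null ? null : claim.Value;
    }
}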

    Read the article

  • ASP.NET MVC: Moving code from controller action to service layer

    - by DigiMortal
I fixed one controller action in my application that didn't seem good enough to me. It wasn't a big move, but it is worth showing beginners how nice your code can be when you use correct layering in your application. As an example I use code from my posting ASP.NET MVC: How to implement invitation codes support.

Problematic controller action

Although my controller action works well, I don't like how it looks. It is too much for a controller action in my opinion.

[HttpPost]
public ActionResult GetAccess(string accessCode)
{
    if (string.IsNullOrEmpty(accessCode.Trim()))
    {
        ModelState.AddModelError("accessCode", "Insert invitation code!");
        return View();
    }

    Guid accessGuid;

    try
    {
        accessGuid = Guid.Parse(accessCode);
    }
    catch
    {
        ModelState.AddModelError("accessCode", "Incorrect format of invitation code!");
        return View();
    }

    using (var ctx = new EventsEntities())
    {
        var user = ctx.GetNewUserByAccessCode(accessGuid);
        if (user == null)
        {
            ModelState.AddModelError("accessCode", "Cannot find account with given invitation code!");
            return View();
        }

        user.UserToken = User.Identity.GetUserToken();
        ctx.SaveChanges();
    }

    Session["UserId"] = accessGuid;

    return Redirect("~/admin");
}

Looking at this code, my first thought is that all this access code stuff must be located somewhere else. We have working functionality in the wrong place and we should do something about it.

Service layer

I add layers to my application very carefully because I don't like to use a hand grenade to kill a fly. When I see a real need for some layer and it doesn't add too much complexity, I add the new layer. Right now is a good time to add a service layer to my small application, move the code there, and inject the service class into the controller.

public interface IUserService
{
    bool ClaimAccessCode(string accessCode, string userToken,
                         out string errorMessage);

    // Other methods of user service
}

I need this interface when writing unit tests because I need a fake service that doesn't communicate with the database and other external sources.

public class UserService : IUserService
{
    private readonly IDataContext _context;

    public UserService(IDataContext context)
    {
        _context = context;
    }

    public bool ClaimAccessCode(string accessCode, string userToken, out string errorMessage)
    {
        if (string.IsNullOrEmpty(accessCode.Trim()))
        {
            errorMessage = "Insert invitation code!";
            return false;
        }

        Guid accessGuid;
        if (!Guid.TryParse(accessCode, out accessGuid))
        {
            errorMessage = "Incorrect format of invitation code!";
            return false;
        }

        var user = _context.GetNewUserByAccessCode(accessGuid);
        if (user == null)
        {
            errorMessage = "Cannot find account with given invitation code!";
            return false;
        }

        user.UserToken = userToken;
        _context.SaveChanges();

        errorMessage = string.Empty;
        return true;
    }
}

For now I used a simple solution for errors and made the access code claiming method follow the usual TrySomething() method pattern. This way I can keep error messages and their retrieval away from the controller; in the controller I just mediate the error message from the service to the view.

Controller

Now all the code has moved to the service layer, and we also need some modifications to the controller code so it makes use of the user service. I don't show the DI/IoC details of how to give the service instance to the controller here (a possible wiring is sketched below). The GetAccess() action of the controller looks like this right now.

[HttpPost]
public ActionResult GetAccess(string accessCode)
{
    var userToken = User.Identity.GetUserToken();
    string errorMessage;

    if (!_userService.ClaimAccessCode(accessCode, userToken,
                                      out errorMessage))
    {
        ModelState.AddModelError("accessCode", errorMessage);
        return View();
    }

    Session["UserId"] = Guid.Parse(accessCode);
    return Redirect("~/admin");
}

It's short and nice now, and it deals with the web site part of access code claiming. In the case of an error, the user is shown the access code claiming view with the error message that the ClaimAccessCode() method returns as an output parameter. If everything goes fine then the access code is reserved for the current user and the user is authenticated.

Conclusion

When a controller action grows big you have to move code to the layers where it actually belongs. In this posting I showed you how I moved access code claiming functionality from a controller action to a user service class that belongs to the service layer of my application. As a result I have a controller action that coordinates the user interaction when going through the access code claiming process. The controller communicates with the service layer and gets information about whether access code claiming succeeded.
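As noted above, the post skips the DI/IoC wiring. One plausible way to hand the service to the controller in that era's ASP.NET MVC, assuming StructureMap (the ObjectFactory calls in the previous post suggest it), is a custom controller factory — a sketch under those assumptions, not the author's actual setup:

using System;
using System.Web.Mvc;
using System.Web.Routing;
using StructureMap;

public class StructureMapControllerFactory : DefaultControllerFactory
{
    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        // Let MVC produce its usual 404 behavior for unknown controllers.
        if (controllerType == null)
            return base.GetControllerInstance(requestContext, controllerType);

        // StructureMap resolves constructor arguments such as IUserService.
        return (IController)ObjectFactory.GetInstance(controllerType);
    }
}

// Registered once at startup, e.g. in Global.asax Application_Start:
// ObjectFactory.Initialize(x => x.For<IUserService>().Use<UserService>());
// ControllerBuilder.Current.SetControllerFactory(new StructureMapControllerFactory());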

    Read the article

  • New release of Microsoft All-In-One Code Framework is available for download - March 2011

    - by Jialiang
A new release of Microsoft All-In-One Code Framework is available on March 8th. Download address: http://1code.codeplex.com/releases/view/62267#DownloadId=215627 You can download individual code samples or browse code samples grouped by technology in the updated code sample index. If this is the first time you have heard about Microsoft All-In-One Code Framework, please read this Microsoft News Center article http://www.microsoft.com/presspass/features/2011/jan11/01-13codeframework.mspx, or watch the introduction video on YouTube http://www.youtube.com/watch?v=cO5Li3APU58, or read the introduction on our homepage http://1code.codeplex.com/.

--------------

New Silverlight code samples

CSSLTreeViewCRUDDragDrop
Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215808
The code sample was created by Amit Dey. It demonstrates a custom TreeView with added CRUD (Create, Read, Update, Delete) and drag-and-drop functionality. A Silverlight TreeView control with CRUD and drag & drop is a frequently asked programming question in Silverlight forums. Many customers also requested this code sample in our code sample request service. We hope that this sample can reduce developers' efforts in handling this typical programming scenario. The following blog article introduces the sample in detail: http://blogs.msdn.com/b/codefx/archive/2011/02/15/silverlight-treeview-control-with-crud-and-drag-amp-drop.aspx.

CSSL4FileDragDrop and VBSL4FileDragDrop
Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215809 http://1code.codeplex.com/releases/view/62253#DownloadId=215810
The code sample demonstrates the new drag & drop feature of Silverlight 4 by implementing dragging of pictures from the local file system into a Silverlight application.

Sometimes we want to change the SiteMapPath control's titles and paths according to query string values, and sometimes we want to create the SiteMapPath dynamically. This code sample shows how to achieve these goals by handling the SiteMap.SiteMapResolve event.

CSASPNETEncryptAndDecryptConfiguration, VBASPNETEncryptAndDecryptConfiguration
Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215027 http://1code.codeplex.com/releases/view/62253#DownloadId=215106
In this sample, we encrypt and decrypt some sensitive information in the config file of a web application by using RSA asymmetric encryption. This project contains two snippets. The first demonstrates how to use RSACryptoServiceProvider to generate a public key and the corresponding private key and then encrypt/decrypt a string value on a page. The second shows how to use the RSA configuration provider to encrypt and decrypt a configuration section in the web.config of a web application. (Screenshots in the original post show the connectionStrings section in plain text and the encrypted connectionString.) Note that if you store sensitive data in any of the following configuration sections, we cannot encrypt it by using a protected configuration provider: <processModel> <runtime> <mscorlib> <startup> <system.runtime.remoting> <configProtectedData> <satelliteassemblies> <cryptographySettings> <cryptoNameMapping>

CSASPNETFileUploadStatus
Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215028
I believe ASP.NET programmers will like this sample, because in many cases we need customers to know the current status of uploading files, including the upload speed, completion percentage and so on. The uploading data can be retrieved in two places: the client side and the server side. On the client, for safety reasons, the file upload status cannot be obtained from JavaScript or server-side code, so we would need a COM component like Flash or Silverlight. I do not like this approach because the customer needs to install these components, and we also need to learn another programming framework. On the server side, we can get the information through code, but the key question is how to report the results back to the client. In this sample we combine a custom HttpModule and AJAX to illustrate how to analyze the HTTP protocol, how to break up the file request packets, how to customize the location of the server-side file cache, how to return the file upload status back to the client, and so on.

CSASPNETHighlightCodeInPage, VBASPNETHighlightCodeInPage
Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215029 http://1code.codeplex.com/releases/view/62253#DownloadId=215108
This sample imitates a system that needs to display highlighted code in an ASP.NET page. As a matter of fact, sometimes we input code such as C# or HTML in a web page and we need this code to be highlighted for a better reading experience; highlighted code is also easier to keep in mind. So this sample shows how to highlight code in an ASP.NET page. It is not difficult to highlight code in a web page by using the String.Replace method directly. This method returns a new string in which all occurrences of a specified string in the current instance are replaced with another specified string. However, it may not be a good idea: it is not extremely fast (in fact, it's pretty slow), and it is hard to highlight multiple keywords with String.Replace directly. Sometimes we need to copy source code from Visual Studio to a web page and, for readability, set different types of keywords to different colors, which String.Replace alone cannot do. To handle this, we use a hashtable variable to store the different code languages and their related regular expressions with matching options, and we define the CSS styles used to highlight the code in the page. The sample project then automatically adds the style object to each matching string of code. A step-by-step guide illustrating how to highlight the code in an ASP.NET page:
1. The HighlightCodePage.aspx page: choose a language in the dropdownlist control, paste the code into the textbox control, then click the HighLight button.
2. Display the highlighted code in an ASP.NET page: after the user clicks the HighLight button, the highlighted code is displayed at the right side of the page.

CSASPNETPreventMultipleWindows
Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215032
This sample demonstrates, step by step, how to detect and prevent multiple window or tab usage in web applications. The sample imitates a system that needs to prevent multiple windows or tabs to solve problems like shared sessions, duplicated logins, data concurrency, etc. In fact, there are many ways to achieve this goal. Here we give a JavaScript-based solution: the sample shows how to use the window.name property to check correct links and send other requests to invalid pages.
This code sample uses two user controls to distinguish between the base page and target pages; users only need to drag the appropriate control onto each web form page. So users need not write repetitive code in every page, which makes coding lighter and modifications more convenient.

JSVirtualKeyboard
Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215093
This article describes an All-In-One framework sample that demonstrates, step by step, how to build a virtual keyboard in your HTML page. Sometimes we may need to offer a virtual keyboard to let users input something without their real keyboards. This scenario often occurs when users enter their password to get access to our sites and we want to protect the password from certain kinds of back-door software — a key-logger, for example — and a virtual keyboard on the page is a good choice here. To create a virtual keyboard, we first need to add some buttons to the page. When a user clicks a button, the JavaScript function handling the onclick event inputs the appropriate character into the textbox. That is the simple logic of this feature. However, if we really want a virtual keyboard that substitutes for the real keyboard completely, we will need more advanced logic to handle keys like Caps-Lock and Shift. That will be complex work to achieve.

CSASPNETDataListImageGallery
Download: http://1code.codeplex.com/releases/view/62261#DownloadId=215267
This code sample demonstrates how to create an Image Gallery application by using the DataList control in ASP.NET. You may find the Image Gallery widely used in many social networking sites, personal websites and e-business websites. For example, you may use the Image Gallery to show a library of personal uploaded images on a personal website. Slideshow is also a popular tool to display images on websites. This code sample demonstrates how to use the DataList and ImageButton controls in ASP.NET to create an Image Gallery with image navigation. You can click on a thumbnail image in the DataList control to display a larger version of the image on the page. This sample code reads the image paths from a certain directory into a FileInfo array. Then, the FileInfo array is used to populate a custom DataTable object which is bound to the DataList control. This code sample also implements a custom paging system that allows five images to be displayed horizontally on one page. The following link buttons are used to implement the custom paging system: First, Previous, Next, Last. Note: We recommend that you use this method to load no more than five images at a time. You can also set the SelectedIndex property of the DataList control to limit the number of thumbnail images that can be selected. To indicate which image is selected, you can set the SelectedStyle property of the DataList control.

VBASPNETSearchEngine
Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215112
This sample shows how to implement a simple search engine in an ASP.NET web site. It uses a LIKE condition in a SQL statement to search the database, then highlights the keywords in the search results by using regular expressions and JavaScript.

New Windows General code samples

CSCheckEXEType, VBCheckEXEType
Downloads: http://1code.codeplex.com/releases/view/62253#DownloadId=215045 http://1code.codeplex.com/releases/view/62253#DownloadId=215120
The sample demonstrates how to check an executable file's type. For a given executable file, we can determine:
1. whether it is a console application
2. whether it is a .NET application
3. whether it is a 32-bit native application
4. the full display name of a .NET application, e.g. System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL

New Internet Explorer code samples

CSIEExplorerBar, VBIEExplorerBar
Downloads: http://1code.codeplex.com/releases/view/62253#DownloadId=215060 http://1code.codeplex.com/releases/view/62253#DownloadId=215133
The sample demonstrates how to create and deploy an IE Explorer Bar which lists all the images in a web page.

CSBrowserHelperObject, VBBrowserHelperObject
Downloads: http://1code.codeplex.com/releases/view/62253#DownloadId=215044 http://1code.codeplex.com/releases/view/62253#DownloadId=215119
The sample demonstrates how to create and deploy a Browser Helper Object; the BHO in this sample is used to disable the context menu in IE.

New Windows Workflow Foundation code samples

CSWF4ActivitiesCorrelation
Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215085
Consider that there are two such workflow instances:

        start                      start
          |                          |
   Receive activity          Receive activity
          |                          |
   Receive2 activity         Receive2 activity
          |                          |

A WCF request comes in to call the second Receive2 activity. Which instance should take care of the request? The answer is correlation. This sample will show you how to correlate two workflow services to work together.

--------------

New ASP.NET code samples

CSASPNETBreadcrumbWithQueryString
Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215022
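To illustrate one of the checks the CSCheckEXEType sample above performs (detecting whether an EXE is a .NET application, and reading its full display name), here is a minimal independent sketch. It is not the sample's code, just one well-known way to do it with AssemblyName.GetAssemblyName:

using System;
using System.Reflection;

class ExeTypeCheck
{
    // Returns true if the file at 'path' is a managed (.NET) executable.
    // GetAssemblyName reads the CLR metadata without loading the assembly
    // and throws BadImageFormatException for native images.
    static bool IsManagedExecutable(string path)
    {
        try
        {
            AssemblyName name = AssemblyName.GetAssemblyName(path);
            // For a .NET application this also yields the full display name,
            // e.g. "System, Version=2.0.0.0, Culture=neutral, ...".
            Console.WriteLine(name.FullName);
            return true;
        }
        catch (BadImageFormatException)
        {
            return false;
        }
    }

    static void Main(string[] args)
    {
        Console.WriteLine(IsManagedExecutable(args[0]));
    }
}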

    Read the article

  • The Business of Winning Innovation: An Exclusive Blog Series

    - by Kerrie Foy
    "The Business of Winning Innovation” is a series of articles authored by Oracle Agile PLM experts on what it takes to make innovation a successful and lucrative competitive advantage. Our customers have proven Agile PLM applications to be enormously flexible and comprehensive, so we’ve launched this article series to showcase some of the most fascinating, value-packed use cases. In this article by Keith Colonna, we kick-off the series by taking a look at the science side of innovation within the Consumer Products industry and how PLM can help companies innovate faster, cheaper, smarter. This article will review how innovation has become the lifeline for growth within consumer products companies and how certain companies are “winning” by creating a competitive advantage for themselves by taking a more enterprise-wide,systematic approach to “innovation”.   Managing the Science of Innovation within the Consumer Products Industry By: Keith Colonna, Value Chain Solution Manager, Oracle The consumer products (CP) industry is very mature and competitive. Most companies within this industry have saturated North America (NA) with their products thus maximizing their NA growth potential. Future growth is expected to come from either expansion outside of North America and/or by way of new ideas and products. Innovation plays an integral role in both of these strategies, whether you’re innovating business processes or the products themselves, and may cause several challenges for the typical CP company, Becoming more innovative is both an art and a science. Most CP companies are very good at the art of coming up with new innovative ideas, but many struggle with perfecting the science aspect that involves the best practice processes that help companies quickly turn ideas into sellable products and services. Symptoms and Causes of Business Pain Struggles associated with the science of innovation show up in a variety of ways, like: · Establishing and storing innovative product ideas and data · Funneling these ideas to the chosen few · Time to market cycle time and on-time launch rates · Success rates, or how often the best idea gets chosen · Imperfect decision making (i.e. the ability to kill projects that are not projected to be winners) · Achieving financial goals · Return on R&D investment · Communicating internally and externally as more outsource partners are added globally · Knowing your new product pipeline and project status These challenges (and others) can be consolidated into three root causes: A lack of visibility Poor data with limited access The inability to truly collaborate enterprise-wide throughout your extended value chain Choose the Right Remedy Product Lifecycle Management (PLM) solutions are uniquely designed to help companies solve these types challenges and their root causes. However, PLM solutions can vary widely in terms of configurability, functionality, time-to-value, etc. Business leaders should evaluate PLM solution in terms of their own business drivers and long-term vision to determine the right fit. Many of these solutions are point solutions that can help you cure only one or two business pains in the short term. Others have been designed to serve other industries with different needs. Then there are those solutions that demo well but are owned by companies that are either unable or unwilling to continuously improve their solution to stay abreast of the ever changing needs of the CP industry to grow through innovation. 
What the Right PLM Solution Should Do for You Based on more than twenty years working in the CP industry, I recommend investing in a single solution that can help you solve all of the issues associated with the science of innovation in a totally integrated fashion. By integration I mean the (1) integration of the all of the processes associated with the development, maintenance and delivery of your product data, and (2) the integration, or harmonization of this product data with other downstream sources, like ERP, product catalogues and the GS1 Global Data Synchronization Network (or GDSN, which is now a CP industry requirement for doing business with most retailers). The right PLM solution should help you: Increase Revenue. A best practice PLM solution should help a company grow its revenues by consolidating product development cycle-time and helping companies get new and improved products to market sooner. PLM should also eliminate many of the root causes for a product being returned, refused and/or reclaimed (which takes away from top-line growth) by creating an enterprise-wide, collaborative, workflow-driven environment. Reduce Costs. A strong PLM solution should help shave many unnecessary costs that companies typically take for granted. Rationalizing SKU’s, components (ingredients and packaging) and suppliers is a major opportunity at most companies that PLM should help address. A natural outcome of this rationalization is lower direct material spend and a reduction of inventory. Another cost cutting opportunity comes with PLM when it helps companies avoid certain costs associated with process inefficiencies that lead to scrap, rework, excess and obsolete inventory, poor end of life administration, higher cost of quality and regulatory and increased expediting. Mitigate Risk. Risks are the hardest to quantify but can be the most costly to a company. Food safety, recalls, line shutdowns, customer dissatisfaction and, worst of all, the potential tarnishing of your brands are a few of the debilitating risks that CP companies deal with on a daily basis. These risks are so uniquely severe that they require an enterprise PLM solution specifically designed for the CP industry that safeguards product information and processes while still allowing the art of innovation to flourish. Many CP companies have already created a winning advantage by leveraging a single, best practice PLM solution to establish an enterprise-wide, systematic approach to innovation. Oracle’s Answer for the Consumer Products Industry Oracle is dedicated to solving the growth and innovation challenges facing the CP industry. Oracle’s Agile Product Lifecycle Management for Process solution was originally developed with and for CP companies and is driven by a specialized development staff solely focused on maintaining and continuously improving the solution per the latest industry requirements. Agile PLM for Process helps CP companies handle all of the processes associated with managing the science of the innovation process, including: specification management, new product development/project and portfolio management, formulation optimization, supplier management, and quality and regulatory compliance to name a few. And as I mentioned earlier, integration is absolutely critical. Many Oracle CP customers, both with Oracle ERP systems and non-Oracle ERP systems, report benefits from Oracle’s Agile PLM for Process. 
In future articles we will explain in greater detail how both existing Oracle customers (like Gallo, Smuckers, Land-O-Lakes and Starbucks) and new Oracle customers (like ConAgra, Tyson, McDonalds and Heinz) have realized the benefits of Agile PLM for Process and its integration with their ERP systems.

More to Come

Stay tuned for more articles in our blog series "The Business of Winning Innovation." While we will also feature articles focused on other industries, look forward to more on how Agile PLM for Process addresses the innovation challenges facing the CP industry. Additional topics include: Innovation Data Management (IDM), New Product Development (NPD), Product Quality Management (PQM), Menu Management, Private Label Management, and more!

Watch this video for more info about Agile PLM for Process.

    Read the article

  • Putting an ASP.NET application into OFFLINE mode

    - by Jason Ulloa
    One feature every application should have is the ability to go into OFFLINE mode, blocking user access. This is essential when we want to make changes to the application (deploy an update, tweak something, etc.) or to its database, and avoid problems with users who happen to be logged in at that moment. Many examples around the Web accomplish this task using one of two techniques:

1. The first is the App_Offline.htm file. This technique has a drawback, however: once the file has been uploaded, the application is locked completely and there is no way to bring it back ONLINE other than deleting the file. In other words, we cannot control it from within the application.

2. The second is the httpRuntime element, but it has the same problem. Once OFFLINE mode is enabled through this element, we can no longer reach an admin page to change it back. An example of the httpRuntime element:

<configuration>
  <system.web>
    <httpRuntime enable="false" />
  </system.web>
</configuration>

Given the above, the best option is an admin page that can put the site into OFFLINE mode while keeping the admin area itself accessible, so we can later flip the value back and bring the application ONLINE again. For this we will use the application's web.config and a small class in charge of reading and writing the values.

The first step is to open web.config and define, inside appSettings, two new keys that will hold the values for the application's OFFLINE mode:

<appSettings>
  <add key="IsOffline" value="false" />
  <add key="IsOfflineMessage" value="The system is temporarily unavailable due to maintenance." />
</appSettings>

In the keys above, IsOffline has a value of false; this tells the application that it is currently running in ONLINE mode, and it is the value we will later change to true to switch to OFFLINE mode. The second key (IsOfflineMessage) holds the message that will be shown to users while the site is in OFFLINE mode.

With the two keys defined in web.config, we will write a custom class to read and write the values. Add a new class item to the project, name it SettingsRules, and declare it public.
This class will contain two methods. The first one reads the values:

// Requires: using System.Configuration; and using System.Web.Configuration;
public string readIsOnlineSettings(string sectionToRead)
{
    // Open the application's web.config
    Configuration cfg = WebConfigurationManager.OpenWebConfiguration(
        System.Web.Hosting.HostingEnvironment.ApplicationVirtualPath);

    // Read the requested appSettings key
    KeyValueConfigurationElement isOnlineSettings =
        (KeyValueConfigurationElement)cfg.AppSettings.Settings[sectionToRead];

    return isOnlineSettings.Value;
}

The second method writes the new values back to web.config:

public bool saveIsOnlineSettings(string sectionToWrite, string value)
{
    bool succesFullySaved;

    try
    {
        Configuration cfg = WebConfigurationManager.OpenWebConfiguration(
            System.Web.Hosting.HostingEnvironment.ApplicationVirtualPath);

        KeyValueConfigurationElement repositorySettings =
            (KeyValueConfigurationElement)cfg.AppSettings.Settings[sectionToWrite];

        if (repositorySettings != null)
        {
            repositorySettings.Value = value;
            cfg.Save(ConfigurationSaveMode.Modified);
        }
        succesFullySaved = true;
    }
    catch (Exception)
    {
        succesFullySaved = false;
    }
    return succesFullySaved;
}

Finally, we define a region in the class named instance, containing a property that returns an instance of the class (so we don't have to create one each time):

#region instance

private static SettingsRules m_instance;

// Properties
public static SettingsRules Instance
{
    get
    {
        if (m_instance == null)
        {
            m_instance = new SettingsRules();
        }
        return m_instance;
    }
}

#endregion instance

With this, our main class is complete, so we move on to the pages and the rest of the code that completes the functionality.

To complement the work done in web.config we will use the trusty GLOBAL.ASAX. It will contain the code that detects whether the application is ONLINE or OFFLINE and blocks every page and directory except the one we have designated as the admin area, so that we can later reconfigure the site. The Global.asax event we will use is Application_BeginRequest:

protected void Application_BeginRequest(Object sender, EventArgs e)
{
    if (Convert.ToBoolean(SettingsRules.Instance.readIsOnlineSettings("IsOffline")))
    {
        // Path of the requested page, up to and including the last "/"
        string Virtual = Request.Path.Substring(0, Request.Path.LastIndexOf("/") + 1);

        if (Virtual.ToLower().IndexOf("/admin/") == -1)
        {
            // Not the admin section: block the request and show the maintenance page
            Server.Transfer("~/TemporarilyOfflineMessage.aspx");
        }
    }
}

The first line of the if checks whether the web.config value is true or false. If it is true, we take the requested URL and check whether it belongs to the admin section (this section is nothing more than a folder in the application named admin, and it can be renamed to anything else). If the result of that check is -1, the path does not match, and that is the flag that lets us immediately block the current page, transferring the user to a maintenance page.

Now, inside the Admin folder we create a new ASP.NET page named OnlineSettings.aspx to read and update the web.config values, plus a Default.aspx page for testing. Our OnlineSettings page has two important steps.

1. Read the current configuration values:

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        IsOffline.Checked = Convert.ToBoolean(mySettings.readIsOnlineSettings("IsOffline"));
        OfflineMessage.Text = mySettings.readIsOnlineSettings("IsOfflineMessage");
    }
}

2. Update them with the new values:
protected void UpdateButton_Click(object sender, EventArgs e)
{
    // Preserve line breaks in the message when it is rendered as HTML
    string htmlMessage = OfflineMessage.Text.Replace(Environment.NewLine, "<br />");

    // Update the settings while holding the application lock
    Application.Lock();
    if (IsOffline.Checked)
    {
        mySettings.saveIsOnlineSettings("IsOffline", "True");
        mySettings.saveIsOnlineSettings("IsOfflineMessage", htmlMessage);
    }
    else
    {
        mySettings.saveIsOnlineSettings("IsOffline", "false");
        mySettings.saveIsOnlineSettings("IsOfflineMessage", htmlMessage);
    }
    Application.UnLock();
}

Finally, in the root of the application we create a new aspx page named TemporarilyOfflineMessage.aspx, which is the page shown while the application is locked. In the end the application looks something like this (screenshots in the original post): the blocked page, the lock-configuration page, and the finished sample application.
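The post does not show the code-behind of TemporarilyOfflineMessage.aspx itself. Here is a minimal sketch, assuming the page markup declares an asp:Label named MessageLabel (a hypothetical name, not from the original); it simply reads the stored message through the same SettingsRules class:

// Hypothetical code-behind for TemporarilyOfflineMessage.aspx.
// Assumes the markup contains <asp:Label ID="MessageLabel" runat="server" />.
using System;

public partial class TemporarilyOfflineMessage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Show the maintenance message stored in web.config; the admin page
        // saved it with <br /> line breaks, so it can be assigned directly.
        MessageLabel.Text = SettingsRules.Instance.readIsOnlineSettings("IsOfflineMessage");
    }
}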

    Read the article

  • Get Application Title from Windows Phone

    - by psheriff
    In a Windows Phone application that I am currently developing I needed to retrieve the Application Title of the phone application. You can set the Deployment Title in the Properties of your Windows Phone application, but getting to this value programmatically can be a little tricky. This article assumes that you have Visual Studio 2010 and the Windows Phone tools installed along with it. The Windows Phone tools must be downloaded separately and installed with Visual Studio 2010. You may also download the free Visual Studio 2010 Express for Windows Phone developer environment.

The WMAppManifest.xml File

First off, you need to understand that when you set the Deployment Title in the Properties window of your Windows Phone application, this title actually gets stored in an XML file located under the \Properties folder of your application. This XML file is named WMAppManifest.xml. A portion of this file is shown in the following listing:

<?xml version="1.0" encoding="utf-8"?>
<Deployment xmlns="http://schemas.microsoft.com/windowsphone/2009/deployment"
            AppPlatformVersion="7.0">
  <App xmlns=""
       ProductID="{71d20842-9acc-4f2f-b0e0-8ef79842ea53}"
       Title="Mobile Time Track"
       RuntimeType="Silverlight"
       Version="1.0.0.0"
       Genre="apps.normal"
       Author="PDSA, Inc."
       Description="Mobile Time Track"
       Publisher="PDSA, Inc.">
    ...
  </App>
</Deployment>

Notice the "Title" attribute in the <App> element in the above XML document. This is the value that gets set when you modify the Deployment Title in the Properties window of your Phone project. The only value you can set from the Properties window is the Title; all of the other attributes you see here must be set by going into the XML file and modifying them directly. Note that this information duplicates some of the information that you can also set from the Assembly Information… button in the Properties window. Why Microsoft did not just use that information, I don't know.

Reading Attributes from WMAppManifest

I searched all over the namespaces and classes within the Windows Phone DLLs and could not find a way to read the attributes within the <App> element. Thus, I had to resort to good old-fashioned XML processing. First off, I created a WinPhoneCommon class and added two static methods, as shown in the snippet below:

public class WinPhoneCommon
{
    /// <summary>
    /// Returns the Application Title
    /// from the WMAppManifest.xml file
    /// </summary>
    /// <returns>The application title</returns>
    public static string GetApplicationTitle()
    {
        return GetWinPhoneAttribute("Title");
    }

    /// <summary>
    /// Returns the Application Description
    /// from the WMAppManifest.xml file
    /// </summary>
    /// <returns>The application description</returns>
    public static string GetApplicationDescription()
    {
        return GetWinPhoneAttribute("Description");
    }

    // ... GetWinPhoneAttribute method here ...
}

In your Windows Phone application you can now simply call WinPhoneCommon.GetApplicationTitle() or WinPhoneCommon.GetApplicationDescription() to retrieve the Title or Description property from the WMAppManifest.xml file, respectively. You will notice that each of these methods makes a call to the GetWinPhoneAttribute method.
This method is shown in the following code snippet:

/// <summary>
/// Gets an attribute from the Windows Phone WMAppManifest.xml file.
/// To use this method, add a reference to the System.Xml.Linq DLL.
/// </summary>
/// <param name="attributeName">The attribute to read</param>
/// <returns>The attribute's value</returns>
private static string GetWinPhoneAttribute(string attributeName)
{
    string ret = string.Empty;

    try
    {
        XElement xe = XElement.Load("WMAppManifest.xml");
        var attr = (from manifest in xe.Descendants("App")
                    select manifest).SingleOrDefault();
        if (attr != null)
            ret = attr.Attribute(attributeName).Value;
    }
    catch
    {
        // Ignore errors in case this method is called
        // from design time in VS.NET
    }

    return ret;
}

I love using the new LINQ to XML classes contained in System.Xml.Linq.dll. When I did a Bing search, the only samples I found for reading attribute information from WMAppManifest.xml used either XmlReader or XmlReaderSettings objects. These are fine and work, but involve a little extra code. Instead of using these, I added a reference to System.Xml.Linq.dll, then added two using statements to the top of the WinPhoneCommon class:

using System.Linq;
using System.Xml.Linq;

Now, with just a few lines of LINQ to XML code, you can read the App element and extract the appropriate attribute that you pass into the GetWinPhoneAttribute method. Notice that I added a little bit of exception-handling code in this method. I ignore the exception in case you call this method in the Loaded event of a user control; at design time you cannot access the WMAppManifest file, and thus an exception would be thrown.

Summary

In this article you learned how to retrieve the attributes from the WMAppManifest.xml file. I use this technique to grab information that I would otherwise have to hard-code in my application. Getting the Title or Description for your Windows Phone application is easy with just a little bit of LINQ to XML code.

NOTE: You can download the complete sample code at my website, http://www.pdsa.com/downloads. Choose Tips & Tricks, then "Get Application Title from Windows Phone" from the drop-down.

Good luck with your coding,
Paul Sheriff

** SPECIAL OFFER FOR MY BLOG READERS **
Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled "Silverlight XAML for the Complete Novice - Part 1".
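To round out the article, here is a quick usage sketch; the page layout and control name are assumptions based on the standard Windows Phone project template, not taken from the article itself. It populates the header TextBlock that the template generates so the on-screen title stays in sync with the deployment title:

// Hypothetical page consuming WinPhoneCommon; assumes the default
// template's TextBlock named ApplicationTitle in MainPage.xaml.
using Microsoft.Phone.Controls;

public partial class MainPage : PhoneApplicationPage
{
    public MainPage()
    {
        InitializeComponent();

        // Pull the Title attribute from WMAppManifest.xml so the header
        // never drifts out of sync with the deployment title.
        ApplicationTitle.Text = WinPhoneCommon.GetApplicationTitle();
    }
}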

    Read the article

  • CodePlex Daily Summary for Saturday, November 09, 2013

    CodePlex Daily Summary for Saturday, November 09, 2013

Popular Releases

· Coolpy: CoolpyI: a ROM release for Windows Phone (the rest of the Chinese release notes were lost to encoding).
· Praxis2: Especificaciones de Casos de Uso Iteracción 1: use-case specifications for iteration 1. Owners: Anderson (UC Search Work, UC Register Work, UC Register Rental) and Juan Victor (UC Search Client, UC Register Client, UC Register Delivery).
· Media Companion: Media Companion MC3.586b: TV multi-episodes restored to MC. There have been plenty of bug fixes lately, with IMDB changing their info, and some great feedback from users. Thanks to Billyad2000, multi-episodes now display correctly in Media Companion, complete with all functionality. This was a hard effort, with more than a few devs having looked at this code in the past to get it working, but, like a light bulb going off, Billy managed to massage the code and restore this much-missed function...
· Dynamics AX 2012 R2 Kitting: AX 2012 R2 CU7 release of Kitting: the AX 2012 R2 CU7 release of Kitting, released both as an XPO and as a model.
· PantheR's GraphX for .NET: GraphX for .NET RELEASE v1.0.1: PLEASE RATE THIS RELEASE IF YOU LIKED IT! THANKS! :) Changed ExportToImage() parameters: added a useZoomControlSurface param that lets the zoom control's parent visual space be used for export instead of the whole GraphArea panel; this technique makes it possible to export graphs with negative vertex coordinates. Added a common interface, IZoomControl, for all included zoom controls. Added a new method, GraphArea.GenerateGraph(), that accepts only optional parameters and will use in...
· ConEmu - Windows console with tabs: ConEmu 131107 [Alpha]: developer build, x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Useful information: http://superuser.com/questions/tagged/conemu, http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ, http://code.google.com/p/conemu-maximus5/wiki/TableOfContents. To use ConEmu in portable mode, just create an empty "ConEmu.xml" file next to "ConEmu.exe".
· Team Foundation Server Upgrade Guide: v3 - TFS 2013 Upgrade Guide: welcome to the Team Foundation Server Upgrade Guide. Quality bar: the documentation has been reviewed by Visual Studio ALM Rangers but has not been through an independent technical review. Known issues: none, although the Upgrading SharePoint section is not included yet and the independent technical review is pending.
· VidCoder: 1.5.12 Beta: added an option (under Options -> Advanced) to preserve Created and Last Modified times when converting files; added an option to mark an automatically selected subtitle track as "Default"; updated the HandBrake core to SVN 5878; fixed auto passthrough not applying just after switching to it; fixed a bug where preset/profile/tune could disappear when reverting a preset.
· Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2013.9.25): XrmToolBox improvement: corrected changing the connection from the status dropdown. Tool updates: Audit Center (v1.2013.9.10) -> publish entities; Iconator (v1.2013.9.27) -> optimized asynchronous loading of images and entities; MetadataDocumentGenerator (v1.2013.11.6) -> corrected reading of system entities with an incorrect attribute type; Script Manager (v1.2013.9.27) -> retrieve only custom events; SiteMapEditor (v1.2013.11.7) -> reset of the CRM 2013 SiteMap; ViewLayoutReplicator (v1.201...
· Microsoft SQL Server Product Samples: Database: SQL Server 2014 CTP2 In-Memory OLTP Sample: showcases the new In-Memory OLTP feature, which is part of SQL Server 2014 CTP2. It shows the new memory-optimized tables and natively compiled stored procedures and can be used to show the performance benefit of In-Memory OLTP. Installation instructions are included in the file 'awinmemsample.doc', which is part of the download. You can ask a question about this sample at the SQL Server Samples Forum.
· Composite C1 CMS - Open Source on .NET: Composite C1 4.1: Composite C1 4.1 (4.1.5058.34326). Write a review for this release; help us improve, recommend us. If you are new to Composite C1 and want to install it, see http://docs.composite.net/Getting-started. Highlights of major changes since Composite C1 4.0, general user features: drag and drop images and files like PDF and Word directly from your own desktop and folders into page content; lets you install the Composite Form Builder ...
· CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.9.0: implemented a Recent Scripts list; added checking for plugin updates from the AboutBox; multiple formatting improvements/fixes; implemented selection of the CLR version when preparing a distribution package; added a project panel button for showing the plugin shortcuts list; added a 'What's New?' panel; fixed an auto-formatting scrolling artifact; implemented navigation to the "logical" file (vs. the auto-generated one) from the output panel. To avoid the DLLs getting locked by the OS, use the MSI file for installation.
· Home Access Plus+: v9.7: updated JSON.net; fixed an issue with the Windows 8 app and a green line showing over the booking form; added a Windows 8.1 app, self-signed HAP+ install support (Win), delete-file support (Win) and a timeout for the Logon Tracker; removed the error dialogs on the user card. Note: a web.config file update is required.
· WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.10 binaries: row summaries are now updated whenever autofilter values are modified.
· xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net Visual Studio Runner: a placeholder for downloading Visual Studio runner VSIX files, in case the Gallery is down (or you want to downgrade to an older version).
· VeraCrypt: VeraCrypt version 1.0c: changes between 1.0b and 1.0c (11 November 2013): correctly set the minimum required version in the volume header (this value must always follow the program version after any major changes); this also solves the hidden volume issue.
· Captcha MVC: Captcha MVC 2.5: v2.5: added support for MVC 5; the DefaultCaptchaManager no longer throws an error if the captcha values were entered incorrectly; minor changes. v2.4.1: fixed issues with deleting incorrect values of the captcha token in the SessionStorageProvider, which could lead to the captcha not working with the SessionStorageProvider; minor changes. v2.4: changed the IIntelligencePolicy interface, added ICaptchaManager as a parameter for all methods; improved font size ...
· Duplica: duplica 0.2.498: this is the first stable release.
· DNN Blog: 06.00.01: the first bugfix release of the new v6 blog module. Changes: added some robustness to the v5-v6 scripts to cater for some rare upgrade scenarios; changed the name of the module definition to avoid a clash with Evoq Social; added a sitemap provider.
· VG-Ripper & PG-Ripper: VG-Ripper 2.9.50: changes: NEW: added support for "ImageHostHQ.com", "ImgMoney.net", "ImgSavy.com" and "PixTreat.com" links; bug fixes.

New Projects

· AppBootloader: lets your C/S program be more flexible about automatic updates (the accompanying Chinese description was lost to encoding).
· Arduino Visual Studio: demonstrates using Visual Studio 2012 with an Atmel chip on an Arduino UNO board.
· ASP.NET Identity: ASP.NET Identity is the new membership system for building ASP.NET web applications. ASP.NET Identity allows you to add login features to your application and m...
· Asset Maintenance Management Console: A.M.M.C is an attempt at creating an extremely versatile interface tool to maintain assets.
· AX 2012 R2 SYNC: SYNC for AX 2012 introduces a centralized company concept which holds and manages the enterprise-wide master data for synchronizing across multiple companies.
· BYOND - Build Your OwN Device (audio synths, effects, DSP, sequencer, VST): Byond is an environment for audio and MIDI programming in C#. It is available as a VST plugin or a standalone application.
· CoveSmushbox: a simple .NET library and a Windows CLI for the SMUSH Box.
· FORMULA 2.0: Formula specifications are highly declarative logic programs that can express rich synthesis and verification problems.
· Grostbite Engine: free 3D game engine.
· hMailServer from RoundCube: a RoundCube plugin for interacting with hMailServer 5.4B1950. The plugin allows hMailServer's vacation settings to be configured from RoundCube.
· KDG's IP Reporter: basic reporter for local and private IP addresses.
· LADNS Service Watcher: LADNS Service Watcher.
· Lampguiden: Lampguiden.
· Media Recommender Service: we are six software engineering students developing a media recommendation service as part of our 3rd semester project.
· photograp: SPA photo gallery. Work in progress...
· Power Buddy: the Windows power tray icon only displays two power plans. If this has bothered you since 2009, Power Buddy is for you: it displays all of them.
· Programming Demos: this project contains demonstration code that may be helpful to people learning Visual Basic .NET.
· Prototype : Traveling Alone Website: a website aiming to become an online community for solo travelers.
· ShowDBPool: ShowDBPool.
· Thali: Thali is about making it as easy as falling off a log for users to run their own services on their own devices by building a peer-to-peer web.
· TiendaWebCursoAccentureNet: a...
· Wpf PdfReader: a PDF reader built on WPF and MuPDF. You can operate it with the keyboard, and it saves the user's open records.

    Read the article
