Search Results

Search found 18702 results on 749 pages for 'digital input'.


  • Error in eclipse on run android project

    - by Larz
    I am trying to get a simple Hello World Android project working in Eclipse using an Android emulator, following the examples on developer.android.com. I actually did have a Hello World app working; I then modified its XML files to add a text input field and a button, as the second example on that site shows. This failed to run on the emulator. I then went back and tried to create another simple Hello World project, but it also fails to run. The console says "Waiting for HOME ('android.process.acore') to be launched", but nothing happens, or sometimes a message in the emulator says "Unfortunately, Android Wear has stopped". Below is a sample of the error-filtered log. Debugging this is new to me and I am not sure of the best way to go about it; I am just trying to learn some basic Android development skills.

        05-30 16:19:07.336: E/SELinux(469): SELinux: Loaded file_contexts from /file_contexts,
        05-30 16:19:07.336: E/SELinux(469): digest=
        05-30 16:19:07.376: E/SELinux(469): b0
        05-30 16:19:07.376: E/SELinux(469): 4b
        05-30 16:19:07.756: E/SELinux(469): 03
        05-30 16:19:07.756: E/SELinux(469): 4a
        05-30 16:19:07.826: E/SELinux(469): 73
        05-30 16:19:07.886: E/SELinux(469): ab
        05-30 16:19:07.886: E/SELinux(469): 6d
        05-30 16:19:07.896: E/SELinux(469): 46
        05-30 16:19:07.896: E/SELinux(469): b4
        05-30 16:19:07.896: E/SELinux(469): a5
        05-30 16:19:07.896: E/SELinux(469): 73
        05-30 16:19:07.896: E/SELinux(469): 8a
        05-30 16:19:07.896: E/SELinux(469): ee
        05-30 16:19:07.896: E/SELinux(469): ac
        05-30 16:19:07.906: E/SELinux(469): 68
        05-30 16:19:07.906: E/SELinux(469): ff
        05-30 16:19:07.906: E/SELinux(469): 04
        05-30 16:19:07.906: E/SELinux(469): dc
        05-30 16:19:07.906: E/SELinux(469): b8
        05-30 16:19:07.906: E/SELinux(469): a2
        05-30 16:19:11.806: E/SensorManager(511): sensor or listener is null
        05-30 16:19:16.196: E/BluetoothAdapter(378): Bluetooth binder is null
        05-30 16:19:16.206: E/BluetoothAdapter(378): Bluetooth binder is null
        05-30 16:19:17.186: E/WVMExtractor(54): Failed to open libwvm.so: dlopen failed: library "libwvm.so" not found
        05-30 16:19:17.776: E/AudioCache(54): Error 1, -2147483648 occurred
        05-30 16:19:17.796: E/SoundPool(378): Unable to load sample: (null)
        05-30 16:19:18.536: E/AudioCache(54): Error 1, -2147483648 occurred
        05-30 16:19:18.546: E/SoundPool(378): Unable to load sample: (null)
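    One way to watch only error-priority messages while the emulator boots (a minimal sketch; it assumes the SDK's adb tool is on your PATH) is:

        adb logcat *:E

    Running this in a terminal can make it easier to capture the output around the moment the launch hangs. Incidentally, the "Android Wear has stopped" message suggests the AVD may have been created from a Wear system image rather than a phone image, which is worth checking first.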


  • Linux bonded Interfaces hanging periodically

    - by David
    I have several hosts that are showing problems with connectivity. When working from the command line, for example, typing freezes for a second or so, then recovers - then it does it again. The most egregious example host would freeze (input) for 15-30 seconds, then recover and go out again 5 seconds later. Switching cables didn't do anything - but removing one of the physical cables caused everything to clear up instantly (which is why I think this is a network problem). Looking at the network I couldn't see any packets floating around that would explain this. These Ethernet interfaces (Gigabit Dell) were working normally previously, but since we moved the systems - and put them on a new set of switches - this has been a problem on multiple theoretically identically-configured hosts. The original switches were an HP Procurve 1810-24G and an HP Procurve 1800-24G connected with LLDP; the new switches are both Cisco SG 200-26, which I understand are rebranded Linksys switches. Is this caused by a problem with the switches? Is it the switch configurations? Are the Cisco switches incapable of handling this? I don't see where the configuration is located; I searched the usual /etc/sysconfig/network/devices but there's nothing in there about options (like MII polling) and nothing about the method of balancing the two. Searching scripts, I can't find anything in /etc/init.d/network either. The hosts are almost all Red Hat Enterprise Linux 5.x systems (5.6, 5.7) but some are Ubuntu Server 10.04.3 Lucid Lynx. I need help with both if it comes to that.

    UPDATE: We're also seeing some problems with servers on the original switches. The HP switches and the Cisco switches are also interconnected (temporarily); there is a cable run from one switch to the next. Pings on any of these hosts show about one ICMP packet out of every 5-6 getting dropped (timed out). Could there be an interaction between the two switches? Oh, and the hosts are using bonding with balance-rr as the method.
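    For reference, on RHEL 5 the bonding mode and MII polling interval usually live in /etc/modprobe.conf plus the ifcfg files rather than under /etc/init.d; a minimal sketch (device names and the address are assumed, not taken from the question):

        # /etc/modprobe.conf - bond0 in balance-rr mode with 100 ms link monitoring
        alias bond0 bonding
        options bond0 mode=balance-rr miimon=100

        # /etc/sysconfig/network-scripts/ifcfg-bond0
        DEVICE=bond0
        BOOTPROTO=none
        ONBOOT=yes
        IPADDR=192.168.0.10      # placeholder address
        NETMASK=255.255.255.0

    Note that balance-rr generally expects the switch ports to be grouped into a static link aggregation (EtherChannel-style) group; if the new switches are not configured that way, symptoms like the ones described above are plausible.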


  • How do you splice out a part of an xvid encoded avi file, with ffmpeg? (no problems with other files)

    - by user11955
    I'm using the following command, which works for most files, except what seem to be Xvid-encoded ones:

        /usr/bin/ffmpeg -sameq -i file.avi -ss 00:01:00 -t 00:00:30 -ac 2 -r 25 -copyts output.avi

    This should splice out 30 seconds of video + audio, starting from the 1 minute mark. It does START encoding at the 00:01:00 mark, but for some reason it goes all the way to the end of the file, ignoring that I want just 30 seconds. The output looks like this:

        FFmpeg version git-ecc4bdd, Copyright (c) 2000-2010 the FFmpeg developers
          built on May 31 2010 04:52:24 with gcc 4.4.3 20100127 (Red Hat 4.4.3-4)
          configuration: --enable-libx264 --enable-libxvid --enable-libmp3lame --enable-libopenjpeg --enable-libfaac --enable-libvorbis --enable-gpl --enable-nonfree --enable-libxvid --enable-pthreads --enable-libfaad --extra-cflags=-fPIC --enable-postproc --enable-libtheora --enable-libvorbis --enable-shared
          libavutil     50.15. 2 / 50.15. 2
          libavcodec    52.67. 0 / 52.67. 0
          libavformat   52.62. 0 / 52.62. 0
          libavdevice   52. 2. 0 / 52. 2. 0
          libavfilter    1.20. 0 /  1.20. 0
          libswscale     0.10. 0 /  0.10. 0
          libpostproc   51. 2. 0 / 51. 2. 0
        [mpeg4 @ 0x17cf770]Invalid and inefficient vfw-avi packed B frames detected
        Input #0, avi, from 'file.avi':
          Metadata:
            ISFT            : VirtualDubMod 1.5.10.2 (build 2540/release)
          Duration: 00:02:00.00, start: 0.000000, bitrate: 1587 kb/s
            Stream #0.0: Video: mpeg4, yuv420p, 672x368 [PAR 1:1 DAR 42:23], 25 tbr, 25 tbn, 25 tbc
            Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
        File 'lol6.avi' already exists. Overwrite ? [y/N] y
        Output #0, avi, to 'lol6.avi':
          Metadata:
            ISFT            : Lavf52.62.0
            Stream #0.0: Video: mpeg4, yuv420p, 672x368 [PAR 1:1 DAR 42:23], q=2-31, 200 kb/s, 25 tbn, 25 tbc
            Stream #0.1: Audio: mp2, 48000 Hz, 2 channels, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        [mpeg4 @ 0x17cf770]Invalid and inefficient vfw-avi packed B frames detected
        [buffer @ 0x184b610]Buffering several frames is not supported. Please consume all available frames before adding a new one.
        frame= 1501 fps=104 q=0.0 Lsize=   15612kB time=30.02 bitrate=4259.7kbits/s
        video:15303kB audio:235kB global headers:0kB muxing overhead 0.482620%

    If I convert this file to mp4 first, for example, and then perform the same action, it works perfectly.
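    Builds of this era are finicky about option placement, so one workaround worth trying (a sketch, untested against this exact build) is to seek on the input side instead, so the seek happens before decoding and the output clock starts at zero:

        /usr/bin/ffmpeg -ss 00:01:00 -i file.avi -t 00:00:30 -sameq -ac 2 -r 25 output.avi

    With -ss placed before -i, -t 00:00:30 measures from the seek point; -copyts is dropped here because preserved input timestamps can interfere with the -t cutoff.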


  • Excel data range - to sum series within date range

    - by Mark
    I have a set of data that I would like to manipulate, but my problem is not straightforward. The data has date ranges that include multiple entries of the same date on some days and not on others. What I need to accomplish is to manage a trading account so that no more than 1% of the account is put at risk on any given day (retrospectively). To do this, when a series of trades falls on the same day, I need to total the risk associated with each of those trades so that I can limit the total risk of the combined trades by limiting the position size I take in each. Here is a sample set of the data I am working with. As you can see, there are 5 trades on Jan 3. Each of these trades comes with a risk value. I need to add the risk values of these 5 trades so that I can compare the sum to an account value and then determine if I should take more than 1 position in each trade. There are different numbers of trades on the 4th, 5th, 6th and 9th. I need the values returned in each row so that I can further manipulate them in the spreadsheet. I am not new to Excel, but cannot come up with a solution here - your input is much appreciated. Forgive the presentation below - I cannot upload a picture (new user) and the format does not carry across from Excel.

        Date       Pair      L/S   Initial Risk   Win   Loss   BE   Avg Gain   Avg Loss   pips/swing
        1/3/2012   EUR/USD   S     15             1                 10         15
        1/3/2012   USD/CHF   L     15                   1           0
        1/3/2012   AUD/USD   S     15             1                 16         18
        1/3/2012   NZD/USD   S     15             1                 7          8
        1/3/2012   AUD/JPY   S     10             1                 25         20
        1/4/2012   EUR/USD   L     20             1                 19         19
        1/4/2012   USD/CHF   S     15             1                 17         20
        1/4/2012   EUR/JPY   L     20                   1           0
        1/5/2012   EUR/USD   L     15             1                 10         20
        1/5/2012   GBP/USD   L     20             1                 15         20
        1/5/2012   USD/CHF   S     15                   1           0
        1/5/2012   USD/JPY   S     10             1                 7          10
        1/5/2012   USD/CAD   S     15             1                 28         36
        1/5/2012   AUD/USD   L     15             1                 20         20
        1/6/2012   USD/CAD   S     15             1                 5          -10
        1/6/2012   EUR/JPY   L     15             1                 7          7
        1/9/2012   AUD/USD   S     15             1                 22         30
        1/9/2012   NZD/USD   S     15             1                 10         15
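    A per-row daily total like this is exactly what SUMIF does. Assuming, purely for illustration, that the dates are in column A and the initial-risk values in column D, with data starting in row 2, putting this in a helper column on row 2 and filling down gives every row the combined risk for its trade date:

        =SUMIF($A$2:$A$100, A2, $D$2:$D$100)

    Comparing that daily total against 1% of the account value then flags the days where position sizes need to shrink.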


  • A name was started with an invalid character. Error processing resource

    - by Gallen
    Here is the exact error I'm getting when I try to launch my default.aspx file from the published folder. Can anybody point me in the right direction?

        The XML page cannot be displayed
        Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later.
        --------------------------------------------------------------------------------
        A name was started with an invalid character. Error processing resource 'file:///C:/inetpub/wwwroot/MHNProServices/Default....
        <%@ Page Title="" Language="C#" MasterPageFile="~/ProServices.Master" AutoEventWireup="true" CodeBehind="Default.aspx.cs"...

    Here are the contents of default.aspx:

        <%@ Page Title="" Language="C#" MasterPageFile="~/ProServices.Master" AutoEventWireup="False" CodeBehind="Default.aspx.cs" Inherits="MHNProServices.Default" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server">
            <link type="text/css" href="css/Default.css" rel="Stylesheet" />
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
            <div id="contentHead">
                <img src="css/img/heading_landing.jpg" />
            </div>
            <div id="contentTop"></div>
            <div id="content">
                <div id="contentLeft">
                    <asp:Image ID="displayPicture" runat="server" />
                    <img id="displayOverlay" src="css/img/profilepicture_overlay.gif" />
                    <a id="contentButton_makeAppointment" href="Appointments.aspx?step=start"></a>
                    <a id="contentButton_cancelAppointment" href="Appointments.aspx?step=cancel"></a>
                </div>
                <div id="contentRight">
                    <h3><asp:Label ID="lbl_homepageHeader" runat="server" Text=""></asp:Label></h3>
                    <hr />
                    <asp:Label ID="lbl_homepageContent" runat="server" Text=""></asp:Label>
                </div>
            </div>
            <div id="contentBottom"></div>
        </asp:Content>
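    This symptom usually means the browser received the raw .aspx source: IIS served the file as static content (which the browser then tried to parse as XML, choking on the first <% character) instead of handing it to ASP.NET. A hedged first thing to check, assuming .NET 4 on a 32-bit machine, is that ASP.NET is registered with IIS:

        %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i

    Also confirm the virtual directory is configured as an application and that its application pool targets the same .NET version the project was compiled against.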


  • Gentoo box can't cURL or ping after restarting net.eth1

    - by Curlybraces
    Hi all, the following is completely baffling me. We currently have a Gentoo box which acts as our LAMP, DNS and DHCP server. This is assigned a static IP on the network. The server is connected directly to the internet via a BT Business Hub router. The server is also connected to a patch panel/switch port which connects the remaining office (around 10 PCs) to the server. Everything had been plain sailing until the other day, when the server was restarted. For some reason, only a portion of network accessibility is now available, depending on which Ethernet device was last restarted. Restarting net.eth0 allows the office server to cURL, ping, etc. but stops all networked PCs from accessing the internet. Restarting net.eth1 then restores internet to the network but stops the server from cURLing, pinging, etc. again. However, even when the server can't ping or cURL, I can still SSH and make remote MySQL connections from the server command line to other external servers that we own. Here's my routing table (the router is 192.168.1.254):

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
        127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
        0.0.0.0         192.168.1.254   0.0.0.0         UG    0      0        0 eth1

    Here's my /etc/conf.d/net:

        iface_eth0="192.168.1.99 broadcast 192.168.1.255 netmask 255.255.255.0"
        iface_eth1="dhcp"

    None of the above has ever been changed, however. Things have just ceased to operate correctly, which makes me think it's a freshly added iptables rule. Here's the iptables filter table:

        Chain INPUT (policy ACCEPT)
        target     prot opt source            destination
        DROP       tcp  --  ##.##.##.##       anywhere       tcp dpt:ssh
        ACCEPT     all  --  anywhere          anywhere       state RELATED,ESTABLISHED
        ACCEPT     all  --  anywhere          anywhere
        ACCEPT     tcp  --  anywhere          anywhere       tcp dpt:2199
        ACCEPT     tcp  --  anywhere          anywhere       tcp dpt:3199
        ACCEPT     tcp  --  ##.###.###.##     anywhere       tcp dpt:http
        ACCEPT     tcp  --  ###.###.##.##     anywhere       tcp dpt:2199
        ACCEPT     tcp  --  ##.###.###.###    anywhere       tcp dpt:http
        ACCEPT     tcp  --  ##.###.##.##      anywhere       tcp dpt:http
        ACCEPT     tcp  --  ##.###.###.###    anywhere       tcp dpt:3128
        ACCEPT     udp  --  ##.###.###.###    anywhere       udp dpt:3128
        ACCEPT     tcp  --  ##.###.###.###    anywhere       tcp dpt:http
        ACCEPT     tcp  --  ##.###.###.###    anywhere       tcp dpt:https

        Chain FORWARD (policy ACCEPT)
        target     prot opt source            destination
        ACCEPT     all  --  anywhere          ##.###.###.##
        DROP       all  --  anywhere          ##.###.###.##
        ACCEPT     all  --  anywhere          anywhere       state NEW,ESTABLISHED

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source            destination
        ACCEPT     udp  --  anywhere          anywhere       udp spt:2199
        ACCEPT     udp  --  anywhere          anywhere       udp spt:4817
        ACCEPT     udp  --  anywhere          anywhere       udp spt:4819
        ACCEPT     udp  --  anywhere          anywhere       udp spt:3199

    Help gratefully appreciated.
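    One pattern worth noting in that routing table: eth0 and eth1 both carry a 192.168.1.0/24 route, so replies can leave by a different interface than the one a request arrived on, and whichever net.ethX script ran last can rewrite the default route. A sketch of a source-based policy-routing workaround with iproute2 (untested; the table number is arbitrary, the addresses are taken from the config above):

        ip route add 192.168.1.0/24 dev eth0 src 192.168.1.99 table 100
        ip route add default via 192.168.1.254 dev eth0 table 100
        ip rule add from 192.168.1.99 table 100

    This pins traffic sourced from eth0's address to eth0. The simpler fix, if both NICs really sit on the same LAN, is usually to give only one of them a default route.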


  • SMBfs mounting OK, listing OK, Read KO, smbclient OK

    - by Kwaio
    I've tried to make the title the most meaningful I could, but it still looks ugly. The premises: we are using RHEL3-U8 as the OS on most servers here (don't ask me why or suggest upgrading; it's not on today's schedule). That means the kernel used is 2.4.21. I have no access to the remote server, but I know it is a NetApp NAS rack.

        $> smbclient --version
        Version 3.0.9-1.3E.9

    Here is the /etc/fstab line:

        //NASHOSTNAME/share /mnt/mydir smbfs ro,uid=123,gid=123,workgroup=XXXX,credentials=/somefile 0 0

    Here is the corresponding mount output line:

        //NASHOSTNAME/share on /mnt/mydir type smbfs (0)

    The symptoms: I can list the share without problems, even cd in there. The issue appears if I try to read any file:

        $> cat /mnt/mydir/fileX.txt
        cat: /mnt/mydir/fileX.txt: Input/output error

    In the system logs (/var/log/kernel for example) the following errors appear:

        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5
        Jul 30 15:40:02 hostname kernel: smb_readpage_sync: fileX.txt open failed, error=-5

    The ERRHRD code 0x001F error is "general hardware failure", although Samba sometimes uses it for a different purpose; see http://www.ubiqx.org/cifs/SMB.html [strange behaviour alert]. Additional information: there is another SMB mount point on the system pointing to a (Linux) host running Samba, and that one works. What I have tried: I added debug=4 to the mount options and remounted the share, and the logs still look the same. I tried mounting the share with smbclient, and I am able to fetch files with the get command. Both targets are in the same subnet, so a network problem should be ruled out, even though the LAN goes through a VPN with optimizers; the MTU has already been decreased to 1450. I can also mount the share through NFS, but then the files are all root.root 700 and I need to read them as another user...
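    Since smbclient works where the 2.4 in-kernel smbfs client fails, one hedged thing to try - assuming a CIFS-capable kernel module is actually available on this RHEL3 box, which is not a given for 2.4.21 - is the newer CIFS filesystem with the same credentials:

        mount -t cifs //NASHOSTNAME/share /mnt/mydir -o ro,uid=123,gid=123,credentials=/somefile

    smbfs in 2.4 kernels predates much of the dialect negotiation that NetApp filers expect, so the open failures may be a protocol mismatch rather than a genuine hardware error.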


  • Remote access to internal machine (ssh port-forwarding)

    - by MacUsers
    I have a server (serv05) at work with a public IP, hosting two KVM guests - vtest1 and vtest2 - on two different private networks - 192.168.122.0 and 192.168.100.0 - respectively, this way:

        [root@serv05 ~]# ip -o addr show | grep -w inet
        1: lo    inet 127.0.0.1/8 scope host lo
        2: eth0    inet xxx.xxx.xx.197/24 brd xxx.xxx.xx.255 scope global eth0
        4: virbr1    inet 192.168.100.1/24 brd 192.168.100.255 scope global virbr1
        6: virbr0    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

        [root@serv05 ~]# route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr1
        xxx.xxx.xx.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
        0.0.0.0         xxx.xxx.xx.62   0.0.0.0         UG    0      0        0 eth0

    I've also set up IP forwarding and masquerading this way:

        iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
        iptables --append FORWARD --in-interface virbr0 -j ACCEPT

    All works up to this point. If I want to remote-access vtest1 (or vtest2), I first ssh to serv05 and then from there ssh to vtest1. Is there a way to set up port forwarding so that vtest1 can be accessed directly from the outside world? This is what I probably need to set up:

        external_ip (tcp port 4444) -> DNAT -> 192.168.122.50 (tcp port 22)

    I know it's easily doable using a SOHO router, but I can't figure out how to do it on a Linux box. Any help from you guys?? Cheers!!

    Update 1: Now I've made ssh listen on both of the ports:

        [root@serv05 ssh]# netstat -tulpn | grep ssh
        tcp    0    0 xxx.xxx.xx.197:22      0.0.0.0:*    LISTEN    5092/sshd
        tcp    0    0 xxx.xxx.xx.197:4444    0.0.0.0:*    LISTEN    5092/sshd

    and port 4444 is allowed in the iptables rules:

        [root@serv05 sysconfig]# grep 4444 iptables
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 4444 -j DNAT --to-destination 192.168.122.50:22
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 4444 -j ACCEPT
        -A FORWARD -i eth0 -p tcp -m tcp --dport 4444 -j ACCEPT

    But I'm getting connection refused:

        maci:~ santa$ telnet serv05 4444
        Trying xxx.xxx.xx.197...
        telnet: connect to address xxx.xxx.xx.197: Connection refused
        telnet: Unable to connect to remote host

    Any idea what I'm still missing? Cheers!!
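    Two hedged observations on the update. First, having sshd itself listen on port 4444 is unnecessary once the PREROUTING DNAT is in place - the DNAT alone does the redirection, so sshd can go back to listening on 22 only. Second, a rule qualified with -i eth0 never matches traffic arriving on another interface, or traffic generated on serv05 itself, so where the telnet test runs from matters. A sketch of a minimal rule set (guest address as in the question, untested):

        iptables -t nat -A PREROUTING -p tcp --dport 4444 -j DNAT --to-destination 192.168.122.50:22
        iptables -A FORWARD -p tcp -d 192.168.122.50 --dport 22 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        echo 1 > /proc/sys/net/ipv4/ip_forward

    An alternative that avoids NAT entirely is local forwarding from the client: ssh -L 4444:192.168.122.50:22 user@serv05, then ssh -p 4444 localhost.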


  • IT merger - self-sufficient site with domain controller VS thin clients outpost with access to termi

    - by imagodei
    SITUATION: A larger company acquires a smaller one, and the IT infrastructure has to be merged. There are no immediate plans to change the current size or role of the smaller company - the offices and production remain. It has a Win 2003 SBS domain server, a Win 2000 file server, a Linux server for SVN and an internal wiki, 2 or 3 production machines, and an LTO backup solution. The servers are approx. 5 years old. Networking is Cisco equipment (switches, wireless, ASA). The mail solution is hosted Exchange. There are approx. 35 desktops and laptops in the company.

    IT infrastructure unification - there are two merging proposals:

    1.) Replace the old servers, install a Win Server 2008 domain controller, and set up either a subdomain or a domain trust to the larger company. The file server and other servers remain local, and synchronization should be set up to a centralized location at the larger company. Similarly with the backup: it remains local and, if needed, is replicated to a centralized location. Licensing is managed by the smaller company.

    2.) All servers are moved to a centralized location at the larger company. As many desktop machines as possible are replaced by thin clients; the actual machines are virtualized and hosted on a terminal server at the same central location, using Citrix solutions. Only the router and a site-to-site VPN connection remain at the smaller company. A backup internet line is needed to ensure near-100% availability. Licensing is mainly managed by the larger company; only specialized software for PCs that will not be virtualized is managed by the smaller company.

    I'd like to ask you to discuss both solutions a bit. In your opinion, which is better from the operational point of view? Which is more reliable and cheaper in the long run? Easier to manage from the system administrator's point of view? Easier on the budget and easier to maintain from the IT department's point of view? Does anybody have experience with the second option, and how does it perform in a production environment? Pros and cons of both? Your input will be of great significance to me. Thank you very much!


  • How should we serve files in a small bioinformatics cluster?

    - by cespinoza
    We have a small cluster of six Ubuntu servers, on which we run bioinformatics analyses. Each analysis takes about 24 hours to complete, each Core i7 server can handle 2 at a time, and each takes about 5 GB of data as input and outputs about 10-25 GB of data. We run dozens of these a week. The software is a hodgepodge of custom Perl scripts and 3rd-party sequence alignment software written in C/C++. Currently, files are served from two of the compute nodes (yes, we're using compute nodes as file servers): each node has 5 separately mounted 1 TB SATA drives (no RAID), pooled via GlusterFS 2.0.1. Each has 3 bonded Intel PCI gigabit Ethernet cards, attached to a D-Link DGS-1224T switch ($300, 24-port, consumer-level). We are not currently using jumbo frames (not sure why, actually). The two file-serving compute nodes are mirrored via GlusterFS, and each of the four other nodes mounts the files via GlusterFS. The files are all large (4 GB+), and are stored as bare files (no database, etc.) if that matters.

    As you can imagine, this is a bit of a mess that grew organically without forethought, and we want to improve it now that we're running out of space. Our analyses are I/O-intensive and storage is a bottleneck: we're only getting 140 MB/s between the two file servers, and maybe 50 MB/s from the clients (which only have single NICs). We have a flexible budget which I can probably get up to $5k or so.

    How should we spend our budget? We need at least 10 TB of storage fast enough to serve all nodes. How fast/big does the CPU/memory of such a file server have to be? Should we use NFS, ATA over Ethernet, iSCSI, GlusterFS, or something else? Should we buy two or more servers and create some sort of storage cluster, or is 1 server enough for such a small number of nodes? Should we invest in faster NICs (say, PCI Express cards with multiple connectors)? The switch? Should we use RAID? If so, hardware or software? And which RAID level (5, 6, 10, etc.)? Any ideas appreciated. We're biologists, not IT gurus.


  • Cooling Server Rack with Water? Sensible? Reuse energy for small installation?

    - by TomTom
    First - this is not a shopping question; it is not so much about concrete prices as about general feasibility. It makes no sense to go looking for a manufacturer if the approach is bad. I am moving my company to new offices in September, and among other things we will expand and consolidate our number-crunching cluster, which has so far been in a data center. I now have a nice room prepared in the basement, and I am thinking about cooling. We will likely run up a power usage of around 10 kW by the end of the year. That is a LOT of heat, and cooling will be expensive. I am located in south Poland, close to the German border. This is an area where water is available at a relatively cheap price - "wasting water" is not a concern here. My situation is thus a lot different than, for example, in Spain ;)

    Physics tells me that to heat 1 liter of water by 1 degree I use 1 Calorie (1 kcal), and a kWh is (and we can assume 100% efficiency - water heaters are pretty efficient) 750 Calories. That means that 1 kWh heats 750 liters by 1 degree. 10 kW and a 20-degree temperature rise would mean that per hour I need 375 liters. That is 6.25 liters per minute and not THAT much ;) We are talking about 270 cubic meters a month here. Even in summer, the significant run of underground pipes really cools down the water a LOT more ;)

    Question: is such an approach feasible? Anyone done that? We are talking about a 10 kW installation for now. Is it feasible to reuse that heat? The alternative is a decent cooling system that WILL use around 2.5 kW while running. Dumping the heat into water would basically (a) give me a quite cold input compared to the outside air, even in summer (i.e. a lower-temperature medium to dump the heat into), and (b) replace the need to actually have the outside cooling (which may be problematic - if the air is 22 degrees, that is a LOT to fight off, but OTOH the water will be quite cold). I would also possibly save the investment for the outside part of the cooling circuit. Now, second question: is there a feasible way to heat a house with that? ;) After all, brutally speaking, there is a LOT of energy in that water ;) If it is a bad idea, I stop here; if it is not, I start looking for suppliers. Maybe my math is wrong?
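    For what it's worth, a quick check of the arithmetic with standard constants: 1 kWh is about 860 kcal, not 750 (1 kWh = 3.6 MJ and 1 kcal = 4.186 kJ), so the required flow is somewhat higher than estimated above:

        \[
        \dot{m} = \frac{P}{c\,\Delta T}
                = \frac{10\ \mathrm{kW}}{4.186\ \mathrm{kJ/(kg \cdot K)} \times 20\ \mathrm{K}}
                \approx 0.12\ \mathrm{kg/s} \approx 7.2\ \mathrm{L/min} \approx 430\ \mathrm{L/h}
        \]

    That works out to roughly 310 cubic meters per month of continuous operation rather than 270, which does not change the feasibility picture much.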


  • Display is slightly blurry on (native) 1920x1080 resolution

    - by Martin Tuskevicius
    I have a computer monitor that is approximately 23" in size. Its native resolution is 1920x1080, and Windows 7 will not allow it to be any higher. However, I cannot make the resolution a little lower as well. When I right-click on my desktop and select 'Screen resolution,' the vertical slider has only two options: 1920x1080 and 1280x720. There are no real problems that I am having besides the fact that the image is slightly blurry. I can easily make things out and see them, but I definitely feel that the image is not as clear as it could be. My graphics card is an ATI Radeon HD 5450 and it has the latest graphics drivers installed. I've tried playing around with the AMD VISION Engine Control Center to see if I can change an option to make the image clearer, but I had no luck.

    I did find one odd thing, though. When I lowered the refresh rate from 60Hz to 50Hz, the image kind of "zoomed in", but it also became perfectly clear, like I would expect it to look. The problem is that when I use 50Hz, the image zooms in a little on the center and I lose maybe an inch and a half of the screen (I do not see the bar at the top of applications, I do not see the Windows taskbar, etc). I figured if I could somehow zoom in so that the entire image fills the screen (not the slightly cropped version), then I would have the perfectly crisp image of 50Hz and also the uncropped image of 60Hz. However, upon zooming in, the image began to look blurry again, just like it did with 60Hz. So I am at a loss here. I do not know how to make the image look as clear as it should. I have the latest drivers (I updated them today) and I know that my monitor supports the resolution that I am trying to use. Has anybody experienced something like this before? I'd really appreciate any input - thanks!

    Update: I have figured out how to make the display look crisp! I set it to the 50Hz option, and then I changed the scaling through the monitor itself, rather than in software. Now, however, I am finding that games look pretty bad, because since the image is clear, the lower quality really becomes apparent. I cannot run new games at 1080p, so I run them at the lowest resolution possible (1280x720, since it is the only other option offered, as I have mentioned). So I am wondering, is there a way to have Windows display more resolution options?



  • ffmpeg - join / merge on top of each other

    - by AisIceEyes
    I'm trying to join two videos together, one on top of the other. I already ran these two ffmpeg commands:

        ffmpeg -i 2_Out_of_Control.VOB -aspect 16:9 \
          -vf "yadif=0:-1:0,crop=w=714:h=476:x=6:y=0,scale=1280:720,boxblur=lp=13" \
          -c:v libx264 -preset medium \
          -c:a copy \
          '2(blurred)Out_of_Control.mp4'

        ffmpeg -i 2_Out_of_Control.VOB \
          -vf "yadif=0:-1:0,crop=w=714:h=476:x=6:y=0,scale=1080:720" \
          -c:v libx264 -preset medium \
          -c:a copy \
          '2(clear)Out_of_Control.mp4'

    I'm currently stuck on putting the "clear" version on top of the "blurred" version; I'm not sure how to do that. Can anybody help, please? I've been googling for around 2 days already. I only managed it with OpenShot, but I would prefer an ffmpeg command that merges the two videos on top of each other.

    Edit: I want the "clear" video centered at the top of the "blurred" video.

    Edit 2: the console invocation would be the same shape as above:

        ffmpeg -i '2(blurred)Out_of_Control.mp4' \
          -i '2(clear)Out_of_Control.mp4' \
          -aspect 16:9 \
          -vf <just something that will join the two together: the blurred at the bottom, clear at top that is centered> \
          -c:v libx264 -preset medium \
          -c:a copy \
          '2_Out_of_Control_VOB.mp4'

    Edit 3: here is the output when I used ffmpeg -i 2_Out_of_Control.VOB:

        $ ffmpeg -i 2_Out_of_Control.VOB
        ffmpeg version git-2013-10-03-c7fe2a3 Copyright (c) 2000-2013 the FFmpeg developers
          built on Oct 4 2013 05:22:06 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5)
          configuration: --prefix=/home/username/ffmpeg_build --extra-cflags=-I/home/username/ffmpeg_build/include --extra-ldflags=-L/home/username/ffmpeg_build/lib --bindir=/home/username/bin --extra-libs=-ldl --enable-gpl --enable-libass --enable-libfdk-aac --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-x11grab
          libavutil      52. 46.100 / 52. 46.100
          libavcodec     55. 34.100 / 55. 34.100
          libavformat    55. 19.100 / 55. 19.100
          libavdevice    55.  3.100 / 55.  3.100
          libavfilter     3. 88.101 /  3. 88.101
          libswscale      2.  5.100 /  2.  5.100
          libswresample   0. 17.103 /  0. 17.103
          libpostproc    52.  3.100 / 52.  3.100
        Input #0, mpeg, from '2_Out_of_Control.VOB':
          Duration: 00:05:00.01, start: 0.500000, bitrate: 4574 kb/s
            Stream #0:0[0x1e0]: Video: mpeg2video (Main), yuv420p(tv, smpte170m), 720x480 [SAR 8:9 DAR 4:3], max. 9334 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
            Stream #0:1[0x80]: Audio: ac3, 48000 Hz, stereo, fltp, 384 kb/s
        At least one output file must be specified
        $
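    Overlaying one input on another needs -filter_complex rather than -vf. A sketch that should match the goal in the edits - the clear video drawn on top of the blurred one, horizontally centered at the top edge (untested against these exact files):

        ffmpeg -i '2(blurred)Out_of_Control.mp4' -i '2(clear)Out_of_Control.mp4' \
          -filter_complex "[0:v][1:v]overlay=x=(W-w)/2:y=0" \
          -c:v libx264 -preset medium \
          -c:a copy \
          '2_Out_of_Control_VOB.mp4'

    In the overlay expression, W and w are the widths of the main and overlaid video respectively, so (W-w)/2 centers the 1080-wide clear clip on the 1280-wide blurred one; audio is copied from the first input by default mapping.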


  • Hosting the Razor Engine for Templating in Non-Web Applications

    - by Rick Strahl
    Microsoft's new Razor HTML rendering engine, currently shipping with the ASP.NET MVC previews, can be used outside of ASP.NET. Razor is an alternative view engine that can be used instead of the ASP.NET Page engine that currently works with ASP.NET WebForms and MVC. It provides a simpler and more readable markup syntax and is much more lightweight in terms of functionality than the full-blown WebForms Page engine, focusing only on features that are more along the lines of a pure view engine (or classic ASP!), with a focus on expression and code rendering rather than a complex control/object model. Like the Page engine, though, the parser understands .NET code syntax, which can be embedded into templates, and behind the scenes the engine compiles markup and script code into an executing piece of .NET code in an assembly. Although it ships as part of ASP.NET MVC and WebMatrix, the Razor Engine itself is not directly dependent on ASP.NET, IIS or HTTP in any way. And although there are some markup and rendering features optimized for HTML-based output generation, Razor is essentially a free-standing template engine. What's really nice is that, unlike the ASP.NET Runtime, Razor is fairly easy to host inside of your own non-web applications to provide templating functionality.

    Templating in non-Web Applications? Yes please!

    So why might you host a template engine in your non-web application? Template rendering is useful in many places, and I have a number of applications that make heavy use of it. One of my applications - West Wind Html Help Builder - exclusively uses template-based rendering to merge user-supplied help text content into customizable and executable HTML markup templates that provide HTML output for CHM-style HTML Help. This is an older product that is not actually using .NET at the moment - one reason I'm looking at Razor for script hosting right now. For a few .NET applications I've actually used ASP.NET Runtime hosting to provide templating and mail-merge-style functionality, and while that works reasonably well, it's a very heavy-handed approach: it is resource intensive and has potential versioning issues across different versions of .NET. The generic implementation I created in the article above requires a lot of fix-up to mimic an HTTP request in a non-HTTP environment, and a lot of little things have to happen to ensure that the ASP.NET runtime works properly, most of them having nothing to do with the templating aspect and everything to do with satisfying ASP.NET's requirements. The Razor Engine, on the other hand, is fairly lightweight and completely decoupled from the ASP.NET runtime and HTTP processing. Rather, it's a pure template engine whose sole purpose is to render text templates. Hosting this engine in your own applications can be accomplished with a reasonable amount of code (actually just a few lines with the tools I'm about to describe) and without having to fake HTTP requests. It's also much lighter on resource usage, and you can easily attach custom properties to your base template implementation to pass context from the parent application into templates - all of which was rather complicated with ASP.NET runtime hosting.

    Installing the Razor Template Engine

    You can get Razor as part of MVC 3 (RC and later) or Web Matrix. Both are available as downloadable components from the Web Platform Installer version 3.0 (important: V2 doesn't show these components).
    If you already have that version of the WPI installed, just fire it up. You can get the latest version of the Web Platform Installer from here: http://www.microsoft.com/web/gallery/install.aspx

    Once Platform Installer 3.0 is installed, install either MVC 3 or ASP.NET Web Pages. Once installed, you'll find a System.Web.Razor assembly in C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\System.Web.Razor.dll, which you can add as a reference to your project.

    Creating a Wrapper

    The basic Razor hosting API is pretty simple and you can host Razor with a (large-ish) handful of lines of code; I'll show the basics of it later in this article. However, if you want to customize the rendering, handle assembly and namespace includes for the markup, deal with text and file inputs, force Razor to run in a separate AppDomain so you can unload the code-generated assemblies, and handle assembly caching for re-used templates, a little more work is required to create something that is more easily reusable. For this reason I created a Razor hosting wrapper project that combines a bunch of this functionality into an easy-to-use hosting class, a hosting factory that can load the engine in a separate AppDomain, and a couple of hosting containers that provide folder-based and string-based caching for templates - an easily embeddable and reusable engine with easy-to-use syntax. If you just want the code and want to play with the samples and source, grab the latest code from the Subversion repository at: http://www.west-wind.com:8080/svn/articles/trunk/RazorHosting/ or a snapshot from: http://www.west-wind.com/files/tools/RazorHosting.zip

    Getting Started

    Before I get into how hosting with Razor works, let's take a look at how you can get up and running quickly with the wrapper classes provided. It only takes a few lines of code. The easiest way to use these Razor hosting wrappers is to use one of the two HostContainers provided. One hosts Razor scripts in a directory and renders them from those script files on disk via relative paths; the other serves Razor scripts from string templates. Let's start with a very simple template that displays some simple expressions and code blocks, and demonstrates rendering some data from contextual data that you pass to the template in the form of a 'context'. Here's a simple Razor template:

        @using System.Reflection
        Hello @Context.FirstName! Your entry was entered on: @Context.Entered

        @{
            // Code block: Update the host Windows Form passed in through the context
            Context.WinForm.Text = "Hello World from Razor at " + DateTime.Now.ToString();
        }

        AppDomain Id: @AppDomain.CurrentDomain.FriendlyName
        Assembly: @Assembly.GetExecutingAssembly().FullName

        Code based output:
        @{
            // Write output with Response object from code
            string output = string.Empty;
            for (int i = 0; i < 10; i++)
            {
                output += i.ToString() + " ";
            }
            Response.Write(output);
        }

    Pretty easy to see what's going on here. The only unusual thing in this code is the Context object, which is an arbitrary object I'm passing from the host to the template by way of the template base class. I'm also displaying the current AppDomain and the executing assembly name so you can see how compiling and running a template actually loads up new assemblies. Also note that as part of my context I'm passing a reference to the current Windows Form down to the template and changing the title from within the script.
    It's a silly example, but it demonstrates two-way communication between host and template and back, which can be very powerful. The easiest way to quickly render this template is to use the RazorEngine<TBaseTemplateType> class. The generic parameter specifies a template base class type that Razor uses internally for the class it generates from a template. The default implementation provided in my RazorHosting wrapper is RazorTemplateBase. Here's a simple one that renders from a string and outputs a string:

        var engine = new RazorEngine<RazorTemplateBase>();

        // we can pass any object as context - here create a custom context
        var context = new CustomContext()
        {
            WinForm = this,
            FirstName = "Rick",
            Entered = DateTime.Now.AddDays(-10)
        };

        string output = engine.RenderTemplate(this.txtSource.Text,
            new string[] { "System.Windows.Forms.dll" },
            context);

        if (output == null)
            this.txtResult.Text = "*** ERROR:\r\n" + engine.ErrorMessage;
        else
            this.txtResult.Text = output;

    Simple enough. This code renders a template from a string input and returns a result back as a string. It creates a custom context and passes that to the template, which can then access the context's properties. Note that anything passed as 'context' must be serializable (or MarshalByRefObject); otherwise you get an exception when passing the reference over AppDomain boundaries (discussed later). Passing a context is optional, but it is a key feature for sharing data between the host application and the template. Note that we use the Context object to access FirstName, Entered and even the host Windows Form object, which is used in the template to change the window caption from within the script! In the code above all the work happens in the RenderTemplate method, which provides a variety of overloads to read and write to and from strings, files and TextReaders/Writers. Here's another example that renders from a file input using a TextReader:

        using (reader = new StreamReader("templates\\simple.csHtml", true))
        {
            result = host.RenderTemplate(reader,
                new string[] { "System.Windows.Forms.dll" },
                this.CustomContext);
        }

    RenderTemplate() is fairly high level and it handles loading of the runtime, compiling into an assembly and rendering of the template. If you want more control, you can use the lower-level methods to control each step of the way, which is important for the HostContainers I'll discuss later. Basically, for those scenarios you want to separate out loading of the engine, compiling into an assembly, and then rendering the template from the assembly. Why? So we can keep assemblies cached. In the code above a new assembly is created for each template rendered, which is inefficient and uses up resources. Depending on the size of your templates and how often you fire them, you can chew through memory very quickly.
    This slightly lower-level approach is only a couple of extra steps:

        // we can pass any object as context - here create a custom context
        var context = new CustomContext()
        {
            WinForm = this,
            FirstName = "Rick",
            Entered = DateTime.Now.AddDays(-10)
        };

        var engine = new RazorEngine<RazorTemplateBase>();

        string assId = null;
        using (StringReader reader = new StringReader(this.txtSource.Text))
        {
            assId = engine.ParseAndCompileTemplate(
                new string[] { "System.Windows.Forms.dll" },
                reader);
        }

        string output = engine.RenderTemplateFromAssembly(assId, context);

        if (output == null)
            this.txtResult.Text = "*** ERROR:\r\n" + engine.ErrorMessage;
        else
            this.txtResult.Text = output;

    The difference here is that you can capture the assembly - or rather an Id to it - and potentially hold on to it to render again later, assuming the template hasn't changed. The HostContainers take advantage of this feature to cache the assemblies based on certain criteria, like a filename and file timestamp or a string hash that, if unchanged, indicates that an assembly can be reused. Note that ParseAndCompileTemplate returns an assembly Id rather than the assembly itself. This is done so that the assembly always stays in the host's AppDomain and is not passed across AppDomain boundaries, which would cause load failures. We'll talk more about this in a minute, but for now just realize that assembly references are stored in a list and are accessible by this Id, to allow locating and re-executing the assembly based on that Id. Reuse of the assembly avoids recompilation overhead and creation of yet another assembly that loads into the current AppDomain. You can play around with several different versions of the above code in the main sample form.

    Using Hosting Containers for more Control and Caching

    The above examples simply render templates into assemblies each and every time they are executed. While this works and is even reasonably fast, it's not terribly efficient. If you render templates more than once, it would be nice if you could cache the generated assemblies, for example, to avoid re-compiling and creating a new assembly each time. Additionally, it would be nice to load template assemblies into a separate AppDomain, optionally, to be able to unload assemblies and also to protect your host application from scripting attacks with malicious template code. Hosting containers also provide a wrapper around the RazorEngine<T> instance, a factory (which allows creation in separate AppDomains) and an easy way to start and stop the container 'runtime'. The Razor hosting samples provide two hosting containers: RazorFolderHostContainer and StringHostContainer. The folder host provides a simple runtime environment for a folder structure, similar to the way that the ASP.NET runtime handles a virtual directory as its 'application' root. Templates are loaded from disk via relative paths and the resulting assemblies are cached unless the template on disk is changed. The string host also caches templates based on string hashes - if the same string is passed a second time, a cached version of the assembly is used. Here's how HostContainers work. I'll use the FolderHostContainer because it's likely the most common way you'd use templates - from disk-based templates that can be easily edited and maintained on disk.
    The first step is to create an instance of it and keep it around somewhere (in the example it's attached as a property to the form):

        RazorFolderHostContainer Host = new RazorFolderHostContainer();

        public RazorFolderHostForm()
        {
            InitializeComponent();

            // The base path for templates - templates are rendered with relative paths
            // based on this path.
            Host.TemplatePath = Path.Combine(Environment.CurrentDirectory, TemplateBaseFolder);

            // Add any assemblies you want referenced in your templates
            Host.ReferencedAssemblies.Add("System.Windows.Forms.dll");

            // Start up the host container
            Host.Start();
        }

    Next, any time you want to render a template you can use simple code like this:

        private void RenderTemplate(string fileName)
        {
            // Pass the template path via the Context
            var relativePath = Utilities.GetRelativePath(fileName, Host.TemplatePath);
            if (!Host.RenderTemplate(relativePath, this.Context, Host.RenderingOutputFile))
            {
                MessageBox.Show("Error: " + Host.ErrorMessage);
                return;
            }
            this.webBrowser1.Navigate("file://" + Host.RenderingOutputFile);
        }

    You can also render the output to a string instead of to a file:

        string result = Host.RenderTemplateToString(relativePath, context);

    Finally, if you want to release the engine and shut down the hosting AppDomain, you can simply do:

        Host.Stop();

    Stopping the AppDomain and restarting it (i.e. calling Stop() followed by Start()) is also a nice way to release all resources in the AppDomain. The folder-based host also supports partial rendering based on root-path-relative paths, with the same caching characteristics as the main templates. From within a template you can call out to a partial like this:

        @RenderPartial(@"partials\PartialRendering.cshtml", Context)

    where partials\PartialRendering.cshtml is relative to the template root folder. The folder host example lets you load up templates from disk and display the result in a Web Browser control, which demonstrates using Razor HTML output from templates that contain HTML syntax - which happens to be my target scenario for Html Help Builder.

    The Razor Engine Wrapper Project

    The project I created to wrap Razor hosting has a fair bit of code and a number of classes associated with it. Most of the components are used internally, and as you can see, using the final RazorEngine<T> and HostContainer classes is pretty easy. The classes are extensible, and I suspect developers will want to build more customized host containers for their applications. Host containers are the key to wrapping up all functionality - engine, base template, AppDomain hosting, caching, etc. - in a logical piece that is ready to be plugged into an application. When looking at the code, there are a couple of core features provided:

    Core Razor Engine Hosting

    This is the core Razor hosting, which provides the basics of loading a template, compiling it into an assembly and executing it. This is fairly straightforward, but without a host container that can cache assemblies based on some criteria, templates are recompiled and re-created each time, which is inefficient (although pretty fast). The base engine wrapper implementation also supports hosting the Razor runtime in a separate AppDomain, for security and for the ability to unload it on demand.

    Host Containers

    The engine hosting itself doesn't provide any sort of 'runtime' service like picking up files from disk, caching assemblies and so forth. So my implementation provides two HostContainers: RazorFolderHostContainer and RazorStringHostContainer.
    The FolderHost works off a base directory and loads templates based on relative paths (sort of like the ASP.NET runtime does off a virtual directory). The HostContainers also deal with caching of template assemblies - for the folder host the file date is tracked and checked for updates, and unless the template has changed, a cached assembly is reused. The StringHostContainer similarly checks string hashes to figure out whether a particular string template was previously compiled and executed. The HostContainers also act as a simple startup environment and a single reference to easily store and reuse in an application.

    TemplateBase Classes

    The template base classes are the base classes from which the Razor engine generates .NET code. A template is parsed into a class with an Execute() method, and the class is based on the template type you specify. RazorEngine<TBaseTemplate> can receive this type, and the HostContainers default to specific templates in their base implementations. Template classes are customizable to allow you to create templates that provide application-specific features and interaction from the template to your host application.

    How does the RazorEngine wrapper work?

    You can browse the source code in the links above or in the repository, or download the source, but I'll highlight some key features here. Here's part of the RazorEngine implementation that can be used to host the runtime and that demonstrates the key code required to host the Razor runtime. The RazorEngine class is implemented as a generic class to reflect the template base class type:

        public class RazorEngine<TBaseTemplateType> : MarshalByRefObject
            where TBaseTemplateType : RazorTemplateBase

    The generic type is used internally to provide easier access to the template type and assignments on it as part of the template processing. The class also inherits MarshalByRefObject to allow execution over AppDomain boundaries - something that all the classes discussed here need to do, since there is much interaction between the host and the template. The first two key methods deal with creating a template assembly:

        /// <summary>
        /// Creates an instance of the RazorHost with various options applied.
        /// Applies basic namespace imports and the name of the class to generate
        /// </summary>
        /// <param name="generatedNamespace"></param>
        /// <param name="generatedClass"></param>
        /// <returns></returns>
        protected RazorTemplateEngine CreateHost(string generatedNamespace, string generatedClass)
        {
            Type baseClassType = typeof(TBaseTemplateType);

            RazorEngineHost host = new RazorEngineHost(new CSharpRazorCodeLanguage());
            host.DefaultBaseClass = baseClassType.FullName;
            host.DefaultClassName = generatedClass;
            host.DefaultNamespace = generatedNamespace;
            host.NamespaceImports.Add("System");
            host.NamespaceImports.Add("System.Text");
            host.NamespaceImports.Add("System.Collections.Generic");
            host.NamespaceImports.Add("System.Linq");
            host.NamespaceImports.Add("System.IO");

            return new RazorTemplateEngine(host);
        }

        /// <summary>
        /// Parses and compiles a markup template into an assembly and returns
        /// an assembly name. The name is an ID that can be passed to
        /// ExecuteTemplateByAssembly which picks up a cached instance of the
        /// loaded assembly.
        /// </summary>
        /// <param name="namespaceOfGeneratedClass">The namespace of the class to generate from the template</param>
        /// <param name="generatedClassName">The name of the class to generate from the template</param>
        /// <param name="ReferencedAssemblies">Any referenced assemblies by dll name only. Assemblies must be in execution path of host or in GAC.</param>
        /// <param name="templateSourceReader">Textreader that loads the template</param>
        /// <remarks>
        /// The actual assembly isn't returned here to allow for cross-AppDomain
        /// operation. If the assembly was returned it would fail for cross-AppDomain
        /// calls.
        /// </remarks>
        /// <returns>An assembly Id. The Assembly is cached in memory and can be used with RenderFromAssembly.</returns>
        public string ParseAndCompileTemplate(
            string namespaceOfGeneratedClass,
            string generatedClassName,
            string[] ReferencedAssemblies,
            TextReader templateSourceReader)
        {
            RazorTemplateEngine engine = CreateHost(namespaceOfGeneratedClass, generatedClassName);

            // Generate the template class as CodeDom
            GeneratorResults razorResults = engine.GenerateCode(templateSourceReader);

            // Create code from the codeDom and compile
            CSharpCodeProvider codeProvider = new CSharpCodeProvider();
            CodeGeneratorOptions options = new CodeGeneratorOptions();

            // Capture Code Generated as a string for error info
            // and debugging
            LastGeneratedCode = null;
            using (StringWriter writer = new StringWriter())
            {
                codeProvider.GenerateCodeFromCompileUnit(razorResults.GeneratedCode, writer, options);
                LastGeneratedCode = writer.ToString();
            }

            CompilerParameters compilerParameters = new CompilerParameters(ReferencedAssemblies);

            // Standard Assembly References
            compilerParameters.ReferencedAssemblies.Add("System.dll");
            compilerParameters.ReferencedAssemblies.Add("System.Core.dll");
            compilerParameters.ReferencedAssemblies.Add("Microsoft.CSharp.dll");   // dynamic support!

            // Also add the current assembly so RazorTemplateBase is available
            compilerParameters.ReferencedAssemblies.Add(Assembly.GetExecutingAssembly().CodeBase.Substring(8));

            compilerParameters.GenerateInMemory = Configuration.CompileToMemory;
            if (!Configuration.CompileToMemory)
                compilerParameters.OutputAssembly = Path.Combine(Configuration.TempAssemblyPath,
                    "_" + Guid.NewGuid().ToString("n") + ".dll");

            CompilerResults compilerResults = codeProvider.CompileAssemblyFromDom(compilerParameters, razorResults.GeneratedCode);
            if (compilerResults.Errors.Count > 0)
            {
                var compileErrors = new StringBuilder();
                foreach (System.CodeDom.Compiler.CompilerError compileError in compilerResults.Errors)
                    compileErrors.Append(String.Format(Resources.LineX0TColX1TErrorX2RN,
                        compileError.Line, compileError.Column, compileError.ErrorText));
                this.SetError(compileErrors.ToString() + "\r\n" + LastGeneratedCode);
                return null;
            }

            AssemblyCache.Add(compilerResults.CompiledAssembly.FullName, compilerResults.CompiledAssembly);

            return compilerResults.CompiledAssembly.FullName;
        }

    Think of the internal CreateHost() method as setting up the assembly generated from each template. Each template compiles into a separate assembly. It sets up namespaces, assembly references, the base class used, and the name and namespace for the generated class. ParseAndCompileTemplate() then calls the CreateHost() method to receive the template engine generator, which effectively generates a CodeDom from the template - the template is turned into .NET code.
    The code generated from our earlier example looks something like this:

        //------------------------------------------------------------------------------
        // <auto-generated>
        //     This code was generated by a tool.
        //     Runtime Version:4.0.30319.1
        //
        //     Changes to this file may cause incorrect behavior and will be lost if
        //     the code is regenerated.
        // </auto-generated>
        //------------------------------------------------------------------------------

        namespace RazorTest
        {
            using System;
            using System.Text;
            using System.Collections.Generic;
            using System.Linq;
            using System.IO;
            using System.Reflection;

            public class RazorTemplate : RazorHosting.RazorTemplateBase
            {
        #line hidden
                public RazorTemplate()
                {
                }

                public override void Execute()
                {
                    WriteLiteral("Hello ");
                    Write(Context.FirstName);
                    WriteLiteral("! Your entry was entered on: ");
                    Write(Context.Entered);
                    WriteLiteral("\r\n\r\n");

                    // Code block: Update the host Windows Form passed in through the context
                    Context.WinForm.Text = "Hello World from Razor at " + DateTime.Now.ToString();

                    WriteLiteral("\r\nAppDomain Id:\r\n    ");
                    Write(AppDomain.CurrentDomain.FriendlyName);
                    WriteLiteral("\r\n    \r\nAssembly:\r\n    ");
                    Write(Assembly.GetExecutingAssembly().FullName);
                    WriteLiteral("\r\n\r\nCode based output: \r\n");

                    // Write output with Response object from code
                    string output = string.Empty;
                    for (int i = 0; i < 10; i++)
                    {
                        output += i.ToString() + " ";
                    }
                }
            }
        }

    Basically the template's body is turned into code in an Execute method that is called. Internally the template's Write method is fired to actually generate the output. Note that the class inherits from RazorTemplateBase, which is the generic parameter I used to specify the base class when creating an instance in my RazorEngine host:

        var engine = new RazorEngine<RazorTemplateBase>();

    This template class must be provided and it must implement an Execute() and a Write() method. Beyond that you can create any class you choose and attach your own properties. My RazorTemplateBase class implementation is very simple:

        public class RazorTemplateBase : MarshalByRefObject, IDisposable
        {
            /// <summary>
            /// You can pass in a generic context object
            /// to use in your template code
            /// </summary>
            public dynamic Context { get; set; }

            /// <summary>
            /// Class that generates output. Currently ultra simple
            /// with only Response.Write() implementation.
            /// </summary>
            public RazorResponse Response { get; set; }

            public object HostContainer { get; set; }
            public object Engine { get; set; }

            public RazorTemplateBase()
            {
                Response = new RazorResponse();
            }

            public virtual void Write(object value)
            {
                Response.Write(value);
            }

            public virtual void WriteLiteral(object value)
            {
                Response.Write(value);
            }

            /// <summary>
            /// Razor Parser implements this method
            /// </summary>
            public virtual void Execute() { }

            public virtual void Dispose()
            {
                if (Response != null)
                {
                    Response.Dispose();
                    Response = null;
                }
            }
        }

    Razor fills in the Execute method when it generates its subclass and uses the Write() method to output content. As you can see, I use a RazorResponse() class here to generate output. This isn't strictly necessary, as you could use a StringBuilder or StringWriter() directly, but I prefer using a Response object so I can extend the Response behavior as needed.
The RazorResponse class is also very simple and merely acts as a wrapper around a TextWriter:

public class RazorResponse : IDisposable
{
    /// <summary>
    /// Internal text writer - defaults to StringWriter()
    /// </summary>
    public TextWriter Writer = new StringWriter();

    public virtual void Write(object value)
    {
        Writer.Write(value);
    }

    public virtual void WriteLine(object value)
    {
        Write(value);
        Write("\r\n");
    }

    public virtual void WriteFormat(string format, params object[] args)
    {
        Write(string.Format(format, args));
    }

    public override string ToString()
    {
        return Writer.ToString();
    }

    public virtual void Dispose()
    {
        Writer.Close();
    }

    public virtual void SetTextWriter(TextWriter writer)
    {
        // Close original writer
        if (Writer != null)
            Writer.Close();

        Writer = writer;
    }
}

The Rendering Methods of RazorEngine

At this point I've talked about the assembly generation logic and the template implementation itself. What's left, once you've generated the assembly, is to execute it. The code to do this is handled in the various RenderXXX methods of the RazorEngine class. Let's look at the lowest level one of these, which is RenderTemplateFromAssembly(), and a couple of internal support methods that handle instantiating and invoking of the generated template method:

public string RenderTemplateFromAssembly(
    string assemblyId,
    string generatedNamespace,
    string generatedClass,
    object context,
    TextWriter outputWriter)
{
    this.SetError();

    Assembly generatedAssembly = AssemblyCache[assemblyId];
    if (generatedAssembly == null)
    {
        this.SetError(Resources.PreviouslyCompiledAssemblyNotFound);
        return null;
    }

    string className = generatedNamespace + "." + generatedClass;

    Type type;
    try
    {
        type = generatedAssembly.GetType(className);
    }
    catch (Exception ex)
    {
        this.SetError(Resources.UnableToCreateType + className + ": " + ex.Message);
        return null;
    }

    // Start with empty non-error response (if we use a writer)
    string result = string.Empty;

    using (TBaseTemplateType instance = InstantiateTemplateClass(type))
    {
        if (instance == null)
            return null;

        if (outputWriter != null)
            instance.Response.SetTextWriter(outputWriter);

        if (!InvokeTemplateInstance(instance, context))
            return null;

        // Capture string output if implemented and return
        // otherwise null is returned
        if (outputWriter == null)
            result = instance.Response.ToString();
    }

    return result;
}

protected virtual TBaseTemplateType InstantiateTemplateClass(Type type)
{
    TBaseTemplateType instance = Activator.CreateInstance(type) as TBaseTemplateType;

    if (instance == null)
    {
        SetError(Resources.CouldnTActivateTypeInstance + type.FullName);
        return null;
    }

    instance.Engine = this;

    // If a HostContainer was set pass that to the template too
    instance.HostContainer = this.HostContainer;

    return instance;
}

/// <summary>
/// Internally executes an instance of the template,
/// captures errors on execution and returns true or false
/// </summary>
/// <param name="instance">An instance of the generated template</param>
/// <returns>true or false - check ErrorMessage for errors</returns>
protected virtual bool InvokeTemplateInstance(TBaseTemplateType instance, object context)
{
    try
    {
        instance.Context = context;
        instance.Execute();
    }
    catch (Exception ex)
    {
        this.SetError(Resources.TemplateExecutionError + ex.Message);
        return false;
    }
    finally
    {
        // Must make sure Response is closed
        instance.Response.Dispose();
    }

    return true;
}

The RenderTemplateFromAssembly method basically requires the namespace and class to instantiate, and creates an instance of the class using InstantiateTemplateClass().
It then invokes the template's Execute() method via InvokeTemplateInstance(). These two methods are broken out because they are re-used by various other rendering methods, and also to allow subclassing and providing additional configuration tasks to set properties and pass values to templates at execution time. In the default mode, instantiation sets the Engine and HostContainer (discussed later) so the template can call back into the template engine, and the context is set when the template method is invoked.

The various RenderXXX methods use similar code, although they create the assemblies first. If you're after potentially caching assemblies, this method is the one to call – and that's exactly what the two HostContainer classes do. More on that in a minute, but before we get into HostContainers let's talk about AppDomain hosting and the like.

Running Templates in their own AppDomain

With the RazorEngine class above, when a template is parsed into an assembly and executed, the assembly is created (in memory or on disk – you can configure that) and cached in the current AppDomain. In .NET, once an assembly has been loaded it can never be unloaded, so if you're loading lots of templates and at some time you want to release them, there's no way to do so. If however you load the assemblies in a separate AppDomain, that new AppDomain can be unloaded, and the assemblies loaded in it along with it.

In order to host the templates in a separate AppDomain, the easiest thing to do is to run the entire RazorEngine in a separate AppDomain. Then all interaction occurs in the other AppDomain and no further changes have to be made. To facilitate this there is a RazorEngineFactory which has methods that can instantiate the RazorHost in a separate AppDomain as well as in the local AppDomain. The host creates the remote instance and then hangs on to it to keep it alive, as well as providing methods to shut down the AppDomain and reload the engine. Sounds complicated, but cross-AppDomain invocation is actually fairly easy to implement.

Here's some of the relevant code from the RazorEngineFactory class. Like the RazorEngine, this class is generic and requires a template base type in the generic class name:

public class RazorEngineFactory<TBaseTemplateType>
    where TBaseTemplateType : RazorTemplateBase

Here are the key methods of interest:

/// <summary>
/// Creates an instance of the RazorHost in a new AppDomain. This
/// version creates a static singleton that is cached and you
/// can call UnloadRazorHostInAppDomain to unload it.
/// </summary>
/// <returns></returns>
public static RazorEngine<TBaseTemplateType> CreateRazorHostInAppDomain()
{
    if (Current == null)
        Current = new RazorEngineFactory<TBaseTemplateType>();

    return Current.GetRazorHostInAppDomain();
}

public static void UnloadRazorHostInAppDomain()
{
    if (Current != null)
        Current.UnloadHost();
    Current = null;
}

/// <summary>
/// Instance method that creates a RazorHost in a new AppDomain.
/// This method requires that you keep the Factory around in
/// order to keep the AppDomain alive and be able to unload it.
/// </summary>
/// <returns></returns>
public RazorEngine<TBaseTemplateType> GetRazorHostInAppDomain()
{
    LocalAppDomain = CreateAppDomain(null);
    if (LocalAppDomain == null)
        return null;

    /// Create the instance inside of the new AppDomain
    /// Note: remote domain uses local EXE's AppBasePath!!!
    RazorEngine<TBaseTemplateType> host = null;
    try
    {
        Assembly ass = Assembly.GetExecutingAssembly();
        string AssemblyPath = ass.Location;
        host = (RazorEngine<TBaseTemplateType>)LocalAppDomain.CreateInstanceFrom(AssemblyPath,
                    typeof(RazorEngine<TBaseTemplateType>).FullName).Unwrap();
    }
    catch (Exception ex)
    {
        ErrorMessage = ex.Message;
        return null;
    }

    return host;
}

/// <summary>
/// Internally creates a new AppDomain in which Razor templates can
/// be run.
/// </summary>
/// <param name="appDomainName"></param>
/// <returns></returns>
private AppDomain CreateAppDomain(string appDomainName)
{
    if (appDomainName == null)
        appDomainName = "RazorHost_" + Guid.NewGuid().ToString("n");

    AppDomainSetup setup = new AppDomainSetup();

    // *** Point at current directory
    setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;

    AppDomain localDomain = AppDomain.CreateDomain(appDomainName, null, setup);

    return localDomain;
}

/// <summary>
/// Allows unloading of the created AppDomain to release resources.
/// All internal resources in the AppDomain are released, including
/// in-memory compiled Razor assemblies.
/// </summary>
public void UnloadHost()
{
    if (this.LocalAppDomain != null)
    {
        AppDomain.Unload(this.LocalAppDomain);
        this.LocalAppDomain = null;
    }
}

The static CreateRazorHostInAppDomain() is the key method that startup code usually calls. It uses the static Current singleton to hold an instance of itself, which is created cross-AppDomain and kept alive because it is static. GetRazorHostInAppDomain() actually creates the cross-AppDomain instance: it first creates a new AppDomain and then loads the RazorEngine into it. The remote proxy instance is returned as the result of the method and can be used the same as a local instance.

The code to run with a remote AppDomain is simple:

private RazorEngine<RazorTemplateBase> CreateHost()
{
    if (this.Host != null)
        return this.Host;

    // Use static methods - no error message if host doesn't load
    this.Host = RazorEngineFactory<RazorTemplateBase>.CreateRazorHostInAppDomain();

    if (this.Host == null)
    {
        MessageBox.Show("Unable to load Razor Template Host",
                        "Razor Hosting",
                        MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
    }

    return this.Host;
}

This code relies on a local reference of the Host which is kept around for the duration of the app (in this case a form reference). To use this you'd simply do:

this.Host = CreateHost();
if (this.Host == null)
    return;

string result = this.Host.RenderTemplate(
    this.txtSource.Text,
    new string[] { "System.Windows.Forms.dll", "Westwind.Utilities.dll" },
    this.CustomContext);

if (result == null)
{
    MessageBox.Show(this.Host.ErrorMessage, "Template Execution Error",
                    MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
    return;
}

this.txtResult.Text = result;

Now all templates run in a remote AppDomain and can be unloaded with simple code like this:

RazorEngineFactory<RazorTemplateBase>.UnloadRazorHostInAppDomain();
this.Host = null;

One Step further – Providing a caching ‘Runtime’

Once we can load templates in a remote AppDomain, we can add some additional functionality like assembly caching based on application-specific features. One of my typical scenarios is to render templates out of a scripts folder. All templates live in a folder and they change infrequently, so a folder-based host that can compile these templates once and then only recompile them if something changes would be ideal. Enter host containers, which are basically wrappers around the RazorEngine<T> and RazorEngineFactory<T>.
They provide additional logic for things like file caching based on changes on disk, or string hashes for string-based template inputs. The folder host also provides for partial rendering logic through a custom template base implementation. There's a base implementation in RazorBaseHostContainer, which provides the basics for hosting a RazorEngine, including the ability to start and stop the engine, cache assemblies and add references:

public abstract class RazorBaseHostContainer<TBaseTemplateType> : MarshalByRefObject
    where TBaseTemplateType : RazorTemplateBase, new()
{
    public RazorBaseHostContainer()
    {
        UseAppDomain = true;
        GeneratedNamespace = "__RazorHost";
    }

    /// <summary>
    /// Determines whether the Container hosts Razor
    /// in a separate AppDomain. Separate AppDomain
    /// hosting allows unloading and releasing of
    /// resources.
    /// </summary>
    public bool UseAppDomain { get; set; }

    /// <summary>
    /// Base folder location where the AppDomain
    /// is hosted. By default uses the same folder
    /// as the host application.
    ///
    /// Determines where binary dependencies are
    /// found for assembly references.
    /// </summary>
    public string BaseBinaryFolder { get; set; }

    /// <summary>
    /// List of referenced assemblies as string values.
    /// Must be in the GAC or in the current folder of the host app /
    /// base BinaryFolder
    /// </summary>
    public List<string> ReferencedAssemblies = new List<string>();

    /// <summary>
    /// Name of the generated namespace for template classes
    /// </summary>
    public string GeneratedNamespace { get; set; }

    /// <summary>
    /// Any error messages
    /// </summary>
    public string ErrorMessage { get; set; }

    /// <summary>
    /// Cached instance of the Host. Required to keep the
    /// reference to the host alive for multiple uses.
    /// </summary>
    public RazorEngine<TBaseTemplateType> Engine;

    /// <summary>
    /// Cached instance of the Host Factory - so we can unload
    /// the host and its associated AppDomain.
    /// </summary>
    protected RazorEngineFactory<TBaseTemplateType> EngineFactory;

    /// <summary>
    /// Keeps track of each compiled assembly
    /// and when it was compiled.
    ///
    /// Uses a hash of the string to identify string
    /// changes.
    /// </summary>
    protected Dictionary<int, CompiledAssemblyItem> LoadedAssemblies = new Dictionary<int, CompiledAssemblyItem>();

    /// <summary>
    /// Call to start the Host running. Followed by calls to RenderTemplate to
    /// render individual templates. Call Stop when done.
    /// </summary>
    /// <returns>true or false - check ErrorMessage on false</returns>
    public virtual bool Start()
    {
        if (Engine == null)
        {
            if (UseAppDomain)
                Engine = RazorEngineFactory<TBaseTemplateType>.CreateRazorHostInAppDomain();
            else
                Engine = RazorEngineFactory<TBaseTemplateType>.CreateRazorHost();

            // Check for failure before touching the engine instance
            if (Engine == null)
            {
                this.ErrorMessage = EngineFactory.ErrorMessage;
                return false;
            }

            Engine.Configuration.CompileToMemory = true;
            Engine.HostContainer = this;
        }

        return true;
    }

    /// <summary>
    /// Stops the Host and releases the host AppDomain and cached
    /// assemblies.
    /// </summary>
    /// <returns>true or false</returns>
    public bool Stop()
    {
        this.LoadedAssemblies.Clear();
        RazorEngineFactory<TBaseTemplateType>.UnloadRazorHostInAppDomain();
        this.Engine = null;
        return true;
    }
    …
}

This base class provides most of the mechanics to host the runtime, but no application-specific implementation for rendering. There are rendering functions, but they just call the engine directly and provide no caching – there's no context to decide how to cache and reuse templates.
The key methods are Start and Stop, and their main purpose is to start a new AppDomain (optionally) and shut it down when requested.

The RazorFolderHostContainer – Folder Based Runtime Hosting

Let's look at the more application-specific RazorFolderHostContainer implementation, which is defined like this:

public class RazorFolderHostContainer : RazorBaseHostContainer<RazorTemplateFolderHost>

Note that a customized RazorTemplateFolderHost class template is used for this implementation, which supports partial rendering in the form of a RenderPartial() method that's available to templates. The folder host's features are:

- Render templates based on a Template Base Path (a ‘virtual’, if you will)
- Cache compiled assemblies based on the relative path and file time stamp
- File changes on templates cause templates to be recompiled into new assemblies
- Support for partial rendering using base folder relative pathing

As shown in the startup examples earlier, host containers require some startup code with a HostContainer tied to a persistent property (like a Form property):

// The base path for templates - templates are rendered with relative paths
// based on this path.
HostContainer.TemplatePath = Path.Combine(Environment.CurrentDirectory, TemplateBaseFolder);

// Default output rendering disk location
HostContainer.RenderingOutputFile = Path.Combine(HostContainer.TemplatePath, "__Preview.htm");

// Add any assemblies you want referenced in your templates
HostContainer.ReferencedAssemblies.Add("System.Windows.Forms.dll");

// Start up the host container
HostContainer.Start();

Once that's done, you can render templates with the host container:

// Pass the template path for the full filename selected with the OpenFile dialog
// relativePath is: subdir\file.cshtml or file.cshtml or ..\file.cshtml
var relativePath = Utilities.GetRelativePath(fileName, HostContainer.TemplatePath);

if (!HostContainer.RenderTemplate(relativePath, Context, HostContainer.RenderingOutputFile))
{
    MessageBox.Show("Error: " + HostContainer.ErrorMessage);
    return;
}

webBrowser1.Navigate("file://" + HostContainer.RenderingOutputFile);

The most critical task of the RazorFolderHostContainer implementation is to retrieve a template from disk, compile and cache it, and then deal with deciding whether subsequent requests need to re-compile the template or can simply use a cached version. Internally GetAssemblyFromFileAndCache() handles this task:

/// <summary>
/// Internally checks if a cached assembly exists and if it does uses it
/// else creates and compiles one. Returns an assembly Id to be
/// used with the LoadedAssembly list.
/// </summary>
/// <param name="relativePath"></param>
/// <param name="context"></param>
/// <returns></returns>
protected virtual CompiledAssemblyItem GetAssemblyFromFileAndCache(string relativePath)
{
    string fileName = Path.Combine(TemplatePath, relativePath).ToLower();
    int fileNameHash = fileName.GetHashCode();
    if (!File.Exists(fileName))
    {
        this.SetError(Resources.TemplateFileDoesnTExist + fileName);
        return null;
    }

    CompiledAssemblyItem item = null;
    this.LoadedAssemblies.TryGetValue(fileNameHash, out item);

    string assemblyId = null;

    // Check for cached instance
    if (item != null)
    {
        var fileTime = File.GetLastWriteTimeUtc(fileName);
        if (fileTime <= item.CompileTimeUtc)
            assemblyId = item.AssemblyId;
    }
    else
        item = new CompiledAssemblyItem();

    // No cached instance - create assembly and cache
    if (assemblyId == null)
    {
        string safeClassName = GetSafeClassName(fileName);
        StreamReader reader = null;
        try
        {
            reader = new StreamReader(fileName, true);
        }
        catch (Exception ex)
        {
            this.SetError(Resources.ErrorReadingTemplateFile + fileName);
            return null;
        }

        assemblyId = Engine.ParseAndCompileTemplate(this.ReferencedAssemblies.ToArray(), reader);

        // need to ensure reader is closed
        if (reader != null)
            reader.Close();

        if (assemblyId == null)
        {
            this.SetError(Engine.ErrorMessage);
            return null;
        }

        item.AssemblyId = assemblyId;
        item.CompileTimeUtc = DateTime.UtcNow;
        item.FileName = fileName;
        item.SafeClassName = safeClassName;

        this.LoadedAssemblies[fileNameHash] = item;
    }

    return item;
}

This code uses the LoadedAssemblies dictionary, which maps a hash id to a structure that holds a reference to a compiled assembly, a full filename and file timestamp, and an assembly id. LoadedAssemblies (defined on the base class shown earlier) is essentially a cache for compiled assemblies. In the case of files, the hash is a GetHashCode() of the full filename of the template. The template is checked for in the cache, and if it's not found the file stamp is checked. If that's newer than the cache's compilation date the template is recompiled; otherwise the version in the cache is used. All the core work defers to a RazorEngine<T> instance's ParseAndCompileTemplate().

The three rendering-specific methods then are rather simple implementations, with just a few lines of code dealing with parameter and return value parsing:

/// <summary>
/// Renders a template to a TextWriter. Useful to write output into a stream or
/// the Response object. Used for partial rendering.
/// </summary>
/// <param name="relativePath">Relative path to the file in the folder structure</param>
/// <param name="context">Optional context object or null</param>
/// <param name="writer">The TextWriter to write output into</param>
/// <returns></returns>
public bool RenderTemplate(string relativePath, object context, TextWriter writer)
{
    // Set configuration data that is to be passed to the template (any object)
    Engine.TemplatePerRequestConfigurationData = new RazorFolderHostTemplateConfiguration()
    {
        TemplatePath = Path.Combine(this.TemplatePath, relativePath),
        TemplateRelativePath = relativePath,
    };

    CompiledAssemblyItem item = GetAssemblyFromFileAndCache(relativePath);
    if (item == null)
    {
        writer.Close();
        return false;
    }

    try
    {
        // String result will be empty as output will be rendered into the
        // Response object's stream output.
        // However a null result denotes an error
        string result = Engine.RenderTemplateFromAssembly(item.AssemblyId, context, writer);

        if (result == null)
        {
            this.SetError(Engine.ErrorMessage);
            return false;
        }
    }
    catch (Exception ex)
    {
        this.SetError(ex.Message);
        return false;
    }
    finally
    {
        writer.Close();
    }

    return true;
}

/// <summary>
/// Render a template from a source file on disk to a specified output file.
/// </summary>
/// <param name="relativePath">Relative path off the template root folder. Format: path/filename.cshtml</param>
/// <param name="context">Any object that will be available in the template as a dynamic of this.Context</param>
/// <param name="outputFile">Optional - output file where output is written to. If not specified the
/// RenderingOutputFile property is used instead
/// </param>
/// <returns>true if rendering succeeds, false on failure - check ErrorMessage</returns>
public bool RenderTemplate(string relativePath, object context, string outputFile)
{
    if (outputFile == null)
        outputFile = RenderingOutputFile;

    try
    {
        using (StreamWriter writer = new StreamWriter(outputFile, false,
                                                      Engine.Configuration.OutputEncoding,
                                                      Engine.Configuration.StreamBufferSize))
        {
            return RenderTemplate(relativePath, context, writer);
        }
    }
    catch (Exception ex)
    {
        this.SetError(ex.Message);
        return false;
    }
}

/// <summary>
/// Renders a template to a string. Useful for RenderTemplate.
/// </summary>
/// <param name="relativePath"></param>
/// <param name="context"></param>
/// <returns></returns>
public string RenderTemplateToString(string relativePath, object context)
{
    string result = string.Empty;

    try
    {
        using (StringWriter writer = new StringWriter())
        {
            // String result will be empty as output will be rendered into the
            // Response object's stream output. However a null result denotes
            // an error
            if (!RenderTemplate(relativePath, context, writer))
            {
                this.SetError(Engine.ErrorMessage);
                return null;
            }

            result = writer.ToString();
        }
    }
    catch (Exception ex)
    {
        this.SetError(ex.Message);
        return null;
    }

    return result;
}

The idea is that you can create custom host container implementations that do exactly what you want fairly easily. Take a look at both the RazorFolderHostContainer and RazorStringHostContainer classes for the basic concepts you can use to create custom implementations.

Notice also that you can set the engine's TemplatePerRequestConfigurationData property from the host container:

// Set configuration data that is to be passed to the template (any object)
Engine.TemplatePerRequestConfigurationData = new RazorFolderHostTemplateConfiguration()
{
    TemplatePath = Path.Combine(this.TemplatePath, relativePath),
    TemplateRelativePath = relativePath,
};

which, when set to a non-null value, is passed to the template's InitializeTemplate() method. This method receives an object parameter which you can cast as needed:

public override void InitializeTemplate(object configurationData)
{
    // Pick up configuration data and stuff into Request object
    RazorFolderHostTemplateConfiguration config = configurationData as RazorFolderHostTemplateConfiguration;

    this.Request.TemplatePath = config.TemplatePath;
    this.Request.TemplateRelativePath = config.TemplateRelativePath;
}

With this data you can then configure any custom properties or objects on your main template class. It's an easy way to pass data from the HostContainer all the way down into the template. The type you use is of type object, so you have to cast it yourself, and it must be serializable since it will likely run in a separate AppDomain.
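Based on the properties used above, a minimal sketch of what such a configuration class might look like follows. The actual class in the library may carry more members; [Serializable] reflects the cross-AppDomain requirement just mentioned.

// Minimal sketch of a per-request configuration class, inferred from the
// properties used above. [Serializable] because the instance crosses the
// AppDomain boundary into the template's AppDomain.
[Serializable]
public class RazorFolderHostTemplateConfiguration
{
    // Full path to the template file on disk
    public string TemplatePath { get; set; }

    // Path relative to the host container's template root folder
    public string TemplateRelativePath { get; set; }
}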
This might seem like an ugly way to pass data around – normally I'd use an event delegate to call back from the engine to the host, but since this is running over AppDomain boundaries events get really tricky, and passing a template instance back up into the host over AppDomain boundaries doesn't work due to serialization issues. So it's easier to pass the data from the host down into the template using this rather clumsy set-and-forward approach. It's ugly, but it's something that can be hidden in the host container implementation, as I've done here. It's also not something you have to do in every implementation, so this is kind of an edge case, but I know I'll need to pass a bunch of data in some of my applications and this will be the easiest way to do so.

Summing Up

Hosting the Razor runtime is something I got quite jazzed up about, because I have an immediate need for this type of templating/merging/scripting capability in an application I'm working on. I've also been using templating in many apps and it's always been a pain to deal with. The Razor engine makes this whole experience a lot cleaner and more lightweight, and with these wrappers I can now plug .NET-based templating into my code with literally a few lines of code. That's something to cheer about… I hope some of you will find this useful as well…

Resources

The examples and code require that you download the Razor runtimes. Projects are for Visual Studio 2010 running on .NET 4.0.

- Platform Installer 3.0 (install WebMatrix or MVC 3 for Razor Runtimes)
- Latest Code in Subversion Repository
- Download Snapshot of the Code
- Documentation (CHM Help File)

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, .NET


  • Tools and Utilities for the .NET Developer

    - by mbcrump
Tweet this list! Add a link to my site to your bookmarks to quickly find this page again! Add me to twitter! This is a list of the tools/utilities that I use to do my job/hobby. I wanted this page to load fast and contain information that only you care about. If I have missed a tool that you like, feel free to contact me and I will add it to the list. Also, this list took a lot of time to complete. Please do not steal my work; if you like the page then please link back to my site. I will keep the links/information updated as new tools/utilities are created.

Windows/.NET Development – This is a list of tools that any Windows/.NET developer should have in his bag. I have used everything listed on this page at some point in my career, and below are the tools worth keeping.

- AnkhSVN – Subversion support for Visual Studio. It also works with VS2010. (Free)
- Aurora XAML Designer – One of the best XAML creation tools available. Has a ton of built-in templates that you can copy/paste into VS2010. (COST/Trial)
- BeyondCompare – Beyond Compare 3 is the ideal tool for comparing files and folders on your Windows or Linux system. Visualize changes in your code and carefully reconcile them. (COST/Trial)
- BuildIT Automated Task Tool – Its main purpose is to automate tasks, whether it is the final packaging of a product, an automated daily build, maybe sending out a mailing list, even backing up files. (Free)
- C Sharper for VB – Converts VB to C#. (COST)
- CLRProfiler – Analyze and improve the behavior of your .NET app. (Free)
- CodeRush – Direct competitor to ReSharper; contains similar features. This is one of those "decide for yourself" tools. (COST/Trial)
- Disk2VHD – Disk2vhd is a utility that creates VHD (Virtual Hard Disk – Microsoft's Virtual Machine disk format) versions of physical disks for use in Microsoft Virtual PC or Microsoft Hyper-V virtual machines (VMs). (Free)
- Eazfuscator.NET – A free obfuscator for .NET. The main purpose is to protect the intellectual property of software. (Free)
- EQATEC Profiler – Make your .NET app run faster. No source code changes are needed. Just point the profiler to your app, run the modified code, and get a visual report. (COST)
- Expression Studio 3/4 – Comes with Web, Blend, SketchFlow and more. You can create websites, produce beautiful XAML and more. (COST/Trial)
- Expresso – The award-winning Expresso editor is equally suitable as a teaching tool for the beginning user of regular expressions or as a full-featured development environment for the experienced programmer or web designer with an extensive knowledge of regular expressions. (Free)
- Fiddler – Fiddler is a web debugging proxy which logs all HTTP(S) traffic between your computer and the internet. (Free)
- Firebug – Powerful web development tool. If you build websites, you will need this. (Free)
- FxCop – FxCop is an application that analyzes managed code assemblies (code that targets the .NET Framework common language runtime) and reports information about the assemblies, such as possible design, localization, performance, and security improvements. (Free)
- GAC Browser and Remover – Easy way to remove multiple assemblies from the GAC. Assemblies registered by programs like InstallShield can also be removed. (Free)
- GAC Util – The Global Assembly Cache tool allows you to view and manipulate the contents of the global assembly cache and download cache. (Free)
- HelpScribble – Help Scribble is a full-featured, easy-to-use help authoring tool for creating help files from start to finish. You can create WinHelp (.hlp) files, HTML Help (.chm) files, a printed manual and online documentation (on a web site) all from the same Help Scribble project. (COST/Trial)
- IETester – IETester is a free web browser that allows you to have the rendering and JavaScript engines of IE9 preview, IE8, IE7, IE6 and IE5.5 on Windows 7, Vista and XP, as well as the installed IE in the same process. (Free)
- iTextSharp – iText# (iTextSharp) is a port of the iText open source Java library for PDF generation, written entirely in C# for the .NET platform. Use the iText mailing list to get support. (Free)
- Kaxaml – Kaxaml is a lightweight XAML editor that gives you a "split view" so you can see both your XAML and your rendered content. (Free)
- LINQPad – LINQPad lets you interactively query databases in LINQ. (Free)
- Linquer – Many programmers are familiar with SQL and will need help in the transition to LINQ. Sometimes there are complicated queries to be written, and Linqer can help by converting SQL scripts to LINQ. (COST/Trial)
- LiquidXML – Liquid XML Studio 2010 is an advanced XML developer's toolkit and IDE, containing all the tools needed for designing and developing XML schemas and applications. (COST/Trial)
- Log4Net – log4net is a tool to help the programmer output log statements to a variety of output targets. log4net is a port of the excellent log4j framework to the .NET runtime. The framework is similar in spirit to the original log4j while taking advantage of new features in the .NET runtime. For more information on log4net see the features document. (Free)
- Microsoft Web Platform Installer – The Microsoft Web Platform Installer 2.0 (Web PI) is a free tool that makes getting the latest components of the Microsoft Web Platform, including Internet Information Services (IIS), SQL Server Express, .NET Framework and Visual Web Developer, easy. (Free)
- Mono Development – Don't have Visual Studio? No problem! This is an open source C# and .NET development environment for Linux, Windows, and Mac OS X. (Free)
- Net Mass Downloader – While it's great that Microsoft has released the .NET Reference Source Code, you can only get it one file at a time while you're debugging. If you'd like to batch download it for reading or to populate the cache, you'd have to write a program that instantiated and called each method in the Framework Class Library. Fortunately, .NET Mass Downloader comes to the rescue! (Free)
- nMap – Nmap ("Network Mapper") is a free and open source (license) utility for network exploration or security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. (Free)
- NoScript (Firefox add-in) – The NoScript Firefox extension provides extra protection for Firefox, Flock, Seamonkey and other Mozilla-based browsers: this free, open source add-on allows JavaScript, Java, Flash and other plug-ins to be executed only by trusted web sites of your choice (e.g. your online bank), and provides the most powerful anti-XSS protection available in a browser. (Free)
- NotePad 2 – Notepad2 is a fast and lightweight Notepad-like text editor with syntax highlighting. This program can be run out of the box without installation, and does not touch your system's registry. (Free)
- PageSpy – PageSpy is a small add-on for Internet Explorer that allows you to select any element within a webpage, select an option in the context menu, and view detailed information about both the coding behind the page and the element you selected. (Free)
- Phrase Express – PhraseExpress manages your frequently used text snippets in customizable categories for quick access. (Free)
- PowerGui – PowerGUI is a free community for PowerGUI, a graphical user interface and script editor for Microsoft Windows PowerShell! (Free)
- Powershell – Comes with Win7, but you can automate tasks by using the .NET Framework. Great for network admins. (Free)
- Process Explorer – Ever wondered which program has a particular file or directory open? Now you can find out. Process Explorer shows you information about which handles and DLLs processes have opened or loaded. Also included in the Sysinternals Suite. (Free)
- Process Monitor – Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity. (Free)
- Reflector – Explore and analyze compiled .NET assemblies, viewing them in C#, Visual Basic, and IL. This is an essential for any .NET developer. (Free)
- Regular Expression Library – Stuck on a regular expression, but you think someone has already figured it out? Chances are they have. (Free)
- Regulator – Regulator makes regular expressions easy. This is a must-have for a .NET developer. (Free)
- RenameMaestro – RenameMaestro is probably the easiest batch file renamer you'll find to instantly rename multiple files. (COST)
- ReSharper – The one program that I cannot live without. Supports VS2010 and offers simple refactoring, code analysis/assistance/cleanup/templates. One of the few applications that is worth the $$$. (COST/Trial)
- ScrewTurn Wiki – ScrewTurn Wiki allows you to create, manage and share wikis. A wiki is a collaboratively-edited, information-centered website: the most famous is Wikipedia. (Free)
- SharpDevelop – What is #develop? SharpDevelop is a free IDE for C# and VB.NET projects on Microsoft's .NET platform. (Free)
- Show Me The Template – Show Me The Template is a tool for exploring the templates, be they data, control or items panel, that come with the controls built into WPF for all 6 themes. (Free)
- SnippetCompiler – Compiles code snippets without opening Visual Studio. It does not support .NET 4. (Free)
- SQL Prompt – SQL Prompt is a plug-in that increases how fast you can work with SQL. It provides code completion for SQL Server, reformatting, db schema information and snippets. Awesome! (COST/Trial)
- SQLinForm – SQLinForm is an automatic SQL code formatter for all major databases including ORACLE, SQL Server, DB2, UDB, Sybase, Informix, PostgreSQL, Teradata, MySQL, MS Access etc., with over 70 formatting options. (COST/Online Free)
- SSMS Tools – SSMS Tools Pack is an add-in for Microsoft SQL Server Management Studio (SSMS), including SSMS Express. (Free)
- Storm – STORM is a free and open source tool for testing web services. (Free)
- Telerik Code Convertor – Converts code from VB to C# and vice versa. (Free)
- TortoiseSVN – TortoiseSVN is a really easy to use revision control / version control / source control software for Windows. Since it's not an integration for a specific IDE, you can use it with whatever development tools you like. (Free)
- UltraEdit – UltraEdit is the ideal text, HTML and hex editor, and an advanced PHP, Perl, Java and JavaScript editor for programmers. UltraEdit is also an XML editor including a tree-style XML parser. An industry-award winner, UltraEdit supports disk-based 64-bit file handling (standard) on 32-bit Windows platforms (Windows 2000 and later). (COST/Trial)
- Virtual Windows XP – Comes with some W7 versions and allows you to run WinXP alongside W7. (Free)
- VirtualBox – Virtualization by Sun Microsystems. You can virtualize Windows, Linux and more. (Free)
- Visual Log Parser – SQL queries against a variety of log files and other system data sources. (Free)
- WinMerge – WinMerge is an open source differencing and merging tool for Windows. WinMerge can compare both folders and files, presenting differences in a visual text format that is easy to understand and handle. (Free)
- Wireshark – Wireshark is one of the best network protocol analyzers for Unix and Windows. This has been used several times to get me out of a bind. (Free)
- XML Notepad 07 – Old, but still one of my favorite XML viewers. (Free)

Productivity Tools – This is the list of tools that I use to save time or quickly navigate around Windows.

- AutoHotKey – Automate almost anything by sending keystrokes and mouse clicks. You can write a mouse or keyboard macro by hand or use the macro recorder. (Free)
- CLCL – CLCL is a clipboard caching utility. (Free)
- Ditto – Ditto is an extension to the standard Windows clipboard. It saves each item placed on the clipboard, allowing you access to any of those items at a later time. Ditto allows you to save any type of information that can be put on the clipboard: text, images, HTML, custom formats, ... (Free)
- Evernote – Remember everything from notes to photos. It will sync between computers/devices. (Free)
- InfoRapid – InfoRapid is a search tool that will display all your search results in an HTML-like browser. If you click on a word in that browser, it will start another search on the word you clicked on. Handy if you want to track something back to its true origin. The word you looked for will be highlighted in red. Clicking on the red word will open the containing file in a text-based viewer. Clicking on any word in the opened document will start another search on that word. (Free)
- KatMouse – The prime purpose of the KatMouse utility is to enhance the functionality of mice with a scroll wheel, offering 'universal' scrolling: moving the mouse wheel will scroll the window directly beneath the mouse cursor (not the one with the keyboard focus, which is the default on Windows OSes). This is a major increase in the usefulness of the mouse wheel. (Free)
- ScreenR – Instant screencasts with nothing to download. Works with Mac or PC, and free. (Free)
- Start++ – Start++ is an enhancement for the Start Menu in Windows Vista. It also extends the Run box and the command line with customizable commands. For example, typing "w Windows Vista" will take you to the Windows Vista page on Wikipedia! (Free)
- Synergy – Synergy lets you easily share a single mouse and keyboard between multiple computers with different operating systems, each with its own display, without special hardware. It's intended for users with multiple computers on their desk, since each system uses its own monitor(s). (Free)
- Texter – Texter lets you define text substitution hot strings that, when triggered, will replace the hotstring with a larger piece of text. By entering your most commonly-typed snippets of text into Texter, you can save countless keystrokes in the course of the day. (Free)
- Total Commander – File handling, FTP, archive handling and much more. Even works with Win3.11. (COST/Trial Available)
- Wizmouse – WizMouse is a mouse enhancement utility that makes your mouse wheel work on the window currently under the mouse pointer, instead of the currently focused window. This means you no longer have to click on a window before being able to scroll it with the mouse wheel. This is a far more comfortable and practical way to make use of the mouse wheel. (Free)
- Xmarks – Bookmark sync and search between computers. (Free)

General Utilities – This is a list for power users or anyone that wants more out of Windows. I usually install a majority of these whenever I get a new system.

- µTorrent – µTorrent is a lightweight and efficient BitTorrent client for Windows or Mac with many features. I use this for downloading LEGAL media. (Free)
- Audacity – Audacity® is free, open source software for recording and editing sounds. It is available for Mac OS X, Microsoft Windows, GNU/Linux, and other operating systems. Learn more about Audacity... Also check our Wiki and Forum for more information. (Free)
- AVast – FREE antivirus. (Free)
- CD Burner XP Pro – CDBurnerXP is a free application to burn CDs and DVDs, including Blu-Ray and HD-DVDs. It also includes the feature to burn and create ISOs, as well as a multilanguage interface. (Free)
- CDEX – You can extract digital audio CDs into mp3/wav. (Free)
- Combofix – Combofix is freeware (a legitimate spyware remover created by sUBs) designed to scan a computer for known malware and spyware (SurfSideKick, QooLogic, and Look2Me, as well as any other combination of the mentioned spyware applications) and remove them. (Free)
- Cpu-Z – Provides information about some of the main devices of your system. (Free)
- Cropper – Cropper is a screen capture utility written in C#. It makes it fast and easy to grab parts of your screen. Use it to easily crop out sections of vector graphic files such as Fireworks without having to flatten the files or open them in a new editor. Use it to easily capture parts of a web site, including text and images. It's also great for writing documentation that needs images of your application or web site. (Free)
- DropBox – Drag and drop files to sync between computers. (Free)
- DVD-Fab – Converts/copies DVDs/Blu-Ray to different formats (like mp4, mkv, avi). (COST/Trial Available)
- FastStone Capture – FastStone Capture is a powerful, lightweight, yet full-featured screen capture tool that allows you to easily capture and annotate anything on the screen, including windows, objects, menus, full screen, rectangular/freehand regions and even scrolling windows/web pages. (Free)
- ffdshow – FFDShow is a DirectShow decoding filter for decompressing DivX, XviD, H.264, FLV1, WMV, MPEG-1 and MPEG-2, MPEG-4 movies. (Free)
- Filezilla – FileZilla Client is a fast and reliable cross-platform FTP, FTPS and SFTP client with lots of useful features and an intuitive graphical user interface. You can also download a server version. (Free)
- FireFox – Web browser; do you really need an explanation? (Free)
- FireGestures – A customizable mouse gestures extension which enables you to execute various commands and user scripts with five types of gestures. (Free)
- FoxIt Reader – Lightweight PDF viewer. You should install this with the advanced setting or it will install a toolbar and set up some shortcuts. (Free)
- gSynchIt – Sync Gmail and Outlook. Even supports Outlook 2010 32/64 bit. (COST/Trial Available)
- Hulu Desktop – At home or in a hotel, this has replaced my cable/satellite subscription. (Free)
- ImgBurn – ImgBurn is a lightweight CD / DVD / HD DVD / Blu-ray burning application that everyone should have in their toolkit! (Free)
- Infrarecorder – InfraRecorder is a free CD/DVD burning solution for Microsoft Windows. It offers a wide range of powerful features, all through an easy to use application interface and Windows Explorer integration. (Free)
- KeePass – KeePass is a free open source password manager, which helps you to manage your passwords in a secure way. (Free)
- LastPass – Another password manager: synchronize between browsers, automatic form filling and more. (Free)
- Live Essentials – One download and lots of programs, including Mail, Live Writer, Movie Maker and more! (Free)
- Monitores – MonitorES is a small Windows utility that helps you to turn off the monitor display when you lock down your machine. Also, when you lock your machine, it will pause all your running media programs & set your IM status message to "Away" / a custom message (via options) and restore it back to normal when you come back. (Free)
- mRemote – mRemote is a full-featured, multi-tab remote connections manager. (Free)
- Open Office – OpenOffice.org 3 is the leading open-source office software suite for word processing, spreadsheets, presentations, graphics, databases and more. It is available in many languages and works on all common computers. It stores all your data in an international open standard format and can also read and write files from other common office software packages. It can be downloaded and used completely free of charge for any purpose. (Free)
- Paint.NET – Simple, intuitive, and innovative user interface for editing photos. (Free)
- Picasa – Picasa is free photo editing software from Google that makes your pictures look great. (Free)
- Pidgin – Pidgin is an easy to use and free chat client used by millions. Connect to AIM, MSN, Yahoo, and more chat networks all at once. (Free)
- PING – PING is a live Linux ISO, based on the excellent Linux From Scratch (LFS) documentation. It can be burnt on a CD and booted, or integrated into a PXE / RIS environment. (Free)
- Putty – PuTTY is an SSH and telnet client, developed originally by Simon Tatham for the Windows platform. (Free)
- Revo Uninstaller – Revo Uninstaller Pro helps you to uninstall software and remove unwanted programs installed on your computer easily, even if you have problems uninstalling them and cannot uninstall them from the "Windows Add or Remove Programs" control panel applet. Revo Uninstaller is a much faster and more powerful alternative to the "Windows Add or Remove Programs" applet! It has very powerful features to uninstall and remove programs. (Free)
- Security Essentials – Microsoft Security Essentials is a new, free consumer anti-malware solution for your computer. (Free)
- SetupVirtualCloneDrive – Virtual CloneDrive works and behaves just like a physical CD/DVD drive, however it exists only virtually. Point to the .ISO file and it appears in Windows Explorer as a drive. (Free)
- Shark 007 Codec Pack – Play just about any file format with this download. Also includes my W7 Media Playlist Generator. (Free)
- Snagit 9 – Screen capture on steroids. Add arrows, captions, etc. to any screenshot. (COST/Trial Available)
- SysinternalsSuite – Go ahead and download the entire Sysinternals suite. I have mentioned multiple programs in this suite already. (Free)
- TeraCopy – TeraCopy is a compact program designed to copy and move files at the maximum possible speed, providing the user with a lot of features. (Free for Home)
- TrueCrypt – Free open-source disk encryption software for Windows 7/Vista/XP, Mac OS X, and Linux. (Free)
- TweetDeck – Fully featured Twitter client. (Free)
- UltraVNC – UltraVNC is a powerful, easy to use and free software that can display the screen of another computer (via internet or network) on your own screen. The program allows you to use your mouse and keyboard to control the other PC remotely. It means that you can work on a remote computer, as if you were sitting in front of it, right from your current location. (Free)
- Unlocker – Unlocks locked files. Pretty simple, right? (Free)
- VLC Media Player – VLC media player is a highly portable multimedia player and multimedia framework capable of reading most audio and video formats. (Free)
- Windows 7 Media Playlist – This program is special to my heart because I wrote it. It has been mentioned on podcasts and various websites. It allows you to quickly create wvx video playlists for Windows Media Center. (Free)
- WinRAR – WinRAR is a powerful archive manager. It can back up your data and reduce the size of email attachments, decompress RAR, ZIP and other files downloaded from the Internet, and create new archives in RAR and ZIP file format. (COST/Trial Available)

Blogging – I use the following for my blog.

- Insert Code for Windows Live Writer – Insert Code for Windows Live Writer will format a snippet of text in a number of programming languages such as C#, HTML, MSH, JavaScript, Visual Basic and TSQL. (Free)
- LiveWriter – Included in Live Essentials, but the ultimate in Windows blogging. (Free)
- PasteAsVSCode – Plug-in for Windows Live Writer that pastes clipboard content as Visual Studio code. Preserves syntax highlighting, indentation and background color. Converts RTF, outputted by Visual Studio, into HTML. (Free)

Desktop Management – The list below represents the best in Windows desktop management.

- 7 Stacks – Allows users to have "stacks" of icons in their taskbar. (Free)
- Executor – Executor is a multi-purpose launcher and a more advanced and customizable version of Windows Run. (Free)
- Fences – Fences is a program that helps you organize your desktop, and can hide your icons when they are not in use. (Free)
- RocketDock – RocketDock is a smoothly animated, alpha-blended application launcher. It provides a nice clean interface to drop shortcuts on for easy access and organization. With each item completely customizable, there is no end to what you can add and launch from the dock. (Free)
- WindowsTab – Tabbing is an essential feature of modern web browsers. Window Tabs brings the productivity of tabbed window management to all of your desktop applications. (Free)


  • XNA Notes 009

    - by George Clingerman
This past week the MVPs (myself included) were on Microsoft campus for the MVP summit. So I apologize in advance if you did something cool or heard of something cool happening with XNA and XBLIGs and it's not in my notes. I did my best to stay on top of things, but honestly this community is fast and furious with what it's doing and creating. I really can't keep up and that's fantastic! But here's what I *did* notice while I was there on Microsoft campus (and I did make sure to point out to the XNA team several of these very cool happenings while I had their ears).

Time Critical XNA News:
The XNA team wants you to know that Dream Build Play registration is now open!
http://blogs.msdn.com/b/xna/archive/2011/02/28/registration-now-open-for-dream-build-play-2011-challenge.aspx
Join the XNA-UK crew on March 24, 2011 at the Microsoft Tech Days Conference
http://xna-uk.net/blogs/darkgenesis/archive/2011/02/27/join-the-xna-uk-crew-at-the-microsoft-tech-days-conference-on-24th-march-2011.aspx

XNA Team:
Shawn Hargreaves shares one of the coolest things that's happened in the XNA community
http://blogs.msdn.com/b/shawnhar/archive/2011/03/02/xbox-indies-pivot-view.aspx
Nick Gravelyn continues his unique marketing/work prioritization strategy as he tries to get to 5,000 Pixel Man users before he makes Pixel Man 2 (and he's almost there!)
http://nickgravelyn.com/pixelman2/

XNA MVPs:
A lot of the XNA MVPs were at the Microsoft MVP Summit 2011. Due to NDAs, most things can't be shared, but I'm sure if you're curious you could ask them about the general vibe and feeling they got from the team and the future of XNA/XBLIG and more.
Catalin Zima and team release the free WP7 game Chickens Can Dream
http://twitter.com/CatalinZima/statuses/41174062923390976
http://www.amusedsloth.com/2011/02/chickens-can-dream-is-live/
Charles Humphrey (NemoKrad) posts his March talk source and PowerPoint
http://xna-uk.net/blogs/randomchaos/archive/2011/03/04/march-2011-talk-post-processing-framework.aspx

XNA Developers:
Michael B. McLaughlin posts about ANTS Memory Profiler and creates a CheckMemoryAllocationGame sample (extremely useful if you're looking to see how much memory some operation allocates!)
http://geekswithblogs.net/mikebmcl/archive/2011/02/28/ants-memory-profiler-7.0-review.aspx
http://geekswithblogs.net/mikebmcl/archive/2011/03/01/checkmemoryallocationgame-sample.aspx
Andy Schatz (2009 IGF winner for Monaco) talking XNA at GDC 2011
http://www.gamasutra.com/view/news/33313/GDC_2011_Andy_Schatz_Ill_Make_My_Last_Game_When_I_Die.php

Xbox LIVE Indie Games (XBLIG):
Clover: A Curious Tale by BinaryTweed is coming as a Deal of the Week during St.
Patrick's Day
http://majornelson.com/archive/2011/03/03/comingsoontothexboxlivemarketplacemarchthird.aspx
Ska Studios away at GDC but still very post happy as always
http://www.ska-studios.com/2011/03/02/swamped-picture-pack/
http://www.ska-studios.com/2011/02/28/the-february-showcase/
http://www.ska-studios.com/2011/02/25/good-morning-gato-51-smelling-the-roses/
Just Press Start interviews Matthew Mikuszewski of Darkwind Media about Blocks Indie
http://justpressstart.net/?p=516
Gamergeddon Xbox Indie Game Round Up - February 27th
http://www.gamergeddon.com/2011/02/27/xbox-indie-game-round-up-february-27th/
http://www.gamergeddon.com/category/xbox-360/indie-games/
GameMarx does a round up of all the Xbox Live Indie Game podcasts that are currently available
http://www.gamemarx.com/news/2011/02/27/xbox-live-indie-game-podcasts.aspx
GameMarx episode 11
http://www.gamemarx.com/video/the-show/26/ep-11-february-25-2011.aspx
In perhaps what I feel is the most exciting news I've heard all week, Michael C. Neel (ViNull of GameMarx fame) re-launched XboxIndies.com!
http://www.gamemarx.com/news/2011/03/01/the-relaunch-of-xboxindies-com.aspx
http://xboxindies.com/
Armless Octopus shares a little of what they heard from Luke Schneider of Radiangames during his GDC 2011 talk
http://www.armlessoctopus.com/2011/03/02/gdc-2011-luke-schneider-offers-insight-into-radiangames-success/
VVGindiecast Episode 1 with guests Derek Strickland (Mr_Deeke), Kris Steele (Kriswd40 from FunInfused Games) and Dave Voyles (from armlessoctopus.com)
http://vvgtv.com/2011/02/25/vvgindiecast-xblig-podcast/
If you're doing Xbox LIVE Indie Game reviews, get in touch with XboxIndies.com to get into their aggregated feed
http://forums.create.msdn.com/forums/p/76931/467189.aspx#467189
B.U.T.T.O.N and Flotilla represented XNA very well at the Independent Games Festival (are there any more games entered that were created using XNA? Stand up and be heard!)
http://www.igf.com/php-bin/entry2011.php?id=374
Armless Octopus interview at GDC 2011 with Soulcaster creator Ian Stocker
http://www.armlessoctopus.com/2011/03/04/gdc-2011-interview-with-soulcaster-creator-ian-stocker/
MommysBestGames gets a nod in the DarkBasic newsletter where it features the Explosionade Editor (just do a search for Explosionade to get to the interesting bits!)
http://www.thegamecreators.com/pages/newsletters/newsletter_issue_98.html
You may be hearing the cries of FortressCraft (coming soon to XBLIG) being so wrong for stealing the idea from MineCraft. But did you know that the game MineCraft started from was an XNA game called Infiniminer? XNA is getting its fingers into EVERYTHING!
http://www.minecraftwiki.net/wiki/Infiniminer

XNA Development:
TorqueX is NOT dead, thanks to the tremendous efforts of the XNA community working on the CEV (special thanks to @PinoEire for all his hard work on making that happen!)
http://www.garagegames.com/community/blogs/view/20878
http://torquecev.com/
Dave Henry has posted XNA 3.x adding platformer starter kit to the network game state management on his new site
http://twitter.com/#!/mort8088/status/43407715908853760
http://mort8088.com/2011/03/03/xna-3-x-adding-platformer-starter-kit-to-network-game-state-management/
Mark Bamford releases XNAViewer 4.0, great for running XNA games inside of a Windows Form (for building level editors, etc.)
http://twitter.com/#!/xzodia04/status/43466830412660736
http://xnaviewer.codeplex.com/
Unit testing an XNA game with ReSharper and NUnit
http://smnbss.wordpress.com/2011/02/28/planetx-unit-testing-an-xna-game-with-resharper-and-nunit-wp7-xbox-xna/
XNA for Silverlight developers: Part 5 - Input (touch + gestures)
http://ht.ly/1bxwUE
Mike McLaughlin shares a link he stumbled across for those looking to understand vector and matrix math
http://twitter.com/#!/mikebmcl/status/42587074725036032
http://chortle.ccsu.edu/VectorLessons/vectorIndex.html
DigitalRune Resources Pooling in XNA (Part 1)
http://www.digitalrune.com/Support/Blog/tabid/719/EntryId/84/DigitalRune-Helper-Library-Resource-Pooling-in-XNA-Part-1.aspx
JohnK “bobthecbuilder” released a new SunBurn update that lowers the requirements for Windows games
http://twitter.com/#!/bobthecbuilder/status/43457306578522112
http://www.synapsegaming.com/blogs/johnk/archive/2011/03/03/sunburn-update-windows-redistributable.aspx
Quick update on the Indiefreaks Game Framework v0.4 development status
http://indiefreaks.com/2011/03/04/quick-update-on-igf-v0-4-development/


  • Getting TF215097 error after modifying a build process template in TFS Team Build 2010

    - by Jakob Ehn
When embracing Team Build 2010, you typically want to define several different build process templates for different scenarios. Common examples here are CI builds, QA builds and release builds. For example, in a continuous build you often have no interest in publishing to the symbol store, and you might or might not want to associate changesets and work items. The build server is often heavily occupied as it is, so you don't want to have it doing more than necessary. Try to define a set of build process templates that are used across your company. In previous versions of TFS Team Build, there was no easy way to do this. But in TFS 2010 it is very easy, so there is no excuse not to do it! :-)

I ran into a scenario today where I had an existing build definition that was based on our release build process template. In this template, we have defined several different build process parameters that control the release build. These are placed in their own section in the Build Process Parameters editor. This is done using the ProcessParameterMetadataCollection element; I will explain how this works in a future post.

I won't go into details on these parameters. The issue for this blog post is what happens when you modify a build process template so that it is no longer compatible with the build definition, i.e. a breaking change. In this case, I removed a parameter that was no longer necessary. After merging the new build process template to one of the projects and queuing a new release build, I got this error:

TF215097: An error occurred while initializing a build for build definition <Build Definition Name>: The values provided for the root activity's arguments did not satisfy the root activity's requirements: 'DynamicActivity': The following keys from the input dictionary do not map to arguments and must be removed: <Parameter Name>. Please note that argument names are case sensitive. Parameter name: rootArgumentValues

<Parameter Name> was the parameter that I removed, so it was pretty easy to understand why the error had occurred. However, it is not entirely obvious how to fix the problem. When I open the build definition, everything looks OK: the removed build process parameter is not there, and I can open the build process template without any validation warnings.

The problem here is that all settings specific to a particular build definition are stored in the TFS database. In TFS 2005, everything related to a build was stored in TFS source control in files (TFSBuild.proj, WorkspaceMapping.xml, ...). In TFS 2008, many of these settings were moved into the database; still, lots of things were stored in TFSBuild.proj, such as the solution and configuration to build, and whether to execute tests or not. In TFS 2010, all settings for a build definition are stored in the database. If we look inside the database we can see what this looks like. The table tbl_BuildDefinition contains all information for a build definition. One of the columns is called ProcessParameters and contains a serialized representation of a Dictionary that is the underlying object where these settings are stored.
Here is an example:   <Dictionary x:TypeArguments="x:String, x:Object" xmlns="clr-namespace:System.Collections.Generic;assembly=mscorlib" xmlns:mtbwa="clr-namespace:Microsoft.TeamFoundation.Build.Workflow.Activities;assembly=Microsoft.TeamFoundation.Build.Workflow" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <mtbwa:BuildSettings x:Key="BuildSettings" ProjectsToBuild="$/PathToProject.sln"> <mtbwa:BuildSettings.PlatformConfigurations> <mtbwa:PlatformConfigurationList Capacity="4"> <mtbwa:PlatformConfiguration Configuration="Release" Platform="Any CPU" /> </mtbwa:PlatformConfigurationList> </mtbwa:BuildSettings.PlatformConfigurations> </mtbwa:BuildSettings> <mtbwa:AgentSettings x:Key="AgentSettings" Tags="Agent1" /> <x:Boolean x:Key="DisableTests">True</x:Boolean> <x:String x:Key="ReleaseRepositorySolution">ERP</x:String> <x:Int32 x:Key="Major">2</x:Int32> <x:Int32 x:Key="Minor">3</x:Int32> </Dictionary> Here we can see that it is really only the non-default values that are persisted into the databasen. So, the problem in my case was that I removed one of the parameteres from the build process template, but the parameter and its value still existed in the build definition database. The solution to the problem is to refresh the build definition and save it. In the process tab, there is a Refresh button that will reload the build definition and the process template and synchronize them:   After refreshing the build definition and saving it, the build was running successfully again.
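    If you hit this across many build definitions, the same synchronization can be done programmatically. The following is a rough sketch of that idea (mine, not from the original post) using the TFS 2010 client API; the collection URL, project name, definition name and stale parameter name are placeholders:

    using System;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Build.Workflow;
    using Microsoft.TeamFoundation.Client;

    class RemoveStaleProcessParameter
    {
        static void Main()
        {
            // Placeholder collection URL, team project and definition name.
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
            var buildServer = collection.GetService<IBuildServer>();
            IBuildDefinition definition =
                buildServer.GetBuildDefinition("MyTeamProject", "My Release Build");

            // Deserialize the persisted parameter dictionary, remove the key that
            // no longer exists in the process template, and save the definition back.
            var parameters = WorkflowHelpers.DeserializeProcessParameters(definition.ProcessParameters);
            if (parameters.Remove("MyRemovedParameter")) // placeholder parameter name
            {
                definition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(parameters);
                definition.Save();
            }
        }
    }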

    Read the article

  • Guarding against CSRF Attacks in ASP.NET MVC2

    - by srkirkland
    Alongside XSS (Cross-Site Scripting) and SQL Injection, Cross-Site Request Forgery (CSRF) attacks round out the three most common and dangerous vulnerabilities in web applications today. CSRF attacks are probably the least well known of the three, but they are relatively easy to exploit and extremely, and increasingly, dangerous. For more information on CSRF attacks, see these posts by Phil Haack and Steve Sanderson.

    The recognized solution for preventing CSRF attacks is to put a user-specific token as a hidden field inside your forms, then check that the right value was submitted. It's best to use a random value which you've stored in the visitor's Session collection or in a cookie (so an attacker can't guess the value).

    ASP.NET MVC to the rescue

    ASP.NET MVC provides an HtmlHelper called AntiForgeryToken(). When you call <%= Html.AntiForgeryToken() %> in a form on your page, you get a hidden input and a cookie with a random string assigned. Next, on your target action you need to include [ValidateAntiForgeryToken], which handles the verification that the correct token was supplied.

    Good, but we can do better

    Using the AntiForgeryToken is actually quite an elegant solution, but adding [ValidateAntiForgeryToken] to all of your POST methods is not very DRY and, worse, can easily be forgotten. Let's see if we can make this easier on the programmer by moving from an "opt-in" model of protection to an "opt-out" model.

    Using AntiForgeryToken by default

    In order to mandate the use of the AntiForgeryToken, we're going to create an ActionFilterAttribute which will do the anti-forgery validation on every POST request. First, we need a way to opt out of this behavior, so let's create a quick action filter called BypassAntiForgeryToken:

    [AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
    public class BypassAntiForgeryTokenAttribute : ActionFilterAttribute
    {
    }

    Now we are ready to implement the main action filter, which will force anti-forgery validation on all POST actions within any class it is defined on:

    [AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
    public class UseAntiForgeryTokenOnPostByDefault : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            if (ShouldValidateAntiForgeryTokenManually(filterContext))
            {
                var authorizationContext = new AuthorizationContext(filterContext.Controller.ControllerContext);

                // Use the authorization of the anti-forgery token,
                // which can't be inherited from because it is sealed.
                new ValidateAntiForgeryTokenAttribute().OnAuthorization(authorizationContext);
            }

            base.OnActionExecuting(filterContext);
        }

        /// <summary>
        /// We should validate the anti-forgery token manually if the following criteria are met:
        /// 1. The http method must be POST
        /// 2. There is not an existing [ValidateAntiForgeryToken] attribute on the action
        /// 3. There is no [BypassAntiForgeryToken] attribute on the action
        /// </summary>
        private static bool ShouldValidateAntiForgeryTokenManually(ActionExecutingContext filterContext)
        {
            var httpMethod = filterContext.HttpContext.Request.HttpMethod;

            // 1. The http method must be POST.
            if (httpMethod != "POST") return false;

            // 2. There is not an existing anti-forgery token attribute on the action.
            var antiForgeryAttributes =
                filterContext.ActionDescriptor.GetCustomAttributes(typeof(ValidateAntiForgeryTokenAttribute), false);

            if (antiForgeryAttributes.Length > 0) return false;

            // 3. There is no [BypassAntiForgeryToken] attribute on the action.
            var ignoreAntiForgeryAttributes =
                filterContext.ActionDescriptor.GetCustomAttributes(typeof(BypassAntiForgeryTokenAttribute), false);

            if (ignoreAntiForgeryAttributes.Length > 0) return false;

            return true;
        }
    }

    The code above is pretty straightforward: first we check to make sure this is a POST request, then we make sure there aren't any overriding *AntiForgeryToken attributes on the action being executed. If we have a candidate, then we call the ValidateAntiForgeryTokenAttribute class directly and execute OnAuthorization() on the current authorization context. Now you can put this new attribute on your base controller and start protecting your site from CSRF vulnerabilities:

    [UseAntiForgeryTokenOnPostByDefault]
    public class ApplicationController : System.Web.Mvc.Controller
    {
    }

    // Then for all of your controllers:
    public class HomeController : ApplicationController
    {
    }

    What we accomplished

    If your base controller has the new default anti-forgery token attribute on it, then whenever you don't use <%= Html.AntiForgeryToken() %> in a form (or, of course, when an attacker doesn't supply one), the POST action will throw the descriptive error message "A required anti-forgery token was not supplied or was invalid". Attack foiled! In summary, I think having an anti-CSRF policy by default is an effective way to protect your websites, and it turns out it is pretty easy to accomplish as well. Enjoy!
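    As a closing usage sketch (my example, not from the original post; the controller, action and parameter names are hypothetical), any POST action on a controller deriving from the protected base class is now covered, and individual actions can still opt out:

    using System.Web.Mvc;

    public class AccountController : ApplicationController
    {
        [HttpPost]
        public ActionResult ChangeEmail(string newEmail)
        {
            // Reached only if the token rendered by <%= Html.AntiForgeryToken() %>
            // was posted back and validated by the default filter.
            return RedirectToAction("Index");
        }

        [HttpPost, BypassAntiForgeryToken]
        public ActionResult Ping()
        {
            // Explicitly exempted from the default anti-forgery validation.
            return new EmptyResult();
        }
    }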

    Read the article

  • Parallelism in .NET – Part 12, More on Task Decomposition

    - by Reed
    Many tasks can be decomposed using a Data Decomposition approach, but often this is not appropriate. Frequently, decomposing the problem into distinctive tasks that must be performed is a more natural abstraction. However, as I mentioned in Part 1, Task Decomposition tends to be a bit more difficult than data decomposition, and can require a bit more effort. Before we begin parallelizing our algorithm based on the tasks being performed, we need to decompose our problem and take special care of certain considerations such as ordering and grouping of tasks.

    Up to this point in this series, I've focused on parallelization techniques which are most appropriate when a problem space can be decomposed by data. Using PLINQ and the Parallel class, I've shown how problem spaces where there is a collection of data, and each element needs to be processed, can potentially be parallelized. However, there are many other routines where this is not appropriate. Often, instead of working on a collection of data, there is a single piece of data which must be processed using an algorithm or series of algorithms. Here, there is no collection of data, but there may still be opportunities for parallelism.

    As I mentioned before, in cases like this, the approach is to look at your overall routine and decompose your problem space based on tasks. The idea here is to look for discrete "tasks," individual pieces of work which can be conceptually thought of as a single operation.

    Let's revisit the example I used in Part 1, an application startup path. Say we want our program, at startup, to do a bunch of individual actions, or "tasks." The following is our list of duties we must perform right at startup:

    * Display a splash screen
    * Request a license from our license manager
    * Check for an update to the software from our web server
    * If an update is available, download it
    * Setup our menu structure based on our current license
    * Open and display our main, welcome Window
    * Hide the splash screen

    The first step in Task Decomposition is breaking up the problem space into discrete tasks. This, naturally, can be abstracted as seven discrete tasks. In the serial version of our program, these would simply run one after another, as a single linear chain.

    These tasks, obviously, provide some opportunities for parallelism. Before we can parallelize this routine, we need to analyze the tasks and find any dependencies between them. In this case, our dependencies include:

    * The splash screen must be displayed first, and as quickly as possible.
    * We can't download an update before we see whether one exists.
    * Our menu structure depends on our license, so we must check for the license before setting up the menus.
    * Since our welcome screen will notify the user of an update, we can't show it until we've downloaded the update.
    * Since our welcome screen includes menus that are customized based off the licensing, we can't display it until we've received a license.
    * We can't hide the splash until our welcome screen is displayed.

    By listing our dependencies, we start to see the natural ordering that must occur for the tasks to be processed correctly. The second step in Task Decomposition is determining the dependencies between tasks, and ordering tasks based on those dependencies. Looking at these tasks and all their dependencies, we quickly see that even a simple decomposition such as this one can get quite complicated.

    In order to simplify the problem of defining the dependencies, it's often a useful practice to group our tasks into larger, discrete tasks. The goal when grouping tasks is to make each task "group" have as few dependencies as possible on other tasks or groups, and then work out the dependencies within that group. Typically, this works best when any external dependency is based on the "last" task within the group when it's ordered, although that is not a firm requirement. This process is often called Grouping Tasks. In our case, we can easily group tasks together, effectively turning this into four discrete task groups:

    1. Show our splash screen. This needs to be left as its own task. Multiple things depend on this task, mainly because we want it to start before any other action, and start as quickly as possible.

    2. Check for an update, and download the update if it exists. These two tasks logically group together: we only download an update if one exists, so that naturally follows. This group has one dependency as an input, and other tasks rely only on the final task within it.

    3. Request a license, and then set up the menus. Here, we can group these two tasks together. Although we mentioned that our welcome screen depends on the license returned, it also depends on setting up the menu, which is the final task here. Setting up our menus cannot happen until after our license is requested. By grouping these together, we further reduce our problem space.

    4. Display the welcome window and hide the splash. Finally, we can display our welcome window and hide our splash screen. This task group depends on all three previous task groups; it cannot happen until all three have completed.

    By grouping the tasks together, we reduce our problem space and can naturally see a pattern for how this process can be parallelized (the original post diagrams one approach, with an orange box for each task group and each task represented within). We can now effectively take these tasks and run a large portion of this process in parallel, including the portions which may be the most time consuming. We've created two parallel paths which our process execution can follow, hopefully speeding up the application startup time dramatically. The main point to remember is that, when decomposing your problem space by tasks, you need to:

    * Define each discrete action as an individual Task
    * Discover dependencies between your tasks
    * Group tasks based on their dependencies
    * Order the tasks and groups of tasks
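    To make the wiring concrete, here is a minimal sketch of the four groups expressed with .NET 4 tasks and continuations. This is not the article's code: the method bodies are console stand-ins, and the real startup work (splash screen, license manager, web server call) is assumed.

    using System;
    using System.Threading.Tasks;

    class StartupSequence
    {
        static void Main()
        {
            // Group 1: show the splash screen first, as quickly as possible.
            Task splash = Task.Factory.StartNew(() => Console.WriteLine("Splash shown"));

            // Group 2: check for an update, download it only if one exists.
            Task update = Task.Factory
                .StartNew(() => true) // stand-in for "does an update exist?"
                .ContinueWith(check =>
                {
                    if (check.Result) Console.WriteLine("Update downloaded");
                });

            // Group 3: request a license, then build the menus from it.
            Task menus = Task.Factory
                .StartNew(() => "license data") // stand-in for the license request
                .ContinueWith(license => Console.WriteLine("Menus built from " + license.Result));

            // Group 4: the welcome window needs groups 1-3; the splash hides last.
            Task.Factory.ContinueWhenAll(new Task[] { splash, update, menus }, _ =>
            {
                Console.WriteLine("Welcome window shown");
                Console.WriteLine("Splash hidden");
            }).Wait();
        }
    }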

    Read the article

  • Oracle HRMS API – Create or Update Employee Salary

    - by PRajkumar
    API -- hr_maintain_proposal_api.cre_or_upd_salary_proposal

    Note - A salary basis must be assigned to the employee assignment before running the Salary Proposal API.

    Example --

    DECLARE
        lb_inv_next_sal_date_warning   BOOLEAN;
        lb_proposed_salary_warning     BOOLEAN;
        lb_approved_warning            BOOLEAN;
        lb_payroll_warning             BOOLEAN;
        ln_pay_proposal_id             NUMBER;
        ln_object_version_number       NUMBER;
    BEGIN
        -- Create or Update Employee Salary Proposal
        -- -----------------------------------------
        hr_maintain_proposal_api.cre_or_upd_salary_proposal
        (   -- Input data elements
            -- ------------------------------
            p_business_group_id   => fnd_profile.value('PER_BUSINESS_GROUP_ID'),
            p_assignment_id       => 33561,
            p_change_date         => TO_DATE('13-JUN-2011'),
            p_proposed_salary_n   => 1000,
            p_approved            => 'Y',
            -- Output data elements
            -- --------------------------------
            p_pay_proposal_id            => ln_pay_proposal_id,
            p_object_version_number      => ln_object_version_number,
            p_inv_next_sal_date_warning  => lb_inv_next_sal_date_warning,
            p_proposed_salary_warning    => lb_proposed_salary_warning,
            p_approved_warning           => lb_approved_warning,
            p_payroll_warning            => lb_payroll_warning
        );
        COMMIT;
    EXCEPTION
        WHEN OTHERS THEN
            ROLLBACK;
            dbms_output.put_line(SQLERRM);
    END;
    /
    SHOW ERR;

    Read the article

  • Session Report: What’s New in JSF: A Complete Tour of JSF 2.2

    - by Janice J. Heiss
    On Wednesday, Ed Burns, Consulting Staff Member at Oracle, presented session CON3870, "What's New in JSF: A Complete Tour of JSF 2.2," in which he provided an update on recent developments in JavaServer Faces 2.2. He began by emphasizing that "JavaServer Faces 2.2 continues the evolution of the Java EE standard user interface technology. Like previous releases, this iteration is very community-driven and transparent." He pointed out that since JSF was introduced at the 2001 JavaOne keynote, it has had a long and successful run and has found a home in applications where the model and UI logic reside entirely on the server, with the browser performing fairly simple functions. However, developers can also take advantage of the power of browsers, something that Project Avatar focuses on by letting developers author their applications so that the UI logic runs on the client and communicates with the back end via RESTful web services. "Most importantly," remarked Burns, "JSF 2.2 offers a really good migration path, because even in the scope of one application you could have an app written with JSF that has its UI logic on the server and, on a gradual basis, you could migrate parts of the app over to use client-side technologies. This can be done at any level of granularity, per page or per collection of pages. It all depends on what you want to do."

    His presentation, which focused on the basic new features of JSF 2.2, began by restating the scope of JSF, and he encouraged attendees to check out Roger Kitain's session, CON5133 "Techniques for Responsive Real-Time Web UIs." Burns explained that JSF has endured because "we still need web apps that are maintainable, localizable, quick to build, accessible, secure, look great and are fun to use." It is used on every continent; the curious can go here to check out where its unofficial usage is tracked. He emphasized the significance of the UI logic being substantially on the server. This:

    * separates component semantics from rendering,
    * allows components to "own" their little patch of the UI (encode/decode),
    * and offers a well-defined lifecycle: inversion of control.

    Burns reminded attendees that JSR 344, the spec for JSF 2.2, is now on Java Community Process 2.8, a revised version of the JCP that allows for more openness and transparency. He then offered some tools for community access to JSF 2.2:

    * Public java.net projects: spec http://jsf-spec.java.net/ and impl http://jsf.java.net/ (open source: GPL + Classpath Exception)
    * Mailing lists: [email protected] (publicly readable archive; JSPA-signed members read/write) and [email protected] (publicly readable archive; any java.net member read/write). All mail sent to jsr344-experts is also sent to users.
    * Issue trackers: spec http://jsf-spec.java.net/issues/ and impl http://jsf.java.net/issues/

    JSF 2.2, which is JSR 344, has a Public Review Draft planned for December 2012, with no need for a Renewal Ballot. The Early Draft Review of JSR 344 was published on December 8, 2011. Interested developers are encouraged to offer their input.

    Six Big Ticket Features of JSF 2.2

    Burns summarized the six big ticket features of JSF 2.2:

    * HTML5 Friendly Markup Support (pass-through attributes and elements)
    * Faces Flows
    * Cross Site Request Forgery Protection
    * Loading Facelets via ResourceHandler
    * File Upload Component
    * Multi-Templating

    He explained that he called it "HTML5 friendly" because there is really nothing HTML5-specific about it; it could just as well be HTML 4. It enables developers to use new elements that are present in HTML5 without having a JSF component library written specifically to take advantage of them. It gives the page author the ability to write the page in plain HTML5 while still taking advantage of the server side available in JSF. He presented a demo showing JSF 2.2's ability to leverage the expressiveness of HTML5.

    Burns then explained the significance of Faces Flows, which offer function points and quantify how much work has taken place, something of great value to JSF users. He went on to talk about JSF 2.2's cross-site request forgery (CSRF) protection and offered details about how it protects applications against attack. Then he talked about JSF 2.2's File Upload Component and explained that the final specification will have both Ajax and non-Ajax support; the current milestone has non-Ajax support implemented. He then explained the capacity to load Facelets through ResourceHandler: previously, JSF 2.0 added Facelets and ResourceHandler as disparate units, whereas in JSF 2.2 the two concepts are unified. Finally, he explained the concept of multi-templating in JSF 2.2 and went on to discuss more medium-level features of the release. For an easy, low-maintenance way of staying in touch with JSF developments, go to JSF's Twitter page, where important updates are offered every month or so.

    Read the article

  • 8 Backup Tools Explained for Windows 7 and 8

    - by Chris Hoffman
    Backups on Windows can be confusing. Whether you're using Windows 7 or 8, you have quite a few integrated backup tools to think about, and Windows 8 made quite a few changes. You can also use third-party backup software, whether you want to back up to an external drive or back up your files to online storage. We won't cover third-party tools here, just the ones built into Windows.

    Backup and Restore on Windows 7: Windows 7 has its own Backup and Restore feature that lets you create backups manually or on a schedule. You'll find it under Backup and Restore in the Control Panel. The original version of Windows 8 still contained this tool, renamed Windows 7 File Recovery. It allowed former Windows 7 users to restore files from old Windows 7 backups or keep using the familiar backup tool for a little while. Windows 7 File Recovery was removed in Windows 8.1.

    System Restore: System Restore on both Windows 7 and 8 functions as a sort of automatic system backup feature. It creates backup copies of important system and program files on a schedule or when you perform certain tasks, such as installing a hardware driver. If system files become corrupted or your computer's software becomes unstable, you can use System Restore to restore your system and program files from a System Restore point. This isn't a way to back up your personal files; it's more of a troubleshooting feature that uses backups to restore your system to its previous working state.

    Previous Versions on Windows 7: Windows 7's Previous Versions feature allows you to restore older versions of files, or deleted files. These files can come from backups created with Windows 7's Backup and Restore feature, but they can also come from System Restore points. When Windows 7 creates a System Restore point, it will sometimes contain your personal files, and Previous Versions allows you to extract those files from restore points. This only applies to Windows 7: on Windows 8, System Restore won't create backup copies of your personal files, and the Previous Versions feature was removed.

    File History: Windows 8 replaced Windows 7's backup tools with File History, although this feature isn't enabled by default. File History is designed to be a simple, easy way to create backups of your data files on an external drive or network location, and it replaces both Windows 7's Backup and Previous Versions features. Because System Restore won't create copies of personal files on Windows 8, you can't actually recover older versions of files until you enable File History yourself.

    System Image Backups: Windows also allows you to create system image backups. These are backup images of your entire operating system, including your system files, installed programs, and personal files. This feature was included in both Windows 7 and Windows 8, but it was hidden in the preview versions of Windows 8.1. After many user complaints, it was restored and is still available in the final version of Windows 8.1: click System Image Backup on the File History Control Panel.

    Storage Space Mirroring: Windows 8's Storage Spaces feature allows you to set up RAID-like features in software. For example, you can use Storage Spaces to set up two hard disks of the same size in a mirroring configuration. They'll appear as a single drive in Windows; when you write to this virtual drive, the files will be saved to both physical drives. If one drive fails, your files will still be available on the other drive. This isn't a good long-term backup solution, but it is a way of ensuring you won't lose important files if a single drive fails.

    Microsoft Account Settings Backup: Windows 8 and 8.1 allow you to back up a variety of system settings, including personalization, desktop, and input settings. If you're signing in with a Microsoft account, OneDrive settings backup is enabled automatically. This feature can be controlled under OneDrive > Sync settings in the PC settings app. It only backs up a few settings; it's really more of a way to sync settings between devices.

    OneDrive Cloud Storage: Microsoft hasn't been talking much about File History since Windows 8 was released; that's because they want people to use OneDrive instead. OneDrive, formerly known as SkyDrive, was added to the Windows desktop in Windows 8.1. Save your files here and they'll be stored online, tied to your Microsoft account. You can then sign in on any other computer, smartphone, tablet, or even via the web and access your files. Microsoft wants typical PC users "backing up" their files with OneDrive so they'll be available on any device.

    You don't have to worry about all these features. Just choose a backup strategy to ensure your files are safe if your computer's hard disk fails. Whether it's an integrated backup tool or a third-party backup application, be sure to back up your files.

    Read the article

  • Pure Server-Side Filtering with RadGridView and WCF RIA Services

    Those of you who are familiar with WCF RIA Services know that the DomainDataSource control provides a FilterDescriptors collection that enables you to filter the data returned by the query on the server. We have been using this DomainDataSource feature in our "RIA Services with DomainDataSource" online example for almost a year now. In that example, we listen for RadGridView's Filtering event in order to intercept any filtering performed on the client and translate it into something the DomainDataSource will understand, in this case a System.Windows.Data.FilterDescriptor being added to or removed from its FilterDescriptors collection. Think of RadGridView.FilterDescriptors as client-side filtering and of DomainDataSource.FilterDescriptors as server-side filtering. We no longer need the client-side one.

    With the introduction of the Custom Filtering Controls feature, many new possibilities have opened up. With these custom controls we no longer need to do any filtering on the client. I have prepared a very small project that demonstrates how to filter solely on the server by using a custom filtering control.

    As I have already mentioned, filtering on the server is done through the FilterDescriptors collection of the DomainDataSource control. This collection holds instances of type System.Windows.Data.FilterDescriptor. The FilterDescriptor has three important properties:

    * PropertyPath: specifies the name of the property that we want to filter on (the left operand).
    * Operator: specifies the type of comparison to use when filtering; an instance of the FilterOperator enumeration.
    * Value: the value to compare with (the right operand); an instance of the Parameter class.

    By adding filters, you can specify that only entities which meet the condition in the filter are loaded from the domain context. In case you are not familiar with these concepts, you might find Brad Abrams' blog interesting.

    Now, our requirement is to create some kind of UI that will manipulate the DomainDataSource.FilterDescriptors collection. When it comes to collections, my first choice of course would be RadGridView. If you are not familiar with the Custom Filtering Controls concept, I would strongly recommend getting acquainted with my step-by-step tutorial "Custom Filtering with RadGridView for Silverlight" and checking out the online example.

    I have created a simple custom filtering control that contains a RadGridView and several buttons. This control is aware of the DomainDataSource instance, since it operates on its FilterDescriptors collection. In fact, the RadGridView inside it is bound to this collection. In order to display only the filters that are relevant for the current column, I have applied a filter to the grid. This filter is a Telerik.Windows.Data.FilterDescriptor and is used to filter the little grid inside the custom control. It should not be confused with the DomainDataSource.FilterDescriptors collection that the RadGridView is actually bound to; those are the RIA filters.

    Additionally, I have added several other features. For example, if you have specified a DataFormatString on your original column, the Value column inside the custom control will pick it up and format the filter values accordingly. Also, I have transferred the data type of the column that you are filtering on to the Value column of the custom control. This helps the little RadGridView determine what kind of editor to show when you begin editing, for example a date picker for DateTime columns.
    Finally, I have added four buttons: two of them can be used to add or remove filters, and the other two will communicate the changes you have made to the server. Here is the full source code of the DomainDataSourceFilteringControl. The XAML:

    <UserControl x:Class="PureServerSideFiltering.DomainDataSourceFilteringControl"
                 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                 xmlns:telerikGrid="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.GridView"
                 xmlns:telerik="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls"
                 Width="300">
        <Border x:Name="LayoutRoot"
                BorderThickness="1"
                BorderBrush="#FF8A929E"
                Padding="5"
                Background="#FFDFE2E5">
            <Grid>
                <Grid.RowDefinitions>
                    <RowDefinition Height="Auto"/>
                    <RowDefinition Height="150"/>
                    <RowDefinition Height="Auto"/>
                </Grid.RowDefinitions>

                <StackPanel Grid.Row="0"
                            Margin="2"
                            Orientation="Horizontal"
                            HorizontalAlignment="Center">
                    <telerik:RadButton Name="addFilterButton"
                                       Click="OnAddFilterButtonClick"
                                       Content="Add Filter"
                                       Margin="2"
                                       Width="96"/>
                    <telerik:RadButton Name="removeFilterButton"
                                       Click="OnRemoveFilterButtonClick"
                                       Content="Remove Filter"
                                       Margin="2"
                                       Width="96"/>
                </StackPanel>

                <telerikGrid:RadGridView Name="filtersGrid"
                                         Grid.Row="1"
                                         Margin="2"
                                         ItemsSource="{Binding FilterDescriptors}"
                                         AddingNewDataItem="OnFilterGridAddingNewDataItem"
                                         ColumnWidth="*"
                                         ShowGroupPanel="False"
                                         AutoGenerateColumns="False"
                                         CanUserResizeColumns="False"
                                         CanUserReorderColumns="False"
                                         CanUserFreezeColumns="False"
                                         RowIndicatorVisibility="Collapsed"
                                         IsFilteringAllowed="False"
                                         CanUserSortColumns="False">
                    <telerikGrid:RadGridView.Columns>
                        <telerikGrid:GridViewComboBoxColumn DataMemberBinding="{Binding Operator}"
                                                            UniqueName="Operator"/>
                        <telerikGrid:GridViewDataColumn Header="Value"
                                                        DataMemberBinding="{Binding Value.Value}"
                                                        UniqueName="Value"/>
                    </telerikGrid:RadGridView.Columns>
                </telerikGrid:RadGridView>

                <StackPanel Grid.Row="2"
                            Margin="2"
                            Orientation="Horizontal"
                            HorizontalAlignment="Center">
                    <telerik:RadButton Name="filterButton"
                                       Click="OnApplyFiltersButtonClick"
                                       Content="Apply Filters"
                                       Margin="2"
                                       Width="96"/>
                    <telerik:RadButton Name="clearButton"
                                       Click="OnClearFiltersButtonClick"
                                       Content="Clear Filters"
                                       Margin="2"
                                       Width="96"/>
                </StackPanel>
            </Grid>
        </Border>
    </UserControl>

    And the code-behind:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Documents;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Animation;
    using System.Windows.Shapes;
    using Telerik.Windows.Controls.GridView;
    using System.Windows.Data;
    using Telerik.Windows.Controls;
    using Telerik.Windows.Data;

    namespace PureServerSideFiltering
    {
        /// <summary>
        /// A custom filtering control capable of filtering purely server-side.
        /// </summary>
        public partial class DomainDataSourceFilteringControl : UserControl, IFilteringControl
        {
            // The main player here.
            DomainDataSource domainDataSource;

            // This is the name of the property that this column displays.
            private string dataMemberName;

            // This is the type of the property that this column displays.
            private Type dataMemberType;

            /// <summary>
            /// Identifies the <see cref="IsActive"/> dependency property.
            /// </summary>
            /// <remarks>
            /// The state of the filtering funnel (i.e. full or empty) is bound to this property.
            /// </remarks>
            public static readonly DependencyProperty IsActiveProperty =
                DependencyProperty.Register(
                    "IsActive",
                    typeof(bool),
                    typeof(DomainDataSourceFilteringControl),
                    new PropertyMetadata(false));

            /// <summary>
            /// Gets or sets a value indicating whether the filtering is active.
            /// </summary>
            /// <remarks>
            /// Set this to true if you want to light up the filtering funnel.
            /// </remarks>
            public bool IsActive
            {
                get { return (bool)GetValue(IsActiveProperty); }
                set { SetValue(IsActiveProperty, value); }
            }

            /// <summary>
            /// Gets or sets the domain data source.
            /// We need this in order to work on its FilterDescriptors collection.
            /// </summary>
            /// <value>The domain data source.</value>
            public DomainDataSource DomainDataSource
            {
                get { return this.domainDataSource; }
                set { this.domainDataSource = value; }
            }

            public System.Windows.Data.FilterDescriptorCollection FilterDescriptors
            {
                get { return this.DomainDataSource.FilterDescriptors; }
            }

            public DomainDataSourceFilteringControl()
            {
                InitializeComponent();
            }

            public void Prepare(GridViewBoundColumnBase column)
            {
                this.LayoutRoot.DataContext = this;

                if (this.DomainDataSource == null)
                {
                    // Sorry, but we need a DomainDataSource. Can't do anything without it.
                    return;
                }

                // This is the name of the property that this column displays.
                this.dataMemberName = column.GetDataMemberName();

                // This is the type of the property that this column displays.
                // We need this in order to see which FilterOperators to feed to the combo-box column.
                this.dataMemberType = column.DataType;

                // We will use our magic Type extension method to see which operators are applicable for
                // this data type. You can go to the extension method body and see what it does.
                ((GridViewComboBoxColumn)this.filtersGrid.Columns["Operator"]).ItemsSource
                    = this.dataMemberType.ApplicableFilterOperators();

                // This is very nice as well. We will tell the Value column its data type. In this way
                // RadGridView will pick up the best editor according to the data type. For example,
                // if the data type of the value is DateTime, you will be editing it with a DatePicker.
                // Nice!
                ((GridViewDataColumn)this.filtersGrid.Columns["Value"]).DataType = this.dataMemberType;

                // Yet another nice feature. We will transfer the original DataFormatString (if any) to
                // the Value column. In this way if you have specified a DataFormatString for the original
                // column, you will see all filter values formatted accordingly.
                ((GridViewDataColumn)this.filtersGrid.Columns["Value"]).DataFormatString = column.DataFormatString;

                // This is important. Since our little filtersGrid will be bound to the entire collection
                // of this.domainDataSource.FilterDescriptors, we need to set a Telerik filter on the
                // grid so that it will display only those FilterDescriptors which are relevant to this column!
                Telerik.Windows.Data.FilterDescriptor columnFilter = new Telerik.Windows.Data.FilterDescriptor("PropertyPath"
                    , Telerik.Windows.Data.FilterOperator.IsEqualTo
                    , this.dataMemberName);
                this.filtersGrid.FilterDescriptors.Add(columnFilter);

                // We want to listen for this in order to activate and de-activate the UI funnel.
                this.filtersGrid.Items.CollectionChanged += this.OnFilterGridItemsCollectionChanged;
            }

            /// <summary>
            /// Since the DomainDataSource is a little bit picky about adding uninitialized FilterDescriptors
            /// to its collection, we will prepare each new instance with some default values and then
            /// the user can change them later. Go to the event handler to see how we do this.
            /// </summary>
            void OnFilterGridAddingNewDataItem(object sender, GridViewAddingNewEventArgs e)
            {
                // We need to initialize the new instance with some values and let the user go on from here.
                System.Windows.Data.FilterDescriptor newFilter = new System.Windows.Data.FilterDescriptor();

                // This is a must. It should know what member it is filtering on.
                newFilter.PropertyPath = this.dataMemberName;

                // Initialize it with one of the allowed operators.
                // See the TypeExtensions.ApplicableFilterOperators method for more info.
                newFilter.Operator = this.dataMemberType.ApplicableFilterOperators().First();

                if (this.dataMemberType == typeof(DateTime))
                {
                    newFilter.Value.Value = DateTime.Now;
                }
                else if (this.dataMemberType == typeof(string))
                {
                    newFilter.Value.Value = "<enter text>";
                }
                else if (this.dataMemberType.IsValueType)
                {
                    // We need something non-null for all value types.
                    newFilter.Value.Value = Activator.CreateInstance(this.dataMemberType);
                }

                // Let the user edit the new filter any way he/she likes.
                e.NewObject = newFilter;
            }

            void OnFilterGridItemsCollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
            {
                // We are active only if we have any filters defined. In this case the filtering funnel will light up.
                this.IsActive = this.filtersGrid.Items.Count > 0;
            }

            private void OnApplyFiltersButtonClick(object sender, RoutedEventArgs e)
            {
                if (this.DomainDataSource.IsLoadingData)
                {
                    return;
                }

                // Comment this if you want the popup to stay open after the button is clicked.
                this.ClosePopup();

                // Since this.domainDataSource.AutoLoad is false, this will take into
                // account all filtering changes that the user has made since the last
                // Load() and pull the new data to the client.
                this.DomainDataSource.Load();
            }

            private void OnClearFiltersButtonClick(object sender, RoutedEventArgs e)
            {
                if (this.DomainDataSource.IsLoadingData)
                {
                    return;
                }

                // We want to remove ONLY those filters from the DomainDataSource
                // that this control is responsible for.
                this.DomainDataSource.FilterDescriptors
                    .Where(fd => fd.PropertyPath == this.dataMemberName) // Only "our" filters.
                    .ToList()
                    .ForEach(fd => this.DomainDataSource.FilterDescriptors.Remove(fd)); // Bye-bye!

                // Comment this if you want the popup to stay open after the button is clicked.
                this.ClosePopup();

                // After we did our housekeeping, get the new data to the client.
                this.DomainDataSource.Load();
            }

            private void OnAddFilterButtonClick(object sender, RoutedEventArgs e)
            {
                if (this.DomainDataSource.IsLoadingData)
                {
                    return;
                }

                // Let the user enter his/her requirements for a new filter.
                this.filtersGrid.BeginInsert();
                this.filtersGrid.UpdateLayout();
            }

            private void OnRemoveFilterButtonClick(object sender, RoutedEventArgs e)
            {
                if (this.DomainDataSource.IsLoadingData)
                {
                    return;
                }

                // Find the currently selected filter and destroy it.
                System.Windows.Data.FilterDescriptor filterToRemove = this.filtersGrid.SelectedItem as System.Windows.Data.FilterDescriptor;
                if (filterToRemove != null
                    && this.DomainDataSource.FilterDescriptors.Contains(filterToRemove))
                {
                    this.DomainDataSource.FilterDescriptors.Remove(filterToRemove);
                }
            }

            private void ClosePopup()
            {
                System.Windows.Controls.Primitives.Popup popup = this.ParentOfType<System.Windows.Controls.Primitives.Popup>();
                if (popup != null)
                {
                    popup.IsOpen = false;
                }
            }
        }
    }

    Finally, we need to tell RadGridView's columns to use this custom control instead of the default one. Here is how to do it:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Documents;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Animation;
    using System.Windows.Shapes;
    using System.Windows.Data;
    using Telerik.Windows.Data;
    using Telerik.Windows.Controls;
    using Telerik.Windows.Controls.GridView;

    namespace PureServerSideFiltering
    {
        public partial class MainPage : UserControl
        {
            public MainPage()
            {
                InitializeComponent();
                this.grid.AutoGeneratingColumn += this.OnGridAutoGeneratingColumn;

                // Uncomment this if you want the DomainDataSource to start pre-filtered.
                // You will notice how our custom filtering controls will correctly read this information,
                // populate their UI with the respective filters and light up the funnel to indicate that
                // filtering is active. Go ahead and try it.
                this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("Title", System.Windows.Data.FilterOperator.Contains, "Assistant"));
                this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("HireDate", System.Windows.Data.FilterOperator.IsGreaterThan, new DateTime(1998, 12, 31)));
                this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("HireDate", System.Windows.Data.FilterOperator.IsLessThanOrEqualTo, new DateTime(1999, 12, 31)));

                this.employeesDataSource.Load();
            }

            /// <summary>
            /// First of all, we will need to replace the default filtering control
            /// of each column with our custom filtering control DomainDataSourceFilteringControl.
            /// </summary>
            private void OnGridAutoGeneratingColumn(object sender, GridViewAutoGeneratingColumnEventArgs e)
            {
                GridViewBoundColumnBase dataColumn = e.Column as GridViewBoundColumnBase;
                if (dataColumn != null)
                {
                    // We do not like ugly dates.
                    if (dataColumn.DataType == typeof(DateTime))
                    {
                        dataColumn.DataFormatString = "{0:d}"; // Short date pattern.

                        // Notice how this format will be later transferred to the Value column
                        // of the grid that we have inside the DomainDataSourceFilteringControl.
                    }

                    // Replace the default filtering control with ours.
                    dataColumn.FilteringControl = new DomainDataSourceFilteringControl()
                    {
                        // Let the control know about the DDS, after all it will work directly on it.
                        DomainDataSource = this.employeesDataSource
                    };

                    // Finally, light up the filtering funnel through the IsActive dependency property
                    // in case there are some filters on the DDS that match our column member.
                    string dataMemberName = dataColumn.GetDataMemberName();
                    dataColumn.FilteringControl.IsActive =
                        this.employeesDataSource.FilterDescriptors
                        .Where(fd => fd.PropertyPath == dataMemberName)
                        .Count() > 0;
                }
            }
        }
    }

    The best part is that we are not only writing filters for the DomainDataSource, we can also read and load them. If the DomainDataSource has some pre-existing filters (like the ones I have created in the code above), our control will read them and populate its UI accordingly. Even the filtering funnel will light up! Remember, the funnel is controlled by the IsActive property of our control. While this is just a basic implementation, the source code is absolutely yours and you can take it from here and extend it to match your specific business requirements. Below the main grid there is another debug grid; with its help you can monitor what filter descriptors are added to and removed from the domain data source. Download Source Code. (You will have to have the AdventureWorks sample database installed on the default SQLExpress instance in order to run it.) Enjoy!
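    For reference, the employeesDataSource that the page code manipulates is an ordinary RIA Services DomainDataSource declared in the page XAML with AutoLoad switched off, so that data is only pulled when we call Load(). A minimal declaration might look like the following sketch; the domain context and query name are assumptions, not taken from the sample project:

    <riaControls:DomainDataSource x:Name="employeesDataSource"
                                  AutoLoad="False"
                                  QueryName="GetEmployeesQuery">
        <riaControls:DomainDataSource.DomainContext>
            <web:AdventureWorksDomainContext />
        </riaControls:DomainDataSource.DomainContext>
    </riaControls:DomainDataSource>

    Here, riaControls maps to clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.DomainServices, and web maps to the namespace of your generated domain context.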

    Read the article

  • Commercial Drupal Modules & Themes

    - by Ravish
    A discussion at the Drupal.org forums prompted me to give my input on the commercial ecosystem around open source content management systems. WordPress and Joomla have been growing rapidly over the past few years, but Drupal's growth rate seems to be almost flat. Despite being the most powerful CMS around, Drupal is still not being adopted by the masses. Many people will argue that Drupal is not targeted towards the masses but towards developers. I agree: Drupal is more of a development platform than a consumer CMS. Drupal is "many things to many people", and I can build almost any type of website with it. Drupal is being used for building blogs, corporate websites, intranet portals, social networking sites and even a project management system. Looking at the wide array of Drupal implementations, it deserves to be the most widely adopted CMS. I believe there are a few challenges the Drupal community needs to overcome. To understand these challenges, I surveyed some webmasters who use Joomla or WordPress but not Drupal. I asked them why they don't want to use Drupal; following are the responses I got:

    * Drupal is too complicated, takes time to learn.
    * Drupal is great, but its admin panel is overwhelming.
    * I couldn't find any nice themes for Drupal.
    * There is no WYSIWYG editor in Drupal.
    * Most Drupal modules do not work out of the box.
    * There aren't enough modules like Ubercart which provide out of the box functionality.
    * I tried modules like CCK, Views and Panels. After wasting several hours struggling with them, I decided to give up on Drupal.
    * I don't use Drupal because of the Pushbutton and Garland themes. I had a hard time trying to customize Garland and it messed up the whole layout.
    * There are no premium modules and themes for Drupal. Joomla has tons of awesome themes and modules.
    * I don't want a million hacks like CCK, Views, Tokens, Pathauto, ImageCache and CTools just to run a simple website.

    Most of the complaints are related to the learning and development curve involved with Drupal, and to the lack of an ecosystem. While most of these problems will be gone in Drupal 7, the ecosystem is something the Drupal community needs to build. Drupal distributions are a great step forward. There are a few awesome Drupal distributions available, like Open Publish, Open Atrium and Drupal Commons. I predict there will be a wave of powerful Drupal distributions after the Drupal 7 release, and many of them will be user-friendly and commercially supported. Following is my post at the Drupal.org forums:

    Quote from: http://drupal.org/node/863776#comment-3313836

    Brian Gardner (StudioPress) and Woo Themes launched premium WordPress themes in 2007; the developer community did not accept it at first. Moreover, the themes were not even GPL licensed, and there was an outcry in the WordPress community against them. Following that, most premium theme providers switched to GPL licensing. Despite the controversies, users voted for premium themes and plugins by buying them. Inspired by their success, hundreds of other developers started to sell premium themes and plugins. It is now the acceptable, and in fact most popular, business model in the WordPress community. Matt Mullenweg once told me they would not support premium themes; if he did, developers would no longer give out free GPL themes and plugins. He pointed me towards Joomla: there were hardly any nice free themes and modules available there. Now, two years later, premium products are not just accepted but embraced by the WordPress community: http://wordpress.org/extend/themes/commercial/. The quality and number of themes and plugins has increased, even among the free ones. This has also helped boost WordPress's adoption and ecosystem. Today, the state of Drupal is like that of WordPress in 2007. There are hardly any out of the box solutions available for Drupal; Ubercart, Open Publish and Open Atrium are the only ones I can think of. Many of the popular Drupal modules are patches and hole-fillers. Thankfully, these hole-filler modules are going to be in the Drupal 7 core. Drupal 7 and distributions will spawn a new array of solutions built upon Drupal; soon we will have more Ubercarts and Open Atriums. If commercial solutions can help fuel this ecosystem and growth, the Drupal community will accept them eventually. This debate will not stop your customers from buying your product. If your product is awesome, they will vote for you by buying it.

    Read the article
