Search Results

Search found 25196 results on 1008 pages for 'hard drive cache'.


  • Plesk Uninstall Memory issue

    - by user115079
    I am trying to uninstall Plesk from my VPS by running the following command: yum remove sw-* psa-* plesk-*. When I run it I get the following error: "Running rpm_check_debug Running Transaction Test memory alloc (4 bytes) returned NULL." The first time I ran the command, the failed allocation was a very large number like (67864987). I googled it, found some clear/ulimit suggestions, executed them, rebooted the system, stopped all processes and ran the command again, but I still get the 4-byte failure and I don't know how to get rid of it. I also tried ulimit after the reboot with no success, and yes, there is no swap attached. These are the stats of my system:
      [root@vps ~]# free -m
                   total       used       free     shared    buffers     cached
      Mem:           384         67        316          0          0          0
      -/+ buffers/cache:          67        316
      Swap:            0          0          0
      top - 21:01:07 up 3:12, 1 user, load average: 0.24, 0.08, 0.03
      Tasks: 31 total, 2 running, 29 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 393216k total, 69832k used, 323384k free, 0k buffers
      Swap: 0k total, 0k used, 0k free, 0k cached
    Is there any other way to achieve my goal of uninstalling Plesk? Thanks.
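
    One workaround worth trying (a hedged suggestion, not a documented Plesk procedure): enumerate the matching packages with rpm -qa and erase them one at a time, which avoids building the single large yum transaction that is failing to allocate memory. A rough Python sketch is below; note that --nodeps bypasses dependency checks, so treat it as a last resort and keep backups.

      #!/usr/bin/env python3
      # Remove Plesk-related packages one by one instead of in a single yum
      # transaction (illustrative sketch; --nodeps skips dependency checks,
      # so use with care).
      import subprocess

      PREFIXES = ("sw-", "psa-", "plesk-")

      def matching_packages():
          out = subprocess.run(["rpm", "-qa"], capture_output=True, text=True).stdout
          return [p for p in out.splitlines() if p.startswith(PREFIXES)]

      if __name__ == "__main__":
          for pkg in matching_packages():
              print("removing", pkg)
              subprocess.run(["rpm", "-e", "--nodeps", pkg], check=False)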

    Read the article

  • How to get IE to open JavaScript as text

    - by Pete
    I am running IE 9. Up until sometime last week, if I put the URL of a JavaScript file in the address bar, it would show the JavaScript as text in the browser window. Now when I do that, it wants to download the JavaScript file. How can I revert to the previous handling? This is annoying since I'm developing a web application, and if I can get it to display the .js files as text in the browser, I can refresh the page to force the cache to update. Update: I've tested on several co-workers' machines. For some, browsing to .js files renders them in the browser (IE 9 in all cases). On others, it asks for a download. File associations don't seem to have any effect. On one co-worker's machine we tested with both IE and Chrome: IE wanted to download the file, but Chrome rendered it as text. This makes me think it's an IE issue and not an OS issue.

    Read the article

  • Silent install of Japanese Language Pack in Win7

    - by Doltknuckle
    Every year, due to re-imaging, I am forced to find a way to install the Japanese language pack on a collection of 30 computers. Each year I look for a way to automate this process, and each year I am forced to do it manually. Maybe this year will be different. Has anyone had any luck installing and configuring Far East language support for Windows 7 without user interaction? I have already downloaded KB972813 and have a way to get it out to the computers. What I normally do is this: Run the EXE, using the default settings. Open up language settings and create the JP keyboard. Configure the language bar settings. Copy the settings to the default user. Delete the local user cache. Sign the different user accounts in to make sure that the default settings are correct. This whole process takes about 10 minutes; multiply that by 30 machines and you are looking at a 5-hour process. If I can log into all of the computers at once, I can normally cut that down to about an hour. Any ideas would be appreciated. Thanks in advance.

    Read the article

  • Windows Server 2008 R2 grinds to a screeching halt during file copy operations

    - by skolima
    When my Windows Server 2008 R2 machine is performing any large disk operation (copying 10 GB files from one drive to another, copying a similar file over the network, merging Hyper-V snapshots, compressing large files), performance of the whole machine slows down terribly and everything becomes unresponsive. This is noticeable in any situation where the disk access is large enough not to fit in the cache. Are there any settings available for tuning this behaviour? I can accept slower file transfer if it gives me more responsiveness. System details: Dell OptiPlex 960, Core 2 Quad Q9650, 8 GB RAM, 2 SATA drives - 320 GB (ST3320418AS) and 1 TB (ST31000528AS), NCQ active on both, Intel 82564LM-3 Gigabit Ethernet, ATI HD 3450 graphics, Intel ICH10 bridge. We have multiple machines like this, and every one exhibits the same behaviour. I thought this was overkill for a workstation; apparently I was mistaken. Update: I guess I shouldn't have mentioned Hyper-V at all. The above configuration is a standard workstation setup at the company I work for; this is not a server of any kind. I have at most 3 virtual machines running, and usually I'm the only person accessing them. Nevertheless, the slowdown occurs even when no VMs are running. On a Linux machine I'd simply ionice the copy process and forget about it - is there any way to manage I/O priorities on Windows?
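
    As a stopgap until the underlying cause is found, a throttled copy keeps the dirty-page backlog small by pausing between chunks, trading copy speed for responsiveness. The Python sketch below is purely illustrative; the chunk size and pause are arbitrary starting points, not tuned values.

      #!/usr/bin/env python3
      # Throttled file copy (illustrative sketch): copy in modest chunks and
      # pause briefly between them so a large copy does not monopolize the
      # disk and the file cache.
      import sys
      import time

      CHUNK = 4 * 1024 * 1024   # 4 MB per chunk
      PAUSE = 0.05              # 50 ms pause between chunks

      def throttled_copy(src, dst):
          with open(src, "rb") as fin, open(dst, "wb") as fout:
              while True:
                  block = fin.read(CHUNK)
                  if not block:
                      break
                  fout.write(block)
                  fout.flush()
                  time.sleep(PAUSE)

      if __name__ == "__main__":
          throttled_copy(sys.argv[1], sys.argv[2])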

    Read the article

  • Slow DB Performance. Seems to be memory related.

    - by David
    I am seeing a poorly performing web app with a SQL 2005 backend. The DB is on a W2K3 machine with 4 GB RAM. When I run perfmon on it I see the following: page life expectancy is low, consistently under 300, while the buffer cache hit ratio is always 99%+. The target server memory is always 1618304 and the total server memory is always a number just below that. So it seems that it isn't grabbing enough of the available memory. I have AWE enabled, with the Lock Pages in Memory right granted to the SQL service account, and have set a maximum of 2.25 GB... but it doesn't go near that. When I restart the SQL service the page life expectancy goes much higher, 1000+, and the total server memory starts at 0 and slowly works its way back up to the previous limit. Then it hits the limit and the page life expectancy drops back down massively to under 300. So I'm guessing there is something limiting the amount of memory. Any ideas on what that would be and how I can fix it?

    Read the article

  • Should I completely turn off swap for a Linux web server?

    - by Poma
    Recently my friend told me that it is a good idea to turn off swap on Linux web servers with enough memory. My server has 12 GB and currently uses 4 GB (not counting cache and buffers) under peak load. His argument was that in a normal situation the server will never use all of its RAM, so the only way it can encounter an out-of-memory situation is due to some bug/DDoS/etc. If swap is turned off, the system will run out of memory, which will eventually crash the program hogging memory (most likely the web server process) and probably some other processes. If swap is turned on, it will eat both RAM and swap and eventually end in the same crash, but before that it will offload crucial processes like sshd to swap and start doing a lot of swap operations, resulting in a major slowdown. This way, when under DDoS, the system may go into a completely unusable state due to huge lags, and I will probably be unable to log in and kill the web server process or deny all incoming traffic (all but SSH). Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?
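
    Whatever the decision, with swap disabled it helps to watch memory headroom so the OOM killer does not strike without warning. The Python sketch below is a minimal, hypothetical watchdog; the 500 MB threshold and 30-second interval are assumptions, not recommendations.

      #!/usr/bin/env python3
      # Minimal memory-headroom watchdog for a swapless box (illustrative sketch).
      # Reads /proc/meminfo and warns when free memory plus reclaimable page
      # cache drops below an arbitrary threshold.
      import time

      THRESHOLD_KB = 500 * 1024  # warn below ~500 MB of headroom (arbitrary)

      def headroom_kb():
          info = {}
          with open("/proc/meminfo") as f:
              for line in f:
                  key, value = line.split(":", 1)
                  info[key] = int(value.strip().split()[0])  # values are in kB
          return info["MemFree"] + info.get("Buffers", 0) + info.get("Cached", 0)

      if __name__ == "__main__":
          while True:
              free = headroom_kb()
              if free < THRESHOLD_KB:
                  print(f"WARNING: only {free // 1024} MB of reclaimable memory left")
              time.sleep(30)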

    Read the article

  • Exchange 2010 UR3 - customizing OWA logon page

    - by STGdb
    I have an Exchange 2010 UR3 deployment for which I need to customize the OWA logon page. I've created a new LGNTOPL.GIF file to replace the existing one in the folder "C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.158.1\themes\resources". When I bring up OWA, I still get the original "Outlook Web App" logo. I've searched and found a couple of other instances of LGNTOPL.GIF in the directories "C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.123.3\themes\resources", "C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.146.0\themes\resources" and "C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\Current\themes\resources". I've replaced the LGNTOPL.GIF file in each of the above directories but got the same results. I've tried clearing my browser cache and even using multiple browsers on multiple PCs, but the results are the same. I've even tried making my GIF file the same pixel size as the original LGNTOPL.GIF logo, but still the same results. I've tried restarting IIS on the CAS server and restarting the server itself, with the same results. Has something changed with Exchange 2010 UR3 when it comes to customizing OWA? I don't see anything documented about any change to OWA customization. Thanks

    Read the article

  • Why do the host and VMware guests fail to get each other's MAC?

    - by Georgiy Nemtsov
    I have a Windows 7 64-bit host running VMware Workstation 8 with two CentOS 6.3 guests. All guest adapters are bridged with statically assigned IPs. Connectivity between host and guests was fine for many days running this setup, and today, while I was working, the host and guests suddenly became unreachable to each other, while both host and guests could still connect to the internet and to other machines on my networks. On the guests, arp -a showed for the host IP address:
      ? (192.168.1.3) <incomplete> on eth0
    On the host, arp -a showed for the guest IPs:
      192.168.1.19    00-00-00-00-00-00
      192.168.1.20    00-00-00-00-00-00
    All other ARP records were OK. Deleting the ARP caches didn't help, either on the guests or on the host. After that I disabled and re-enabled the network adapter on my Windows 7 host, and the problem was gone; arp -a now shows correct MACs on all instances. I suppose the issue was related to ARP cache expiry: for some reason the host and guests couldn't resolve each other's MACs. Does anybody know what this is all about? I am preparing the guests for production and don't want to face such problems in the future! I was also surprised that, while investigating this issue from another machine that could connect to both the guests and the host, arp -a on that machine showed the same MAC for the host and the two guests.
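
    Not an answer to the root cause, but a small watchdog along the lines of the hypothetical Python sketch below (the watched IP list is a placeholder) can flag the condition early by shelling out to arp and looking for incomplete or all-zero entries.

      #!/usr/bin/env python3
      # Watch the local ARP table for broken entries (illustrative sketch).
      # Parses `arp -a` output, so the same idea works on the Windows host
      # and the CentOS guests; the watched IPs below are placeholders.
      import subprocess
      import time

      WATCHED_IPS = ["192.168.1.3", "192.168.1.19", "192.168.1.20"]  # host + guests

      def broken_entries():
          out = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
          bad = []
          for line in out.splitlines():
              if any(ip in line for ip in WATCHED_IPS) and (
                      "incomplete" in line
                      or "00-00-00-00-00-00" in line
                      or "00:00:00:00:00:00" in line):
                  bad.append(line.strip())
          return bad

      if __name__ == "__main__":
          while True:
              for entry in broken_entries():
                  print("stale/incomplete ARP entry:", entry)
              time.sleep(60)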

    Read the article

  • System requirements for running Windows 8 (basic office use) in VirtualBox (Ubuntu as host OS)

    - by Tor Thommesen
    I want to run Windows 8 as a guest OS with VirtualBox on some ThinkPad (haven't bought one yet) running Ubuntu 12.04. Apart from virtualizing Windows 8 (mostly just for use with the Office suite), my needs are very modest; I don't need much more than Emacs and a browser. What I'd like to know is what kind of specs will be necessary to run Windows 8 well as a VM, using the Office apps. It would be a shame to waste money on overpowered hardware. Are there any official guidelines from Oracle or Microsoft on this? Would this Lenovo X220, for example, be sufficiently strong? The specs below were taken from this review.
      Intel Core i5-2520M dual-core processor (2.5GHz, 3MB cache, 3.2GHz Turbo frequency)
      Windows 7 Professional (64-bit)
      12.5-inch Premium HD (1366 x 768) LED Backlit Display (IPS)
      Intel Integrated HD Graphics
      4GB DDR3 (1333MHz)
      320GB Hitachi Travelstar hard drive (Z7K320)
      Intel Centrino Advanced-N 6205 (Taylor Peak) 2x2 AGN wireless card
      Intel 82579LM Gigabit Ethernet
      720p High Definition webcam
      Fingerprint reader
      6-cell battery (63Wh) and optional slice battery (65Wh)
      Dimensions: 12 (L) x 8.2 (W) x 0.5-1.5 (H) inches with 6-cell battery
      Weight: 3.5 pounds with 6-cell battery, 4.875 pounds with 6-cell battery and optional external battery slice
      Price as configured: $1,299.00 (starting at $979.00)

    Read the article

  • How to detect fake RAM

    - by Michael
    I just bought a virtual server which should have 2 GB of RAM. Now I have a server with 4 GB, which looks very strange to me. I think it is just virtual RAM. dmidecode only outputs "/dev/mem: Operation not permitted". How can I check whether it is real RAM or just virtual? free -m outputs:
                   total       used       free     shared    buffers     cached
      Mem:          4093        364       3728          0          0        346
      -/+ buffers/cache:         18       4074
      Swap:            0          0          0
    Output from cat /proc/user_beancounters:
      Version: 2.5
             uid  resource        held     maxheld     barrier       limit  failcnt
             137: kmemsize     8922287    10194944  2145910784  2145910784        0
                  lockedpages        0           0      523904      523904        0
                  privvmpages    13387       59112  9223372036854775807  9223372036854775807        0
                  shmpages         769         785  9223372036854775807  9223372036854775807        0
                  dummy              0           0  9223372036854775807  9223372036854775807        0
                  numproc           22          54  9223372036854775807  9223372036854775807        0
                  physpages      93377      106010           0     1047808        0
                  vmguarpages        0           0  9223372036854775807  9223372036854775807        0
                  oomguarpages    2471        2473  9223372036854775807  9223372036854775807        0
                  numtcpsock         5          21  9223372036854775807  9223372036854775807        0
                  numflock           4          13  9223372036854775807  9223372036854775807        0
                  numpty             1           1  9223372036854775807  9223372036854775807        0
                  numsiginfo         0          39  9223372036854775807  9223372036854775807        0
                  tcpsndbuf     102592      381632  9223372036854775807  9223372036854775807        0
                  tcprcvbuf      81920     4820184  9223372036854775807  9223372036854775807        0
                  othersockbuf    4624       61632  9223372036854775807  9223372036854775807        0
                  dgramrcvbuf        0        9248  9223372036854775807  9223372036854775807        0
                  numothersock      39          56  9223372036854775807  9223372036854775807        0
                  dcachesize   4178917     4232732  1072955392  1072955392        0
                  numfile          378         535  9223372036854775807  9223372036854775807        0
                  dummy              0           0  9223372036854775807  9223372036854775807        0
                  dummy              0           0  9223372036854775807  9223372036854775807        0
                  dummy              0           0  9223372036854775807  9223372036854775807        0
                  numiptent         24          24  9223372036854775807  9223372036854775807        0
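
    For what it's worth, /proc/user_beancounters already answers part of the question: this is an OpenVZ container, and the physpages limit (counted in 4 KiB pages) is usually what free reports as total memory. Here that limit is 1047808 pages, roughly 4093 MiB, which matches the free output, so the 4 GB is the container's configured allocation rather than physical DIMMs you can inspect. A small Python sketch (an illustration, not a definitive test) that does the conversion:

      #!/usr/bin/env python3
      # Convert the OpenVZ physpages limit to MiB (illustrative sketch).
      # user_beancounters values are counted in 4 KiB pages.
      PAGE_SIZE = 4096

      def physpages_limit_mib(path="/proc/user_beancounters"):
          with open(path) as f:
              for line in f:
                  fields = line.split()
                  # rows look like: resource held maxheld barrier limit failcnt
                  if fields and fields[0] == "physpages":
                      return int(fields[4]) * PAGE_SIZE / (1024 * 1024)
          return None

      if __name__ == "__main__":
          mib = physpages_limit_mib()
          print(f"physpages limit: {mib:.0f} MiB" if mib else "no physpages row found")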

    Read the article

  • Conflicts with file from package mysql-5.0.77

    - by Whiteyq
    I'm trying to install APC (Alternative PHP Cache) on a CentOS dedicated server. I have everything done apart from configuring phpize. Running yum -y install php-devel gives me the following error: "file /usr/share/mysql/charsets/Index.xml from install of mysql-libs-5.1.57-1.el5.art.x86_64 conflicts with file from package mysql-5.0.77-4.el5_5.3.i386", and so on for other languages. So I think the MySQL version I have is too old and I more than likely need to upgrade MySQL to version 5.1. I'm reluctant to do this because a) it's a live server (although only 3/4 domains) and b) I've read I'll need to recompile PHP if I upgrade. To add to this, I have Plesk installed for managing domains, which might also need reinstalling/reconfiguring. Sorry for the long intro, but it's my first post and it's best to give as much info as possible. So my question is basically: is there any way I can run yum -y install php-devel to get phpize working and complete the installation of APC with the version of MySQL I currently have installed, i.e. 5.0.77?

    Read the article

  • How to make lighttpd respect X-Forwarded-Proto when constructing redirects for directories?

    - by Tim Landscheidt
    We have an nginx proxy at tools.wmflabs.org that receives requests by http and https and passes them by http on to lighttpds on a grid (one lighttpd per top-level path). Requests that reach the proxy by https are received by the lighttpds like this:
      HEAD /lighttpd-test/test HTTP/1.1
      Connection: close
      Host: tools.wmflabs.org
      X-Forwarded-Proto: https
      X-Original-URI: /lighttpd-test/test
      User-Agent: curl/7.29.0
      Accept: */*
    This works great except in the case where the URL references a physical directory and misses the trailing slash ("/"), as lighttpd then generates a redirect to the http URL:
      HTTP/1.1 301 Moved Permanently
      Location: http://tools.wmflabs.org/lighttpd-test/test/
      Connection: close
      Date: Fri, 06 Jun 2014 14:50:29 GMT
      Server: lighttpd/1.4.28
    The relevant parts of our lighttpd configurations are:
      server.modules = (
        "mod_setenv",
        "mod_access",
        "mod_accesslog",
        "mod_alias",
        "mod_compress",
        "mod_redirect",
        "mod_rewrite",
        "mod_fastcgi",
        "mod_cgi",
      )
      server.port = $port
      [...]
      server.document-root = "$home/public_html"
      [...]
      server.follow-symlink = "enable"
      [...]
      server.stat-cache-engine = "fam"
      ssl.engine = "disable"
      alias.url = ( "/$tool" => "$home/public_html/" )
      index-file.names = ( "index.php", "index.html", "index.htm" )
      dir-listing.encoding = "utf-8"
      server.dir-listing = "disable"
      url.access-deny = ( "~", ".inc" )
      [...]
    How can I make lighttpd respect X-Forwarded-Proto and use it when constructing redirects for directories? I'm aware that I could try to tackle this in nginx, but I'd prefer to fix it in lighttpd.

    Read the article

  • Linux kernel buffer memory is zero

    - by user64772
    Hi all. There is one question I can't find an answer to on Google. I have many Linux boxes, mostly with SLES or openSUSE, different versions and kernels. On some of them I am facing a problem with slow Oracle transactions. It happens from time to time, and when I log in to the box at that moment I see that Oracle is blocked in the kernel function sync_page:
      # while :; do ps axo stat,pid,cmd,wchan | egrep '^D|^R'; echo --; sleep 5; done
      D    3483 hald-addon-storage: polling  ide_do_drive_cmd
      Ds   4635 ora_dbw0_orcl                sync_page
      Ds   4637 ora_lgwr_orcl                sync_page
      Ds   4639 ora_ckpt_orcl                sync_page
      D   11210 oracleorcl (LOCAL=NO)        sync_page
      D   12457 [smtpd]                      sync_page
      R+  12458 ps axo stat,pid,cmd,wchan    -
      --
      Ds   4635 ora_dbw0_orcl                sync_page
      Ds   4637 ora_lgwr_orcl                sync_page
      Ds   4639 ora_ckpt_orcl                sync_page
      D   11210 oracleorcl (LOCAL=NO)        sync_page
      R+  12501 ps axo stat,pid,cmd,wchan    -
      --
      Ds   4635 ora_dbw0_orcl                sync_page
      Ds   4637 ora_lgwr_orcl                sync_page
      Ds   4639 ora_ckpt_orcl                sync_page
      D   11210 oracleorcl (LOCAL=NO)        sync_page
      R+  12535 ps axo stat,pid,cmd,wchan    -
      --
      Ds   4635 ora_dbw0_orcl                sync_page
      Ds   4637 ora_lgwr_orcl                sync_page
      Ds   4639 ora_ckpt_orcl                sync_page
      D   11210 oracleorcl (LOCAL=NO)        sync_page
      R+  12570 ps axo stat,pid,cmd,wchan    -
      --
    So I think the box has run out of memory for disk buffers, but memory looks fine:
                   total       used       free     shared    buffers     cached
      Mem:       4149084    3994552     154532          0          0    2424328
      -/+ buffers/cache:    1570224    2578860
      Swap:      3148700     750696    2398004
    I think this is the problem: the buffers are zero, so we must write directly to disk. But why are the buffers zero? I tried to google it and found nothing. Can anyone help?
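
    Not an answer, but a hedged Python variant of the shell loop above that also samples the buffer/dirty/writeback counters from /proc/meminfo can help correlate the stalls with writeback pressure (purely an illustrative sketch):

      #!/usr/bin/env python3
      # Sample D-state (uninterruptible) processes together with buffer,
      # dirty and writeback counters (illustrative sketch).
      import os
      import time

      def meminfo(keys=("Buffers", "Cached", "Dirty", "Writeback")):
          values = {}
          with open("/proc/meminfo") as f:
              for line in f:
                  name, rest = line.split(":", 1)
                  if name in keys:
                      values[name] = rest.strip()
          return values

      def d_state_processes():
          procs = []
          for pid in filter(str.isdigit, os.listdir("/proc")):
              try:
                  with open(f"/proc/{pid}/stat") as f:
                      raw = f.read()
                  # comm may contain spaces, so split on the closing parenthesis
                  comm = raw[raw.index("(") + 1:raw.rindex(")")]
                  state = raw[raw.rindex(")") + 1:].split()[0]
                  if state == "D":
                      procs.append((pid, comm))
              except (OSError, ValueError):
                  continue  # process exited while we were looking
          return procs

      if __name__ == "__main__":
          while True:
              print(meminfo(), d_state_processes())
              time.sleep(5)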

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?
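
    One mitigation, sketched here under the assumption that A can tolerate slower writes, is to make process A write synchronously so that each update reaches the server before the next one starts; combined with mounting the share on A with an option such as sync or noac, this removes most of the client-side caching and reordering. A hypothetical Python version of the writer (the F1/F2 names are the ones from the question):

      #!/usr/bin/env python3
      # Synchronous writer sketch for process A (illustrative, not the only way).
      # O_SYNC plus an explicit fsync pushes each update to the NFS server
      # before the next write begins, which keeps the order observable at B
      # closer to the order A produced.
      import os

      def write_head(path, data: bytes):
          fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
          try:
              os.pwrite(fd, data, 0)   # overwrite at the head of the file
              os.fsync(fd)             # block until the server has the data
          finally:
              os.close(fd)

      if __name__ == "__main__":
          for i in range(10):
              write_head("F1", f"update {i}\n".encode())
              write_head("F2", f"update {i}\n".encode())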

    Read the article

  • Building a small server farm

    - by RayQuang
    Hi, I am planning to set up a tech startup company that will provide web application solutions. Eventually we hope to diversify into different areas such as social media or other services. For now we plan on running a high-demand website (from 1,000 to 10,000 users in the first year) running the application. This includes a MySQL database backend, email, and development servers. My question, then, is what type of server arrangement will work best; that is to say, should I have a small cluster of ultra-high-power machines (e.g. top-of-the-range Xeons with 12 GB RAM), or would it be better to have more, less powerful servers behind a load balancer? Should I go for 1-2U rack-mounted servers, or would tower servers be better for maintainability? Finally, I would also like to know what kind of Internet connection and router I would need. I currently have 10 Mbit down and barely 1 Mbit up, but soon our area will have a fiber optic connection with international speeds of up to 25 Mbit/s. Thanks in advance, RayQuang. UPDATE: Sorry, I forgot to mention it: the platform I will be using is PHP with the APC code cache, probably running Debian.

    Read the article

  • Nginx and 1000 WordPress Installs - Optimization

    - by GTE
    Hey, I'm trying to create a rather unusual (IMO) configuration where I have: nginx, php-fastcgi, MySQL, and 1000 separate WordPress installs (with WP Super Cache). Each WP install corresponds to a separate subdomain. Furthermore, I have 1000 cron jobs being called every hour that in turn call a WP plugin (using wget) which retrieves data from an API and posts it to the respective blog. This is all being run on a virtual server with 1024 MB of RAM, 4 shared processors, etc. The server is not doing well, especially during the times the cron jobs are being executed: nginx constantly throws 504 errors and the site has significant lag. 1) Am I crazy for having 1000 individual WP installs? Should I be using WP-MU, and will this help significantly? (I have certain plugin restrictions for which I prefer separate installs, but I could switch if need be.) 2) Instead of having 1000 unique cron jobs, should I be calling, say, a bash script that will then process the 1000 HTTP requests I need? Could these be made one after another instead of all at once? (A sketch of this idea follows below.) 3) Any other kind of suggestion you may have for optimization? Should I be proxying to Apache instead of just using nginx, etc.? Any kind of advice would be appreciated. Thanks in advance
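
    For question 2, one option (purely a sketch - the subdomain pattern, plugin endpoint and worker count are made-up placeholders) is to replace the 1000 separate cron jobs with a single job that walks the blogs with a small, bounded worker pool, so the server never sees more than a handful of plugin requests at once:

      #!/usr/bin/env python3
      # Trigger the per-blog plugin endpoint with bounded concurrency
      # (illustrative sketch; subdomains and endpoint are assumptions).
      from concurrent.futures import ThreadPoolExecutor
      import urllib.request

      BLOGS = [f"blog{i}.example.com" for i in range(1, 1001)]  # placeholder subdomains
      ENDPOINT = "/wp-content/plugins/my-importer/fetch.php"     # hypothetical plugin URL
      MAX_WORKERS = 5                                            # tune to what the box can handle

      def hit(host):
          url = f"http://{host}{ENDPOINT}"
          try:
              with urllib.request.urlopen(url, timeout=60) as resp:
                  return host, resp.status
          except Exception as exc:
              return host, f"failed: {exc}"

      if __name__ == "__main__":
          with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
              for host, status in pool.map(hit, BLOGS):
                  print(host, status)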

    Read the article

  • Trouble Starting MySQL Community Server on Windows 7

    - by CodeAngel
    I have installed NetBeans 7 on Windows 7. In addition, MySQL Community Server 5.6.12 is installed with the MSI installer on the same PC, and the MySQL server is integrated with the NetBeans IDE. However, it is not possible to start or stop the MySQL server from the command prompt or from the NetBeans IDE; I am only able to start or stop the server from the Windows 7 Services tool. Also, it is difficult to run SQL queries from the NetBeans IDE even though it shows there is a connection to the MySQL server. I have added the my.ini file to the install directory of the MySQL server, that is: C:\Program Files\MySQL\MySQL Server 5.6. Below is the my.ini file:
      # For advice on how to change settings please see
      # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html
      # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
      # *** default location during install, and will be replaced if you
      # *** upgrade to a newer version of MySQL.
      [mysqld]
      # Remove leading # and set to the amount of RAM for the most important data
      # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
      # innodb_buffer_pool_size = 128M
      # Remove leading # to turn on a very important data integrity option: logging
      # changes to the binary log between backups.
      # log_bin
      # These are commonly set, remove the # and set as required.
      # basedir = .....
      # datadir = .....
      port = 3306
      # server_id = .....
      # Remove leading # to set options mainly useful for reporting servers.
      # The server defaults are faster for transactions and fast SELECTs.
      # Adjust sizes as needed, experiment to find the optimal values.
      # join_buffer_size = 128M
      # sort_buffer_size = 2M
      # read_rnd_buffer_size = 2M
      sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
    Any suggestions are welcome.

    Read the article

  • Big and reaaaally strange problem with a web server (host InMotion Hosting)

    - by altar
    Hi. I have a terrible problem that I have been trying to solve for three days: I browse my own web site and after a while I cannot access the web site AT ALL. I can only see a 501 error message: "Method Not Implemented. GET to / not supported. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request." Once I get that error the site is totally and permanently inaccessible in that browser!! Rebooting, restarting the browser, clearing the cache, clearing all history and cookies, etc. do not help. I have reproduced it on 4 different computers. Three computers are in one city, the 4th is in another city, on two different ISPs. One computer runs Linux, the others run Windows (XP and 2000). The browsers are FF 3 to FF 3.5 and IE 8. The error is ALMOST reproducible on demand (for me at least). It appears when I browse the forum under certain circumstances. I don't know what these circumstances are, but if I browse long enough (10 seconds to 5 minutes) it eventually appears. Just to make it clear: once the error appears (while browsing the forum), the whole web site becomes inaccessible, not only the forum! My host is not willing to help because they say they cannot reproduce the error. I sent screenshots but they don't care. NEWS: Resetting the browser's settings from 'Tools > Clear private data' didn't work. However, when I cleared the same settings (more exactly, cookies) from the menu that appears when you right-click the website's icon, it worked. So it was something related to a cookie, BUT it manifests in all browsers (FF, IE, Opera), so it cannot be a browser-related problem.

    Read the article

  • IE8 page reload hangs

    - by Rod
    When a 7 MB HTM file is first opened (by clicking on the file icon or using Open With), both IE8 and Firefox display the file quickly. After the file is closed, Firefox will reopen it quickly, but IE8 appears to hang during the reopen. Clearing the IE cache does not help. However, IE will reopen the file quickly again only if the File/Open/Browse feature of the menu bar is used (clicking on the file icon can be used only once between computer reboots). Testing suggests that the problem relates to the number of HTML hyperlinks pointing to another part of the file. There are many hyperlinks, but they are not a problem during the first load of the document (between computer reboots). What needs to be fixed to avoid use of the workaround? Using Windows XP SP3. Update 6/23/12 - Controlled testing shows that the number of hyperlinks is not the problem. The way this large file is opened is the difference: 1) from the IE menu bar, File/Open/Browse is consistent and fast (but not as fast as FF); 2) clicking on the file name in the folder (even when IE is the default program for this file type) causes a much-delayed load of the file. Creating a smaller file demonstrates the delayed load, but verifies that the load eventually occurs.

    Read the article

  • Can't reach only certain websites from my Wifi (with MacBook and iPhone)

    - by mellin
    I can't access certain websites from either my MacBook or my iPhone when connected to my Wifi. The same website can be opened from another Windows computer connected to the same Wifi. This is what happens when I try to ping it:
      PING ilpost.it (151.1.175.113): 56 data bytes
      Request timeout for icmp_seq 0
      Request timeout for icmp_seq 1
      Request timeout for icmp_seq 2
      ...
    And when I try to traceroute it:
      host-001:~ j$ traceroute www.ilpost.it
      traceroute to ilpost.it (151.1.175.113), 64 hops max, 52 byte packets
       1  vodafonedslrouter (192.168.1.1)  2.965 ms  0.743 ms  0.745 ms
       2  * 2.96.54.77.rev.vodafone.pt (77.54.96.2)  12.076 ms  10.871 ms
       3  77.41.30.213.rev.vodafone.pt (213.30.41.77)  14.145 ms  10.693 ms  11.960 ms
       4  85.205.11.49 (85.205.11.49)  9.658 ms  8.946 ms  9.085 ms
       5  85.205.13.105 (85.205.13.105)  57.497 ms  57.621 ms  48.080 ms
       6  188.111.129.17 (188.111.129.17)  49.483 ms  51.338 ms  48.852 ms
       7  85.205.25.174 (85.205.25.174)  47.891 ms  49.219 ms  47.821 ms
       8  * * *
       9  * * *
      10  * * *
      11  * * *
    I've flushed my DNS cache but nothing changed. This is quite dramatic, as it seems to depend on the 85.205.25.174 hop and I don't know how to avoid it. I should add that 3 days ago everything worked fine; then it stopped.
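
    One quick check (a hypothetical sketch using the IP and hostname from the output above): since ICMP dies after hop 7, it is worth testing whether plain TCP to the web ports still works from the affected machines, which helps separate a DNS problem from a routing or filtering problem.

      #!/usr/bin/env python3
      # Reachability check that bypasses ICMP (illustrative sketch).
      # Tries TCP connections to the web ports by IP and by hostname.
      import socket

      TARGETS = ["151.1.175.113", "www.ilpost.it"]
      PORTS = [80, 443]

      for target in TARGETS:
          for port in PORTS:
              try:
                  with socket.create_connection((target, port), timeout=5):
                      print(f"{target}:{port} TCP connect OK")
              except OSError as exc:
                  print(f"{target}:{port} failed: {exc}")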

    Read the article

  • Building applications with WPF, MVVM and Prism(aka CAG)

    - by skjagini
    In this article I am going to walk through an application using WPF and Prism (aka composite application guidance, CAG) which simulates engaging a taxi (cab).  The rules are simple, the app would have3 screens A login screen to authenticate the user An information screen. A screen to engage the cab and roam around and calculating the total fare Metered Rate of Fare The meter is required to be engaged when a cab is occupied by anyone $3.00 upon entry $0.35 for each additional unit The unit fare is: one-fifth of a mile, when the cab is traveling at 6 miles an hour or more; or 60 seconds when not in motion or traveling at less than 12 miles per hour. Night surcharge of $.50 after 8:00 PM & before 6:00 AM Peak hour Weekday Surcharge of $1.00 Monday - Friday after 4:00 PM & before 8:00 PM New York State Tax Surcharge of $.50 per ride. Example: Friday (2010-10-08) 5:30pm Start at Lexington Ave & E 57th St End at Irving Pl & E 15th St Start = $3.00 Travels 2 miles at less than 6 mph for 15 minutes = $3.50 Travels at more than 12 mph for 5 minutes = $1.75 Peak hour Weekday Surcharge = $1.00 (ride started at 5:30 pm) New York State Tax Surcharge = $0.50 Before we dive into the app, I would like to give brief description about the framework.  If you want to jump on to the source code, scroll all the way to the end of the post. MVVM MVVM pattern is in no way related to the usage of PRISM in your application and should be considered if you are using WPF irrespective of PRISM or not. Lets say you are not familiar with MVVM, your typical UI would involve adding some UI controls like text boxes, a button, double clicking on the button,  generating event handler, calling a method from business layer and updating the user interface, it works most of the time for developing small scale applications. The problem with this approach is that there is some amount of code specific to business logic wrapped in UI specific code which is hard to unit test it, mock it and MVVM helps to solve the exact problem. MVVM stands for Model(M) – View(V) – ViewModel(VM),  based on the interactions with in the three parties it should be called VVMM,  MVVM sounds more like MVC (Model-View-Controller) so the name. Why it should be called VVMM: View – View Model - Model WPF allows to create user interfaces using XAML and MVVM takes it to the next level by allowing complete separation of user interface and business logic. In WPF each view will have a property, DataContext when set to an instance of a class (which happens to be your view model) provides the data the view is interested in, i.e., view interacts with view model and at the same time view model interacts with view through DataContext. Sujith, if view and view model are interacting directly with each other how does MVVM is helping me separation of concerns? Well, the catch is DataContext is of type Object, since it is of type object view doesn’t know exact type of view model allowing views and views models to be loosely coupled. View models aggregate data from models (data access layer, services, etc) and make it available for views through properties, methods etc, i.e., View Models interact with Models. PRISM Prism is provided by Microsoft Patterns and Practices team and it can be downloaded from codeplex for source code,  samples and documentation on msdn.  The name composite implies, to compose user interface from different modules (views) without direct dependencies on each other, again allowing  loosely coupled development. 
Well Sujith, I can already do that with user controls, why shall I learn another framework?  That’s correct, you can decouple using user controls, but you still have to manage some amount of coupling, like how to do you communicate between the controls, how do you subscribe/unsubscribe, loading/unloading views dynamically. Prism is not a replacement for user controls, provides the following features which greatly help in designing the composite applications. Dependency Injection (DI)/ Inversion of Control (IoC) Modules Regions Event Aggregator  Commands Simply put, MVVM helps building a single view and Prism helps building an application using the views There are other open source alternatives to Prism, like MVVMLight, Cinch, take a look at them as well. Lets dig into the source code.  1. Solution The solution is made of the following projects Framework: Holds the common functionality in building applications using WPF and Prism TaxiClient: Start up project, boot strapping and app styling TaxiCommon: Helps with the business logic TaxiModules: Holds the meat of the application with views and view models TaxiTests: To test the application 2. DI / IoC Dependency Injection (DI) as the name implies refers to injecting dependencies and Inversion of Control (IoC) means the calling code has no direct control on the dependencies, opposite of normal way of programming where dependencies are passed by caller, i.e inversion; aside from some differences in terminology the concept is same in both the cases. The idea behind DI/IoC pattern is to reduce the amount of direct coupling between different components of the application, the higher the dependency the more tightly coupled the application resulting in code which is hard to modify, unit test and mock.  Initializing Dependency Injection through BootStrapper TaxiClient is the starting project of the solution and App (App.xaml)  is the starting class that gets called when you run the application. From the App’s OnStartup method we will invoke BootStrapper.   namespace TaxiClient { /// <summary> /// Interaction logic for App.xaml /// </summary> public partial class App : Application { protected override void OnStartup(StartupEventArgs e) { base.OnStartup(e);   (new BootStrapper()).Run(); } } } BootStrapper is your contact point for initializing the application including dependency injection, creating Shell and other frameworks. We are going to use Unity for DI and there are lot of open source DI frameworks like Spring.Net, StructureMap etc with different feature set  and you can choose a framework based on your preferences. Note that Prism comes with in built support for Unity, for example we are deriving from UnityBootStrapper in our case and for any other DI framework you have to extend the Prism appropriately   namespace TaxiClient { public class BootStrapper: UnityBootstrapper { protected override IModuleCatalog CreateModuleCatalog() { return new ConfigurationModuleCatalog(); } protected override DependencyObject CreateShell() { Framework.FrameworkBootStrapper.Run(Container, Application.Current.Dispatcher);   Shell shell = new Shell(); shell.ResizeMode = ResizeMode.NoResize; shell.Show();   return shell; } } } Lets take a look into  FrameworkBootStrapper to check out how to register with unity container. 
namespace Framework { public class FrameworkBootStrapper { public static void Run(IUnityContainer container, Dispatcher dispatcher) { UIDispatcher uiDispatcher = new UIDispatcher(dispatcher); container.RegisterInstance<IDispatcherService>(uiDispatcher);   container.RegisterType<IInjectSingleViewService, InjectSingleViewService>( new ContainerControlledLifetimeManager());   . . . } } } In the above code we are registering two components with unity container. You shall observe that we are following two different approaches, RegisterInstance and RegisterType.  With RegisterInstance we are registering an existing instance and the same instance will be returned for every request made for IDispatcherService   and with RegisterType we are requesting unity container to create an instance for us when required, i.e., when I request for an instance for IInjectSingleViewService, unity will create/return an instance of InjectSingleViewService class and with RegisterType we can configure the life time of the instance being created. With ContaienrControllerLifetimeManager, the unity container caches the instance and reuses for any subsequent requests, without recreating a new instance. Lets take a look into FareViewModel.cs and it’s constructor. The constructor takes one parameter IEventAggregator and if you try to find all references in your solution for IEventAggregator, you will not find a single location where an instance of EventAggregator is passed directly to the constructor. The compiler still finds an instance and works fine because Prism is already configured when used with Unity container to return an instance of EventAggregator when requested for IEventAggregator and in this particular case it is called constructor injection. public class FareViewModel:ObservableBase, IDataErrorInfo { ... private IEventAggregator _eventAggregator;   public FareViewModel(IEventAggregator eventAggregator) { _eventAggregator = eventAggregator; InitializePropertyNames(); InitializeModel(); PropertyChanged += OnPropertyChanged; } ... 3. Shell Shells are very similar in operation to Master Pages in asp.net or MDI in Windows Forms. And shells contain regions which display the views, you can have as many regions as you wish in a given view. You can also nest regions. i.e, one region can load a view which in itself may contain other regions. We have to create a shell at the start of the application and are doing it by overriding CreateShell method from BootStrapper From the following Shell.xaml you shall notice that we have two content controls with Region names as ‘MenuRegion’ and ‘MainRegion’.  The idea here is that you can inject any user controls into the regions dynamically, i.e., a Menu User Control for MenuRegion and based on the user action you can load appropriate view into MainRegion.    
<Window x:Class="TaxiClient.Shell" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:Regions="clr-namespace:Microsoft.Practices.Prism.Regions;assembly=Microsoft.Practices.Prism" Title="Taxi" Height="370" Width="800"> <Grid Margin="2"> <ContentControl Regions:RegionManager.RegionName="MenuRegion" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" />   <ContentControl Grid.Row="1" Regions:RegionManager.RegionName="MainRegion" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" /> <!--<Border Grid.ColumnSpan="2" BorderThickness="2" CornerRadius="3" BorderBrush="LightBlue" />-->   </Grid> </Window> 4. Modules Prism provides the ability to build composite applications and modules play an important role in it. For example if you are building a Mortgage Loan Processor application with 3 components, i.e. customer’s credit history,  existing mortgages, new home/loan information; and consider that the customer’s credit history component involves gathering data about his/her address, background information, job details etc. The idea here using Prism modules is to separate the implementation of these 3 components into their own visual studio projects allowing to build components with no dependency on each other and independently. If we need to add another component to the application, the component can be developed by in house team or some other team in the organization by starting with a new Visual Studio project and adding to the solution at the run time with very little knowledge about the application. Prism modules are defined by implementing the IModule interface and each visual studio project to be considered as a module should implement the IModule interface.  From the BootStrapper.cs you shall observe that we are overriding the method by returning a ConfiguratingModuleCatalog which returns the modules that are registered for the application using the app.config file  and you can also add module using code. Lets take a look into configuration file.   <?xml version="1.0"?> <configuration> <configSections> <section name="modules" type="Microsoft.Practices.Prism.Modularity.ModulesConfigurationSection, Microsoft.Practices.Prism"/> </configSections> <modules> <module assemblyFile="TaxiModules.dll" moduleType="TaxiModules.ModuleInitializer, TaxiModules" moduleName="TaxiModules"/> </modules> </configuration> Here we are adding TaxiModules project to our solution and TaxiModules.ModuleInitializer implements IModule interface   5. Module Mapper With Prism modules you can dynamically add or remove modules from the regions, apart from that Prism also provides API to control adding/removing the views from a region within the same module. Taxi Information Screen: Engage the Taxi Screen: The sample application has two screens, ‘Taxi Information’ and ‘Engage the Taxi’ and they both reside in same module, TaxiModules. ‘Engage the Taxi’ is again made of two user controls, FareView on the left and TotalView on the right. We have created a Shell with two regions, MenuRegion and MainRegion with menu loaded into MenuRegion. We can create a wrapper user control called EngageTheTaxi made of FareView and TotalView and load either TaxiInfo or EngageTheTaxi into MainRegion based on the user action. 
Though it will work it tightly binds the user controls and for every combination of user controls, we need to create a dummy wrapper control to contain them. Instead we can apply the principles we learned so far from Shell/regions and introduce another template (LeftAndRightRegionView.xaml) made of two regions Region1 (left) and Region2 (right) and load  FareView and TotalView dynamically.  To help with loading of the views dynamically I have introduce an helper an interface, IInjectSingleViewService,  idea suggested by Mike Taulty, a must read blog for .Net developers. using System; using System.Collections.Generic; using System.ComponentModel;   namespace Framework.PresentationUtility.Navigation {   public interface IInjectSingleViewService : INotifyPropertyChanged { IEnumerable<CommandViewDefinition> Commands { get; } IEnumerable<ModuleViewDefinition> Modules { get; }   void RegisterViewForRegion(string commandName, string viewName, string regionName, Type viewType); void ClearViewFromRegion(string viewName, string regionName); void RegisterModule(string moduleName, IList<ModuleMapper> moduleMappers); } } The Interface declares three methods to work with views: RegisterViewForRegion: Registers a view with a particular region. You can register multiple views and their regions under one command.  When this particular command is invoked all the views registered under it will be loaded into their regions. ClearViewFromRegion: To unload a specific view from a region. RegisterModule: The idea is when a command is invoked you can load the UI with set of controls in their default position and based on the user interaction, you can load different contols in to different regions on the fly.  And it is supported ModuleViewDefinition and ModuleMappers as shown below. namespace Framework.PresentationUtility.Navigation { public class ModuleViewDefinition { public string ModuleName { get; set; } public IList<ModuleMapper> ModuleMappers; public ICommand Command { get; set; } }   public class ModuleMapper { public string ViewName { get; set; } public string RegionName { get; set; } public Type ViewType { get; set; } } } 6. Event Aggregator Prism event aggregator enables messaging between components as in Observable pattern, Notifier notifies the Observer which receives notification it is interested in. When it comes to Observable pattern, Observer has to unsubscribes for notifications when it no longer interested in notifications, which allows the Notifier to remove the Observer’s reference from it’s local cache. Though .Net has managed garbage collection it cannot remove inactive the instances referenced by an active instance resulting in memory leak, keeping the Observers in memory as long as Notifier stays in memory.  Developers have to be very careful to unsubscribe when necessary and it often gets overlooked, to overcome these problems Prism Event Aggregator uses weak references to cache the reference (Observer in this case)  and releases the reference (memory) once the instance goes out of scope. Using event aggregator is very simple, declare a generic type of CompositePresenationEvent by inheriting from it. using Microsoft.Practices.Prism.Events; using TaxiCommon.BAO;   namespace TaxiCommon.CompositeEvents { public class TaxiOnMoveEvent:CompositePresentationEvent<TaxiOnMove> { } }   TaxiOnMove.cs includes the properties which we want to exchange between the parties, FareView and TotalView. 
using System;   namespace TaxiCommon.BAO { public class TaxiOnMove { public TimeSpan MinutesAtTweleveMPH { get; set; } public double MilesAtSixMPH { get; set; } } }   Lets take a look into FareViewodel (Notifier) and how it raises the event.  Here we are raising the event by getting the event through GetEvent<..>() and publishing it with the payload private void OnAddMinutes(object obj) { TaxiOnMove payload = new TaxiOnMove(); if(MilesAtSixMPH != null) payload.MilesAtSixMPH = MilesAtSixMPH.Value; if(MinutesAtTweleveMPH != null) payload.MinutesAtTweleveMPH = new TimeSpan(0,0,MinutesAtTweleveMPH.Value,0);   _eventAggregator.GetEvent<TaxiOnMoveEvent>().Publish(payload); ResetMinutesAndMiles(); } And TotalViewModel(Observer) subscribes to notifications by getting the event through GetEvent<..>() namespace TaxiModules.ViewModels { public class TotalViewModel:ObservableBase { .... private IEventAggregator _eventAggregator;   public TotalViewModel(IEventAggregator eventAggregator) { _eventAggregator = eventAggregator; ... }   private void SubscribeToEvents() { _eventAggregator.GetEvent<TaxiStartedEvent>() .Subscribe(OnTaxiStarted, ThreadOption.UIThread,false,(filter) => true); _eventAggregator.GetEvent<TaxiOnMoveEvent>() .Subscribe(OnTaxiMove, ThreadOption.UIThread, false, (filter) => true); _eventAggregator.GetEvent<TaxiResetEvent>() .Subscribe(OnTaxiReset, ThreadOption.UIThread, false, (filter) => true); }   ... private void OnTaxiMove(TaxiOnMove taxiOnMove) { OnMoveFare fare = new OnMoveFare(taxiOnMove); Fares.Add(fare); SetTotalFare(new []{fare}); }   .... 7. MVVM through example In this section we are going to look into MVVM implementation through example.  I have all the modules declared in a single project, TaxiModules, again it is not necessary to have them into one project. Once the user logs into the application, will be greeted with the ‘Engage the Taxi’ screen which is made of two user controls, FareView.xaml and TotalView.Xaml. As you can see from the solution explorer, each of them have their own code behind files and  ViewModel classes, FareViewMode.cs, TotalViewModel.cs Lets take a look in to the FareView and how it interacts with FareViewModel using MVVM implementation. FareView.xaml acts as a view and FareViewMode.cs is it’s view model. The FareView code behind class   namespace TaxiModules.Views { /// <summary> /// Interaction logic for FareView.xaml /// </summary> public partial class FareView : UserControl { public FareView(FareViewModel viewModel) { InitializeComponent(); this.Loaded += (s, e) => { this.DataContext = viewModel; }; } } } The FareView is bound to FareViewModel through the data context  and you shall observe that DataContext is of type Object, i.e. the FareView doesn’t really know the type of ViewModel (FareViewModel). This helps separation of View and ViewModel as View and ViewModel are independent of each other, you can bind FareView to FareViewModel2 as well and the application compiles just fine. Lets take a look into FareView xaml file  <UserControl x:Class="TaxiModules.Views.FareView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:Toolkit="clr-namespace:Microsoft.Windows.Controls;assembly=WPFToolkit" xmlns:Commands="clr-namespace:Microsoft.Practices.Prism.Commands;assembly=Microsoft.Practices.Prism"> <Grid Margin="10" > ....   
<Border Style="{DynamicResource innerBorder}" Grid.Row="0" Grid.Column="0" Grid.RowSpan="11" Grid.ColumnSpan="2" Panel.ZIndex="1"/>   <Label Grid.Row="0" Content="Engage the Taxi" Style="{DynamicResource innerHeader}"/> <Label Grid.Row="1" Content="Select the State"/> <ComboBox Grid.Row="1" Grid.Column="1" ItemsSource="{Binding States}" Height="auto"> <ComboBox.ItemTemplate> <DataTemplate> <TextBlock Text="{Binding Name}"/> </DataTemplate> </ComboBox.ItemTemplate> <ComboBox.SelectedItem> <Binding Path="SelectedState" Mode="TwoWay"/> </ComboBox.SelectedItem> </ComboBox> <Label Grid.Row="2" Content="Select the Date of Entry"/> <Toolkit:DatePicker Grid.Row="2" Grid.Column="1" SelectedDate="{Binding DateOfEntry, ValidatesOnDataErrors=true}" /> <Label Grid.Row="3" Content="Enter time 24hr format"/> <TextBox Grid.Row="3" Grid.Column="1" Text="{Binding TimeOfEntry, TargetNullValue=''}"/> <Button Grid.Row="4" Grid.Column="1" Content="Start the Meter" Commands:Click.Command="{Binding StartMeterCommand}" />   <Label Grid.Row="5" Content="Run the Taxi" Style="{DynamicResource innerHeader}"/> <Label Grid.Row="6" Content="Number of Miles &lt;@6mph"/> <TextBox Grid.Row="6" Grid.Column="1" Text="{Binding MilesAtSixMPH, TargetNullValue='', ValidatesOnDataErrors=true}"/> <Label Grid.Row="7" Content="Number of Minutes @12mph"/> <TextBox Grid.Row="7" Grid.Column="1" Text="{Binding MinutesAtTweleveMPH, TargetNullValue=''}"/> <Button Grid.Row="8" Grid.Column="1" Content="Add Minutes and Miles " Commands:Click.Command="{Binding AddMinutesCommand}"/> <Label Grid.Row="9" Content="Other Operations" Style="{DynamicResource innerHeader}"/> <Button Grid.Row="10" Grid.Column="1" Content="Reset the Meter" Commands:Click.Command="{Binding ResetCommand}"/>   </Grid> </UserControl> The highlighted code from the above code shows data binding, for example ComboBox which displays list of states has it’s ItemsSource bound to States property, with DataTemplate bound to Name and SelectedItem  to SelectedState. You might be wondering what are all these properties and how it is able to bind to them.  The answer lies in data context, i.e., when you bound a control, WPF looks for data context on the root object (Grid in this case) and if it can’t find data context it will look into root’s root, i.e. FareView UserControl and it is bound to FareViewModel.  Each of those properties have be declared on the ViewModel for the View to bind correctly. To put simply, View is bound to ViewModel through data context of type object and every control that is bound on the View actually binds to the public property on the ViewModel. Lets look into the ViewModel code (the following code is not an exact copy of FareViewMode.cs, pasted relevant code for this section)   namespace TaxiModules.ViewModels { public class FareViewModel:ObservableBase, IDataErrorInfo { public List<USState> States { get { return USStates.StateList; } }   public USState SelectedState { get { return _selectedState; } set { _selectedState = value; RaisePropertyChanged(_selectedStatePropertyName); } }   public DateTime? DateOfEntry { get { return _dateOfEntry; } set { _dateOfEntry = value; RaisePropertyChanged(_dateOfEntryPropertyName); } }   public TimeSpan? TimeOfEntry { get { return _timeOfEntry; } set { _timeOfEntry = value; RaisePropertyChanged(_timeOfEntryPropertyName); } }   public double? MilesAtSixMPH { get { return _milesAtSixMPH; } set { _milesAtSixMPH = value; RaisePropertyChanged(_distanceAtSixMPHPropertyName); } }   public int? 
MinutesAtTweleveMPH { get { return _minutesAtTweleveMPH; } set { _minutesAtTweleveMPH = value; RaisePropertyChanged(_minutesAtTweleveMPHPropertyName); } }   public ICommand StartMeterCommand { get { if(_startMeterCommand == null) { _startMeterCommand = new DelegateCommand<object>(OnStartMeter, CanStartMeter); } return _startMeterCommand; } }   public ICommand AddMinutesCommand { get { if(_addMinutesCommand == null) { _addMinutesCommand = new DelegateCommand<object>(OnAddMinutes, CanAddMinutes); } return _addMinutesCommand; } }   public ICommand ResetCommand { get { if(_resetCommand == null) { _resetCommand = new DelegateCommand<object>(OnResetCommand); } return _resetCommand; } }   } private void OnStartMeter(object obj) { _eventAggregator.GetEvent<TaxiStartedEvent>().Publish( new TaxiStarted() { EngagedOn = DateOfEntry.Value.Date + TimeOfEntry.Value, EngagedState = SelectedState.Value });   _isMeterStarted = true; OnPropertyChanged(this,null); } And views communicate user actions like button clicks, tree view item selections, etc using commands. When user clicks on ‘Start the Meter’ button it invokes the method StartMeterCommand, which calls the method OnStartMeter which publishes the event to TotalViewModel using event aggregator  and TaxiStartedEvent. namespace TaxiModules.ViewModels { public class TotalViewModel:ObservableBase { ... private IEventAggregator _eventAggregator;   public TotalViewModel(IEventAggregator eventAggregator) { _eventAggregator = eventAggregator;   InitializePropertyNames(); InitializeModel(); SubscribeToEvents(); }   public decimal? TotalFare { get { return _totalFare; } set { _totalFare = value; RaisePropertyChanged(_totalFarePropertyName); } } .... private void SubscribeToEvents() { _eventAggregator.GetEvent<TaxiStartedEvent>().Subscribe(OnTaxiStarted, ThreadOption.UIThread,false,(filter) => true); _eventAggregator.GetEvent<TaxiOnMoveEvent>().Subscribe(OnTaxiMove, ThreadOption.UIThread, false, (filter) => true); _eventAggregator.GetEvent<TaxiResetEvent>().Subscribe(OnTaxiReset, ThreadOption.UIThread, false, (filter) => true); }   private void OnTaxiStarted(TaxiStarted taxiStarted) { Fares.Add(new EntryFare()); Fares.Add(new StateTaxFare(taxiStarted)); Fares.Add(new NightSurchargeFare(taxiStarted)); Fares.Add(new PeakHourWeekdayFare(taxiStarted));   SetTotalFare(Fares); }   private void SetTotalFare(IEnumerable<IFare> fares) { TotalFare = (_totalFare ?? 0) + TaxiFareHelper.GetTotalFare(fares); } ....   } }   TotalViewModel subscribes to events, TaxiStartedEvent and rest. When TaxiStartedEvent gets invoked it calls the OnTaxiStarted method which sets the total fare which includes entry fee, state tax, nightly surcharge, peak hour weekday fare.   Note that TotalViewModel derives from ObservableBase which implements the method RaisePropertyChanged which we are invoking in Set of TotalFare property, i.e, once we update the TotalFare property it raises an the event that  allows the TotalFare text box to fetch the new value through the data context. ViewModel is communicating with View through data context and it has no knowledge about View, helping in loose coupling of ViewModel and View.   I have attached the source code (.Net 4.0, Prism 4.0, VS 2010) , download and play with it and don’t forget to leave your comments.  

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules session at DevConnections this week one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically the output should be captured from anywhere – not just a page and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close as well as read operations although for a Response.Filter that’s uncommon to be hit. The Response.Filter can be programmatically replaced at runtime which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter hooking up code based, dynamic  GZip compression for requests which is dead simple by applying a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> ///Sets up the current page or handler to use GZip through a Response.Filter ///IMPORTANT:  ///You have to call this method before any output is generated! 
/// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if(IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if(AcceptEncoding.Contains("deflate")) { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.AppendHeader("Content-Encoding", "deflate"); } else { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.AppendHeader("Content-Encoding", "gzip"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response. Response.Filter content is chunked So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines since you don’t directly get access to all of the response content, which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required that basically captures all the Write() buffers into a cached stream and then makes the stream available only when it’s complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I’ve used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations. Creating a semi-generic Response Filter Stream Class What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content. 
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively and can you modify the output then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back to as the final response. The above code can be simplified even more by using the string version events which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream based classed. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML controls not HTML controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do. 
So you can write plain old HTML like this: <a href=”~/default.aspx”>Home</a>  and have it turned into: <a href=”/myVirtual/default.aspx”>Home</a>  without having to use an ASP.NET control like Hyperlink or Image or having to constantly use: <img src=”<%= ResolveUrl(“~/images/home.gif”) %>” /> in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter I got a hint from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable, however, since there’s definitely some overhead to using a Response.Filter in general, and especially to one that caches the output and then re-writes it later. Make sure to test for performance anytime you hook up a Response.Filter and make sure it doesn’t end up killing perf on you. You’ve been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it’s a reusable component – so for implementation there’s no new class, no subclassing – you simply attach to an event to implement an event handler method with a straightforward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream, as is required in order to handle the Response.Filter requirements. What’s different from other implementations I’ve seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the filter’s internal stream. The data is cached and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage. 
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means that that it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of bytep[] output is written during the stream's write /// operation. This means it's possibly/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output. 
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings /// /// Note this handler is internal and not meant to be overridden /// as the TransformString Event has to be hooked up in order /// for this handler to even fire to avoid the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { if (TransformString != null) TransformString(responseText); return responseText; } /// <summary> /// Wrapper method form OnTransformString that handles /// stream to string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream); 
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overriden to capture output written by ASP.NET and captured /// into a cached stream that is written out later when Flush() /// is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if ( IsCaptured ) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, 0, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, offset, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<t> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few Things to consider Performance Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it’s not killing your app’s performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site and – yes you can definitely do that by assigning a Response.Filter inside of a module. If you do this however you’ll want to be very careful and decide which content you actually want to capture especially in IIS 7 which passes ALL content – including static images/CSS etc. through the ASP.NET pipeline. So it is important to filter only on what you’re looking for – like the page extension or maybe more effectively the Response.ContentType. Response.Filter Chaining Originally I thought that filter chaining doesn’t work at all due to a bug in the stream implementation code. But it’s quite possible to assign multiple filters to the Response.Filter property. 
So the following actually works to both compress the output and apply the transformed content: WebUtils.GZipEncodePage(); ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; However the following does not work resulting in invalid content encoding errors: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; WebUtils.GZipEncodePage(); In other words multiple Response filters can work together but it depends entirely on the implementation whether they can be chained or in which order they can be chained. In this case running the GZip/Deflate stream filters apparently relies on the original content length of the output and chokes when the content is modified. But if attaching the compression first it works fine as unintuitive as that may seem. Resources Download example code Capture Output from ASP.NET Pages © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  
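The “Response.Filter works everywhere” note above mentions attaching the filter from inside a module rather than a page, but doesn’t show it. Below is a rough sketch of what that hookup might look like – the module name and the extension check are assumptions for illustration, while ResponseFilterStream and its TransformString event are the class and event described in this post; the module would be registered in web.config as usual:

using System;
using System.Web;

// Hypothetical module that attaches the ResponseFilterStream described above to page requests only,
// so static files and other content types are left alone.
public class ResponseFilterModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext context = app.Context;
            // Filter only on what you're looking for - here the .aspx extension, as suggested above.
            if (context.Request.CurrentExecutionFilePath.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase))
            {
                ResponseFilterStream filter = new ResponseFilterStream(context.Response.Filter);
                // Plug in a real transformation here (e.g. the FixPaths logic shown earlier).
                filter.TransformString += output => output;
                context.Response.Filter = filter;
            }
        };
    }

    public void Dispose() { }
}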

    Read the article

  • Visual Studio 2010 and .NET 4 Released

    - by ScottGu
The final release of Visual Studio 2010 and .NET 4 is now available. Download and Install Today MSDN subscribers, as well as WebsiteSpark/BizSpark/DreamSpark members, can now download the final releases of Visual Studio 2010 and TFS 2010 through the MSDN subscribers download center.  If you are not an MSDN Subscriber, you can download free 90-day trial editions of Visual Studio 2010.  Or you can download the free Visual Studio express editions of Visual Web Developer 2010, Visual Basic 2010, Visual C# 2010 and Visual C++.  These express editions are available completely for free (and never time out).  If you are looking for an easy way to set up a new machine for web development you can automate installing ASP.NET 4, ASP.NET MVC 2, IIS, SQL Server Express and Visual Web Developer 2010 Express really quickly with the Microsoft Web Platform Installer (just click the install button on the page). What is new with VS 2010 and .NET 4 Today’s release is a big one – and brings with it a ton of new features and capabilities. One of the things we tried hard to focus on with this release was to invest heavily in making existing applications, projects and developer experiences better.  What this means is that you don’t need to read 1000+ page books or spend time learning major new concepts in order to take advantage of the release.  There are literally thousands of improvements (both big and small) that make you more productive and successful without having to learn big new concepts in order to start using them.  Below is just a small sampling of some of the improvements with this release: Visual Studio 2010 IDE  Visual Studio 2010 now supports multiple monitors (enabling much better use of screen real-estate).  It has new code Intellisense support that makes it easier to find and use classes and methods. It has improved code navigation support for searching code-bases and seeing how code is called and used.  It has new code visualization support that allows you to see the relationships across projects and classes within projects, as well as to automatically generate sequence diagrams to chart execution flow.  The editor now supports HTML and JavaScript snippets as well as improved JavaScript intellisense. The VS 2010 Debugger and Profiling support is now much, much richer and enables new features like Intellitrace (aka Historical Debugging), debugging of Crash/Dump files, and better parallel debugging.  VS 2010’s multi-targeting support is now much richer, and enables you to use VS 2010 to target .NET 2, .NET 3, .NET 3.5 and .NET 4 applications.  And the infamous Add Reference dialog now loads much faster. TFS 2010 is now easy to set up (you can now install the server in under 10 minutes) and enables great source-control, bug/work-item tracking, and continuous integration support.  Testing (both automated and manual) is now much, much richer.  And VS 2010 Premium and Ultimate provide much richer architecture and design tooling support. VB and C# Language Features VB and C# in VS 2010 both contain a bunch of new features and capabilities.  VB adds new support for automatic properties, collection initializers, and implicit line continuation, among many other features.  C# adds support for optional parameters and named arguments, a new dynamic keyword, and co-variance and contra-variance, among many other features (a short sketch illustrating a few of these C# features appears at the end of this post). ASP.NET 4 and ASP.NET MVC 2 With ASP.NET 4, Web Forms controls now render clean, semantically correct, and CSS friendly HTML markup. 
Built-in URL routing functionality allows you to expose clean, search engine friendly, URLs and increase the traffic to your Website.  ViewState within applications can now be more easily controlled and made smaller.  ASP.NET Dynamic Data support has been expanded.  More controls, including rich charting and data controls, are now built-into ASP.NET 4 and enable you to build applications even faster.  New starter project templates now make it easier to get going with new projects.  SEO enhancements make it easier to drive traffic to your public facing sites.  And web.config files are now clean and simple. ASP.NET MVC 2 is now built-into VS 2010 and ASP.NET 4, and provides a great way to build web sites and applications using a model-view-controller based pattern. ASP.NET MVC 2 adds features to easily enable client and server validation logic, provides new strongly-typed HTML and UI-scaffolding helper methods.  It also enables more modular/reusable applications.  The new <%: %> syntax in ASP.NET makes it easier to HTML encode output.  Visual Studio 2010 also now includes better tooling support for unit testing and TDD.  In particular, “Consume first intellisense” and “generate from usage" support within VS 2010 make it easier to write your unit tests first, and then drive your implementation from them. Deploying ASP.NET applications gets a lot easier with this release. You can now publish your Websites and applications to a staging or production server from within Visual Studio itself. Visual Studio 2010 makes it easy to transfer all your files, code, configuration, database schema and data in one complete package. VS 2010 also makes it easy to manage separate web.config configuration files settings depending upon whether you are in debug, release, staging or production modes. WPF 4 and Silverlight 4 WPF 4 includes a ton of new improvements and capabilities including more built-in controls, richer graphics features (cached composition, pixel shader 3 support, layoutrounding, and animation easing functions), a much improved text stack (with crisper text rendering, custom dictionary support, and selection and caret brush options).  WPF 4 also includes a bunch of support to enable you to take advantage of new Windows 7 features – including multi-touch and Windows 7 shell integration. Silverlight 4 will launch this week as well.  You can watch my Silverlight 4 launch keynote streamed live Tuesday (April 13th) at 8am Pacific Time.  Silverlight 4 includes a ton of new capabilities – including a bunch for making it possible to build great business applications and out of the browser applications.  I’ll be doing a separate blog post later this week (once it is live on the web) that talks more about its capabilities. Visual Studio 2010 now includes great tooling support for both WPF and Silverlight.  The new VS 2010 WPF and Silverlight designer makes it much easier to build client applications as well as build great line of business solutions, as well as integrate and bind with data.  Tooling support for Silverlight 4 with the final release of Visual Studio 2010 will be available when Silverlight 4 releases to the web this week. SharePoint and Azure Visual Studio 2010 now includes built-in support for building SharePoint applications.  You can now create, edit, build, and debug SharePoint applications directly within Visual Studio 2010.  You can also now use SharePoint with TFS 2010. 
Support for creating Azure-hosted applications is also now included with VS 2010 – allowing you to build ASP.NET and WCF based applications and host them within the cloud. Data Access Data access has a lot of improvements coming to it with .NET 4.  Entity Framework 4 includes a ton of new features and capabilities – including support for model first and POCO development, default support for lazy loading, built-in support for pluralization/singularization of table/property names within the VS 2010 designer, full support for all the LINQ operators, the ability to optionally expose foreign keys on model objects (useful for some stateless web scenarios), disconnected API support to better handle N-Tier and stateless web scenarios, and T4 template customization support within VS 2010 to allow you to customize and automate how code is generated for you by the data designer.  In addition to improvements with the Entity Framework, LINQ to SQL with .NET 4 also includes a bunch of nice improvements.  WCF and Workflow WCF includes a bunch of great new capabilities – including better REST, activation and configuration support.  WCF Data Services (formerly known as Astoria) and WCF RIA Services also now enable you to easily expose and work with data from remote clients. Windows Workflow is now much faster, includes flowchart services, and now makes it easier to make custom services than before.  More details can be found here. CLR and Core .NET Library Improvements .NET 4 includes the new CLR 4 engine – which includes a lot of nice performance and feature improvements.  CLR 4 engine now runs side-by-side in-process with older versions of the CLR – allowing you to use two different versions of .NET within the same process.  It also includes improved COM interop support.  The .NET 4 base class libraries (BCL) include a bunch of nice additions and refinements.  In particular, the .NET 4 BCL now includes new parallel programming support that makes it much easier to build applications that take advantage of multiple CPUs and cores on a computer.  This work dove-tails nicely with the new VS 2010 parallel debugger (making it much easier to debug parallel applications), as well as the new F# functional language support now included in the VS 2010 IDE.  .NET 4 also now also has the Dynamic Language Runtime (DLR) library built-in – which makes it easier to use dynamic language functionality with .NET.  MEF – a really cool library that enables rich extensibility – is also now built-into .NET 4 and included as part of the base class libraries.  .NET 4 Client Profile The download size of the .NET 4 redist is now much smaller than it was before (the x86 full .NET 4 package is about 36MB).  We also now have a .NET 4 Client Profile package which is a pure sub-set of the full .NET that can be used to streamline client application installs. C++ VS 2010 includes a bunch of great improvements for C++ development.  This includes better C++ Intellisense support, MSBuild support for projects, improved parallel debugging and profiler support, MFC improvements, and a number of language features and compiler optimizations. My VS 2010 and .NET 4 Blog Series I’ve been cranking away on a blog series the last few months that highlights many of the new VS 2010 and .NET 4 improvements.  The good news is that I have about 20 in-depth posts already written.  The bad news (for me) is that I have about 200 more to go until I’m done!  
I’m going to try and keep adding a few more each week over the next few months to discuss the new improvements and how best to take advantage of them. Below is a list of the already written ones that you can check out today: Clean Web.Config Files Starter Project Templates Multi-targeting Multiple Monitor Support New Code Focused Web Profile Option HTML / ASP.NET / JavaScript Code Snippets Auto-Start ASP.NET Applications URL Routing with ASP.NET 4 Web Forms Searching and Navigating Code in VS 2010 VS 2010 Code Intellisense Improvements WPF 4 Add Reference Dialog Improvements SEO Improvements with ASP.NET 4 Output Cache Extensibility with ASP.NET 4 Built-in Charting Controls for ASP.NET and Windows Forms Cleaner HTML Markup with ASP.NET 4 - Client IDs Optional Parameters and Named Arguments in C# 4 - and a cool scenarios with ASP.NET MVC 2 Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010 New <%: %> Syntax for HTML Encoding Output using ASP.NET 4 JavaScript Intellisense Improvements with VS 2010 Stay tuned to my blog as I post more.  Also check out this page which links to a bunch of great articles and videos done by others. VS 2010 Installation Notes If you have installed a previous version of VS 2010 on your machine (either the beta or the RC) you must first uninstall it before installing the final VS 2010 release.  I also recommend uninstalling .NET 4 betas (including both the client and full .NET 4 installs) as well as the other installs that come with VS 2010 (e.g. ASP.NET MVC 2 preview builds, etc).  The uninstalls of the betas/RCs will clean up all the old state on your machine – after which you can install the final VS 2010 version and should have everything just work (this is what I’ve done on all of my machines and I haven’t had any problems). The VS 2010 and .NET 4 installs add a bunch of new managed assemblies to your machine.  Some of these will be “NGEN’d” to native code during the actual install process (making them run fast).  To avoid adding too much time to VS setup, though, we don’t NGEN all assemblies immediately – and instead will NGEN the rest in the background when your machine is idle.  Until it finishes NGENing the assemblies they will be JIT’d to native code the first time they are used in a process – which for large assemblies can sometimes cause a slight performance hit. If you run into this you can manually force all assemblies to be NGEN’d to native code immediately (and not just wait till the machine is idle) by launching the Visual Studio command line prompt from the Windows Start Menu (Microsoft Visual Studio 2010->Visual Studio Tools->Visual Studio Command Prompt).  Within the command prompt type “Ngen executequeueditems” – this will cause everything to be NGEN’d immediately. How to Buy Visual Studio 2010 You can can download and use the free Visual Studio express editions of Visual Web Developer 2010, Visual Basic 2010, Visual C# 2010 and Visual C++.  These express editions are available completely for free (and never time out). You can buy a new copy of VS 2010 Professional that includes a 1 year subscription to MSDN Essentials for $799.  MSDN Essentials includes a developer license of Windows 7 Ultimate, Windows Server 2008 R2 Enterprise, SQL Server 2008 DataCenter R2, and 20 hours of Azure hosting time.  Subscribers also have access to MSDN’s Online Concierge, and Priority Support in MSDN Forums. Upgrade prices from previous releases of Visual Studio are also available.  
Existing Visual Studio 2005/2008 Standard customers can upgrade to Visual Studio 2010 Professional for a special $299 retail price until October.  You can take advantage of this VS Standard->Professional upgrade promotion here. Web developers who build applications for others, and who are either independent developers or who work for companies with less than 10 employees, can also optionally take advantage of the Microsoft WebSiteSpark program.  This program gives you three copies of Visual Studio 2010 Professional, 1 copy of Expression Studio, and 4 CPU licenses of both Windows 2008 R2 Web Server and SQL 2008 Web Edition that you can use to both develop and deploy applications with at no cost for 3 years.  At the end of the 3 years there is no obligation to buy anything.  You can sign-up for WebSiteSpark today in under 5 minutes – and immediately have access to the products to download. Summary Today’s release is a big one – and has a bunch of improvements for pretty much every developer.  Thank you everyone who provided feedback, suggestions and reported bugs throughout the development process – we couldn’t have delivered it without you.  Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
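As referenced in the language-features section above (and not part of the original announcement), here is a small sketch of what optional parameters, named arguments and the dynamic keyword look like in C# 4; the type and method names are made up purely for illustration:

using System;

class Csharp4FeaturesSketch
{
    // Optional parameters: callers may omit pageSize and sortBy.
    static void QueryProducts(string category, int pageSize = 25, string sortBy = "Name")
    {
        Console.WriteLine("{0} / {1} / {2}", category, pageSize, sortBy);
    }

    static void Main()
    {
        QueryProducts("Books");                    // both defaults used
        QueryProducts("Books", sortBy: "Price");   // named argument skips pageSize

        // dynamic defers member resolution to runtime via the DLR.
        dynamic text = "hello";
        Console.WriteLine(text.Length);            // resolved at runtime, prints 5
    }
}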

    Read the article

  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
    I recently attended most prestigious SQL Server event SQLPASS between Nov 8-11, 2010 at Seattle. I have only one expression for the event - Best Summit Ever This year the summit was at its best. Instead of writing about my usual routine or the event, I am going to write about the interesting things I did and how I felt about it! Best Summit Ever Trip to Seattle! This was my second trip to Seattle this year and the journey is always long. Here is the travel stats on how long it takes to get to Seattle: 24 hours official air time 36 hours total travel time (connection waits and airport commute) Every time I travel to USA I gain a day and when I travel back to home, I lose a day. However, the total traveling time is around 3 days. The journey is long and very exhausting. However, it is all worth it when you’re attending an event like SQLPASS. Here are few things I carry when I travel for a long journey: Dry Snack packs – I like to have some good Indian Dry Snacks along with me in my backpack so I can have my own snack when I want Amazon Kindle – Loaded with 80+ books A physical book – This is usually a very easy to read book I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer to not to spend time in conversation with the guy sitting next to me because usually I end up listening to their biography, which I cannot blog about. Sheraton Seattle SQLPASS In any case, I love to go to Seattle as the city is great and has everything a brilliant metropolis has to offer. The new Light Train is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only few meters (in the USA people count in blocks – 3 blocks) away from the train station. This time I saved USD 40 each round trip due to the Light Train. Sessions I attended! Well, I really wanted to attend most of the sessions but there was great dilemma of which ones to choose. There were many, many sessions to be attended and at any given time there was more than one good session being presented. I had decided to attend sessions in area performance tuning and I attended quite a few sessions this year, compared to what I was able to do last year. Here are few names of the speakers whose sessions I attended (please note, following great speakers are not listed in any order. I loved them and I enjoyed their sessions): Conor Cunningham Rushabh Mehta Buck Woody Brent Ozar Jonathan Kehayias Chris Leonard Bob Ward Grant Fritchey I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any session but I have found that the insights learned in Conor Cunningham’s sessions are the highlight of the PASS Summit. Rushabh Mehta at Keynote SQLPASS   Bucky Woody and Brent Ozar I always like the sessions where the speaker is much closer to the audience and has real world experience. I think speakers who have worked in the real world deliver the best content and most useful information. Sessions I did not like! Indeed there were few sessions I did not like it and I am not going to name them here. However, there were strong reasons I did not like their sessions, and here is why: Sessions were all theory and had no real world connections. 
All technical questions ended with confusing answers (lots of “I will get back to you on it,” “it depends,” “let us take this offline” and many more…) “I am God” kind of attitude in the speakers For example, I attended a session of one very well known speaker who is a specialist for one particular area. I was bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why lots of people left. Very soon I found I preferred to stare out the window instead of listening to that particular speaker. One on One Talk! Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends and spending time with them one on one and LEARNING! Here is the quick list of the people I met during this event and spent more than 30 minutes with each of them talking about various subjects: Pinal Dave and Brad Shulz Pinal Dave and Rushabh Mehta Michael Coles and Pinal Dave Rushabh Mehta – It is always pleasure to meet with him. He is a man with lots of energy and a passion for community. He recently told me that he really wanted to turn PASS into resource for learning for every SQL Server Developer and Administrator in the world. I had great in-depth discussion regarding how a single person can contribute to a community. Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable. I think there are very few people who are as expert as he is in encryption and spatial databases. Worth meeting him every single time. Glenn Berry – A real friend of everybody. He is very a simple person and very true to his heart. I think there is not a single person in whole community who does not like him. He is a friends of all and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMV. Brad Schulz – I always wanted to meet him but never got chance until today. I had great time meeting him in person and we have spent considerable amount of time together discussing various T-SQL tricks and tips. I do not know where he comes up with all the different ideas but I enjoy reading his blog and sharing his wisdom with me. Jonathan Kehayias – He is drill sergeant in US army. If you get the impression that he is a giant with very strong personality – you are wrong. He is very kind and soft spoken DBA with strong performance tuning skills. I asked him how he has kept his two jobs separate and I got very good answer – just work hard and have passion for what you do. I attended his sessions and his presentation style is very unique.  I feel like he is speaking in a language I understand. Louis Davidson – I had never had a chance to sit with him and talk about technology before. He has so much wisdom and he is very kind. During the dinner, I had talked with him for long time and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and forth normal form. Honestly I did not know earlier, but now I do. Erland Sommarskog – This man needs no introduction, he is very well known and very clear in conveying his ideas. I learned a lot from him during the course of year. Every time I meet him, I learn something new and this time was no exception. 
Joe Webb – Joey is all about community and people, we had interesting conversation about community, MVP and how one can be helpful to community without losing passion for long time. It is always pleasant to talk to him and of course, I had fun time. Ross Mistry – I call him my brother many times because he indeed looks like my cousin. He provided me lots of insight of how one can write book and how he keeps his books simple to appeal to all the readers. A wonderful person and great friend. Ola Hallgren - I did not know he was coming to the summit. I had great time meeting him and had a wonderful conversation with him regarding his scripts and future community activities. Blythe Morrow – She used to be integrated part of SQL Server Community and PASS HQ. It was wonderful to meet her again and re-connect. She is wonderful person and I had a great time talking to her. Solid Quality Mentors – It is difficult to decide who to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me. Party, Party and Parties Every evening there were various parties. I did attend almost all of them. Every party had different theme but the goal of all the parties the same – networking. Here are the few parties where I had lots of fun: Dell Reception Party Exhibitor Party Solid Quality Fun Party Red Gate Friends Party MVP Dinner Microsoft Party MVP Dinner Quest Party Gameworks PASS Party Volunteer Party at Garage Solid Quality Mentors (10 Members out of 120) They were all great networking opportunities and lots of fun. I really had great time meeting people at the various parties. There were few people everywhere – well, I will say I am among them – who hopped parties. NDA – Not Decided Agenda During the event there were few meetings marked “NDA.” Someone asked me “why are these things NDA?”  My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda. Toys, Giveaways and Luggage I admit, I was like child in Gameworks and was playing to win soft toys. I was doing it for my daughter. I must thank all of the people who gave me their cards to try my luck. I won 4 soft-toys for my daughter and it was fun. Also, thanks to Angel who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them. Solid Quality Booth Each of the exhibitors was giving away something and I got so much stuff that my luggage got quite a bit bigger when I returned. Best Exhibitor Idera had SQLDoctor (a real magician and fun guy) to promote their new tool SQLDoctor. I really had a great time participating in the magic myself. At one point, the magician made my watch disappear.  I have seen better magic before, but this time it caught me unexpectedly and I was taken by surprise. I won many ducks again. The Common Question I heard the following common questions: I have seen you somewhere – who are you? – I am Pinal Dave. I did not know that Pinal is your first name and Dave is your last name, how do you pronounce your last name again? – Da-way How old are you? – I am as old as I can be. Are you an Indian because you look like one? – I did not answer this one. Where are you from? This question was usually asked after looking at my badge which says India. So did you really fly from India? 
– Yes, because I have seasickness so I do not prefer the sea journey. How long was the journey? – 24/36/12 (air travel time/total travel time/time zone difference) Why do you write on SQLAuthority.com? – Because I want to. I remember your daughter looks like you. – Is this even a question? Of course, she is daddy’s little girl. There were so many other questions, I will have to write another blog post about it. SQLPASS Again, Best Summit Ever! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS

    Read the article

< Previous Page | 741 742 743 744 745 746 747 748 749 750 751 752  | Next Page >