Search Results

Search found 11397 results on 456 pages for 'guest session'.


  • Moving default web site to another drive

    - by Chadworthington
    I set the default location from c:\inetpub\wwwroot to d:\inetpub\wwwroot, but when I access my .NET 4.0 site I get this error:

        Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
        Parser Error Message: Unrecognized attribute 'targetFramework'. Note that attribute names are case-sensitive.
        Source Error:
        Line 105:  Set explicit="true" to force declaration of all variables.
        Line 106:  -->
        Line 107:  <compilation debug="true" strict="true" explicit="true" targetFramework="4.0">
        Line 108:  <assemblies>
        Line 109:  <add assembly="System.Web.Extensions.Design, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>

    When I try to Manage the Basic Settings on the Site and click the "Test Settings" button, I see that I have a problem under "authorization": The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that <domain>\<computer_name>$ has Read access to the physical path. Then test these settings again.

    1) Do I need to grant rights to IIS to the new folder? Which user? I thought it was something like IIS_USER or similar, but I cannot determine the correct name of the user.
    2) Also, do I need to set the default version of the framework somewhere, at the Default Site level or at the virtual folder level? How is this done in IIS 6? I am used to IIS 5, or whatever came with XP Pro.
    3) My original site had a subfolder under wwwroot called "aspnet_client." How was this created? I manually copied it to the corresponding new location.

    My app was using separate ASP-specific databases for storing session state and role info, if that is relevant. Thanks
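    The usual fix for the pass-through authentication warning in question 1 is to grant the application pool identity read access to the new path. A minimal sketch, assuming the default IIS_IUSRS group and the d:\inetpub\wwwroot location mentioned above:

        icacls "d:\inetpub\wwwroot" /grant "BUILTIN\IIS_IUSRS:(OI)(CI)(RX)" /T

    The targetFramework parser error itself usually means the application pool is still running the .NET 2.0 runtime; switching the pool to .NET Framework v4.0 in IIS Manager is the typical remedy.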

    Read the article

  • Why does pinging a local router return "Destination Host Unreachable"?

    - by Matt H
    I have two Tomato routers; one is wirelessly bridged to the other. I have a new server on the network running Ubuntu Server 11.04. They are all connected like this:

        A  - Linux PC
        B  - New Server
        C  - Mac Mini
        D  - Macbook
        T1 - Tomato 1
        T2 - Tomato 2

        A -----+-T1 ==== wireless bridge ==== T2----- ADSL modem
               |                              |
        B -----+                              C & D connected wirelessly to T2

    A, C and D do not experience any issues. I have an active SSH session to B from A and it's not experiencing any loss. B, the new server, occasionally cannot ping T2 and therefore cannot connect to the internet. However, A can always contact B, and B can ping A. When the network is lost, B can still ping T1 but not T2; yet at the same time as B has lost its connection to T2, A can still ping T2. Any ideas on what this could be? There is nothing that gives any clues in any of the logs on either router or the Linux server. One thing that is interesting: I set up a ping running between B and T2. T2 has the IP address 192.168.1.1. Here is what I am seeing:

        From 192.168.1.1 icmp_seq=26 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=27 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=28 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=29 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=30 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=31 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=33 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=34 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=35 Destination Host Unreachable
        64 bytes from 192.168.1.1: icmp_req=36 ttl=63 time=3.40 ms
        64 bytes from 192.168.1.1: icmp_req=37 ttl=63 time=5.70 ms
        64 bytes from 192.168.1.1: icmp_req=38 ttl=63 time=2.25 ms
        64 bytes from 192.168.1.1: icmp_req=39 ttl=63 time=2.18 ms
        64 bytes from 192.168.1.1: icmp_req=40 ttl=63 time=3.12 ms
        64 bytes from 192.168.1.1: icmp_req=41 ttl=63 time=2.15 ms
        64 bytes from 192.168.1.1: icmp_req=42 ttl=63 time=1.97 ms
        64 bytes from 192.168.1.1: icmp_req=43 ttl=63 time=

    And it cycles between being reachable and not. So I guess you could say the question is: why is the router responding that it cannot be reached?
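    "Destination Host Unreachable" during the outages often means ARP resolution for 192.168.1.1 is failing across the bridge. A couple of illustrative checks to run on B while the problem is happening (the interface name eth0 is an assumption):

        ip neigh show 192.168.1.1      # is the neighbor entry FAILED/INCOMPLETE?
        sudo tcpdump -n -i eth0 arp    # are ARP requests going out, and do replies come back?

    If B's ARP requests leave the wire but no reply returns for the duration of the outage, that points at the wireless bridge path (or T2's ARP handling) rather than at B itself.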

    Read the article

  • LXC Container Networking

    - by digitaladdictions
    I just started to experiment with LXC containers. I was able to create a container and start it up, but I cannot get DHCP to assign the container an IP address. If I assign a static address, the container can ping the host IP but nothing outside the host. The host is CentOS 6.5 and the guest is Ubuntu 14.04 LTS. I used the template downloaded by the lxc-create -t download -n cn-01 command. If I am trying to get an IP address on the same subnet as the host, I don't believe I should need the iptables rule for masquerading, but I added it anyway. Same with IP forwarding. I compiled LXC by hand from the following source: https://linuxcontainers.org/downloads/lxc-1.0.4.tar.gz

    Host operating system version:

        #> cat /etc/redhat-release
        CentOS release 6.5 (Final)
        #> uname -a
        Linux localhost.localdomain 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

    Container config:

        #> cat /usr/local/var/lib/lxc/cn-01/config
        # Template used to create this container: /usr/local/share/lxc/templates/lxc-download
        # Parameters passed to the template:
        # For additional config options, please look at lxc.container.conf(5)

        # Distribution configuration
        lxc.include = /usr/local/share/lxc/config/ubuntu.common.conf
        lxc.arch = x86_64

        # Container specific configuration
        lxc.rootfs = /usr/local/var/lib/lxc/cn-01/rootfs
        lxc.utsname = cn-01

        # Network configuration
        lxc.network.type = veth
        lxc.network.flags = up
        lxc.network.link = br0

    LXC default.conf:

        #> cat /usr/local/etc/lxc/default.conf
        lxc.network.type = veth
        lxc.network.link = br0
        lxc.network.flags = up

        #> lxc-checkconfig
        Kernel configuration not found at /proc/config.gz; searching...
        Kernel configuration found at /boot/config-2.6.32-431.20.3.el6.x86_64
        --- Namespaces ---
        Namespaces: enabled
        Utsname namespace: enabled
        Ipc namespace: enabled
        Pid namespace: enabled
        User namespace: enabled
        Network namespace: enabled
        Multiple /dev/pts instances: enabled
        --- Control groups ---
        Cgroup: enabled
        Cgroup namespace: enabled
        Cgroup device: enabled
        Cgroup sched: enabled
        Cgroup cpu account: enabled
        Cgroup memory controller: /usr/local/bin/lxc-checkconfig: line 103: [: too many arguments
        enabled
        Cgroup cpuset: enabled
        --- Misc ---
        Veth pair device: enabled
        Macvlan: enabled
        Vlan: enabled
        File capabilities: /usr/local/bin/lxc-checkconfig: line 118: [: -gt: unary operator expected

        Note : Before booting a new kernel, you can check its configuration
        usage : CONFIG=/path/to/config /usr/local/bin/lxc-checkconfig

    Network config (host):

        #> cat /etc/sysconfig/network-scripts/ifcfg-br0
        DEVICE=br0
        TYPE=Bridge
        BOOTPROTO=dhcp
        ONBOOT=yes

        #> cat /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE=eth0
        ONBOOT=yes
        TYPE=Ethernet
        IPV6INIT=no
        USERCTL=no
        BRIDGE=br0

        #> cat /etc/networks
        default 0.0.0.0
        loopback 127.0.0.0
        link-local 169.254.0.0

        #> ip a s
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever
        3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 42:7e:43:b3:61:c5 brd ff:ff:ff:ff:ff:ff
        4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
            link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff
            inet 10.60.70.121/24 brd 10.60.70.255 scope global br0
            inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever
        12: vethT6BGL2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether fe:a1:69:af:50:17 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::fca1:69ff:feaf:5017/64 scope link valid_lft forever preferred_lft forever

        #> brctl show
        bridge name     bridge id               STP enabled     interfaces
        br0             8000.000c291230f2       no              eth0
                                                                vethT6BGL2
        pan0            8000.000000000000       no

        #> cat /proc/sys/net/ipv4/ip_forward
        1

        # Generated by iptables-save v1.4.7 on Fri Jul 11 15:11:36 2014
        *nat
        :PREROUTING ACCEPT [34:6287]
        :POSTROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A POSTROUTING -o eth0 -j MASQUERADE
        COMMIT
        # Completed on Fri Jul 11 15:11:36 2014

    Network config (container):

        #> cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp

        #> ip a s
        11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 02:69:fb:42:ee:d7 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::69:fbff:fe42:eed7/64 scope link valid_lft forever preferred_lft forever
        13: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host valid_lft forever preferred_lft forever
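    Since the veth appears in br0 and forwarding looks sane, a first check is whether the container's DHCP broadcasts actually reach the bridge and whether any offer comes back. An illustrative check on the host (names match the output above):

        tcpdump -n -i br0 port 67 or port 68

    If DHCPDISCOVER packets show up on br0 but no OFFER returns, something upstream is dropping them. Note that the 00:0c:29 MAC prefix suggests this host is itself a VMware guest; VMware virtual switches typically have to allow promiscuous mode before a bridge nested inside a guest can receive traffic for additional MAC addresses, which would produce exactly this symptom.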

    Read the article

  • Load balancers, multiple data centers and url based routing

    - by kunkunur
    There is one data center, dc1. There is a business need to set up another data center, dc2, in another geography, and there might be more in the future, say dc3.

    Within data center dc1: There are two web servers, say WS1 and WS2. These two web servers do not share anything currently. There isn't any necessity foreseen to have more web servers within each dc. dc1 also has a local load balancer which has been set up with session stickiness, so if a user, say u1, lands on dc1 and the load balancer decides to route his first request to WS1, then from there on all of u1's requests will get routed to WS1. The local load balancer and web servers are invisible to the user. The local load balancer listens to traffic on a virtual IP which is assigned to the virtual cluster of web servers WS1 and WS2. The virtual IP is the IP to which the host name resolves in DNS. There are no client-specific subdomains as of now; instead there is a client-specific URL (context), e.g. www.example.com/client1 and www.example.com/client2.

    Given the above, when dc2 is onboarded I want to route the traffic between dc1 and dc2 based on the client. The options that I have found so far are:

    1) Have client-specific subdomains, e.g. client1.example.com and client2.example.com, and assign each of them the virtual IP of the data center to which I want to route them; or
    2) Assign www.example.com and www1.example.com to the first dc, i.e. dc1, and assign www2.example.com to dc2. All requests will first get routed to dc1, where WS1 and WS2 will redirect the user to www1.example.com or www2.example.com based on whether the URL ends with /client1 or /client2.

    I need help with the following: If I set up a global load balancer between dc1 and dc2, do I have any alternative solutions? That is, can a global load balancer route traffic based on the URL? Are there drawbacks to the subdomain-based solution compared to the www1 solution? With the www1 solution I am worried that it creates a dependency on dc1, at least for the first request, and that the user will see he is getting redirected to a different URL.
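    For reference, path-based routing is exactly what layer-7 balancers do; a minimal illustrative sketch in HAProxy syntax (the backend names are hypothetical):

        frontend www
            bind *:80
            acl is_client1 path_beg /client1
            acl is_client2 path_beg /client2
            use_backend dc1_servers if is_client1
            use_backend dc2_servers if is_client2
            default_backend dc1_servers

    A truly global (GSLB) tier, by contrast, usually works at the DNS level and only ever sees host names, not paths, which is why the subdomain option maps onto it more naturally.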

    Read the article

  • RDP exits immediately after connecting to Windows Server 2008 R2

    - by carpat
    Background: I recently got a Windows cloud VPS server. I don't have much experience with server admin (I'm a programmer), and what little I do have is with Linux servers.

    Ever since getting the server I've been having issues with RDP. I can connect about two or three times, after which point I can't connect until one of the tech guys "fixes" it (see below). When I can connect, I can stay connected for hours with no problem. When the problem starts, the first time I try to log in the remote desktop window pops up, starts connecting, and then exits with "Your Remote Desktop session has ended". After that, for about 10-20 minutes, if I try to connect again the connection times out with:

        Remote Desktop can't connect to the computer for one of these reasons:
        1) Remote access on the server is not enabled
        2) The remote computer is turned off
        3) The remote computer is not available on the network

    then it goes back to connecting once and immediately disconnecting.

    All of the updates are installed. The firewall has been correctly configured to let RDP traffic through. The remote setting is "Allow connections from computers running any version of Remote Desktop". I tried creating a second user, and when I can't connect, I can't connect as that user either. I've tried both soft and hard reboots, neither of which helps. I've tried connecting from two different computers (both running Windows 7) on two different networks (work and home), and the behavior is the same. Everything else on the server continues to run fine (IIS-served HTTP pages, Tomcat-served Java pages, svn, ping). The "fix" that the tech guys supply is simply logging into the console on their end, after which point I can connect 2 or 3 times again. The event viewer on the server has "authentication failure" (or something similar) events generated when I attempt to log in and can't. I can't get to the actual event at the moment, as I'm currently in the can't-connect stage and waiting for the techs to log in, but when I searched for the event earlier this morning I couldn't find anything useful. Can anyone help?
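    One illustrative check for this connect-a-few-times-then-locked-out pattern is whether stale sessions are exhausting the two remote connections Server 2008 R2 allows in administration mode. From any machine that can still reach the server over RPC (the host name is a placeholder):

        qwinsta /server:myvps

        reset session <session_id> /server:myvps

    qwinsta lists the sessions and reset session tears a stale one down; if the techs' console logon is what frees things up, leaked disconnected sessions would be consistent with that.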

    Read the article

  • Opscenter repair service times out. ERROR: Requested range intersects a local range [...]

    - by jlemire-zs
    My production cluster has had the repair service enabled since April 16th with the default 9-day time to completion, and repairs would complete properly. However, since May 22nd it has been getting disabled automatically by OpsCenter. From /var/log/opscenter/opscenterd.log:

        [...]
        2014-06-03 21:13:47-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4019838962446882275L, -4006140687792135587L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service
        [...]

    From /var/log/opscenter/repair_service/zs_prod.log:

        [...]
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) has failed 1 times.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: 101 errors have ocurred out of 100 allowed.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service

    On the nodes on which the repair fails, from /var/log/cassandra/system.log:

        ERROR [RMI TCP Connection(93502)-10.1.0.22] 2014-06-03 20:12:28,858 StorageService.java (line 2560) Repair session failed:
        java.lang.IllegalArgumentException: Requested range intersects a local range but is not fully contained in one; this would lead to imprecise repair
            at org.apache.cassandra.service.ActiveRepairService.getNeighbors(ActiveRepairService.java:164)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:128)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:117)
            at org.apache.cassandra.service.ActiveRepairService.submitRepairSession(ActiveRepairService.java:97)
            at org.apache.cassandra.service.StorageService.forceKeyspaceRepair(StorageService.java:2620)
            at org.apache.cassandra.service.StorageService$5.runMayThrow(StorageService.java:2556)
            at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)

    These errors, which occur only when the repair service is running, are the only errors these nodes experience. Outside of the repair task, the Cassandra cluster works perfectly. I am running OpsCenter 4.1.2 with a 6-node DSE 4.0.2 cluster installed on Linux virtual machines. The nodes run a vanilla installation of Ubuntu Server 12.04 64-bit, and DSE was installed and secured according to the provided installation documentation. I have been experiencing this problem on my development cluster for a while too (with DSE 4.0.0, 4.0.1 and 4.0.2), but I thought that was because of some configuration error on my part; there, too, the problem appeared spontaneously at some point. The Cassandra cluster has been working very smoothly with a good write throughput. It is very stable and has enough resources to work with, and we did not notice any problems with the applications that depend on it.
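    As a point of comparison while debugging, the per-node primary-range repair that the service automates can be run manually against the keyspace named in the logs above:

        nodetool -h 10.1.0.22 repair -pr zs_logging

    If a manual primary-range repair succeeds while OpsCenter's subrange repair tasks keep throwing the intersects-a-local-range error, that would point at the repair service's token-range bookkeeping (e.g. stale ring information after a topology change) rather than at the data itself.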

    Read the article

  • wget-ing protected content with exported cookies

    - by XXL
    I have exported a pair of cookies from Firefox that are valid for the URL in question and tried accessing/downloading the protected content at that address, but the end result is a return to the login page. I have tried doing the same thing for 3 other websites with a similar outcome. Any clues as to what I might be doing wrong? The syntax I'm using:

        wget --load-cookies=FILE URL

        DEBUG output created by Wget 1.12 on linux-gnu.
        Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_login lz8xZQ%3D%3D
        Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_pass 2fd4e1c67a2d28fced849ee1bb76e74a
        Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_uid GZX4TDA%3D
        --2011-01-14 13:57:02-- www.x.org/download.php?id=397003
        Resolving www.x.org... 1.1.1.1
        Caching www.x.org => 1.1.1.1
        Connecting to www.x.org|1.1.1.1|:80... connected.
        Created socket 5.
        Releasing 0x0943ef20 (new refcount 1).
        ---request begin---
        GET /download.php?id=397003 HTTP/1.0
        User-Agent: Wget/1.12 (linux-gnu)
        Accept: */*
        Host: www.x.org
        Connection: Keep-Alive
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 302 Found
        Date: Fri, 14 Jan 2011 11:26:19 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.6-1+lenny8
        Set-Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003
        Vary: Accept-Encoding
        Content-Length: 0
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html
        ---response end---
        302 Found
        Stored cookie www.x.org -1 (ANY) / <session> <insecure> [expiry none] PHPSESSID 5f2fd97103f8988554394f23c5897765
        Registered socket 5 for persistent reuse.
        Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003 [following]
        Skipping 0 bytes of body: [] done.
        --2011-01-14 13:57:02-- www.x.org/login.php?returnto=download.php%3Fid%3D397003
        Reusing existing connection to www.x.org:80.
        Reusing fd 5.
        ---request begin---
        GET /login.php?returnto=download.php%3Fid%3D397003 HTTP/1.0
        User-Agent: Wget/1.12 (linux-gnu)
        Accept: */*
        Host: www.x.org
        Connection: Keep-Alive
        Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Date: Fri, 14 Jan 2011 11:26:20 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.6-1+lenny8
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Vary: Accept-Encoding
        Content-Length: 2171
        Keep-Alive: timeout=15, max=99
        Connection: Keep-Alive
        Content-Type: text/html
        ---response end---
        200 OK
        Length: 2171 (2.1K) [text/html]
        Saving to: `x.out'
        0K ..   100% 18.7M=0s
        2011-01-14 13:57:02 (18.7 MB/s) - `x.out' saved [2171/2171]
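    Two things stand out in the trace above: the first GET carries no Cookie header at all, and each stored c_secure_* cookie shows [expiry 1901-12-13 22:25:44], i.e. an expiry timestamp in the past, so wget treats the cookies as expired and never sends them. An illustrative re-test (the file name is a placeholder):

        wget --debug --load-cookies=cookies.txt --keep-session-cookies --save-cookies=cookies-out.txt URL

    If the exported file's expiry column contains zero or negative values, fixing those timestamps (or re-exporting in proper Netscape cookies.txt format) should make wget start sending the cookies.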

    Read the article

  • What would cause my SendMail server not to acknowledge receiving a TCP Sequence?

    - by Mike B
    My TCP/IP stack knowledge is a little rusty, so please bear with me.... I have a CentOS 5.7 server with Sendmail and am seeing intermittent timeout issues sending email (particularly larger email) to other remote domains. It doesn't happen with all attachments or recipient domains, just some. After some extended troubleshooting, I think I've narrowed it down to TCP sequences not being acknowledged. Here's a breakdown of the TCP session from a packet capture I collected directly on my MTA (fooMTA):

        Packets 1-11:  Standard TCP handshake followed by initial SMTP conversation. No errors.
        Packet #12  Recipient MTA: TCP sequence 231. Ack 91.
        Packet #13  FooMTA: TCP sequence 91. Ack 305.
        Packet #14  FooMTA: TCP sequence 1115. Ack 305.
        Packet #15  Recipient MTA: TCP sequence 305. Ack 2495.
        Packet #16  FooMTA: TCP sequence 2495. Ack 305.
        Packet #17  FooMTA: TCP sequence 5255. Ack 305.
        Packet #18  Recipient MTA: TCP sequence 305. Ack 5255.
        Packet #19  FooMTA: TCP sequence 6635. Ack 305.
        Packet #20  FooMTA: TCP sequence 8015. Ack 305.
        Packet #21  Recipient MTA: TCP sequence 305. Ack 8015.
        Packet #22  FooMTA: TCP sequence 10775. Ack 305.
        Packet #23  FooMTA: TCP sequence 13535. Ack 305.
        Packet #24  Recipient MTA: TCP sequence 305. Ack 10775.
        Packet #25  FooMTA: TCP sequence 14915. Ack 305.

    It keeps going like this, with my server still thinking it hasn't received sequence 305; in response, the remote side eventually retransmits its prior data, thinking it never arrived. Eventually the gap gets so large that no new data is sent and the remote MTA keeps retransmitting old stuff. This contributes to an exponential backoff, and eventually the remote side gives up. What's strange to me is that I see the "missing" TCP sequence (305 in this case) arriving back at my server (via a packet capture collected directly on fooMTA), so I don't get why my server keeps asking for it. Could this be firewall related? What would be the next step in troubleshooting?
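    A sensible next step is to capture the same conversation on both sides of any firewall (or at least on fooMTA and a span port) and compare. An illustrative capture command (the interface and peer name are placeholders):

        tcpdump -s 0 -i eth0 -w smtp-foo.pcap host recipient.mta.example and port 25

    A segment that is visible in a local capture yet never acknowledged is a classic signature of bad TCP checksums: with checksum offload enabled, packets captured on the host can look intact while they are corrupt on the wire (or vice versa for inbound traffic mangled by a middlebox), and the receiving stack silently drops them. Comparing captures taken at two points is what distinguishes a firewall/middlebox problem from a local stack problem.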

    Read the article

  • Why is my concurrency capacity so low for my web app on a LAMP EC2 instance?

    - by AMF
    I come from a web developer background and have been humming along building my PHP app, using the CakePHP framework. The problem arose when I began ab (Apache Bench) testing on the Amazon EC2 instance where the app resides. I'm getting pretty horrendous average page load times, even though I'm running a c1.medium instance (2 cores, 2GB RAM), and I think I'm doing everything right. I would run:

        ab -n 200 -c 20 http://localhost/heavy-but-view-cached-page.php

    Here are the results:

        Concurrency Level:      20
        Time taken for tests:   48.197 seconds
        Complete requests:      200
        Failed requests:        0
        Write errors:           0
        Total transferred:      392111200 bytes
        HTML transferred:       392047600 bytes
        Requests per second:    4.15 [#/sec] (mean)
        Time per request:       4819.723 [ms] (mean)
        Time per request:       240.986 [ms] (mean, across all concurrent requests)
        Transfer rate:          7944.88 [Kbytes/sec] received

    While the ab test is running, I run vmstat, which shows that swap stays at 0, CPU is constantly at 80-100% (although I'm not sure I can trust this on a VM), and RAM utilization ramps up to about 1.6G (leaving 400M free). Load goes up to about 8 and the site slows to a crawl.

    Here's what I think I'm doing right on the code side:

    - In the Chrome browser, uncached pages typically load in 800-1000ms, and cached pages load in 300-500ms. Not stunning, but not terrible either.
    - Thanks to view caching, there might be at most one DB query per page load, to write session data. So we can rule out a DB bottleneck.
    - I have APC on.
    - I am using Memcached to serve the view cache and other site caches.
    - The xhprof code profiler shows that cached pages take up 10MB-40MB in memory and 100ms-1000ms in wall time. Pages that would be the worst offenders look something like this in xhprof:

        Total Incl. Wall Time (microsec):  330,143 microsecs
        Total Incl. CPU (microsecs):       320,019 microsecs
        Total Incl. MemUse (bytes):        36,786,192 bytes
        Total Incl. PeakMemUse (bytes):    46,667,008 bytes
        Number of Function Calls:          5,195

    My Apache config:

        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 3
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          120
            MaxRequestsPerChild 1000
        </IfModule>

    Is there something wrong with the server? Some gotcha with EC2? Or is it my code? Some obvious setting I should look into? Too many DNS lookups? What am I missing? I really want to get to 1,000 concurrency capacity, but at this rate, it ain't gonna happen.
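    Two back-of-the-envelope checks using only the figures above: 392,111,200 bytes over 200 requests is roughly 2MB of HTML per response, so at 4.15 req/s ab is already moving about 8MB/s; and at 10-40MB peak per request, MaxClients 120 allows Apache to try to hold far more memory than the instance's 2GB, which matches the load spiral described. An illustrative, more conservative prefork sizing under that assumption:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            # ~1.6GB usable / ~40MB peak per child ~= 40 workers
            MaxClients           40
            MaxRequestsPerChild 500
        </IfModule>

    This caps concurrency below the thrash point rather than raising throughput by itself; the per-response payload size is the other obvious lever.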

    Read the article

  • What is the difference between running a Windows service vs. running through shell?

    - by Zack
    I am trying to troubleshoot an issue on a Windows 2008 server where attempting to connect to a "Timberline Data Source" ODBC driver crashes if the call is made in a "service" context, but succeeds if the call is initiated manually in a Remote Desktop session. I have set the service to run as my user. I'm wondering: all else being equal (user, machine, etc.), are there any fundamental security/environment differences between running a process as a service vs. manually?

    --- Implementation Details ---

    In case it is helpful for anyone: I had a system that started as an attempt to connect to a Timberline database using ODBC and a Python CGI script called via IIS 7. The script itself works fine; however, as soon as I attempt to perform the ODBC connect function, the script crashes without throwing an exception. The script was able to connect fine when executed via the command line. The same thing happened when using a C#/.NET service, or when attempting to run via Apache, Windows Scheduler, or even a 3rd-party scheduling tool. With the last option (the 3rd-party scheduling tool, pycron) I set the service up to log in as my user and had the same issue (I confirmed via Task Manager that the process's running user was, in fact, me). It just doesn't make sense to me why a service, which should be running as my user, appears to still be operating in a different security context or environment. Also, if it's important, the Timberline database is referenced by computer name on the network ("\\timberline-server\Timberline Office\Accounts\AT" or something to that effect). I also realized that, as Joel pointed out, the server DOES have a mapped drive ("Y:", which is mapped to "\\timberline-server\Timberline Office"). The DSN is set up at the "System DSN" level which, according to the ODBC Administration Tool, means that the DSN is available to users and services.

    Since I'm not allowed to answer this question yet, I'll post the solution that I arrived at: as Joel Coel mentioned, there actually was a mapped-drive scenario. I didn't realize this because the DSN specified a path using UNC. However, it seems as though the actual Timberline driver referred to a mapped drive. Since services don't start with the mapped drive, I was forced to add the drive-mapping code into my service. Since it was written in Python, I used code from a Stackoverflow answer that was able to map the drive on the fly.
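    For anyone hitting the same wall: the underlying fact is that drive letters are mapped per logon session, so a service's session never inherits mappings created interactively, even for the same user. An illustrative mapping step a service can run at startup (drive letter and share taken from the question; credential handling omitted):

        net use Y: "\\timberline-server\Timberline Office" /persistent:no

    In Python this is typically just a subprocess call to that same command before opening the ODBC connection.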

    Read the article

  • Windows 7 booting and startup repair issues

    - by aardvark
    I have an MSI FR720 laptop with Windows 7 and Lubuntu partitions. For quite a while (6 months or so) I've been having issues booting from my hard drive; it'd take between 5 minutes and several hours to get it to recognize the hard drive as a bootable device. I did several disk checks on it, and my hard drive seems in perfect condition; the fact that booting would usually only work after removing the hard drive and resetting it in its slot, or lightly shaking it, makes me think it has something to do with the connection in the hard drive slot as opposed to the hard drive itself.

    I was having particular issues with it detecting the hard drive today, so I decided to try booting it from an external hard drive dock. It detected it on the first try and so far has had no problems finding the bootable partitions on my hard drive. When I selected my Windows 7 partition from the boot menu, it said that it hadn't been shut down properly last time and needed startup repair. I've done this several times over the last 6 months, so this is hardly unusual. I do startup repair, it fails, and then I try to do a system restore. The system restore also failed, and it says that no files were changed. I restart and try it again. However, this time when I get to the startup repair it's not detecting a Windows OS. I tried clicking next and doing a startup repair, but the repair always fails. If I ignore the startup repair option and instead select "Launch Windows normally", it will get to the Windows animation, stop halfway through and then crash into a BSoD. I can't read the error on the screen because it immediately switches to black and tries to restart.

    This is my first time asking a question like this online, so let me know if I need to provide any extra information and I'll do my best to give it. I tried using diskpart to find the list of partitions and see if one's labelled as an active partition, but it says that no disks were detected. I can run Lubuntu just fine. I can also see all of my Windows 7 files from it.

    EDIT: The startup repair diagnosis and repair log is this:

        Number of repair attempts: 1

        Session details
        System Disk =
        Windows directory =
        AutoChk Run = 0
        Number of root causes = 1

        Test Performed:
        Name: Check for updates
        Result: Completed successfully. Error code = 0x0
        Time taken = 15ms

        Test Performed:
        Name: System disk test
        Result: Completed successfully. Error code = 0x0
        Time taken = 31ms

        Root cause found.
        If a hard disk is installed, it is not responding.

    Any chance that this is a result of me doing this through an external dock over USB?
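    Given that the repair log's root cause is "a hard disk ... is not responding," one illustrative check from the Windows 7 install/repair media's command prompt is whether the disk and its boot records are visible at all, and to rebuild the BCD if they are:

        diskpart        (then: list disk, list volume)
        bootrec /scanos
        bootrec /rebuildbcd

    If diskpart lists no disks inside WinRE while Lubuntu sees the drive fine, that again implicates the laptop's SATA slot/controller path (or the USB-dock indirection) rather than the disk itself, which is consistent with the reseating behavior described.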

    Read the article

  • Wrapping a point-to-point link

    - by user3712955
    I'm using a pair of IP radios (non-WiFi) to bridge my office engineering LAN (172.0.0.0/8) to a lab in another building. The radios work fine, but they expose a web management interface I'd like to hide, and they also generate traffic (ARP, STP, and more) that I need to keep off my (very, very clean) LAN segments.

    I have some ARM-Linux boards (similar to Beagle/Panda/RasPi) running Ubuntu, and I've put one at each end of the link, between the radio and the LAN. Each of the boards has 2 wired Ethernet interfaces, eth0 and eth1. The LAN segments are connected to eth0, and the radios are connected to eth1. I'd like to accomplish the following:

    1. Keep radio-originated traffic off my LAN segments!
    2. Hide all services provided by the radio (web, ssh, etc.)
    3. Transparently pass all traffic between the LAN segments (including things like ARP).
    4. The above also applies to the ARM-Linux boards: no stray traffic on my LAN from them either!

    I'd like the system to look like a switch: LAN packets arriving at one eth0 appear at the other, and neither eth0 should have an IP address. The working system should behave like a CAT6 cable with some latency (instead of ARM boards and radios).

    Unfortunately, I'm confused about how to properly configure the ARM Ubuntu systems. What I'm guessing I need is a bridge on each board (br0?) and a VLAN (vlan0 or eth0.0?) to isolate the LAN traffic from everything else as it passes through the ARM boards and the radios. Then I need some kind of firewall to block sending anything out eth0 that isn't from the other eth0 (via the VLAN). I've looked at the ip and ebtables commands (especially -t broute). While the concepts sorta-kinda make sense, I'm completely lost in the details; a sketch of the filtering idea follows this entry.

    Edit: In the perverse case that a system on one of my LAN segments has the same IP address as one of the radios, or as eth1 on the ARM-Ubuntu boards, a VLAN won't work. Which I believe means I need to tunnel all traffic between the two eth0 interfaces to get that "like a wire" behavior. Help?

    Finally, I'd like to have a way to temporarily expose services on the ARM boards (ssh) and the radios (web) for maintenance purposes. Ideally, it would expose an IP address with ssh available on port 22. Once connected, I figure I'd start an X11 session and run a browser on the ARM board to access the radios. Or something. I could log in via the console to enable/disable this, or perhaps use a GPIO to trigger a script. I feel I've identified most of the pieces needed to make all this happen, but I have no idea how to combine them into a working system. Thanks!
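    For the record, the bridge-plus-ebtables idea in the question can be expressed roughly like this on each board (interface roles as described above; the radio's MAC address is a hypothetical placeholder):

        brctl addbr br0
        brctl addif br0 eth0
        brctl addif br0 eth1
        ip link set br0 up

        # default-deny forwarding; drop frames the radio itself originates,
        # then allow only LAN <-> radio-link traffic
        ebtables -P FORWARD DROP
        ebtables -A FORWARD -s 00:11:22:33:44:55 -j DROP
        ebtables -A FORWARD -i eth0 -o eth1 -j ACCEPT
        ebtables -A FORWARD -i eth1 -o eth0 -j ACCEPT

    This only sketches the filtering; it does not address the duplicate-IP concern in the edit, where an L2 tunnel between the two boards (GRE tap or L2TPv3) is the usual way to get the fully opaque "wire" behavior.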

    Read the article

  • Select number of rows for each group where two column values makes one group

    - by Fábio Antunes
    I have two select statements joined by UNION ALL. The first statement gathers only rows that have been shown previously to the user; the second gathers all rows that haven't been shown to the user. I therefore end up with the viewed results first and the non-viewed results after. Of course this could simply be achieved with a single select statement using ORDER BY; however, the reason for two separate selects is simple once you realize what I hope to accomplish. Consider the following structure and data:

        +----+------+-----+--------+------+
        | id | from | to  | viewed | data |
        +----+------+-----+--------+------+
        |  1 |    1 |  10 | true   | .... |
        |  2 |   10 |   1 | true   | .... |
        |  3 |    1 |  10 | true   | .... |
        |  4 |    6 |   8 | true   | .... |
        |  5 |    1 |  10 | true   | .... |
        |  6 |   10 |   1 | true   | .... |
        |  7 |    8 |   6 | true   | .... |
        |  8 |   10 |   1 | true   | .... |
        |  9 |    6 |   8 | true   | .... |
        | 10 |    2 |   3 | true   | .... |
        | 11 |    1 |  10 | true   | .... |
        | 12 |    8 |   6 | true   | .... |
        | 13 |   10 |   1 | false  | .... |
        | 14 |    1 |  10 | false  | .... |
        | 15 |    6 |   8 | false  | .... |
        | 16 |   10 |   1 | false  | .... |
        | 17 |    8 |   6 | false  | .... |
        | 18 |    3 |   2 | false  | .... |
        +----+------+-----+--------+------+

    Basically, I wish all non-viewed rows to be selected by the statement; that is accomplished by checking whether the viewed column is true or false. Pretty simple and straightforward, nothing to worry about here. However, when it comes to the rows already viewed, meaning the column viewed is TRUE, I only want 3 rows to be returned for each group. The appropriate result in this instance should be the 3 most recent rows of each group:

        +----+------+-----+--------+------+
        | id | from | to  | viewed | data |
        +----+------+-----+--------+------+
        |  6 |   10 |   1 | true   | .... |
        |  7 |    8 |   6 | true   | .... |
        |  8 |   10 |   1 | true   | .... |
        |  9 |    6 |   8 | true   | .... |
        | 10 |    2 |   3 | true   | .... |
        | 11 |    1 |  10 | true   | .... |
        | 12 |    8 |   6 | true   | .... |
        +----+------+-----+--------+------+

    As you can see from the ideal result set, we have three groups. The desired query for the viewed results should show a maximum of 3 rows for each grouping it finds. In this case those groupings were 10 with 1 and 8 with 6, both of which had three rows to show, while the other group, 2 with 3, had only one row to show. Please note that where from = x and to = y, that makes the same grouping as from = y and to = x. Therefore, considering the first grouping (10 with 1), from = 10 and to = 1 is the same group as from = 1 and to = 10. There are plenty of groups in the whole table, and I only wish the 3 most recent of each to be returned by the select statement. That's my problem: I am not sure how that can be accomplished in the most efficient way possible, considering the table will have hundreds if not thousands of records at some point. Thanks for your help.

    Note: The columns id, from, to and viewed are indexed; that should help with performance.

    PS: I'm unsure how to name this question exactly; if you have a better idea, be my guest and edit the title.
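    A sketch of how the viewed branch could be written with window functions (MySQL 8+ / PostgreSQL syntax; the table name messages is a placeholder, and from/to are quoted because they are reserved words):

        SELECT id, "from", "to", viewed, data
        FROM (
          SELECT m.*,
                 ROW_NUMBER() OVER (
                   PARTITION BY LEAST("from","to"), GREATEST("from","to")
                   ORDER BY id DESC
                 ) AS rn
          FROM messages m
          WHERE viewed = true
        ) ranked
        WHERE rn <= 3
        ORDER BY id;

    LEAST/GREATEST collapse (from, to) and (to, from) into one partition key, matching the note above, and ROW_NUMBER ... ORDER BY id DESC keeps the 3 newest rows per pair; against the sample data this yields exactly ids 6-12.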

    Read the article

  • Rails: (Devise) Two different methods for new users?

    - by neezer
    I have a Rails 3 app with authentication set up using Devise with the registerable module enabled. I want new users who sign up using our outside registration form to use the full Devise registerable module, which is happening now. However, I also want the admin user to be able to create new users directly, bypassing (I think) Devise's registerable module.

    With registerable disabled, my standard UsersController works as I want it to for the admin user, just like any other Rails scaffold. However, now new users can't register on their own. With registerable enabled, my standard UsersController is never called for the new-user action (Devise::RegistrationsController is called instead), and my CRUD actions don't seem to work at all (I get dumped back onto my root page with no new user created and no flash message). Here's the log from the request:

        Started POST "/users" for 127.0.0.1 at 2010-12-20 11:49:31 -0500
          Processing by Devise::RegistrationsController#create as HTML
          Parameters: {"utf8"=>"?", "authenticity_token"=>"18697r4syNNWHfMTkDCwcDYphjos+68rPFsaYKVjo8Y=", "user"=>{"email"=>"[email protected]", "password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]", "role"=>"manager"}, "commit"=>"Create User"}
          SQL (0.9ms) ...
          User Load (0.6ms) SELECT "users".* FROM "users" WHERE ("users"."id" = 2) LIMIT 1
          SQL (0.9ms) ...
        Redirected to http://test-app.local/
        Completed 302 Found in 192ms

    ... but I am able to register new users through the outside form. How can I get both of these methods to work together, such that my admin user can manually create new users and guest users can register on their own?

    I have my UsersController set up for standard CRUD:

        class UsersController < ApplicationController
          load_and_authorize_resource

          def index
            @users = User.where("id NOT IN (?)", current_user.id) # don't display the current user in the users list; go to account management to edit current user details
          end

          def new
            @user = User.new
          end

          def create
            @user = User.new(params[:user])
            if @user.save
              flash[:notice] = "#{ @user.email } created."
              redirect_to users_path
            else
              render :action => 'new'
            end
          end

          def edit
          end

          def update
            params[:user].delete(:password) if params[:user][:password].blank?
            params[:user].delete(:password_confirmation) if params[:user][:password].blank? and params[:user][:password_confirmation].blank?
            if @user.update_attributes(params[:user])
              flash[:notice] = "Successfully updated User."
              redirect_to users_path
            else
              render :action => 'edit'
            end
          end

          def delete
          end

          def destroy
            redirect_to users_path and return if params[:cancel]
            if @user.destroy
              flash[:notice] = "#{ @user.email } deleted."
              redirect_to users_path
            end
          end
        end

    And my routes are set up as follows:

        TestApp::Application.routes.draw do
          devise_for :users

          devise_scope :user do
            get "/login", :to => "devise/sessions#new", :as => :new_user_session
            get "/logout", :to => "devise/sessions#destroy", :as => :destroy_user_session
          end

          resources :users do
            get :delete, :on => :member
          end

          authenticate :user do
            root :to => "application#index"
          end

          root :to => "devise/session#new"
        end
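    One common way out of this kind of POST /users collision (illustrative only; the path choice is arbitrary) is to keep Devise on its default paths and move the admin CRUD off /users, so the two controllers stop competing for the same route:

        # config/routes.rb (sketch)
        devise_for :users

        resources :users, :path => "admin/users" do
          get :delete, :on => :member
        end

    With that, POST /users still reaches Devise::RegistrationsController for self-registration, while the admin form posts to /admin/users and lands in UsersController#create.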

    Read the article

  • Guarding against CSRF Attacks in ASP.NET MVC2

    - by srkirkland
    Alongside XSS (Cross-Site Scripting) and SQL Injection, Cross-Site Request Forgery (CSRF) attacks represent the three most common and dangerous vulnerabilities in common web applications today. CSRF attacks are probably the least well known, but they are relatively easy to exploit and extremely, and increasingly, dangerous. For more information on CSRF attacks, see these posts by Phil Haack and Steve Sanderson.

    The recognized solution for preventing CSRF attacks is to put a user-specific token as a hidden field inside your forms, then check that the right value was submitted. It's best to use a random value which you've stored in the visitor's Session collection or in a cookie (so an attacker can't guess the value).

    ASP.NET MVC to the rescue

    ASP.NET MVC provides an HtmlHelper called AntiForgeryToken(). When you call <%= Html.AntiForgeryToken() %> in a form on your page, you will get a hidden input and a cookie with a random string assigned. Next, on your target action you need to include [ValidateAntiForgeryToken], which handles the verification that the correct token was supplied.

    Good, but we can do better

    Using the AntiForgeryToken is actually quite an elegant solution, but adding [ValidateAntiForgeryToken] on all of your POST methods is not very DRY, and worse, can be easily forgotten. Let's see if we can make this easier on the programmer by moving from an "opt-in" model of protection to an "opt-out" model.

    Using AntiForgeryToken by default

    In order to mandate the use of the AntiForgeryToken, we're going to create an ActionFilterAttribute which will do the anti-forgery validation on every POST request. First, we need a way to opt out of this behavior, so let's create a quick action filter called BypassAntiForgeryToken:

        [AttributeUsage(AttributeTargets.Method, AllowMultiple=false)]
        public class BypassAntiForgeryTokenAttribute : ActionFilterAttribute
        {
        }

    Now we are ready to implement the main action filter, which will force anti-forgery validation on all POST actions within any class it is defined on:

        [AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
        public class UseAntiForgeryTokenOnPostByDefault : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                if (ShouldValidateAntiForgeryTokenManually(filterContext))
                {
                    var authorizationContext = new AuthorizationContext(filterContext.Controller.ControllerContext);

                    //Use the authorization of the anti forgery token,
                    //which can't be inherited from because it is sealed
                    new ValidateAntiForgeryTokenAttribute().OnAuthorization(authorizationContext);
                }

                base.OnActionExecuting(filterContext);
            }

            /// <summary>
            /// We should validate the anti forgery token manually if the following criteria are met:
            /// 1. The http method must be POST
            /// 2. There is not an existing [ValidateAntiForgeryToken] attribute on the action
            /// 3. There is no [BypassAntiForgeryToken] attribute on the action
            /// </summary>
            private static bool ShouldValidateAntiForgeryTokenManually(ActionExecutingContext filterContext)
            {
                var httpMethod = filterContext.HttpContext.Request.HttpMethod;

                //1. The http method must be POST
                if (httpMethod != "POST") return false;

                // 2. There is not an existing anti forgery token attribute on the action
                var antiForgeryAttributes =
                    filterContext.ActionDescriptor.GetCustomAttributes(typeof(ValidateAntiForgeryTokenAttribute), false);

                if (antiForgeryAttributes.Length > 0) return false;

                // 3. There is no [BypassAntiForgeryToken] attribute on the action
                var ignoreAntiForgeryAttributes =
                    filterContext.ActionDescriptor.GetCustomAttributes(typeof(BypassAntiForgeryTokenAttribute), false);

                if (ignoreAntiForgeryAttributes.Length > 0) return false;

                return true;
            }
        }

    The code above is pretty straightforward -- first we check to make sure this is a POST request, then we make sure there aren't any overriding *AntiForgeryTokenAttributes on the action being executed. If we have a candidate, then we call the ValidateAntiForgeryTokenAttribute class directly and execute OnAuthorization() on the current authorization context. Now you can use this new attribute on your base controller to start protecting your site from CSRF vulnerabilities:

        [UseAntiForgeryTokenOnPostByDefault]
        public class ApplicationController : System.Web.Mvc.Controller { }

        //Then for all of your controllers
        public class HomeController : ApplicationController { }

    What we accomplished

    If your base controller has the new default anti-forgery token attribute on it, when you don't use <%= Html.AntiForgeryToken() %> in a form (or of course when an attacker doesn't supply one), the POST action will throw the descriptive error message "A required anti-forgery token was not supplied or was invalid". Attack foiled!

    In summary, I think having an anti-CSRF policy by default is an effective way to protect your websites, and it turns out it is pretty easy to accomplish as well. Enjoy!
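    For completeness, the matching view side is just the helper call inside each form; an illustrative ASPX-view fragment (the controller/action names are placeholders):

        <% using (Html.BeginForm("Create", "Users")) { %>
            <%= Html.AntiForgeryToken() %>
            <%-- form fields go here --%>
            <input type="submit" value="Save" />
        <% } %>

    Html.BeginForm and Html.AntiForgeryToken are the standard MVC helpers; nothing else changes on the view side once the opt-out filter above is in place.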

    Read the article

  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
    My second look at SafePeak's new version (2.1) revealed a few additional interesting features. For those of you who haven't read my previous reviews of SafePeak and are not familiar with it, here is a quick brief: SafePeak is in the business of accelerating the performance and scalability of SQL Server applications, without making code changes to the applications or to the databases. SafePeak performs database dynamic caching by caching in memory the result sets of queries and stored procedures, while keeping all those caches correct and up to date. Cached queries are retrieved from SafePeak's RAM at microsecond speed and are not sent to the SQL Server. The application gets much faster results (100-500 microseconds), the load on the SQL Server is reduced (less CPU and IO), and the application or the infrastructure gets better scalability. The SafePeak solution is hosted within your cloud servers, hosted servers or enterprise servers, as part of the application architecture. Connecting the application is done via a change of connection strings or by adding a reroute line in the c:\windows\system32\drivers\etc\hosts file on all application servers (see the sketch after this entry). For those who would like to learn more about SafePeak's architecture and how it works, I suggest reading this vendor webpage: SafePeak Architecture.

    More interesting new features in SafePeak 2.1

    In my previous review of the new SafePeak I covered the first 4 things I noticed (check out my article "SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration"):

    1. Cache setup and fine-tuning – a critical part of getting good caching results
    2. Database templates
    3. Choosing which database to cache
    4. Monitoring and analysis options by SafePeak

    Since then I have had a chance to play with SafePeak some more, and here is what I found.

    5. Analysis of SQL performance (present and history)

    In SafePeak v2.1 the tools for understanding performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or procedure execution) that arrives at SafePeak gets a SQL pattern, and once the pattern recurs there are statistics for it. An important part of this product is that it understands the dependencies of every pattern (the list of tables, views, user-defined functions and procs). From this understanding SafePeak creates important analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percent of traffic, and a breakdown of behavior. One of the interesting things this behavior column shows is how often the object is actually updated. The breakdown analysis provides the above information for queries and procedures, tables, views, databases and even at the instance level. The data is now shown for all arriving queries: read queries (which can be cached), but also all types of updates such as DML, DDL, DCL and even session-settings queries. The stats are updated every 15 minutes, and the SafePeak dashboard allows going back in time and investigating what happened within any time frame.

    6. Logon trigger, for making sure nothing corrupts SafePeak's cache data

    If you have an application with many parts, many servers, or many possible locations that can update the database, or if the SQL Server is accessible to many DBAs or software engineers, each of them can access some database directly and make changes without going through SafePeak – this can create a potential corruption of the data stored in SafePeak's cache. To keep its cache correct, SafePeak needs all updates to arrive through SafePeak; if a DBA accesses the database directly and makes changes, for example, then SafePeak will simply not know about it and will not clean its cache. In the new version, SafePeak brings a new feature called "Logon Trigger" to solve this challenge. With a click of a button SafePeak can deploy a special server logon trigger (with a CLR object) on your SQL Server that monitors all connections and informs SafePeak of any connection that is not coming from SafePeak. In the SafePeak dashboard there is an interface that lets you control which logins can be ignored, based on login names and IPs; the rest will invoke a cache cleanup in SafePeak and actually lock SafePeak's cache until that connection is closed. Important to note: this does not interrupt any logins; it only informs SafePeak of such connections. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. Configuration of this feature in the SafePeak dashboard is done here: Settings -> SQL instances management -> click on instance -> Logon Trigger tab.

    Other features:

    7. User management: the ability to grant someone permissions without changing your configuration, using SafePeak only as a performance analysis tool.
    8. Better reports for analysis of performance, using 15-minute-resolution charts.
    9. Caching of client cursors.
    10. Support for IPv6.

    Summary

    SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability and peak-spike challenges, especially if your apps are packaged or third-party, since no code changes are made. SafePeak can significantly improve response times by reducing network round trips to the database, decreasing CPU resource usage, and eliminating I/O and storage access. The SafePeak team provides a free, fully functional trial at www.safepeak.com/download and actually provides one-on-one assistance during the trial.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
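    As an aside, the hosts-file reroute mentioned above is just standard name-to-IP pinning; an illustrative entry (the host name and address are hypothetical):

        # c:\windows\system32\drivers\etc\hosts on each application server:
        # point the SQL Server host name the app already uses at the SafePeak machine
        10.0.0.50    sqlprod01

    The application keeps its existing connection string, resolves sqlprod01 to the SafePeak address, and SafePeak forwards the traffic to the real SQL Server.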

    Read the article

  • The ASP.NET Daily Community Spotlight - How posts get there, and how to make it your Visual Studio Start Page

    - by Jon Galloway
    One really cool part of my job is selecting the articles for the Daily Community Spotlight, on the home page of the ASP.NET website. The spotlight highlights a new post about ASP.NET development every day from a member of the ASP.NET community. You can find it on the home page of the ASP.NET site, at http://asp.net. These posts aren't automatically drawn from a pool of RSS feeds or anything - I pick a new post for each day of the year.

    How I pick the posts

    I have a few important selection criteria:

    Interesting to well-rounded ASP.NET developers. The ASP.NET website has a lot of material for all skill and experience levels, from download / get started to advanced. I try to select community spotlight posts to round that out with fresh and timely information that working ASP.NET developers can really use. Posts highlight solutions to common problems, clever projects and code that help you leverage ASP.NET, and important announcements about things you can use today. As part of that, I try to mix between ASP.NET MVC, Web Forms, and Web Pages (a.k.a. WebMatrix). As a professional developer, I want to keep on top of all of my options for ASP.NET development, and the common platform base they all share generally means that good ASP.NET code is good ASP.NET code.

    Exposing new and non-Microsoft community members as much as possible. The exercise of selecting good ASP.NET community posts every day of the year has made me think about what the community is. Given the choice, I'll always favor non-Microsoft employees, but since Microsoft often hires ASP.NET community members and MVPs (myself included), I really think that the ASP.NET community includes developers who are using and writing about ASP.NET, both inside and outside of Microsoft. I'm especially excited about the opportunity to highlight new and lesser-known bloggers. Usually being featured on the ASP.NET Community Spotlight gives a pretty good traffic bump, and I love being able both to provide great content to the community and to encourage lesser-known community members by giving them some (much deserved) attention.

    Announcements only when they're useful to working developers - not marketing. Some of the posts are announcements about new releases, such as Scott Hanselman's post on ASP.NET Universal Providers for Session, Membership, and Roles. I include those when I think they're interesting and of immediate use to you on projects. I occasionally get asked to link to new content from a team at Microsoft; if it's useful and timely content I'll ask them to point me to a blog post by an actual person rather than a faceless team.

    How the posts are managed

    This feed used to be managed by an internal spreadsheet on a Sharepoint site, which was painful for a lot of reasons. I took a cue from Jon Udell, who uses a public Delicious feed for his Elm City project, and we moved the management of these posts over to a Delicious feed as well. You can hear more about Jon's use of Delicious in Elm City in our Herding Code interview - still one of my favorite interviews. We ended up with a simpler scenario.

    Note: I watched the Yahoo/Delicious news over the past year and was happy to see that Delicious was recently acquired by the founders of YouTube. I investigated several other Delicious competitors, but am happy with Delicious for now.

    My Delicious feed is here: http://www.delicious.com/jon_galloway

    You can also browse through this past year's ASP.NET Community Spotlight posts using the (pretty cool) Delicious Browse Bar.

    Submitting articles

    I'm always on the lookout for new articles to feature. The best way to get them to me is to share them via Delicious. It's pretty easy - sign up for an account, then you can add a post and share it to me. Alternatively, you can send them to me via Twitter (@jongalloway) or e-mail (). If you do e-mail me, it helps to include a short description and your full name so I can credit you. Way too many developer blogs don't include names and pictures; if I can't find them, I can't feature the post.

    Subscribing to the Community Spotlight feed

    The Community Spotlight is available as an RSS feed, so you might want to subscribe to it: http://www.asp.net/rss/spotlight

    Setting the ASP.NET Community Spotlight feed as your Visual Studio start page

    If you're an ASP.NET developer, you might consider setting the ASP.NET Community Spotlight as the content for your Visual Studio Start Page. It's really easy - here's how to do it in Visual Studio 2010:

    1. Display the Visual Studio Start Page if it's not already showing (View / Start Page)
    2. Click on the Latest News tab and enter the following RSS URL: http://www.asp.net/rss/spotlight
    3. If you didn't previously have RSS feeds enabled for your start page, click the Enable RSS Feed button

    Now, every time you start up Visual Studio you'll see great content from members of the ASP.NET community. You can also configure - and disable, if you'd like - the Visual Studio start page in the Tools / Options / Environment / Startup dialog.

    Credits

    I'll do a follow-up highlighting some places I commonly find great content for the feed, but I'd like to specifically point out two of them:

    - Elijah Manor posts a lot of great content, which is available in his Twitter feed at @elijahmanor, on his Delicious feed, and on a dedicated website - Web Dev Tweets
    - Chris Alcock's The Morning Brew is a must-read blog which highlights each day's best blog posts across the .NET community. He's an absolute machine, and no matter how obscure the post I find, I can guarantee he'll find it as well if he hasn't already. Did I say must-read?

    Read the article

  • Interview with Tomas Ulin at the MySQL Innovation Day

    - by Monica Kumar
    MySQL Innovation Day, held on June 5, 2012, was a great event for MySQL engineers, users and customers to gather, share and network. I was able to get a few minutes with Tomas Ulin, Vice President of MySQL Engineering at Oracle, to ask him some questions. Here are the highlights of my interview with Tomas.

    Monica: This was the first MySQL Innovation Day, correct? Why now - what was the strategy behind hosting this kind of event?

    Tomas: In the last year, we have rolled out an incredible number of MySQL events worldwide - some targeted at developers that are new to MySQL and others for the MySQL savvy. At the MySQL Innovation Day, our first event of this kind, we had a number of our key engineers presenting lightning talks, delivering previews of key new features as well as discussing the roadmap. Our goal is to keep an open dialogue with the MySQL community. In fact, we are hosting a two-day conference, another first, for the MySQL community called MySQL Connect on Sept. 29-30 in San Francisco. If you attended the MySQL Innovation Day and liked what we did, you are going to love MySQL Connect. We'll have a lot more of our engineers and many users and community members presenting hour-long sessions and hands-on labs. Our engineers will be presenting new MySQL features as well as offering previews of upcoming enhancements.

    Monica: What's the big takeaway from today's MySQL Innovation Day?

    Tomas: I hope the most important takeaway for attendees was to see that Oracle has been driving, and continues to drive, MySQL innovation with a steady stream of great new GA and Development Milestone releases.

    Monica: What were attendees most interested in? What feedback did they have?

    Tomas: Feedback from attendees was incredibly positive and encouraging. In particular, they liked the interaction with the MySQL engineers and were also excited about the new early access features in MySQL 5.6 and MySQL Cluster 7.3. In addition, sessions delivered by MySQL users like Facebook, Pinterest and Twitter were very well received. For example, Pinterest talked about using MySQL to scale from 0 to billions of page views per month, Twitter talked about "Scaling Twitter with MySQL" and Facebook discussed the many options to implement MySQL master failover solutions. The presentations are already available for download, and some of the session videos will be made available on the MySQL Innovation Day web page shortly.

    Monica: How would you distinguish the use of MySQL vs. Oracle Database? What key factors should customers consider?

    Tomas: MySQL and Oracle Database complement each other. They are very different products, best suited to different use cases. Customers can choose world-class solutions from Oracle to fulfill a variety of needs. MySQL is a great choice for enterprise web-based, custom and embedded apps. Oracle Database is the leading choice for enterprise packaged applications such as ERP and CRM, as well as high-end data warehousing and business intelligence applications.

    Monica: What are the highlights of the current MySQL 5.6 Development Milestone Release and the early access features for MySQL Cluster 7.3?

    Tomas: The MySQL 5.6 development milestone release builds on MySQL 5.5 by improving:

    - the Optimizer, for better performance and scalability
    - the Performance Schema, for better instrumentation
    - InnoDB, for better transactional throughput
    - Replication, for higher availability and data integrity
    - NoSQL options, for more flexibility

    We announced some new early access features in MySQL 5.6, including binary log group commit. We also announced early access features in MySQL Cluster 7.3, including support for foreign key constraints.

    Monica: How do people get these releases?

    Tomas: You can access the development milestone releases by going to http://dev.mysql.com/downloads/mysql and selecting the "Development Release" tab. MySQL Cluster 7.3 and the other early access features can be downloaded at http://labs.mysql.com.

    Monica: What's coming up next for MySQL?

    Tomas: Our development team is working in overdrive, cranking out new features with community feedback. Don't miss the MySQL Connect conference being held in San Francisco on Sept. 29 and 30. My team and I will be there. I hope you can join us!

    Monica: Thank you for your time, Tomas. I look forward to seeing you at the MySQL Connect conference.

    To our followers, I hope you found this interview informative. I welcome your comments. Please stay tuned here for more updates on MySQL.

    Note: Monica Kumar is Senior Director of product marketing for Linux, Virtualization and MySQL at Oracle.

    Read the article

  • Ten - oh, wait, eleven - Eleven things you should know about the ASP.NET Fall 2012 Update

    - by Jon Galloway
    Today, just a little over two months after the big ASP.NET 4.5 / ASP.NET MVC 4 / ASP.NET Web API / Visual Studio 2012 / Web Matrix 2 release, the first preview of the ASP.NET Fall 2012 Update is out. Here's what you need to know:
    1. There are no new framework bits in this release - there's no change or update to the core ASP.NET, ASP.NET MVC or Web Forms features. This means that you can start using it without any updates to your server, upgrade concerns, etc.
    2. This update is really an update to the project templates and Visual Studio tooling, conceptually similar to the ASP.NET MVC 3 Tools Update.
    3. It's a relatively lightweight install. It's a 41MB download. I've installed it many times and it usually takes 5-7 minutes; it's never required a reboot.
    4. It adds some new project templates to ASP.NET MVC: Facebook Application and Single Page Application templates.
    5. It adds a lot of cool enhancements to ASP.NET Web API.
    6. It adds some tooling that makes it easy to take advantage of features like SignalR, Friendly URLs, and Windows Azure Authentication.
    7. Most of the new features are installed via NuGet packages.
    8. Since ASP.NET is open source, nightly NuGet packages are available, and the roadmap is published, most of this has really been publicly available for a while.
    9. The official name of this drop is the ASP.NET Fall 2012 Update BUILD Prerelease. Please do not attempt to say that ten times fast. While the EULA doesn't prohibit it, it WILL legally change your first name to Scott.
    10. As with all new releases, you can find out everything you need to know about the Fall Update at http://asp.net/vnext (especially the release notes!)
    11. I'm going to be showing all of this off, assisted by special guest code monkey Scott Hanselman, this Friday at BUILD: Bleeding edge ASP.NET: See what is next for MVC, Web API, SignalR and more… (and I've heard it will be livestreamed).
    Let's look at some of those things in more detail.
    No new bits
    ASP.NET 4.5, MVC 4 and Web API have a lot of great core features. I see the goal of this update release as making it easier to put those features to use to solve some useful scenarios by taking advantage of NuGet packages and template code. If you create a new ASP.NET MVC application using one of the new templates, you'll see that it's using the ASP.NET MVC 4 RTM NuGet package (4.0.20710.0). This means you can install and use the Fall Update without any impact on your existing projects and no worries about upgrading or compatibility.
    New Facebook Application Template
    ASP.NET MVC 4 (and ASP.NET 4.5 Web Forms) included the ability to authenticate your users via OAuth and OpenID, so you could let users log in to your site using a Facebook account. One of the new changes in the Fall Update is a new template that makes it really easy to create full Facebook applications. You could create a Facebook application in ASP.NET already, you'd just need to go through a few steps:
    1. Search around to find a good Facebook NuGet package, like the Facebook C# SDK (written by my friend Nathan Totten and some other Facebook SDK brainiacs).
    2. Read the Facebook developer documentation to figure out how to authenticate and integrate with them.
    3. Write some code, debug it and repeat until you got something working.
    4. Get started with the application you'd originally wanted to write.
    What this template does for you: eliminate steps 1-3.
    Erik Porter, Nathan and some other experts built out the Facebook Application template so it automatically pulls in and configures the Facebook NuGet package and makes it really easy to take advantage of it in an ASP.NET MVC application. One great example is the way you access a Facebook user's information. Take a look at the following code in a File / New / MVC / Facebook Application site. First, the Home Controller Index action:

    [FacebookAuthorize(Permissions = "email")]
    public ActionResult Index(MyAppUser user, FacebookObjectList<MyAppUserFriend> userFriends)
    {
        ViewBag.Message = "Modify this template to jump-start your Facebook application using ASP.NET MVC.";
        ViewBag.User = user;
        ViewBag.Friends = userFriends.Take(5);
        return View();
    }

    First, notice that there's a FacebookAuthorize attribute which requires that the user is authenticated via Facebook and requires permissions to access their e-mail address. It binds to two things: a custom MyAppUser object and a list of friends. Let's look at the MyAppUser code:

    using Microsoft.AspNet.Mvc.Facebook.Attributes;
    using Microsoft.AspNet.Mvc.Facebook.Models;

    // Add any fields you want to be saved for each user and specify the field name in the JSON coming back from Facebook
    // https://developers.facebook.com/docs/reference/api/user/
    namespace MvcApplication3.Models
    {
        public class MyAppUser : FacebookUser
        {
            public string Name { get; set; }
            [FacebookField(FieldName = "picture", JsonField = "picture.data.url")]
            public string PictureUrl { get; set; }
            public string Email { get; set; }
        }
    }

    You can add in other custom fields if you want, but you can also just bind to a FacebookUser and it will automatically pull in the available fields. You can even just bind directly to a FacebookUser and check for what's available in debug mode, which makes it really easy to explore. For more information and some walkthroughs on creating Facebook applications, see:
    - Deploying your first Facebook App on Azure using ASP.NET MVC Facebook Template (Yao Huang Lin)
    - Facebook Application Template Tutorial (Erik Porter)
    Single Page Application template
    Early releases of ASP.NET MVC 4 included a Single Page Application template, but it was removed for the official release. There was a lot of interest in it, but it was kind of complex, as it handled features for things like data management. The new Single Page Application template that ships with the Fall Update is more lightweight. It uses Knockout.js on the client and ASP.NET Web API on the server, and it includes a sample application that shows how they all work together. I think the real benefit of this application is that it shows a good pattern for using ASP.NET Web API and Knockout.js. For instance, it's easy to end up with a mess of JavaScript when you're building out a client-side application. This template uses three separate JavaScript files (delivered via a Bundle, of course):
    - todoList.js - this is where the main client-side logic lives
    - todoList.dataAccess.js - this defines how the client-side application interacts with the back-end services
    - todoList.bindings.js - this is where you set up events and overrides for the Knockout bindings - for instance, hooking up jQuery validation and defining some client-side events
    This is a fun one to play with, because you can just create a new Single Page Application and hit F5.
    Quick, easy install (with one gotcha)
    One of the cool engineering changes for this release is a big update to the installer to make it more lightweight and efficient.
    I've been running nightly builds of this for a few weeks to prep for my BUILD demos, and the install has been really quick and easy to use. The install takes about 5 minutes, has never required a reboot for me, and the uninstall is just as simple. There's one gotcha, though. In this preview release, you may hit an issue that will require you to uninstall and re-install the NuGet VSIX package. The problem comes up when you create a new MVC application and see an error dialog. The solution, as explained in the release notes, is to uninstall and re-install the NuGet VSIX package:
    1. Start Visual Studio 2012 as an Administrator
    2. Go to Tools->Extensions and Updates and uninstall NuGet.
    3. Close Visual Studio
    4. Navigate to the ASP.NET Fall 2012 Update installation folder:
       - For Visual Studio 2012: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio 2012
       - For Visual Studio 2012 Express for Web: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio Express 2012 for Web
    5. Double-click on the NuGet.Tools.vsix to reinstall NuGet
    This took me under a minute to do, and I was up and running.
    ASP.NET Web API Update Extravaganza!
    Uh, the Web API team is out of hand. They added a ton of new stuff: OData support, Tracing, and API Help Page generation.
    OData support
    Some people like OData. Some people start twitching when you mention it. If you're in the first group, this is for you. You can add a [Queryable] attribute to an API that returns an IQueryable<Whatever> and you get OData query support from your clients. Then, without any extra changes to your client or server code, your clients can send filters like this: /Suppliers?$filter=Name eq ‘Microsoft’
    For more information about OData support in ASP.NET Web API, see Alex James' mega-post about it: OData support in ASP.NET Web API
    ASP.NET Web API Tracing
    Tracing makes it really easy to leverage the .NET Tracing system from within your ASP.NET Web APIs. If you look at the \App_Start\WebApiConfig.cs file in a new ASP.NET Web API project, you'll see a call to TraceConfig.Register(config). That calls into some code in the new \App_Start\TraceConfig.cs file:

    public static void Register(HttpConfiguration configuration)
    {
        if (configuration == null)
        {
            throw new ArgumentNullException("configuration");
        }

        SystemDiagnosticsTraceWriter traceWriter = new SystemDiagnosticsTraceWriter()
        {
            MinimumLevel = TraceLevel.Info,
            IsVerbose = false
        };
        configuration.Services.Replace(typeof(ITraceWriter), traceWriter);
    }

    As you can see, this is using the standard trace system, so you can extend it to any other trace listeners you'd like.
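    Since the trace writer is swapped in through configuration.Services.Replace, you can plug in your own ITraceWriter. Here's a minimal sketch of a custom writer that forwards traces to the debugger's Output window - DebugTraceWriter is an illustrative name, not a type that ships in the update:

    using System;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Web.Http.Tracing;

    // Sketch of a custom trace writer: let the framework fill in a
    // TraceRecord via the callback, then forward it to Debug output.
    public class DebugTraceWriter : ITraceWriter
    {
        public void Trace(HttpRequestMessage request, string category,
                          TraceLevel level, Action<TraceRecord> traceAction)
        {
            var record = new TraceRecord(request, category, level);
            traceAction(record); // populates Message, Operation, Operator, etc.
            Debug.WriteLine(string.Format("{0} {1}: {2}",
                record.Operator, record.Operation, record.Message));
        }
    }

    You'd register it exactly the way TraceConfig.cs registers SystemDiagnosticsTraceWriter above: configuration.Services.Replace(typeof(ITraceWriter), new DebugTraceWriter());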
    To see how it works with the built-in diagnostics trace writer, just run the application, call some APIs, and look at the Visual Studio Output window:

    iisexpress.exe Information: 0 : Request, Method=GET, Url=http://localhost:11147/api/Values, Message='http://localhost:11147/api/Values'
    iisexpress.exe Information: 0 : Message='Values', Operation=DefaultHttpControllerSelector.SelectController
    iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=DefaultHttpControllerActivator.Create
    iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=HttpControllerDescriptor.CreateController
    iisexpress.exe Information: 0 : Message='Selected action 'Get()'', Operation=ApiControllerActionSelector.SelectAction
    iisexpress.exe Information: 0 : Operation=HttpActionBinding.ExecuteBindingAsync
    iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuting
    iisexpress.exe Information: 0 : Message='Action returned 'System.String[]'', Operation=ReflectedHttpActionDescriptor.ExecuteAsync
    iisexpress.exe Information: 0 : Message='Will use same 'JsonMediaTypeFormatter' formatter', Operation=JsonMediaTypeFormatter.GetPerRequestFormatterInstance
    iisexpress.exe Information: 0 : Message='Selected formatter='JsonMediaTypeFormatter', content-type='application/json; charset=utf-8'', Operation=DefaultContentNegotiator.Negotiate
    iisexpress.exe Information: 0 : Operation=ApiControllerActionInvoker.InvokeActionAsync, Status=200 (OK)
    iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuted, Status=200 (OK)
    iisexpress.exe Information: 0 : Operation=ValuesController.ExecuteAsync, Status=200 (OK)
    iisexpress.exe Information: 0 : Response, Status=200 (OK), Method=GET, Url=http://localhost:11147/api/Values, Message='Content-type='application/json; charset=utf-8', content-length=unknown'
    iisexpress.exe Information: 0 : Operation=JsonMediaTypeFormatter.WriteToStreamAsync
    iisexpress.exe Information: 0 : Operation=ValuesController.Dispose

    API Help Page
    When you create a new ASP.NET Web API project, you'll see an API link in the header. Clicking the API link shows generated help documentation for your ASP.NET Web API controllers, and clicking on any of those APIs shows specific information. What's great is that this information is dynamically generated, so if you add your own new APIs it will automatically show useful and up-to-date help. This system is also completely extensible, so you can generate documentation in other formats or customize the HTML help as much as you'd like. The Help generation code is all included in an ASP.NET MVC Area.
    SignalR
    SignalR is a really slick open source project that was started by some ASP.NET team members in their spare time to add real-time communications capabilities to ASP.NET - and .NET applications in general. It allows you to handle long-running communications channels between your server and multiple connected clients using the best communications channel they can both support - WebSockets if available, falling back all the way to old technologies like long polling if necessary for old browsers. SignalR remains an open source project, but now it's being included in ASP.NET (also open source, hooray!). That means there's real, official ASP.NET engineering work being put into SignalR, and it's even easier to use in an ASP.NET application. Now in any ASP.NET project type, you can right-click / Add / New Item... SignalR Hub or Persistent Connection. And much more... There's quite a bit more.
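    For a flavor of the hub programming model, here's a minimal sketch of a hub class - ChatHub and the addMessage client callback are invented names for illustration, and the namespace assumes the Microsoft.AspNet.SignalR package of this era:

    using Microsoft.AspNet.SignalR;

    // Sketch of a minimal hub: any client can call Send, and the hub
    // broadcasts the message back out to every connected client.
    public class ChatHub : Hub
    {
        public void Send(string name, string message)
        {
            // addMessage is resolved dynamically against a callback
            // registered by each connected client
            Clients.All.addMessage(name, message);
        }
    }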
    You can find more info at http://asp.net/vnext, and we'll be adding more content as fast as we can. Watch my BUILD talk to see me demonstrate these and other features in the ASP.NET Fall 2012 Update, as well as some other even futurey-er stuff!

    Read the article

  • Oracle Open World 2012: SQL Developer Recap

    - by thatjeffsmith
    Last week was the ‘big show’ in San Francisco. I was very happy to meet many of you in person. And many of you had questions – lots of questions! We had full or overflowing rooms for our sessions and hands-on labs. The SQL Developer ‘booths’ were also slammed several times. So exciting to see so many of YOU excited about SQL Developer. It’s very cool to hear the stories of our tools saving you and your organizations so much time (and money!) Instead of doing a Day 0 – Day 9 recap, I thought I’d share with you the questions that I heard more than once. And just for giggles, I’ll throw in some answers as well. So in no particular order…
    What’s the difference between Oracle SQL Developer & Oracle SQL Developer Data Modeler?
    Mathematically speaking – two words. But as far as the actual modeling features go, there’s no difference between the two applications. The same ‘code’ or features as it pertains to data modeling and design are in both tools. However, in SQL Developer you have all of the OTHER features fighting for real estate in the UI. So I have a general rule of thumb – if you spend MOST of your time in the database, use SQL Developer. And if you spend most of your time in the data model, run the separate and dedicated program, Oracle SQL Developer Data Modeler. Here are a couple of screenshots to drive home the UI point:
    Oracle SQL Developer
    Oracle SQL Developer Data Modeler running INSIDE of SQL Developer. Notice how the Modeler menu items fold under the file menu?
    Oracle SQL Developer Data Modeler
    Easier to navigate and manipulate your models with the stand-alone modeler. Just no worksheet to run your ad-hoc queries, etc. Don’t forget you can disable the Data Modeler inside of SQL Developer via the Extensions preference page.
    How can I model my table partitions?
    Partitioning is defined via the Physical model. So after you have finished your relational model, you need to generate a physical model.
    Oracle SQL Developer Data Modeler Physical Model and Partitioning
    Open the properties for your physical model table. Enable the ‘partitioned’ property. Once you do so, the ‘Partitioning’ page will activate. Lots and lots of partitioning support and options here.
    But what about Interval Partitioning?
    An extension of range partitioning in 11gR2, we don’t currently support this partitioning scheme in SQL Developer. But we’re working on it!
    Can SQL Developer ignore column order when comparing models?
    Yes! After you start a model compare, one of your options is to disregard the order of an attribute or column definition. Tell SQL Developer you don’t care when your column shows up, just as long as it DOES show up.
    Wow, you got a lot of questions around modeling! Is that normal?
    Yes! While we appreciate that many folks inherit their applications and associated designs, new applications are being ‘born’ every day. Since both of our tools are free for anyone to design their new Oracle applications with, we attract a fair amount of attention.
    I want to do a Hands On Lab. How do I get your software and instructional guides?
    Go here. Download VirtualBox. Then download the VB image. Import the appliance. Start it. Connect oracle/oracle on the OEL VM. Click on ‘Start Here’ in the desktop. Follow the instructions. If you need help, ask away!
    You went too fast in your Tips & Tricks session. Do you have cliff notes?
    Yes! And you’re SO close to finding them! Just go to my SQL Developer resources page. All of my tips are documented on this blog somewhere. I’ve indexed the most popular ones on the resource page. You can use the Search dialog on the right to find the rest. Or just send me a comment or question, and I’ll do my best to answer them as they come in.

    Read the article

  • Chester Devs Presentation and source code – ‘Event Store - an introduction to a DSD for event sourcing and notifications’

    - by Liam Westley
    Originally posted on: http://geekswithblogs.net/twickers/archive/2013/11/11/chester-devs-presentation-and-source-code-ndash-lsquoevent-store.aspx
    Thank you everyone at Chester Devs
    Thanks to Fran Hoey and all the people from Chester Devs. It was a hard drive up and back but the enthusiasm of the audience, with some great questions, does make it worthwhile.
    Presentation and source code
    My presentation, source code, Event Store runners and text files containing the various command line parameters used for curl are now available on GitHub: https://github.com/westleyl/ChesterDevs-EventStore. Don’t worry if you don’t have a GitHub account, you don’t need one, you can just click on the Download Zip button on the right hand menu to download all the files as a single ZIP file. If all you want is the PowerPoint presentation, go to https://github.com/westleyl/ChesterDevs-EventStore/blob/master/Powerpoint/Huddle-EventStore.pptx, and click on the View Raw button.
    Downloading and installing Event Store and tools
    - Download Event Store from http://download.geteventstore.com – I unzipped these files into C:\EventStore\v2.0.1
    - Download curl from http://curl.haxx.se/download.html – I downloaded Win64 Generic (with SSL), version 7.31.0, and unzipped these files into C:\curl
    Running the tools I used in my presentation
    Demonstration 1 (running Event Store)
    You can use one of my Event Store runner command files to run the single node version of Event Store, using default ports of 2213 for HTTP and 1113 for TCP, and with a wildcard HTTP pattern. Both take a single command line parameter to specify the location of the data and log files. The runners assume the single node executable is located in C:\EventStore\v2.0.1, and will place data files and logs beneath C:\EventStore\Data, i.e.
    RunEventStore.cmd TestData1
    This will create data files in C:\EventStore\Data\TestData1\Data and log files in C:\EventStore\Data\TestData1\logs.
    When running Event Store you may see the following message:
    [03288,15,06:23:00.622] Failed to start http server Access is denied
    You will either need to run Event Store in an administrator console window, or you can use the netsh command to create a firewall permission to allow HTTP listening (this will need to be run, once, in an administrator console window):
    netsh http add urlacl url=http://*:2213/ user=liam
    You can always delete this later by running the delete:
    netsh http delete urlacl url=http://*:2213/
    If you want to confirm that everything is running OK, open the management console in a browser by navigating to http://127.0.0.1:2213. If at any point you are asked for a user name and password, use the default of ‘admin’/‘changeit’.
    Demonstration 2 (reading and adding data, curl)
    In my second demonstration I used curl directly from the console to read streams, write events and then read back those events. On GitHub I have included a set of curl commands, CurlCommandLine.txt, and a sample data file, SampleData.json, to load an event into a DDDNorth3 stream. As there is not much data in the Event Store at this point I used the $stats-127.0.0.1:2113 stream, which contains performance statistics for Event Store and is updated every 30 seconds (default).
    Demonstration 3 (projections)
    On GitHub I have included a sample projection, Projection-ByRoom.txt, which will create streams based on the room in which a session was held on the DDDNorth3 agenda.
    Browse to the management console, http://127.0.0.1:2213.
    Click on Projections, New Projection, give it a name, Sessions-ByRoom, and copy in the JavaScript in the Projection-ByRoom.txt file. Select Continuous, tick Emit Enabled and then click on Post. It should run immediately. You may be challenged for the administration login for the management console; if so, use the default user name and password: 'admin'/'changeit'.
    Demonstration 4 (C# client)
    The final demonstration was the Visual Studio 2012 project using the Event Store client – referenced directly as C:\EventStore\v2.0.1\EventStore.ClientAPI.dll, although you can switch this to the latest Event Store client NuGet package. The source code provides a console app for viewing projections with the projection manager (HTTP connection), as well as containing a full set of data for the entire DDDNorth3 agenda. It also deals with the strategy for reading newest events backwards to older events and ignoring older events that have been superseded.
    Resources
    - Event Store home page: http://www.geteventstore.com/
    - Event Store source code on GitHub: https://github.com/eventstore/eventstore
    - Event Store documentation on GitHub: https://github.com/eventstore/eventstore/wiki (includes index to @RobAshton’s blog series on Event Store at https://github.com/eventstore/eventstore/wiki#rob-ashton---projections-series)
    - Event Store forum in Google Groups: https://groups.google.com/forum/?fromgroups#!forum/event-store
    - TopShelf Windows service wrapper is available on github: https://gist.github.com/trbngr/5083266
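    If you would rather drive the HTTP API from C# than from curl, here is a minimal sketch along the lines of Demonstration 2 - it assumes the same 2213 port and DDDNorth3 stream used above, and that the HTTP API will serve the stream as JSON just as the curl examples do:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;

    class ReadStreamSketch
    {
        static void Main()
        {
            // Same endpoint the curl examples target
            var client = new HttpClient { BaseAddress = new Uri("http://127.0.0.1:2213") };
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));

            // Read the head of the stream; adjust the stream name as needed
            var body = client.GetStringAsync("/streams/DDDNorth3").Result;
            Console.WriteLine(body);
        }
    }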

    Read the article

  • DDD North 3 Presentation and source code – ‘Event Store - an introduction to a DSD for event sourcing and notifications’

    - by Liam Westley
    Originally posted on: http://geekswithblogs.net/twickers/archive/2013/10/15/ddd-north-3-presentation-and-source-code-ndash-lsquoevent-store.aspx
    Thank you everyone at DDD North
    Thanks to all the people who helped organise the cracking conference that is DDD North 3, returning to Sunderland, and the great facilities at the University of Sunderland, and the fine drinks reception at Sunderland Software City. The whole event wouldn’t be possible without the sponsors who ensured over 400 people were kept fed and watered so they could enjoy the impressive range of sessions. And lastly, a thank you to all those delegates who gave up their free time on a Saturday to spend a day dashing between lecture rooms, including a late change to my room which saw 40 people having to brave a journey between buildings in the fine drizzle. The enthusiasm from the delegates always helps recharge my geek batteries.
    Presentation and source code
    My presentation, source code, Event Store runners and text files containing the various command line parameters used for curl are now available on GitHub: https://github.com/westleyl/DDDNorth3-EventStore. Don’t worry if you don’t have a GitHub account, you don’t need one, you can just click on the Download Zip button on the right hand menu to download all the files as a single ZIP file. If all you want is the PowerPoint presentation, go to https://github.com/westleyl/DDDNorth3-EventStore/blob/master/Powerpoint/DDDNorth-EventStore.pptx, and click on the View Raw button.
    Downloading and installing Event Store and tools
    - Download Event Store from http://download.geteventstore.com – I unzipped these files into C:\EventStore\v2.0.1
    - Download curl from http://curl.haxx.se/download.html – I downloaded Win64 Generic (with SSL), version 7.31.0, and unzipped these files into C:\curl
    Running the tools I used in my presentation
    Demonstration 1 (running Event Store)
    You can use one of my Event Store runner command files to run the single node version of Event Store, using default ports of 2213 for HTTP and 1113 for TCP, and with a wildcard HTTP pattern. Both take a single command line parameter to specify the location of the data and log files. The runners assume the single node executable is located in C:\EventStore\v2.0.1, and will place data files and logs beneath C:\EventStore\Data, i.e.
    RunEventStore.cmd TestData1
    This will create data files in C:\EventStore\Data\TestData1\Data and log files in C:\EventStore\Data\TestData1\logs.
    When running Event Store you may see the following message:
    [03288,15,06:23:00.622] Failed to start http server Access is denied
    You will either need to run Event Store in an administrator console window, or you can use the netsh command to create a firewall permission to allow HTTP listening (this will need to be run, once, in an administrator console window):
    netsh http add urlacl url=http://*:2213/ user=liam
    You can always delete this later by running the delete:
    netsh http delete urlacl url=http://*:2213/
    If you want to confirm that everything is running OK, open the management console in a browser by navigating to http://127.0.0.1:2213. If at any point you are asked for a user name and password, use the default of ‘admin’/‘changeit’.
    Demonstration 2 (reading and adding data, curl)
    In my second demonstration I used curl directly from the console to read streams, write events and then read back those events.
    On GitHub I have included a set of curl commands, CurlCommandLine.txt, and a sample data file, SampleData.json, to load an event into a DDDNorth3 stream. As there is not much data in the Event Store at this point I used the $stats-127.0.0.1:2113 stream, which contains performance statistics for Event Store and is updated every 30 seconds (default).
    Demonstration 3 (projections)
    On GitHub I have included a sample projection, Projection-ByRoom.txt, which will create streams based on the room in which a session was held on the DDDNorth3 agenda.
    Browse to the management console, http://127.0.0.1:2213. Click on Projections, New Projection, give it a name, Sessions-ByRoom, and copy in the JavaScript in the Projection-ByRoom.txt file. Select Continuous, tick Emit Enabled and then click on Post. It should run immediately. You may be challenged for the administration login for the management console; if so, use the default user name and password: 'admin'/'changeit'.
    Demonstration 4 (C# client)
    The final demonstration was the Visual Studio 2012 project using the Event Store client – referenced directly as C:\EventStore\v2.0.1\EventStore.ClientAPI.dll, although you can switch this to the latest Event Store client NuGet package. The source code provides a console app for viewing projections with the projection manager (HTTP connection), as well as containing a full set of data for the entire DDDNorth3 agenda. It also deals with the strategy for reading newest events backwards to older events and ignoring older events that have been superseded.
    Resources
    - Event Store home page: http://www.geteventstore.com/
    - Event Store source code on GitHub: https://github.com/eventstore/eventstore
    - Event Store documentation on GitHub: https://github.com/eventstore/eventstore/wiki (includes index to @RobAshton’s blog series on Event Store at https://github.com/eventstore/eventstore/wiki#rob-ashton---projections-series)
    - Event Store forum in Google Groups: https://groups.google.com/forum/?fromgroups#!forum/event-store
    - TopShelf Windows service wrapper is available on github: https://gist.github.com/trbngr/5083266
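    To give a feel for the newest-first reading strategy mentioned in Demonstration 4, here is a rough sketch against EventStore.ClientAPI. Treat the connection-setup calls as an assumption - the client API changed between versions, so check them against the EventStore.ClientAPI.dll you actually reference:

    using System;
    using System.Net;
    using EventStore.ClientAPI;

    class NewestFirstSketch
    {
        static void Main()
        {
            // Assumed v2-era synchronous API: attach to the single node's
            // TCP port (1113 in the runners above)
            var connection = EventStoreConnection.Create();
            connection.Connect(new IPEndPoint(IPAddress.Loopback, 1113));

            // Read the newest 20 events first, so older events that have
            // been superseded can simply be skipped as they turn up
            var slice = connection.ReadStreamEventsBackward(
                "DDDNorth3", StreamPosition.End, 20, false);
            foreach (var resolved in slice.Events)
            {
                Console.WriteLine(resolved.Event.EventType);
            }

            connection.Close();
        }
    }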

    Read the article

  • Bluetooth DUN Tethering fails

    - by tacone
    I have an HTC Desire HD, with Android Froyo (2.2) and PDANet installed. I am using Ubuntu 10.10. I cannot tether it over Bluetooth with either Network Manager or Blueman. (Note: I installed Blueman only after failing with Network Manager, and I even tried the latest version from the PPA.) With both, my device is discovered, paired and set up, but connecting always fails. Network Manager says it cannot get the details of my device. Blueman says Connection Refused (111). Here are some relevant entries from syslog:
    Mar 11 22:13:00 tacone-macbook bluetoothd[2242]: Bluetooth deamon 4.69
    Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Starting SDP server
    Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Starting experimental netlink support
    Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Failed to find Bluetooth netlink family
    Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Failed to init netlink plugin
    Mar 11 22:13:00 tacone-macbook kernel: [ 158.284357] Bluetooth: L2CAP ver 2.14
    Mar 11 22:13:00 tacone-macbook kernel: [ 158.284361] Bluetooth: L2CAP socket layer initialized
    Mar 11 22:13:00 tacone-macbook kernel: [ 158.446781] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
    Mar 11 22:13:00 tacone-macbook kernel: [ 158.446784] Bluetooth: BNEP filters: protocol multicast
    Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: HCI dev 0 registered
    Mar 11 22:13:00 tacone-macbook kernel: [ 158.569481] Bluetooth: SCO (Voice Link) ver 0.6
    Mar 11 22:13:00 tacone-macbook kernel: [ 158.569484] Bluetooth: SCO socket layer initialized
    Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: HCI dev 0 up
    Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Starting security manager 0
    Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: ioctl(HCIUNBLOCKADDR): Invalid argument (22)
    Mar 11 22:13:00 tacone-macbook kernel: [ 158.818600] Bluetooth: RFCOMM TTY layer initialized
    Mar 11 22:13:00 tacone-macbook kernel: [ 158.818607] Bluetooth: RFCOMM socket layer initialized
    Mar 11 22:13:00 tacone-macbook kernel: [ 158.818610] Bluetooth: RFCOMM ver 1.11
    Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: probe failed with driver input-headset for device /org/bluez/2242/hci0/dev_F8_DB_7F_AF_6B_EE
    Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Adapter /org/bluez/2242/hci0 has been enabled
    Mar 11 22:13:00 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from ListDevices reply: org.freedesktop.DBus.Error.AccessDenied
    Mar 11 22:13:00 tacone-macbook NetworkManager[1247]: <warn> bluez error getting adapter properties: Rejected send message, 1 matched rules; type="method_call", sender=":1.4" (uid=0 pid=1247 comm="NetworkManager) interface="org.bluez.Adapter" member="GetProperties" error name="(unset)" requested_reply=0 destination="org.bluez" (uid=0 pid=2242 comm="/usr/sbin/bluetoothd))
    Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: return_link_keys (sba=00:23:6C:B5:03:6F, dba=00:23:6C:C0:F1:B0)
    Mar 11 22:13:00 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from GetProperties reply: org.freedesktop.DBus.Error.AccessDenied
    Mar 11 22:15:02 tacone-macbook bluetoothd[2243]: Discovery session 0x2262d7c0 with :1.45 activated
    Mar 11 22:15:15 tacone-macbook bluetoothd[2243]: Stopping discovery
    Mar 11 22:15:15 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from GetProperties reply: org.freedesktop.DBus.Error.AccessDenied
    Mar 11 22:15:16 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE)
    Mar 11 22:15:16 tacone-macbook bluetoothd[2243]: io_capa_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE)
    Mar 11 22:15:17 tacone-macbook bluetoothd[2243]: io_capa_response (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE)
    Mar 11 22:15:18 tacone-macbook bluetoothd[2243]: Stopping discovery
    Mar 11 22:15:28 tacone-macbook bluetoothd[2243]: link_key_notify (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE, type=5)
    Mar 11 22:15:28 tacone-macbook kernel: [ 306.585725] l2cap_recv_acldata: Unexpected continuation frame (len 0)
    Mar 11 22:15:28 tacone-macbook kernel: [ 306.630757] l2cap_recv_acldata: Unexpected continuation frame (len 0)
    Mar 11 22:15:28 tacone-macbook bluetoothd[2243]: Authentication requested
    Mar 11 22:15:28 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE)
    Mar 11 22:15:28 tacone-macbook kernel: [ 306.784829] l2cap_recv_acldata: Unexpected continuation frame (len 0)
    Mar 11 22:15:28 tacone-macbook kernel: [ 306.857861] l2cap_recv_acldata: Unexpected continuation frame (len 0)
    Mar 11 22:15:29 tacone-macbook bluetoothd[2243]: probe failed with driver input-headset for device /org/bluez/2242/hci0/dev_F8_DB_7F_AF_6B_EE
    Mar 11 22:15:29 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from GetProperties reply: org.freedesktop.DBus.Error.AccessDenied
    Mar 11 22:15:29 tacone-macbook pulseaudio[1757]: last message repeated 8 times
    Mar 11 22:15:29 tacone-macbook bluetoothd[2243]: Stopping discovery
    Mar 11 22:15:30 tacone-macbook modem-manager: (tty/rfcomm0): could not get port's parent device
    Mar 11 22:15:30 tacone-macbook modem-manager: (rfcomm0) opening serial device...
    Mar 11 22:15:30 tacone-macbook modem-manager: (rfcomm0): probe requested by plugin 'Generic'
    Mar 11 22:15:43 tacone-macbook modem-manager: (rfcomm0) closing serial device...
    Mar 11 22:15:43 tacone-macbook modem-manager: (rfcomm0) opening serial device...
    Mar 11 22:15:49 tacone-macbook modem-manager: (rfcomm0) closing serial device...
    Mar 11 22:16:15 tacone-macbook modem-manager: (tty/rfcomm0): could not get port's parent device
    Mar 11 22:16:19 tacone-macbook kernel: [ 357.375108] l2cap_recv_acldata: Unexpected continuation frame (len 0)
    Mar 11 22:16:24 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE)
    Mar 11 22:16:24 tacone-macbook kernel: [ 362.169506] l2cap_recv_acldata: Unexpected continuation frame (len 0)
    Mar 11 22:16:24 tacone-macbook kernel: [ 362.215529] l2cap_recv_acldata: Unexpected continuation frame (len 0)
    Mar 11 22:16:24 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE)
    Mar 11 22:16:24 tacone-macbook kernel: [ 362.281559] l2cap_recv_acldata: Unexpected continuation frame (len 0)
    Mar 11 22:16:24 tacone-macbook kernel: [ 362.330588] l2cap_recv_acldata: Unexpected continuation frame (len 0)
    Mar 11 22:16:24 tacone-macbook modem-manager: (tty/rfcomm0): could not get port's parent device
    Any help? PS: tethering via USB or WiFi is not an option, I need to do it over Bluetooth.

    Read the article

  • Silverlight Cream for December 05, 2010 -- #1003

    - by Dave Campbell
    In this (Almost) All-Submittal Issue: John Papa(-2-), Jesse Liberty, Tim Heuer, Dan Wahlin, Markus Egger, Phil Middlemiss, Coding4Fun, Michael Washington, Gill Cleeren, MichaelD!, Colin Eberhardt, Kunal Chowdhury, and Rabeeh Abla.
    Above the Fold:
    Silverlight: "Two-Way Binding on TreeView.SelectedItem" Phil Middlemiss
    WP7: "Taking Screen Shots of Windows Phone 7 Panorama Apps" Markus Egger
    Training: "Beginners Guide to Visual Studio LightSwitch (Part - 4)" Kunal Chowdhury
    Shoutouts:
    Don't let the fire go out... check out the Firestarter Labs
    Bart Czernicki discusses the need for 64-bit Silverlight: Why a 64-bit runtime for Silverlight 5 Matters
    Laurent Duveau is interviewed by the SilverlightShow folks to discuss his WP7 app: Laurent Duveau on Morse Code Flash Light WP7 Application
    From SilverlightCream.com:
    John Papa: Silverlight 5 Features
    John Papa has a post up highlighting his take on what's cool in the new featureset for Silverlight 5... including an external link to the keynote.
    Silverlight Firestarter Keynote and Sessions
    John Papa also has posted links to all the individual session videos... what a great resource!
    Yet Another Podcast #17 – Scott Guthrie
    Jesse Liberty went big with his latest Yet Another Podcast... he is interviewing Scott Guthrie about the Firestarter, Silverlight, WP7, and more.
    Silverlight 5 Plans Revealed
    With this post from Tim Heuer, I find myself adding a Silverlight 5 tag... so bring on the fun! ... unless you've been overloaded like I have since last Thursday, you've probably seen this, but what the heck...
    Silverlight Firestarter Wrap Up and WCF RIA Services Talk Sample Code
    Phoenix's own Dan Wahlin had a great WCF RIA Services presentation at the Firestarter last week, and his material and lots of other good links are up on his blog, and I'd say that even if he didn't have a couple shoutouts to me in it :) Thanks Dan!!
    Taking Screen Shots of Windows Phone 7 Panorama Apps
    Markus Egger helps us all out with a post on how to get screenshots of your WP7 Panorama app... in case you haven't tried it... it's not as easy as it sounds!
    Two-Way Binding on TreeView.SelectedItem
    Phil Middlemiss is back with a post taking some of the mystery out of the TreeView control bound to a data context and dealing with the SelectedItem property... oh yeah, and throw all that into MVVM! Great tutorial as usual, a cool behavior, and all the source.
    Native Extensions for Microsoft Silverlight
    Alan Cobb pointed me to a quick post up on the Coding4Fun site about the NESL (Native Extensions for SilverLight) from Microsoft that give access to some cool features of Windows 7 from Silverlight... I added an NESL tag in case other posts appear on this subject.
    Silverlight Simple Drag And Drop / Or Browse View Model / MVVM File Upload Control
    Michael Washington has another great tutorial up at CodeProject that expands on prior work he'd done with drag/drop file upload with this post on integrating an updated browse/upload into ViewModel/MVVM projects, all of which is Blendable.
    The validation story in Silverlight (Part 1)
    In good news for all of us, Gill Cleeren has started a tutorial series at SilverlightShow on Silverlight Validation. The first one is up discussing the basics...
    The Common Framework
    MichaelD! has a WPF/Silverlight framework up with Facebook Authentication, Xaml-driven IOC, T4 synchronous WCF proxies, and WP7 on the roadmap... source on CodePlex, check it out and give him some feedback.
    Exploring Reactive Extensions (Rx) through Twitter and Bing Maps Mashups
    If you've been waiting around to learn Rx, Colin Eberhardt has the post up for you (and me)... great tutorial up on Twitter and Bing Maps Mashups... and all the code... for the twitter immediate app, and also the UKSnow one we showed last week... check out the demo page, and grab the source!
    Beginners Guide to Visual Studio LightSwitch (Part - 4)
    Kunal Chowdhury has the 4th part of his LightSwitch tutorial series up at SilverlightShow. In this one, he shows how to integrate multiple tables into a screen. It is here
    Take Your Silverlight Application Full Screen & intercept all windows keys !!
    Rabeeh Abla sent me this link to the blog describing a COM exposed library that intercepts all keys when Silverlight is full-screen. There are a few I hit when I'm going through blogs that Ctrl-W (FF) just won't take down and that annoys me... so this might be a solution if you have that problem... worth a look anyway!
    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • GuestPost: Unit Testing Entity Framework (v1) Dependent Code using TypeMock Isolator

    - by Eric Nelson
    Time for another guest post (check out others in the series), this time bringing together the world of mocking with the world of Entity Framework. A big thanks to Moses for agreeing to do this.
    Unit Testing Entity Framework Dependent Code using TypeMock Isolator
    by Muhammad Mosa
    Introduction
    Unit testing data access code is, in my opinion, a challenging thing. Let us consider unit tests and integration tests. In integration tests you are allowed to have environmental dependencies such as a physical database connection to insert, update, delete or retrieve your data. However, when performing unit tests it is often much more efficient and productive to remove environmental dependencies. Instead you will need to fake these dependencies. Faking a database (also known as mocking) can be relatively straightforward, but the version of Entity Framework released with .NET 3.5 SP1 has a number of implementation specifics which actually make faking the existence of a database quite difficult.
    Faking Entity Framework
    As mentioned earlier, to effectively unit test you will need to fake/simulate Entity Framework calls to the database. There are many free open source mocking frameworks that can help you achieve this, but it will require additional effort to overcome and work around a number of limitations in those frameworks. Examples of these limitations include:
    - Not able to fake calls to non-virtual methods
    - Not able to fake sealed classes
    - Not able to fake LINQ to Entities queries (replace database calls with in-memory collection calls)
    There is a mocking framework which is flexible enough to handle limitations such as those above. The commercially available TypeMock Isolator can do the job for you with less code and ultimately more readable unit tests. I’m going to demonstrate tackling one of those limitations using MoQ as my mocking framework. Then I will tackle the same issue using TypeMock Isolator.
    Mocking Entity Framework with MoQ
    One basic need when faking Entity Framework is to fake the ObjectContext. This cannot be done by passing any connection string. You have to pass a correct Entity Framework connection string that specifies CSDL, SSDL and MSL locations along with a provider connection string. Assuming we are going to do that, we’ll explore another limitation. The limitation we are going to face now is related to not being able to fake calls to non-virtual/overridable members with MoQ. I have the following repository method that adds an EntityObject (instance of a Blog entity) to the Blogs entity set in an ObjectContext.

    public override void Add(Blog blog)
    {
        if (BlogContext.Blogs.Any(b => b.Name == blog.Name))
        {
            throw new InvalidOperationException("Blog with same name already exists!");
        }
        BlogContext.AddToBlogs(blog);
    }

    The method does a very simple check that the name of the new Blog entity instance doesn't exist. This is done through the simple LINQ query above. If the blog doesn't already exist it simply adds it to the current context to be saved when SaveChanges of the ObjectContext instance (e.g. BlogContext) is called. However, if a blog with the same name exists, an exception (InvalidOperationException) will be thrown. Let us now create a unit test for the Add method using MoQ.

    [TestMethod]
    [ExpectedException(typeof(InvalidOperationException))]
    public void Add_Should_Throw_InvalidOperationException_When_Blog_With_Same_Name_Already_Exists()
    {
        //(1) We shouldn't depend on configuration when doing unit tests! But,
        //it's a workaround to fake the ObjectContext
        string connectionString = ConfigurationManager
                                    .ConnectionStrings["MyBlogConnString"]
                                    .ConnectionString;

        //(2) Arrange: Fake ObjectContext
        var fakeContext = new Mock<MyBlogContext>(connectionString);

        //(3) Next line will pass, as ObjectContext can now be faked with a proper connection string
        var repo = new BlogRepository(fakeContext.Object);

        //(4) Create fake ObjectQuery<Blog>. Will be used to substitute MyBlogContext.Blogs property
        var fakeObjectQuery = new Mock<ObjectQuery<Blog>>("[Blogs]", fakeContext.Object);

        //(5) Arrange: Set Expectations
        //Next line will throw an exception by MoQ:
        //System.ArgumentException: Invalid setup on a non-overridable member
        fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object);
        fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true);

        //Act
        repo.Add(new Blog { Name = "NewBlog" });
    }

    This test method is checking to see if the correct exception ([ExpectedException(typeof(InvalidOperationException))]) is thrown when a developer attempts to Add a blog with a name that already exists. At (1) a connection string is initialized from the configuration file to retrieve the full connection string. At (2) a fake ObjectContext is being created. The ObjectContext here is MyBlogContext and it's being created using:
    var fakeContext = new Mock<MyBlogContext>(connectionString);
    This way a fake context is being created using MoQ. At (3) a BlogRepository instance is created. BlogRepository has a dependency on the generated Entity Framework ObjectContext, MyBlogContext. And so the fake context is passed to the constructor:
    var repo = new BlogRepository(fakeContext.Object);
    At (4) a fake instance of ObjectQuery<Blog> is being created to use as a substitute for the MyBlogContext.Blogs property, as we will see in (5). At (5) we set up an expectation for calling the Blogs property of MyBlogContext and substitute the return result with the fake ObjectQuery<Blog> instance created at (4). When you run this test it will fail with MoQ throwing an exception because of this line:
    fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object);
    This happens because the generated property MyBlogContext.Blogs is not virtual/overridable. And assuming it is virtual, or you managed to make it virtual, it will fail at the following line, throwing the same exception:
    fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true);
    This time the test will fail because the Any extension method is not virtual/overridable. You won't be able to replace ObjectQuery<Blog> with a fake in-memory collection to test your LINQ to Entities queries. Now let's see how replacing MoQ with TypeMock Isolator can help.
    Mocking Entity Framework with TypeMock Isolator
    The following is the same test method we had above for MoQ, but this time implemented using TypeMock Isolator:

    [TestMethod]
    [ExpectedException(typeof(InvalidOperationException))]
    public void Add_New_Blog_That_Already_Exists_Should_Throw_InvalidOperationException()
    {
        //(1) Create fake in-memory collection of blogs
        var fakeInMemoryBlogs = new List<Blog> { new Blog { Name = "FakeBlog" } };

        //(2) Create fake context
        var fakeContext = Isolate.Fake.Instance<MyBlogContext>();

        //(3) Setup expected call to MyBlogContext.Blogs property through the fake context
        Isolate.WhenCalled(() => fakeContext.Blogs)
               .WillReturnCollectionValuesOf(fakeInMemoryBlogs.AsQueryable());

        //(4) Create new blog with a name that already exists in the fake in-memory collection in (1)
        var blog = new Blog { Name = "FakeBlog" };

        //(5) Instantiate instance of BlogRepository (class under test)
        var repo = new BlogRepository(fakeContext);

        //(6) Act by adding the newly created blog
        repo.Add(blog);
    }

    When running the above test method it will pass, as the Add method of BlogRepository is going to throw an InvalidOperationException, which is the expected behaviour. Nothing prevents us from faking out the database interaction! Even faking the ObjectContext at (2) didn't require a connection string. At (3) Isolator sets up a faked result for MyBlogContext.Blogs when it's being called through the fake instance fakeContext created at (2). The faked result is just an in-memory collection declared and initialized at (1). Finally, at (6), we call the Add method of BlogRepository, passing a new Blog instance that has a name that already exists in the fake in-memory collection which we set up at (1). As expected, the test will pass because it throws the expected exception defined on top of the test method - InvalidOperationException. TypeMock Isolator succeeded in faking Entity Framework with ease.
    Conclusion
    We explored how to write a simple unit test using TypeMock Isolator for code which is using Entity Framework. We also explored a few of the limitations of other mocking frameworks which TypeMock is successfully able to handle. There are workarounds that you can use to overcome limitations when using MoQ or Rhino Mock, however the workarounds will require you to write more code and your tests will likely be more complex. For a comparison between different mocking frameworks take a look at this document produced by TypeMock. You might also want to check out this open source project to compare mocking frameworks. I hope you enjoyed this post.
    Muhammad Mosa
    http://mosesofegypt.net/
    http://twitter.com/mosessaur
    Screencast of unit testing Entity Framework
    Related Links
    - GuestPost: Introduction to Mocking
    - GuestPost: Typemock Isolator – Much more than an Isolation framework

    Read the article

< Previous Page | 425 426 427 428 429 430 431 432 433 434 435 436  | Next Page >