Search Results

Search found 28989 results on 1160 pages for 'modal view controller'.


  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are laid out in the New Build Definition Wizard. So, let's see what the code looks like, following the same order. To start off, we need to connect to TFS and get a reference to the IBuildServer object: TfsTeamProjectCollection server = new TfsTeamProjectCollection(new Uri("http://<tfs>:<port>/tfs")); server.EnsureAuthenticated(); IBuildServer buildServer = (IBuildServer) server.GetService(typeof (IBuildServer)); General First we create an IBuildDefinition object for the team project and set a name and description for it: var buildDefinition = buildServer.CreateBuildDefinition(teamProject); buildDefinition.Name = "TestBuild"; buildDefinition.Description = "description here..."; Trigger Next up, we set the trigger type. For this one, we set it to Individual, which corresponds to the Continuous Integration - Build each check-in trigger option: buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual; Workspace For the workspace mappings, we create two mappings here, where one is a cloak. Note the use of the $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build. buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map); buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak); Build Defaults In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name: buildDefinition.BuildController = buildServer.GetBuildController(buildController); buildDefinition.DefaultDropLocation = @"\\SERVER\Drop\TestBuild"; Process So far, this was easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .XAML file called a Build Process Template. By default, every new team project contains two build process templates called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use the QueryProcessTemplates method to get a reference to the default template for the current team project: //Get default template var defaultTemplate = buildServer.QueryProcessTemplates(teamProject).Where(p => p.TemplateType == ProcessTemplateType.Default).First(); buildDefinition.Process = defaultTemplate; There are several build process parameters that can be set for the default build process template. Only one of these is required, the ProjectsToBuild parameter, which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of the IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (parameter name) to a value which can be any kind of object. This is rather messy, but fortunately there is a helper class called WorkflowHelpers in the Microsoft.TeamFoundation.Build.Workflow namespace that simplifies working with this persistence format a bit.
The following code shows how to set the BuildSettings information for a build definition: //Set process parameters var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters); //Set BuildSettings properties BuildSettings settings = new BuildSettings(); settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln"); settings.PlatformConfigurations = new PlatformConfigurationList(); settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug")); process.Add("BuildSettings", settings); buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process); The other build process parameters of a build definition can be set using the same approach. Retention Policy This one is easy: we just clear the default settings and set our own: buildDefinition.RetentionPolicyList.Clear(); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All); Save It! And we're done; let's save the build definition: buildDefinition.Save(); That's it!
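    Putting the snippets above together: the following is a minimal, untested consolidation of the same API calls shown in this post, intended only to make the overall flow easier to follow. The collection URL, team project name, controller name and drop path are placeholder values, not values from the post.

    using System;
    using System.Linq;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Build.Workflow;
    using Microsoft.TeamFoundation.Build.Workflow.Activities;
    using Microsoft.TeamFoundation.Client;

    public static class BuildDefinitionSample
    {
        public static void CreateTestBuildDefinition()
        {
            // Connect to the team project collection and get the build service
            var server = new TfsTeamProjectCollection(new Uri("http://tfsserver:8080/tfs"));
            server.EnsureAuthenticated();
            var buildServer = (IBuildServer)server.GetService(typeof(IBuildServer));

            // General: name and description
            var buildDefinition = buildServer.CreateBuildDefinition("MyTeamProject");
            buildDefinition.Name = "TestBuild";
            buildDefinition.Description = "description here...";

            // Trigger: build each check-in
            buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual;

            // Workspace: one mapping plus one cloak
            buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map);
            buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak);

            // Build defaults: controller and drop location
            buildDefinition.BuildController = buildServer.GetBuildController("MyBuildController");
            buildDefinition.DefaultDropLocation = @"\\SERVER\Drop\TestBuild";

            // Process: use the default build process template
            var defaultTemplate = buildServer.QueryProcessTemplates("MyTeamProject")
                .Where(p => p.TemplateType == ProcessTemplateType.Default).First();
            buildDefinition.Process = defaultTemplate;

            // Process parameters: the solution and platform/configuration to build
            var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters);
            var settings = new BuildSettings();
            settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln");
            settings.PlatformConfigurations = new PlatformConfigurationList();
            settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug"));
            process.Add("BuildSettings", settings);
            buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process);

            // Retention policy
            buildDefinition.RetentionPolicyList.Clear();
            buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All);
            buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All);
            buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All);
            buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All);

            buildDefinition.Save();
        }
    }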

    Read the article

  • MacGyver Moments

    - by Geoff N. Hiten
    Denny Cherry tagged me to write about my best MacGyver Moment.  Usually I ignore blogosphere fluff and just use this space to write what I think is important.  However, #MVP10 just ended and I have a stronger sense of community.  Besides, where else would I mention my second best MacGyver moment was making a BIOS jumper out of a soda can.  Aluminum is conductive and I didn't have any real jumpers lying around. My best moment is probably my entire home computer network.  Every system but one is hand-built, usually cobbled together out of spare parts and 'adapted' from its original purpose. My Primary Domain Controller is a Dell 2300.   The Service Tag indicates it was shipped to the original owner in 1999.  Box has a PERC/1 RAID controller.  I acquired this from a previous employer for $50.  It runs Windows Server 2003 Enterprise Edition.  Does DNS, DHCP, and RADIUS services as a bonus.  RADIUS authentication is used for VPN and Wireless access.  It is nice to sign in once and be done with it. The Secondary Domain Controller is an old desktop.  Dual P-III 933 with some extra drives. My VPN box is a P-II 250 with 384MB of RAM and a 21 GB hard drive.  I did a P-to-V to my Hyper-V box a year or so ago and retired the hardware again.  Dynamic DNS lets me connect no matter how often Comcast shuffles my IP. The Hyper-V box is a desktop system with 8GB RAM and an AMD Athlon 5000+ processor.  Cost me less than $500 to put together nearly two years ago.  I reasoned that if Vista and Windows 2008 were the same code then Vista 64-bit certified meant the drivers for Vista would load into Windows 2008.  Turns out I was right. Later I added three 1TB drives but wasn't too happy with how that turned out.  I recovered two of the drives yesterday and am building an iSCSI storage unit. (Much thanks to Starwind.  Great product).  I am using an old AMD 1.1GHz box with 1.5 GB RAM (cobbled together from three old PCs) as my storage server.  The Hyper-V box is slated for an OS rebuild to 2008 R2 once I get the storage system worked out.  Maybe in a week or two. A couple of DLink Gigabit switches tie everything together. Add in the Vonage box, the three PCs, the Wireless-N Access Point, the two notebooks and the XBox and you have gone from MacGyver to darn near Rube Goldberg. The only thing I really spend money on is power supplies and fans.  I buy top-of-the-line for both. I even pull and crimp my own cables. Oh, and if my kids hose up a PC, I have all of their data on a server elsewhere.  Every PC and laptop is pretty much interchangeable for email and basic workstation tasks.  That helps a lot too. Of course I will tag SQLVariant.

    Read the article

  • Atheros AR9485 wireless card doesn't work in an ASUS K53E

    - by John
    I just installed Ubuntu 11.10 32-bit version in dual boot mode on an ASUS K53. Everything seems to work fine except for the wireless. The wireless works on Windows 7. Ubuntu finds the wireless card, but it does not appear to be turned on. The only physical means of turning on the card is the FN-F2 key combo. That works on Windows, but not in Ubuntu. I've looked in the forums for a solution and I'm not quite sure what to do. I've gathered the following information: jdwbmc@Spatha:~$ cat /etc/lsb-release; uname -a DISTRIB_ID=Ubuntu DISTRIB_RELEASE=11.10 DISTRIB_CODENAME=oneiric DISTRIB_DESCRIPTION="Ubuntu 11.10" Linux Spatha 3.0.0-12-generic #20-Ubuntu SMP Fri Oct 7 14:50:42 UTC 2011 i686 i686 i386 GNU/Linux jdwbmc@Spatha:~$ lspci -nnk | grep -iA2 net 02:00.0 Network controller [0280]: Atheros Communications Inc. AR9485 Wireless Network Adapter [168c:0032] (rev 01) Subsystem: AzureWave Device [1a3b:1186] Kernel driver in use: ath9k -- 04:00.0 Ethernet controller [0200]: Atheros Communications AR8151 v2.0 Gigabit Ethernet [1969:1083] (rev c0) Subsystem: ASUSTeK Computer Inc. Device [1043:1851] Kernel driver in use: atl1c jdwbmc@Spatha:~$ lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 003: ID 058f:a014 Alcor Micro Corp. jdwbmc@Spatha:~$ iwconfig lo no wireless extensions. eth0 no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off jdwbmc@Spatha:~$ rfkill list all 0: phy0: Wireless LAN Soft blocked: no Hard blocked: no 1: asus-wlan: Wireless LAN Soft blocked: no Hard blocked: no jdwbmc@Spatha:~$ lsmod Module Size Used by parport_pc 32114 0 ppdev 12849 0 snd_hda_codec_hdmi 31426 1 bnep 17923 2 rfcomm 38408 0 bluetooth 148839 10 bnep,rfcomm snd_hda_codec_realtek 254125 1 binfmt_misc 17292 1 joydev 17393 0 asus_nb_wmi 12469 0 asus_wmi 19333 1 asus_nb_wmi sparse_keymap 13658 1 asus_wmi uvcvideo 67271 0 videodev 85626 1 uvcvideo snd_hda_intel 24262 2 snd_hda_codec 91754 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel snd_hwdep 13276 1 snd_hda_codec snd_pcm 80468 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13132 0 wmi 18744 1 asus_wmi snd_rawmidi 25241 1 snd_seq_midi snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51567 2 snd_seq_midi,snd_seq_midi_event arc4 12473 2 i915 505108 3 snd_timer 28932 2 snd_pcm,snd_seq snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq ath9k 112711 0 psmouse 73673 0 serio_raw 12990 0 mac80211 272785 1 ath9k drm_kms_helper 32889 1 i915 ath9k_common 13599 1 ath9k drm 192226 4 i915,drm_kms_helper ath9k_hw 293893 2 ath9k,ath9k_common ath 19387 2 ath9k,ath9k_hw cfg80211 172392 3 ath9k,mac80211,ath snd 55902 14 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device soundcore 12600 1 snd snd_page_alloc 14115 2 snd_hda_intel,snd_pcm mei 36466 0 i2c_algo_bit 13199 1 i915 video 18908 1 i915 lp 17455 0 parport 40930 3 parport_pc,ppdev,lp ahci 21634 3 libahci 25727 1 ahci atl1c 36638 0 xhci_hcd 72915 0 jdwbmc@Spatha:/var/lib/NetworkManager$ cat NetworkManager.state [main] 
NetworkingEnabled=true WirelessEnabled=true WWANEnabled=true WimaxEnabled=true jdwbmc@Spatha:~$ lspci -nn | grep 0280 02:00.0 Network controller [0280]: Atheros Communications Inc. AR9485 Wireless Network Adapter [168c:0032] (rev 01) Any help would be appreciated. Thanks.
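    Not part of the original question, but given the rfkill output above (no hard or soft block) combined with Tx-Power=0 dBm, a commonly suggested first step is to re-toggle rfkill and reload the ath9k driver, then rescan. A rough sketch:

    sudo rfkill unblock all          # clear any remaining soft blocks
    sudo modprobe -r ath9k           # unload the wireless driver
    sudo modprobe ath9k              # load it again
    sudo iwlist wlan0 scan | head    # check whether the card can now scan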

    Read the article

  • Drinking Our Own Champagne: Fusion Accounting Hub at Oracle

    - by Di Seghposs
    A guest post by Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer There's no better story to tell than one about Oracle using its own products with blowout success. Here's how this one goes. As you know, Oracle has increased its share of the software market through a number of high-profile acquisitions. Legally combining companies is a very complicated process -- it can take months to complete, especially for the acquisitions with offices in several countries, each with its own unique laws and regulations. It's a mission critical and time sensitive process to roll an acquired company's legacy systems (running vital operations, such as accounts receivable and general ledger (GL)) into the existing systems at Oracle. To date, we've run our primary financial ledgers in E-Business Suite R12 -- and we've successfully met the requirements of the business and closed the books on time every single quarter. But there's always room for improvement and that comes in the form of Fusion Applications. We are now live on Fusion Accounting Hub (FAH), which is the first critical step in moving to a full Fusion Financials instance. We started with FAH so that we could design a global chart of accounts. Eventually, every transaction in every country will originate from this global chart of accounts -- it becomes the structure for managing our business more uniformly. In conjunction, we're using Oracle Hyperion Data Relationship Management (DRM) to centralize and automate governance of our global chart of accounts and related hierarchies, which will help us lower our costs and greatly reduce risk. Each month, we have to consolidate data from our primary general ledgers. We have been able to simplify this process considerably using FAH. We can now submit our primary ledgers running in E-Business Suite (EBS) R12 directly to FAH, eliminating the need for more than 90 redundant consolidation ledgers. Also we can submit incrementally, so if we need to book an adjustment in a primary ledger after close, we can do so without re-opening it and re-submitting. As a result, we have earlier visibility to period-end actuals during the close. A goal of this implementation, and one that we successfully achieved, is that we are able to use FAH globally with no customization. This means we have the ability to fully deploy ledger sets at the consolidation level, plus we can use standard functionality for currency translation and mass allocations. We're able to use account monitoring and drill down functionality from the consolidation level all the way through to EBS primary ledgers and sub-ledgers, which allows someone to click through a transaction appearing at the consolidation level clear through to its original source, a significant productivity enhancement when doing research. We also see a significant improvement in reporting using Essbase cube and Hyperion Smart View. Specifically, "the addition of an Essbase cube on top of the GL gives us tremendous versatility to automate and speed our elimination process," says Claire Sebti, Senior Director of Corporate Accounting at Oracle. A highlight of this story is that FAH is running in a co-existence environment. Our plan is to move to Fusion Financials in steps, starting with FAH. Next, our Oracle Financial Services Software subsidiary will move to a full Fusion Financials instance. Then we'll replace our EBS instance with Fusion Financials. This approach allows us to plan in steps, learn as we go, and not overwhelm our teams. 
It also reduces the risk that comes with moving the entire instance at once. Maria Smith, Vice President of Global Controller Operations, is confident about how they've positioned themselves to uptake more Fusion functionality and is eager to "continue to drive additional efficiency and cost savings." In this story, the happy customers are Oracle controllers, financial analysts, accounting specialists, and our management team that get earlier access to more flexible reporting. "Fusion Accounting Hub simplifies our processes and gives us more transparency into account activity," raves Alex SanJuan, Senior Director, Record to Report Strategic Process Owner. Overall, the team has been very impressed with the usability and functionality of FAH and are pleased with the quantifiable improvements. Claire Sebti states, "Our WD5 close activities have been reduced by at least four hours of system processing time, just for the consolidation group." Fusion Accounting Hub is an inspiring beginning to our Fusion Financials implementation story. There's no doubt it's going to be an international bestseller! Corey West, Senior Vice President Oracle's Corporate Controller and Chief Accounting Officer

    Read the article

  • Get Your Enterprise Working With Oracle On Track Communication 1.0

    - by Josh Lannin
    The On Track Development team is very pleased to announce that today On Track is available for our customers to download and evaluate.  To learn more about what On Track does, start with our whitepaper and datasheet.   If you are a developer, take a look at our documentation and samples posted to our OTN page. For this first blog post, I’ll be speaking to several notable points about our product. Graceful Escalation via Conversations: On Track addresses the “Collaboration Problem” through a single guiding principle – graceful escalation – within the construct of a Conversation. In On Track, collaboration is based on a context (called a “Conversation”) that gracefully escalates in form, structure, and content, as dictated by the particular needs of a given collaboration.  Within that context, On Track provides a rich set of tools to choose from.  These tools provide for communication, coordination, content management, organization, decision making, and analysis -- all essential aspects of collaboration, but not all of them are essential all of the time.  Every collaborative interaction will evolve differently.  Some will evolve to represent work spreading over the course of years and involving a large, distributed team, while others may involve few people and not evolve at all.  Regardless, all collaborative contexts are built from the same parts, utilize the same concepts, and start the same way.  The principle of graceful escalation is that you only use the tools and structure you need; so you only incur the complexity you need. Purposeful Collaboration: Through application integration, On Track Conversations bring enterprise application users the communication and collaboration capabilities required to complete business processes.  By association with specific processes or business objects, conversations extend the possible interactions and broaden participation to internal or external non-application users and provide a sophisticated interaction experience, all the while enhancing the data set within the owning application.  Purposeful collaboration not only needs to happen in the context of applications, it must support a full range of real-time and long-running interactions to provide the greatest value. Multi Client, Multi Modal: This On Track 1.0 product release includes the same-day availability of multiple clients, including iPhone and iPad applications which are now available on the Apple Store, a fully capable and accessible Outlook Add-In, along with our browser web client.  With each client we have sought to leverage the strengths of each unique device: our iPhone client supports picture and voice posts, the iPad is optimized for meeting room situations and document viewing, and our Outlook add-in allows you to take emails in context and bring them into On Track.  In addition to supporting a diverse array of clients, On Track provides a unified multi-modal experience, starting with basic messages and moving through to integrated documents with live annotations, snapshots, application sharing, and voice. Next Generation Web Architecture: We believe On Track will help move the bar higher for what users can expect from all web applications, most notably ones that involve real-time activity.  On Track is built from the ground up with an innovative, real-time architecture that leverages the extensive push capabilities of our server.  
Whether you are receiving a new message, viewing where crowds of people are collaborating, or doing live annotation on a document with a set of people, that information comes to you immediately without refreshes or moving back and forth between pages.  We’ve leveraged this core architecture across the product experience and raised the user experience bar for this type of application.  These capabilities are also based on open standards and protocols, and are fully extensible by anyone, enabling sophisticated integrations to be created with a wide variety of both legacy and next-generation applications. Agile Product Development: As a product team we operate using continuous feedback and modified agile development methodologies.  We have thousands of active internal Oracle users who have helped pilot our product for critical business functions, and the On Track product development team uses our product as our primary vehicle for all our collaboration.  Additionally, we have been working with early access customers who are adopting our technology and providing us valuable feedback, which our process has rapidly turned into improvements to our software.  On Track agility extends to our server as well, which is built to scale, and is very simple to install and configure. We are pleased to make this product announcement and encourage you to join us on Facebook or follow us on Twitter, as well as to check back here for the latest product information.

    Read the article

  • Very slow KVM in Ubuntu 12.04

    - by Guy Fawkes
    I use Ubuntu 12.04 64-bit and KVM; my CPU is a Core i5 at 3.3 GHz and I have 8 GB of DDR3 RAM. I run Windows 7 in KVM and it's extremely slow. My co-worker uses Debian on the same PC configuration and can run Windows 7 extremely fast! Where could my problem be? sudo cat /etc/libvirt/qemu/windows.xml <!-- WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE OVERWRITTEN AND LOST. Changes to this xml configuration should be made using: virsh edit windows or other application using the libvirt API. --> <domain type='kvm'> <name>windows</name> <uuid>5c685175-baea-0ca6-591f-8269d923ffb8</uuid> <memory>2097152</memory> <currentMemory>2097152</currentMemory> <vcpu>1</vcpu> <os> <type arch='x86_64' machine='pc-1.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='localtime'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/var/lib/libvirt/images/windows.img'/> <target dev='hda' bus='ide'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <interface type='network'> <mac address='52:54:00:94:63:91'/> <source network='default'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <input type='tablet' bus='usb'/> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes'/> <sound model='ich6'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='vga' vram='262144' heads='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </memballoon> </devices> </domain>
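    The XML above looks like a fairly stock libvirt guest. One common cause of this symptom, not covered in the question, is QEMU silently falling back to software emulation because hardware virtualization is disabled in the BIOS or the kvm modules are not loaded. A quick check (kvm-ok comes from the cpu-checker package):

    egrep -c '(vmx|svm)' /proc/cpuinfo   # should be greater than 0
    sudo kvm-ok                          # should report that KVM acceleration can be used
    lsmod | grep kvm                     # kvm_intel (or kvm_amd) should be listed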

    Read the article

  • Slow wired internet connection on Realtek RTL8168-8111 (Rev 6)

    - by Mark
    I keep on seeing issues people are having with their wireless. I am having an issue with being wired into my router. I installed Ubuntu 11.10 the other day, on my custom PC. The motherboard I have installed on this PC is an ASUS P8H61-M. The issue I am having is with my speed. I have a dual boot with Windows 7 and the new Ubuntu. On my Windows install I am getting test speeds from Speakeasy at 17Mbps and actual downloads around 2-3MB/s. With Ubuntu, I am getting test speeds from Speakeasy at 1.14Mbps and actual downloads around 60KB/s. I have disabled IPv6 and am now using Google DNS for my DNS, but it hasn't resolved the issue. I have scanned my router (WRT54GS Linksys) to disable IPv6 connections, and I am not seeing any option for that. I cannot figure out why I am getting such a sluggish internet connection. Any help to resolve would be great! I performed an ifconfig -a with these results: mark@Mark-ASUS:~$ ifconfig -a eth0 Link encap:Ethernet HWaddr f4:6d:04:d1:2c:4e inet addr:192.168.1.103 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::f66d:4ff:fed1:2c4e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:21888 errors:0 dropped:21888 overruns:0 frame:21888 TX packets:21068 errors:0 dropped:90 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:26348337 (26.3 MB) TX bytes:2217140 (2.2 MB) Interrupt:46 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:7 errors:0 dropped:0 overruns:0 frame:0 TX packets:7 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:952 (952.0 B) TX bytes:952 (952.0 B) My specs are: mark@Mark-ASUS:~$ sudo lspci -nn 04:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 06) 06:00.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1080] (rev 01) udev information: KERNEL[11.351405] add /devices/pci0000:00/0000:00:1c.2/0000:04:00.0/net/eth0 (net) UDEV_LOG=3 ACTION=add DEVPATH=/devices/pci0000:00/0000:00:1c.2/0000:04:00.0/net/eth0 SUBSYSTEM=net INTERFACE=eth0 IFINDEX=2 SEQNUM=1542 UDEV [11.363905] add /devices/pci0000:00/0000:00:1c.2/0000:04:00.0/net/eth0 (net) UDEV_LOG=3 ACTION=add DEVPATH=/devices/pci0000:00/0000:00:1c.2/0000:04:00.0/net/eth0 SUBSYSTEM=net INTERFACE=eth0 IFINDEX=2 SEQNUM=1542 ID_VENDOR_FROM_DATABASE=Realtek Semiconductor Co., Ltd. ID_MODEL_FROM_DATABASE=RTL8111/8168B PCI Express Gigabit Ethernet controller ID_BUS=pci ID_VENDOR_ID=0x10ec ID_MODEL_ID=0x8168 ID_MM_CANDIDATE=1 dmesg information: [ 2.855982] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [ 2.856366] r8169 0000:04:00.0: eth0: RTL8168b/8111b at 0xffffc9000064c000, f4:6d:04:d1:2c:4e, XID 0c900800 IRQ 46 [ 12.540956] r8169 0000:04:00.0: eth0: link down [ 12.540961] r8169 0000:04:00.0: eth0: link down [ 12.541173] ADDRCONF(NETDEV_UP): eth0: link is not ready I took out a lot of information that did not pertain to eth0, because the previous edits wouldn't save. If I need any more information, let me know. I would love to get this resolved. The other issue I am noticing is sometimes my connection disconnects altogether, for about a minute, then it reconnects.
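    Not part of the original question, but given the dropped RX packets in the ifconfig output above, two things commonly checked with the r8169 driver are the negotiated link speed/duplex and the offload settings. A rough sketch, using the interface name shown above:

    sudo ethtool eth0                      # confirm Speed (100/1000Mb/s) and Duplex: Full
    sudo ethtool -k eth0                   # show current offload settings
    sudo ethtool -K eth0 tso off gso off   # frequently suggested workaround to try for r8169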

    Read the article

  • Application Module Extension in Oracle Application R12

    - by Manoj Madhusoodanan
    In this blog I will describe how to extend an Application Module. I will explain this based on my previous blog PL/SQL based EO.  I want to extend FndUserAM to add a procedure to raise a custom business event when the FND_USER has been created successfully. Here I am using a custom business event "xxcust.oracle.apps.demo_event". Please find the code used in the Business Event (Table, Package). The following steps need to be performed. 1) Download all files pertaining to "Entity Object Based on PL/SQL" to JDEV_USER_HOME/myprojects and JDEV_USER_HOME/myclasses. If you want to see the content of the source java file, decompile it and save it in JDEV_USER_HOME/myprojects. 2) Create XXFndUserAM as follows. 3) Add the following method to XXFndUserAMImpl.    import oracle.apps.fnd.framework.OAException;   import oracle.apps.fnd.framework.server.OADBTransactionImpl;   import oracle.apps.fnd.wf.bes.BusinessEvent;   import oracle.apps.fnd.wf.bes.BusinessEventException;    import java.sql.Connection;     public void raiseEvent(String userName) {        String eventName = "xxcust.oracle.apps.demo_event";        String eventKey = userName;        Connection conn = ((OADBTransactionImpl)getOADBTransaction()).getJdbcConnection();         BusinessEvent event = null;         try{             event = new BusinessEvent(eventName, eventKey);             /* Setting Parameters */             event.setStringProperty("USER_NAME",userName);             event.setStringProperty("STATUS","User has been created successfully");             event.raise(conn);             }             catch (BusinessEventException e) {                 throw new OAException("Exception occurred when invoking web service - "+e.getMessage());             }             getOADBTransaction().commit();    } 4) Create a controller which extends from xxcust.oracle.apps.fnd.user.webui.CreateFndUserCO. Call the raiseEvent method from the new controller. 5) Create a substitution for FndUserAM. 6) Migrate the following files to $JAVA_TOP: xxcustom.oracle.apps.fnd.user.server.FndUserAMImpl.java xxcustom.oracle.apps.fnd.user.server.XXFndUserAM.xml xxcustom.oracle.apps.fnd.user.webui.XXCreateFndUserCO.java 8) Migrate the substitution. 9) Restart the server. 10) Personalize the page /xxcust/oracle/apps/fnd/user/webui/CreateFndUserPG and set the new controller. 11) Verify the substitution has been properly applied by clicking About this Page. 12) Access the page and create a user. You can see the result of the Business Event.
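    Step 4 above does not show the controller code. A minimal sketch of what it might look like is below; this is an assumption based on the class names given in the post (XXCreateFndUserCO extending the CreateFndUserCO controller from the earlier article), and the "Apply" event name and "userName" parameter are illustrative, not taken from the post.

    package xxcustom.oracle.apps.fnd.user.webui;

    import java.io.Serializable;
    import oracle.apps.fnd.framework.OAApplicationModule;
    import oracle.apps.fnd.framework.webui.OAPageContext;
    import oracle.apps.fnd.framework.webui.beans.OAWebBean;
    import xxcust.oracle.apps.fnd.user.webui.CreateFndUserCO;

    public class XXCreateFndUserCO extends CreateFndUserCO
    {
        public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
        {
            // Let the original controller create the FND_USER record first
            super.processFormRequest(pageContext, webBean);

            // "Apply" is assumed to be the submit button's event on this page
            if ("Apply".equals(pageContext.getParameter("event")))
            {
                OAApplicationModule am = pageContext.getApplicationModule(webBean);
                String userName = pageContext.getParameter("userName"); // illustrative parameter name
                // Invoke the raiseEvent method added to XXFndUserAMImpl in step 3
                am.invokeMethod("raiseEvent", new Serializable[] { userName });
            }
        }
    }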

    Read the article

  • Hyper-V for Developers Part 1 Internal Networks

    Over the last year, we've been working with Microsoft to build training and demo content for the next version of Office Communications Server code-named Microsoft Communications Server 14.  This involved building multi-server demo environments in Hyper-V, getting them running on demo servers which we took to TechEd, PDC, and other training events, and sometimes connecting the demo servers to the show networks at those events.  ITPro stuff that should scare the hell out of a developer! It can get ugly when I occasionally have to venture into ITPro land.  Let's leave it at that. Having gone through this process about 10 to 15 times in the last year, I finally have it down.  This blog series is my attempt to put all that knowledge in one place, if only so I can find it when I need it again.  I'll start with the simplest scenario and then build on top of it in future blog posts. If you're an ITPro, please resist the urge to laugh at how trivial this is. Internal Hyper-V Networks Let's start simple.  An internal network is one that is intended only for the virtual machines that are going to be on that network; it enables them to communicate with each other. Create an Internal Network On your host machine, fire up the Hyper-V Manager and click the Virtual Network Manager in the Actions panel. Select Internal and leave all the other default values. Give the virtual network a name, and leave all the other default values. After the virtual network is created, open the Network and Sharing Center and click Change Adapter Settings to see the list of network connections. The only thing I recommend that you do is to give this connection a friendly label, e.g. Hyper-V Internal.  When you have multiple networks and virtual networks on the host machine, this helps group the networks so you can easily differentiate them from each other.  Otherwise, don't touch it; only bad things can happen. Connect the Virtual Machines to the Internal Network I'm assuming that you have more than 1 virtual machine already configured in Hyper-V, for example a Domain Controller, an Exchange Server, and a SharePoint Server. What you need to do is basically plug the network into the virtual machine.  In order to do this, the machine needs to have a virtual network adapter.  If the VM doesn't have a network adapter, open the VM's Settings and click Add Hardware in the left pane.  Choose the virtual network to which to bind the adapter. If you already have a virtual network adapter on the VM, simply connect it to the virtual network. Assign IP Addresses to the Virtual Machines on the Internal Network Open the Network and Sharing Center on your VM; there should only be 1 network at this time.  Open the Properties of the connection, select Internet Protocol Version 4 (TCP/IPv4) and hit Properties. In this environment, I'm assigning IP addresses as 192.168.0.xxx.  This particular VM has an IP address of 192.168.0.40 with a subnet mask of 255.255.255.0, and a DNS Server of 192.168.0.18.  DNS is running on the Domain Controller VM which has an IP address of 192.168.0.18. (A scripted version of this assignment step is sketched at the end of this post.) Repeat this process on every VM in your environment, obviously assigning a unique IP address to each.  In an environment with a domain controller, you should now be able to ping the machines from each other. What Next? 
After completing this process, here's what you still cannot do: access the internet from any of the VMs; remote desktop to a VM from the host; remote desktop to a VM over the network. In the next post, we'll take a look at configuring an External network adapter on the virtual machines.  We'll then build on top of that so that you can RDP into the VMs from the host machine and over the network.
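    As a side note that is not in the original post: the static IP assignment described above can also be scripted inside each VM from an elevated command prompt with netsh. The connection name is an assumption; the addresses are the example values used in this post.

    netsh interface ip set address name="Local Area Connection" source=static addr=192.168.0.40 mask=255.255.255.0
    netsh interface ip set dns name="Local Area Connection" source=static addr=192.168.0.18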

    Read the article

  • Using NSpec at various architectural layers

    - by nono
    Having read the quick start at nspec.org, I realized that NSpec might be a useful tool in a scenario which was becoming a bit cumbersome with NUnit alone. I'm adding an OAuth (or, DotNetOpenAuth) to a website and quickly made a mess of writing test methods such as [Test] public void UserIsLoggedInLocallyPriorToInvokingExternalLoginAndExternalLoginSucceedsAndExternalProviderIdIsNotAlreadyAssociatedWithUserAccount() { ... } ... and I wound up with maybe a dozen permutations of this theme, for the user already being logged in locally and not locally, the external login succeeding or failing, etc. Not only were the method names unwieldy, but every test needed a setup that contained parts in common with a different set of other tests. I realized that NSpec's incremental setup capabilities would work great for this, and for a while I was trucking a long wonderfully, with code like act = () => { actionResult = controller.ExternalLoginCallback(returnUrl); }; context["The user is already logged in"] = () => { before = () => identity.Setup(x => x.IsAuthenticated).Returns(true); context["The external login succeeds"] = () => { before = () => oauth.Setup(x => x.VerifyAuthentication(It.IsAny<string>())).Returns(new AuthenticationResult(true, providerName, "provideruserid", "username", new Dictionary<string, string>())); context["External login already exists for current user"] = () => { before = () => authService.Setup(x => x.ExternalLoginExistsForUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true); it["Should add 'login sucessful' alert"] = () => { var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection]; alerts[0].Message.should_be_same("Login successful"); alerts[0].AlertType.should_be(AlertType.Success); }; it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>(); }; context["External login already exists for another user"] = () => { before = () => authService.Setup(x => x.ExternalLoginExistsForAnyOtherUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true); it["Adds an error alert"] = () => { var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection]; alerts[0].Message.should_be_same("The external login you requested is already associated with a different user account"); alerts[0].AlertType.should_be(AlertType.Error); }; it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>(); }; This approach seemed to work magnificently until I prepared to write test code for my ApplicationServices layer, to which I delegate viewmodel manipulation from my MVC controllers, and which coordinates the operations of the lower data repository layer: public void CreateUserAccountFromExternalLogin(RegisterExternalLoginModel model) { throw new NotImplementedException(); } public void AssociateExternalLoginWithUser(string userName, string provider, string providerUserId) { throw new NotImplementedException(); } public string GetLocalUserName(string provider, string providerUserId) { throw new NotImplementedException(); } I have no idea what in the world to name the test class, the test methods, or even if I should perhaps include the testing for this layer into the test class from my large code snippet above, so that a single feature or user action could be tested without regard to architectural layering. 
I can't find any tutorials or blog posts which cover more than simple examples, so I would appreciate any recommendations or pointing in the right direction. I would even welcome "your question is invalid"-type answers as long as some explanation is provided.
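    For the services layer specifically, one possible shape, not a recommendation from the original question but a sketch reusing the same NSpec constructs shown above, is to name the class after the unit under test and let the nested contexts carry the scenario wording. The AccountServices class, IUserRepository dependency and its methods below are hypothetical stand-ins for the real application services layer and its repository:

    using Moq;
    using NSpec;

    class AccountServices_specs : nspec
    {
        AccountServices services;
        Mock<IUserRepository> repository;   // hypothetical repository dependency
        string localUserName;

        void before_each()
        {
            repository = new Mock<IUserRepository>();
            services = new AccountServices(repository.Object);
        }

        void when_resolving_a_local_user_name()
        {
            act = () => localUserName = services.GetLocalUserName("provider", "provideruserid");

            context["an external login exists for the provider user id"] = () =>
            {
                before = () => repository.Setup(r => r.FindUserNameByLogin("provider", "provideruserid"))
                                         .Returns("username");
                it["returns the associated local user name"] = () => localUserName.should_be("username");
            };
        }

        void when_associating_an_external_login_with_a_user()
        {
            act = () => services.AssociateExternalLoginWithUser("username", "provider", "provideruserid");

            it["persists the association through the repository"] = () =>
                repository.Verify(r => r.AddExternalLogin("username", "provider", "provideruserid"), Times.Once());
        }
    }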

    Read the article

  • WPF TreeView MouseDown

    - by imekon
    I've got something like this in a TreeView: <DataTemplate x:Key="myTemplate"> <StackPanel MouseDown="OnItemMouseDown"> ... </StackPanel> </DataTemplate> Using this I get the mouse down events if I click on items in the stack panel. However... there seems to be another item behind the stack panel that is the TreeViewItem - it's very hard to hit, but not impossible, and that's when the problems start to occur. I had a go at handling PreviewMouseDown on TreeViewItem, however that seems to require e.Handled = false otherwise standard tree view behaviour stops working. Ok, Here's the source code... MainWindow.xaml <Window x:Class="WPFMultiSelectTree.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:WPFMultiSelectTree" Title="Multiple Selection Tree" Height="300" Width="300"> <Window.Resources> <!-- Declare the classes that convert bool to Visibility --> <local:VisibilityConverter x:Key="visibilityConverter"/> <local:VisibilityInverter x:Key="visibilityInverter"/> <!-- Set the style for any tree view item --> <Style TargetType="TreeViewItem"> <Style.Triggers> <DataTrigger Binding="{Binding Selected}" Value="True"> <Setter Property="Background" Value="DarkBlue"/> <Setter Property="Foreground" Value="White"/> </DataTrigger> </Style.Triggers> <EventSetter Event="PreviewMouseDown" Handler="OnTreePreviewMouseDown"/> </Style> <!-- Declare a hierarchical data template for the tree view items --> <HierarchicalDataTemplate x:Key="RecursiveTemplate" ItemsSource="{Binding Children}"> <StackPanel Margin="2" Orientation="Horizontal" MouseDown="OnTreeMouseDown"> <Ellipse Width="12" Height="12" Fill="Green"/> <TextBlock Margin="2" Text="{Binding Name}" Visibility="{Binding Editing, Converter={StaticResource visibilityInverter}}"/> <TextBox Margin="2" Text="{Binding Name}" KeyDown="OnTextBoxKeyDown" IsVisibleChanged="OnTextBoxIsVisibleChanged" Visibility="{Binding Editing, Converter={StaticResource visibilityConverter}}"/> <TextBlock Margin="2" Text="{Binding Index, StringFormat=({0})}"/> </StackPanel> </HierarchicalDataTemplate> <!-- Declare a simple template for a list box --> <DataTemplate x:Key="ListTemplate"> <TextBlock Text="{Binding Name}"/> </DataTemplate> </Window.Resources> <Grid> <!-- Declare the rows in this grid --> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition/> <RowDefinition Height="Auto"/> <RowDefinition/> </Grid.RowDefinitions> <!-- The first header --> <TextBlock Grid.Row="0" Margin="5" Background="PowderBlue">Multiple selection tree view</TextBlock> <!-- The tree view --> <TreeView Name="m_tree" Margin="2" Grid.Row="1" ItemsSource="{Binding Children}" ItemTemplate="{StaticResource RecursiveTemplate}"/> <!-- The second header --> <TextBlock Grid.Row="2" Margin="5" Background="PowderBlue">The currently selected items in the tree</TextBlock> <!-- The list box --> <ListBox Name="m_list" Margin="2" Grid.Row="3" ItemsSource="{Binding .}" ItemTemplate="{StaticResource ListTemplate}"/> </Grid> </Window> MainWindow.xaml.cs /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { private Container m_root; private Container m_first; private ObservableCollection<Container> m_selection; private string m_current; /// <summary> /// Constructor /// </summary> public MainWindow() { InitializeComponent(); m_selection = new ObservableCollection<Container>(); m_root = new Container("root"); for (int parents = 0; parents < 
50; parents++) { Container parent = new Container(String.Format("parent{0}", parents + 1)); for (int children = 0; children < 1000; children++) { parent.Add(new Container(String.Format("child{0}", children + 1))); } m_root.Add(parent); } m_tree.DataContext = m_root; m_list.DataContext = m_selection; m_first = null; } /// <summary> /// Has the shift key been pressed? /// </summary> private bool ShiftPressed { get { return Keyboard.IsKeyDown(Key.LeftShift) || Keyboard.IsKeyDown(Key.RightShift); } } /// <summary> /// Has the control key been pressed? /// </summary> private bool CtrlPressed { get { return Keyboard.IsKeyDown(Key.LeftCtrl) || Keyboard.IsKeyDown(Key.RightCtrl); } } /// <summary> /// Clear down the selection list /// </summary> private void DeselectAndClear() { foreach(Container container in m_selection) { container.Selected = false; } m_selection.Clear(); } /// <summary> /// Add the container to the list (if not already present), /// mark as selected /// </summary> /// <param name="container"></param> private void AddToSelection(Container container) { if (container == null) { return; } foreach (Container child in m_selection) { if (child == container) { return; } } container.Selected = true; m_selection.Add(container); } /// <summary> /// Remove container from list, mark as not selected /// </summary> /// <param name="container"></param> private void RemoveFromSelection(Container container) { m_selection.Remove(container); container.Selected = false; } /// <summary> /// Process single click on a tree item /// /// Normally just select an item /// /// SHIFT-Click extends selection /// CTRL-Click toggles a selection /// </summary> /// <param name="sender"></param> private void OnTreeSingleClick(object sender) { FrameworkElement element = sender as FrameworkElement; if (element != null) { Container container = element.DataContext as Container; if (container != null) { if (CtrlPressed) { if (container.Selected) { RemoveFromSelection(container); } else { AddToSelection(container); } } else if (ShiftPressed) { if (container.Parent == m_first.Parent) { if (container.Index < m_first.Index) { Container item = container; for (int i = container.Index; i < m_first.Index; i++) { AddToSelection(item); item = item.Next; if (item == null) { break; } } } else if (container.Index > m_first.Index) { Container item = m_first; for (int i = m_first.Index; i <= container.Index; i++) { AddToSelection(item); item = item.Next; if (item == null) { break; } } } } } else { DeselectAndClear(); m_first = container; AddToSelection(container); } } } } /// <summary> /// Process double click on tree item /// </summary> /// <param name="sender"></param> private void OnTreeDoubleClick(object sender) { FrameworkElement element = sender as FrameworkElement; if (element != null) { Container container = element.DataContext as Container; if (container != null) { container.Editing = true; m_current = container.Name; } } } /// <summary> /// Clicked on the stack panel in the tree view /// /// Double left click: /// /// Switch to editing mode (flips visibility of textblock and textbox) /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void OnTreeMouseDown(object sender, MouseButtonEventArgs e) { Debug.WriteLine("StackPanel mouse down"); switch(e.ChangedButton) { case MouseButton.Left: switch (e.ClickCount) { case 2: OnTreeDoubleClick(sender); e.Handled = true; break; } break; } } /// <summary> /// Clicked on tree view item in tree /// </summary> /// <param name="sender"></param> /// <param 
name="e"></param> private void OnTreePreviewMouseDown(object sender, MouseButtonEventArgs e) { Debug.WriteLine("TreeViewItem preview mouse down"); switch (e.ChangedButton) { case MouseButton.Left: switch (e.ClickCount) { case 1: { // We've had a single click on a tree view item // Unfortunately this is the WHOLE tree item, including the +/- // symbol to the left. The tree doesn't do a selection, so we // have to filter this out... MouseDevice device = e.Device as MouseDevice; Debug.WriteLine(String.Format("Tree item clicked on: {0}", device.DirectlyOver.GetType().ToString())); // This is bad. The whole point of WPF is for the code // not to know what the UI has - yet here we are testing for // it as a workaround. Sigh... if (device.DirectlyOver.GetType() != typeof(Path)) { OnTreeSingleClick(sender); } // Cannot say handled - if we do it stops the tree working! //e.Handled = true; } break; } break; } } /// <summary> /// Key press in text box /// /// Return key finishes editing /// Escape key finishes editing, restores original value (this doesn't work!) /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void OnTextBoxKeyDown(object sender, KeyEventArgs e) { switch(e.Key) { case Key.Return: { TextBox box = sender as TextBox; if (box != null) { Container container = box.DataContext as Container; if (container != null) { container.Editing = false; e.Handled = true; } } } break; case Key.Escape: { TextBox box = sender as TextBox; if (box != null) { Container container = box.DataContext as Container; if (container != null) { container.Editing = false; container.Name = m_current; e.Handled = true; } } } break; } } /// <summary> /// When text box becomes visible, grab focus and select all text in it. /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void OnTextBoxIsVisibleChanged(object sender, DependencyPropertyChangedEventArgs e) { bool visible = (bool)e.NewValue; if (visible) { TextBox box = sender as TextBox; if (box != null) { box.Focus(); box.SelectAll(); } } } } Here's the Container class public class Container : INotifyPropertyChanged { private string m_name; private ObservableCollection<Container> m_children; private Container m_parent; private bool m_selected; private bool m_editing; /// <summary> /// Constructor /// </summary> /// <param name="name">name of object</param> public Container(string name) { m_name = name; m_children = new ObservableCollection<Container>(); m_parent = null; m_selected = false; m_editing = false; } /// <summary> /// Name of object /// </summary> public string Name { get { return m_name; } set { if (m_name != value) { m_name = value; OnPropertyChanged("Name"); } } } /// <summary> /// Index of object in parent's children /// /// If there's no parent, the index is -1 /// </summary> public int Index { get { if (m_parent != null) { return m_parent.Children.IndexOf(this); } return -1; } } /// <summary> /// Get the next item, assuming this is parented /// /// Returns null if end of list reached, or no parent /// </summary> public Container Next { get { if (m_parent != null) { int index = Index + 1; if (index < m_parent.Children.Count) { return m_parent.Children[index]; } } return null; } } /// <summary> /// List of children /// </summary> public ObservableCollection<Container> Children { get { return m_children; } } /// <summary> /// Selected status /// </summary> public bool Selected { get { return m_selected; } set { if (m_selected != value) { m_selected = value; OnPropertyChanged("Selected"); 
} } } /// <summary> /// Editing status /// </summary> public bool Editing { get { return m_editing; } set { if (m_editing != value) { m_editing = value; OnPropertyChanged("Editing"); } } } /// <summary> /// Parent of this object /// </summary> public Container Parent { get { return m_parent; } set { m_parent = value; } } /// <summary> /// WPF Property Changed event /// </summary> public event PropertyChangedEventHandler PropertyChanged; /// <summary> /// Handler to inform WPF that a property has changed /// </summary> /// <param name="name"></param> private void OnPropertyChanged(string name) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(name)); } } /// <summary> /// Add a child to this container /// </summary> /// <param name="child"></param> public void Add(Container child) { m_children.Add(child); child.m_parent = this; } /// <summary> /// Remove a child from this container /// </summary> /// <param name="child"></param> public void Remove(Container child) { m_children.Remove(child); child.m_parent = null; } } The two classes VisibilityConverter and VisibilityInverter are implementations of IValueConverter that translates bool to Visibility. They make sure the TextBlock is displayed when not editing, and the TextBox is displayed when editing.

    Read the article

  • Passing data between android ListActivities in Java

    - by Will Janes
    I am new to Android! I am having a problem getting this code to work... Basically I Go from one list activity to another and pass the text from a list item through the intent of the activity to the new list view, then retrieve that text in the new list activity and then preform a http request based on value of that list item. Log Cat 04-05 17:47:32.370: E/AndroidRuntime(30135): FATAL EXCEPTION: main 04-05 17:47:32.370: E/AndroidRuntime(30135): java.lang.ClassCastException:android.widget.LinearLayout 04-05 17:47:32.370: E/AndroidRuntime(30135): at com.thickcrustdesigns.ufood.CatogPage$1.onItemClick(CatogPage.java:66) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.widget.AdapterView.performItemClick(AdapterView.java:284) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.widget.ListView.performItemClick(ListView.java:3731) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.widget.AbsListView$PerformClick.run(AbsListView.java:1959) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.os.Handler.handleCallback(Handler.java:587) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.os.Handler.dispatchMessage(Handler.java:92) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.os.Looper.loop(Looper.java:130) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.app.ActivityThread.main(ActivityThread.java:3691) 04-05 17:47:32.370: E/AndroidRuntime(30135): at java.lang.reflect.Method.invokeNative(Native Method) 04-05 17:47:32.370: E/AndroidRuntime(30135): at java.lang.reflect.Method.invoke(Method.java:507) 04-05 17:47:32.370: E/AndroidRuntime(30135): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:907) 04-05 17:47:32.370: E/AndroidRuntime(30135): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665) 04-05 17:47:32.370: E/AndroidRuntime(30135): at dalvik.system.NativeStart.main(Native Method) ListActivity 1 package com.thickcrustdesigns.ufood; import java.util.ArrayList; import org.apache.http.NameValuePair; import org.apache.http.message.BasicNameValuePair; import org.json.JSONException; import org.json.JSONObject; import android.app.ListActivity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.widget.AdapterView; import android.widget.AdapterView.OnItemClickListener; import android.widget.Button; import android.widget.ListView; import android.widget.TextView; public class CatogPage extends ListActivity { ListView listView1; Button btn_bk; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.definition_main); btn_bk = (Button) findViewById(R.id.btn_bk); listView1 = (ListView) findViewById(android.R.id.list); ArrayList<NameValuePair> nvp = new ArrayList<NameValuePair>(); nvp.add(new BasicNameValuePair("request", "categories")); ArrayList<JSONObject> jsondefs = Request.fetchData(this, nvp); String[] defs = new String[jsondefs.size()]; for (int i = 0; i < jsondefs.size(); i++) { try { defs[i] = jsondefs.get(i).getString("Name"); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } } uFoodAdapter adapter = new uFoodAdapter(this, R.layout.definition_list, defs); listView1.setAdapter(adapter); ListView lv = getListView(); lv.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> parent, View view, int position, long id) { TextView tv = (TextView) view; String p = tv.getText().toString(); Intent i = new Intent(getApplicationContext(), 
Results.class); i.putExtra("category", p); startActivity(i); } }); btn_bk.setOnClickListener(new View.OnClickListener() { public void onClick(View arg0) { Intent i = new Intent(getApplicationContext(), UFoodAppActivity.class); startActivity(i); } }); } } **ListActivity 2** package com.thickcrustdesigns.ufood; import java.util.ArrayList; import org.apache.http.NameValuePair; import org.apache.http.message.BasicNameValuePair; import org.json.JSONException; import org.json.JSONObject; import android.app.ListActivity; import android.os.Bundle; import android.widget.ListView; public class Results extends ListActivity { ListView listView1; enum Category { Chicken, Beef, Chinese, Cocktails, Curry, Deserts, Fish, ForOne { public String toString() { return "For One"; } }, Lamb, LightBites { public String toString() { return "Light Bites"; } }, Pasta, Pork, Vegetarian } @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); this.setContentView(R.layout.definition_main); listView1 = (ListView) findViewById(android.R.id.list); Bundle data = getIntent().getExtras(); String category = data.getString("category"); Category cat = Category.valueOf(category); String value = null; switch (cat) { case Chicken: value = "Chicken"; break; case Beef: value = "Beef"; break; case Chinese: value = "Chinese"; break; case Cocktails: value = "Cocktails"; break; case Curry: value = "Curry"; break; case Deserts: value = "Deserts"; break; case Fish: value = "Fish"; break; case ForOne: value = "ForOne"; break; case Lamb: value = "Lamb"; break; case LightBites: value = "LightBites"; break; case Pasta: value = "Pasta"; break; case Pork: value = "Pork"; break; case Vegetarian: value = "Vegetarian"; } ArrayList<NameValuePair> nvp = new ArrayList<NameValuePair>(); nvp.add(new BasicNameValuePair("request", "category")); nvp.add(new BasicNameValuePair("cat", value)); ArrayList<JSONObject> jsondefs = Request.fetchData(this, nvp); String[] defs = new String[jsondefs.size()]; for (int i = 0; i < jsondefs.size(); i++) { try { defs[i] = jsondefs.get(i).getString("Name"); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } } uFoodAdapter adapter = new uFoodAdapter(this, R.layout.definition_list, defs); listView1.setAdapter(adapter); } } Request package com.thickcrustdesigns.ufood; import java.io.BufferedReader; import java.io.InputStream; import java.io.InputStreamReader; import java.util.ArrayList; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.NameValuePair; import org.apache.http.client.HttpClient; import org.apache.http.client.entity.UrlEncodedFormEntity; import org.apache.http.client.methods.HttpPost; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONArray; import org.json.JSONObject; import android.content.Context; import android.util.Log; import android.widget.Toast; public class Request { @SuppressWarnings("null") public static ArrayList<JSONObject> fetchData(Context context, ArrayList<NameValuePair> nvp) { ArrayList<JSONObject> listItems = new ArrayList<JSONObject>(); InputStream is = null; try { HttpClient httpclient = new DefaultHttpClient(); HttpPost httppost = new HttpPost( "http://co350-11d.projects02.glos.ac.uk/php/database.php"); httppost.setEntity(new UrlEncodedFormEntity(nvp)); HttpResponse response = httpclient.execute(httppost); HttpEntity entity = response.getEntity(); is = entity.getContent(); } catch (Exception e) { Log.e("log_tag", "Error in http connection" + 
e.toString()); } // convert response to string String result = ""; try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); InputStream stream = null; StringBuilder sb = null; while ((result = reader.readLine()) != null) { sb.append(result + "\n"); } stream.close(); result = sb.toString(); } catch (Exception e) { Log.e("log_tag", "Error converting result " + e.toString()); } try { JSONArray jArray = new JSONArray(result); for (int i = 0; i < jArray.length(); i++) { JSONObject jo = jArray.getJSONObject(i); listItems.add(jo); } } catch (Exception e) { Toast.makeText(context.getApplicationContext(), "None Found!", Toast.LENGTH_LONG).show(); } return listItems; } } Any help would be grateful! Many Thanks
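    Not an edit to the question, but for context: the LogCat trace points at the cast in onItemClick, where view is the inflated row layout (a LinearLayout), not a TextView. A minimal sketch of one common fix, assuming uFoodAdapter exposes the String[] defs values through getItem like a standard ArrayAdapter, is to read the clicked item from the adapter instead of casting the row view:

    lv.setOnItemClickListener(new OnItemClickListener() {
        @Override
        public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
            // view is the LinearLayout declared in definition_list.xml,
            // so take the bound value from the adapter rather than casting the view
            String p = (String) parent.getItemAtPosition(position);

            Intent i = new Intent(getApplicationContext(), Results.class);
            i.putExtra("category", p);
            startActivity(i);
        }
    });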

    Read the article

  • How to apply TinyMCE to an AJAX-generated textarea

    - by Jai_pans
    Hi, i have a image multi-uloader script which also each item uploaded was preview 1st b4 it submitted and each images has its following textarea which are also generated by javascript and my problem is i want to use the tinymce editor to each textarea generated by the ajax. Any help will be appreciated.. here is my script function fileQueueError(file, errorCode, message) { try { var imageName = "error.gif"; var errorName = ""; if (errorCode === SWFUpload.errorCode_QUEUE_LIMIT_EXCEEDED) { errorName = "You have attempted to queue too many files."; } if (errorName !== "") { alert(errorName); return; } switch (errorCode) { case SWFUpload.QUEUE_ERROR.ZERO_BYTE_FILE: imageName = "zerobyte.gif"; break; case SWFUpload.QUEUE_ERROR.FILE_EXCEEDS_SIZE_LIMIT: imageName = "toobig.gif"; break; case SWFUpload.QUEUE_ERROR.ZERO_BYTE_FILE: case SWFUpload.QUEUE_ERROR.INVALID_FILETYPE: default: alert(message); break; } addImage("images/" + imageName); } catch (ex) { this.debug(ex); } } function fileDialogComplete(numFilesSelected, numFilesQueued) { try { if (numFilesQueued 0) { this.startUpload(); } } catch (ex) { this.debug(ex); } } function uploadProgress(file, bytesLoaded) { try { var percent = Math.ceil((bytesLoaded / file.size) * 100); var progress = new FileProgress(file, this.customSettings.upload_target); progress.setProgress(percent); if (percent === 100) { progress.setStatus("Creating thumbnail..."); progress.toggleCancel(false, this); } else { progress.setStatus("Uploading..."); progress.toggleCancel(true, this); } } catch (ex) { this.debug(ex); } } function uploadSuccess(file, serverData) { try { var progress = new FileProgress(file, this.customSettings.upload_target); if (serverData.substring(0, 7) === "FILEID:") { addRow("tableID","thumbnail.php?id=" + serverData.substring(7),file.name); //setup(); //generateTinyMCE('itemdescription[]'); progress.setStatus("Thumbnail Created."); progress.toggleCancel(false); } else { addImage("images/error.gif"); progress.setStatus("Error."); progress.toggleCancel(false); alert(serverData); } } catch (ex) { this.debug(ex); } } function uploadComplete(file) { try { /* I want the next upload to continue automatically so I'll call startUpload here */ if (this.getStats().files_queued 0) { this.startUpload(); } else { var progress = new FileProgress(file, this.customSettings.upload_target); progress.setComplete(); progress.setStatus("All images received."); progress.toggleCancel(false); } } catch (ex) { this.debug(ex); } } function uploadError(file, errorCode, message) { var imageName = "error.gif"; var progress; try { switch (errorCode) { case SWFUpload.UPLOAD_ERROR.FILE_CANCELLED: try { progress = new FileProgress(file, this.customSettings.upload_target); progress.setCancelled(); progress.setStatus("Cancelled"); progress.toggleCancel(false); } catch (ex1) { this.debug(ex1); } break; case SWFUpload.UPLOAD_ERROR.UPLOAD_STOPPED: try { progress = new FileProgress(file, this.customSettings.upload_target); progress.setCancelled(); progress.setStatus("Stopped"); progress.toggleCancel(true); } catch (ex2) { this.debug(ex2); } case SWFUpload.UPLOAD_ERROR.UPLOAD_LIMIT_EXCEEDED: imageName = "uploadlimit.gif"; break; default: alert(message); break; } addImage("images/" + imageName); } catch (ex3) { this.debug(ex3); } } function addRow(tableID,src,filename) { var table = document.getElementById(tableID); var rowCount = table.rows.length; var row = table.insertRow(rowCount); rowCount + 1; row.id = "row"+rowCount; var cell0 = row.insertCell(0); cell0.innerHTML = rowCount; 
cell0.style.background = "#FFFFFF"; var cell1 = row.insertCell(1); cell1.align = "center"; cell1.style.background = "#FFFFFF"; var imahe = document.createElement("img"); imahe.setAttribute("src",src); var hidden = document.createElement("input"); hidden.setAttribute("type","hidden"); hidden.setAttribute("name","filename[]"); hidden.setAttribute("value",filename); /*var hidden2 = document.createElement("input"); hidden2.setAttribute("type","hidden"); hidden2.setAttribute("name","filename[]"); hidden2.setAttribute("value",filename); cell1.appendChild(hidden2);*/ cell1.appendChild(hidden); cell1.appendChild(imahe); var cell2 = row.insertCell(2); cell2.align = "left"; cell2.valign = "top"; cell2.style.background = "#FFFFFF"; //tr1.appendChild(td1); var div2 = document.createElement("div"); div2.style.padding ="0 0 0 10px"; div2.style.width = "400px"; var alink = document.createElement("a"); //alink.style.margin="40px 0 0 0"; alink.href ="#"; alink.innerHTML ="Cancel"; alink.onclick= function () { document.getElementById(row.id).style.display='none'; document.getElementById(textfield.id).disabled='disabled'; }; var div = document.createElement("div"); div.style.margin="10px 0"; div.appendChild(alink); var textfield = document.createElement("input"); textfield.id = "file"+rowCount; textfield.type = "text"; textfield.name = "itemname[]"; textfield.style.margin = "10px 0"; textfield.style.width = "400px"; textfield.value = "Item Name"; textfield.onclick= function(){ //textfield.value=""; if(textfield.value=="Item Name") textfield.value=""; if(desc.innerHTML=="") desc.innerHTML ="Item Description"; if(price.value=="") price.value="Item Price"; } var desc = document.createElement("textarea"); desc.name = "itemdescription[]"; desc.cols = "80"; desc.rows = "4"; desc.innerHTML = "Item Description"; desc.onclick = function(){ if(desc.innerHTML== "Item Description") desc.innerHTML = ""; if(textfield.value=="Item name" || textfield.value=="") textfield.value="Item Name"; if(price.value=="") price.value="Item Price"; } var price = document.createElement("input"); price.id = "file"+rowCount; price.type = "text"; price.name = "itemprice[]"; price.style.margin = "10px 0"; price.style.width = "400px"; price.value = "Item Price"; price.onclick= function(){ if(price.value=="Item Price") price.value=""; if(desc.innerHTML=="") desc.innerHTML ="Item Description"; if(textfield.value=="") textfield.value="Item Name"; } var span = document.createElement("span"); span.innerHTML = "View"; span.style.width = "auto"; span.style.padding = "10px 0"; var view = document.createElement("input"); view.id = "file"+rowCount; view.type = "checkbox"; view.name = "publicview[]"; view.value = "y"; view.checked = "checked"; var div3 = document.createElement("div"); div3.appendChild(span); div3.appendChild(view); var div4 = document.createElement("div"); div4.style.padding = "10px 0"; var span2 = document.createElement("span"); span2.innerHTML = "Default Display"; span2.style.width = "auto"; span2.style.padding = "10px 0"; var radio = document.createElement("input"); radio.type = "radio"; radio.name = "setdefault"; radio.value = "y"; div4.appendChild(span2); div4.appendChild(radio); div2.appendChild(div); //div2.appendChild(label); //div2.appendChild(table); div2.appendChild(textfield); div2.appendChild(desc); div2.appendChild(price); div2.appendChild(div3); div2.appendChild(div4); cell2.appendChild(div2); } function addImage(src,val_id) { var newImg = document.createElement("img"); newImg.style.margin = "5px 50px 5px 5px"; 
newImg.style.display= "inline"; newImg.id=val_id; document.getElementById("thumbnails").appendChild(newImg); if (newImg.filters) { try { newImg.filters.item("DXImageTransform.Microsoft.Alpha").opacity = 0; } catch (e) { // If it is not set initially, the browser will throw an error. This will set it if it is not set yet. newImg.style.filter = 'progid:DXImageTransform.Microsoft.Alpha(opacity=' + 0 + ')'; } } else { newImg.style.opacity = 0; } newImg.onload = function () { fadeIn(newImg, 0); }; newImg.src = src; } function fadeIn(element, opacity) { var reduceOpacityBy = 5; var rate = 30; // 15 fps if (opacity < 100) { opacity += reduceOpacityBy; if (opacity > 100) { opacity = 100; } if (element.filters) { try { element.filters.item("DXImageTransform.Microsoft.Alpha").opacity = opacity; } catch (e) { // If it is not set initially, the browser will throw an error. This will set it if it is not set yet. element.style.filter = 'progid:DXImageTransform.Microsoft.Alpha(opacity=' + opacity + ')'; } } else { element.style.opacity = opacity / 100; } } if (opacity < 100) { setTimeout(function () { fadeIn(element, opacity); }, rate); } } /* ************************************** * FileProgress Object * Control object for displaying file info * ************************************** */ function FileProgress(file, targetID) { this.fileProgressID = "divFileProgress"; this.fileProgressWrapper = document.getElementById(this.fileProgressID); if (!this.fileProgressWrapper) { this.fileProgressWrapper = document.createElement("div"); this.fileProgressWrapper.className = "progressWrapper"; this.fileProgressWrapper.id = this.fileProgressID; this.fileProgressElement = document.createElement("div"); this.fileProgressElement.className = "progressContainer"; var progressCancel = document.createElement("a"); progressCancel.className = "progressCancel"; progressCancel.href = "#"; progressCancel.style.visibility = "hidden"; progressCancel.appendChild(document.createTextNode(" ")); var progressText = document.createElement("div"); progressText.className = "progressName"; progressText.appendChild(document.createTextNode(file.name)); var progressBar = document.createElement("div"); progressBar.className = "progressBarInProgress"; var progressStatus = document.createElement("div"); progressStatus.className = "progressBarStatus"; progressStatus.innerHTML = "&nbsp;"; this.fileProgressElement.appendChild(progressCancel); this.fileProgressElement.appendChild(progressText); this.fileProgressElement.appendChild(progressStatus); this.fileProgressElement.appendChild(progressBar); this.fileProgressWrapper.appendChild(this.fileProgressElement); document.getElementById(targetID).appendChild(this.fileProgressWrapper); fadeIn(this.fileProgressWrapper, 0); } else { this.fileProgressElement = this.fileProgressWrapper.firstChild; this.fileProgressElement.childNodes[1].firstChild.nodeValue = file.name; } this.height = this.fileProgressWrapper.offsetHeight; } FileProgress.prototype.setProgress = function (percentage) { this.fileProgressElement.className = "progressContainer green"; this.fileProgressElement.childNodes[3].className = "progressBarInProgress"; this.fileProgressElement.childNodes[3].style.width = percentage + "%"; }; FileProgress.prototype.setComplete = function () { this.fileProgressElement.className = "progressContainer blue"; this.fileProgressElement.childNodes[3].className = "progressBarComplete"; this.fileProgressElement.childNodes[3].style.width = ""; }; FileProgress.prototype.setError = function () { 
this.fileProgressElement.className = "progressContainer red"; this.fileProgressElement.childNodes[3].className = "progressBarError"; this.fileProgressElement.childNodes[3].style.width = ""; }; FileProgress.prototype.setCancelled = function () { this.fileProgressElement.className = "progressContainer"; this.fileProgressElement.childNodes[3].className = "progressBarError"; this.fileProgressElement.childNodes[3].style.width = ""; }; FileProgress.prototype.setStatus = function (status) { this.fileProgressElement.childNodes[2].innerHTML = status; }; FileProgress.prototype.toggleCancel = function (show, swfuploadInstance) { this.fileProgressElement.childNodes[0].style.visibility = show ? "visible" : "hidden"; if (swfuploadInstance) { var fileID = this.fileProgressID; this.fileProgressElement.childNodes[0].onclick = function () { swfuploadInstance.cancelUpload(fileID); return false; }; } }; i am using a swfuploader an i jst added a input fields and a textarea when it preview the images which ready to be uploaded and from my html i have this script var swfu; window.onload = function () { swfu = new SWFUpload({ // Backend Settings upload_url: "../we_modules/upload.php", // Relative to the SWF file or absolute post_params: {"PHPSESSID": ""}, // File Upload Settings file_size_limit : "20 MB", // 2MB file_types : "*.*", //file_types : "", file_types_description : "jpg", file_upload_limit : "0", file_queue_limit : "0", // Event Handler Settings - these functions as defined in Handlers.js // The handlers are not part of SWFUpload but are part of my website and control how // my website reacts to the SWFUpload events. //file_queued_handler : fileQueued, file_queue_error_handler : fileQueueError, file_dialog_complete_handler : fileDialogComplete, upload_progress_handler : uploadProgress, upload_error_handler : uploadError, upload_success_handler : uploadSuccess, upload_complete_handler : uploadComplete, // Button Settings button_image_url : "../we_modules/images/SmallSpyGlassWithTransperancy_17x18.png", // Relative to the SWF file button_placeholder_id : "spanButtonPlaceholder", button_width: 180, button_height: 18, button_text : 'Select Files(2 MB Max)', button_text_style : '.button { font-family: Helvetica, Arial, sans-serif; font-size: 12pt;cursor:pointer } .buttonSmall { font-size: 10pt; }', button_text_top_padding: 0, button_text_left_padding: 18, button_window_mode: SWFUpload.WINDOW_MODE.TRANSPARENT, button_cursor: SWFUpload.CURSOR.HAND, // Flash Settings flash_url : "../swfupload/swfupload.swf", custom_settings : { upload_target : "divFileProgressContainer" }, // Debug Settings debug: false }); }; where should i put on the tinymce function as you mention below?

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enable you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements -- without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Using QUnit</title> <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" /> </head> <body> <h1 id="qunit-header">QUnit example</h1> <h2 id="qunit-banner"></h2> <div id="qunit-testrunner-toolbar"></div> <h2 id="qunit-userAgent"></h2> <ol id="qunit-tests"></ol> <div id="qunit-fixture">test markup, will be hidden</div> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script> <script type="text/javascript"> // The function to test function addNumbers(a, b) { return a+b; } // The unit test test("Test of addNumbers", function () { equals(4, addNumbers(1,3), "1+3 should be 4"); }); </script> </body> </html> This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code. While easy to setup, there are several big disadvantages to this approach to executing JavaScript unit tests: You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it. Because your unit tests are not integrated with Visual Studio – in particular, MS Test -- you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build. A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach: · Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/. LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501 jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/ TestSwam – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. 
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/ All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test -- versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure. Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code. Using Server-Side JavaScript for JavaScript Unit Tests A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser: Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/ V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/ JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages. Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests. Using the Microsoft Script Control There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET. 
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Window Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0. Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx Creating the JavaScript Code to Test To keep things simple, let’s imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together: MvcApplication1\Scripts\Math.js function addNumbers(a, b) { return 5; } Notice that the addNumbers() method always returns the value 5. Right-now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project’s Scripts folder (Save the file in your actual MVC application and not your MVC test application). Creating the JavaScript Test Helper Class To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods: LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests. ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test. Here’s the code for the JavaScriptTestHelper class: JavaScriptTestHelper.cs   using System; using System.IO; using Microsoft.VisualStudio.TestTools.UnitTesting; using MSScriptControl; namespace MvcApplication1.Tests { public class JavaScriptTestHelper : IDisposable { private ScriptControl _sc; private TestContext _context; /// <summary> /// You need to use this helper with Unit Tests and not /// Basic Unit Tests because you need a Test Context /// </summary> /// <param name="testContext">Unit Test Test Context</param> public JavaScriptTestHelper(TestContext testContext) { if (testContext == null) { throw new ArgumentNullException("TestContext"); } _context = testContext; _sc = new ScriptControl(); _sc.Language = "JScript"; _sc.AllowUI = false; } /// <summary> /// Load the contents of a JavaScript file into the /// Script Engine. /// </summary> /// <param name="path">Path to JavaScript file</param> public void LoadFile(string path) { var fileContents = File.ReadAllText(path); _sc.AddCode(fileContents); } /// <summary> /// Pass the path of the test that you want to execute. 
/// </summary> /// <param name="testMethodName">JavaScript function name</param> public void ExecuteTest(string testMethodName) { dynamic result = null; try { result = _sc.Run(testMethodName, new object[] { }); } catch { var error = ((IScriptControl)_sc).Error; if (error != null) { var description = error.Description; var line = error.Line; var column = error.Column; var text = error.Text; var source = error.Source; if (_context != null) { var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column); _context.WriteLine(details); } } throw new AssertFailedException(error.Description); } } public void Dispose() { _sc = null; } } }     Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (These are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests). Creating the JavaScript Unit Test Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder: MvcApplication1.Tests\JavaScriptTests\MathTest.js /// <reference path="JavaScriptUnitTestFramework.js"/> function testAddNumbers() { // Act var result = addNumbers(1, 3); // Assert assert.areEqual(4, result, "addNumbers did not return right value!"); }   The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptnitTestFramework.js to the same folder as the MathTest.js file: MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js var assert = { areEqual: function (expected, actual, message) { if (expected !== actual) { throw new Error("Expected value " + expected + " is not equal to " + actual + ". " + message); } } }; There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests. Deploying the JavaScript Test Files This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won’t be able to find your JavaScript files when you execute your unit tests. You will get an error that looks something like this when you attempt to execute your unit tests: You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run. 
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTest folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog. Creating the Visual Studio Unit Test The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (Do not select the Basic Unit Test project item): The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test: [TestMethod] public void TestAddNumbers() { var jsHelper = new JavaScriptTestHelper(this.TestContext); // Load JavaScript files jsHelper.LoadFile("JavaScriptUnitTestFramework.js"); jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js"); jsHelper.LoadFile("MathTest.js"); // Execute JavaScript Test jsHelper.ExecuteTest("testAddNumbers"); } This code uses the JavaScriptTestHelper to load three files: JavaScripUnitTestFramework.js – Contains the assert functions. Math.js – Contains the addNumbers() function from your MVC application which is being tested. MathTest.js – Contains the JavaScript unit test function. Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function. Running the Visual Studio JavaScript Unit Test After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests: (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.) The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution then you will get the following results: Notice that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed: Summary The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window. 
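One small note on the test code above: because JavaScriptTestHelper implements IDisposable, you can wrap it in a using block so the Script Control reference is released as soon as the test finishes. A minimal variation of the same test method (identical file paths, only the using block added) looks like this:

[TestMethod]
public void TestAddNumbers()
{
    using (var jsHelper = new JavaScriptTestHelper(this.TestContext))
    {
        // Load the assertion helpers, the code under test, and the JavaScript unit test
        jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
        jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
        jsHelper.LoadFile("MathTest.js");

        // Execute the JavaScript test function
        jsHelper.ExecuteTest("testAddNumbers");
    }
}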
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    Read the article

  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
    I’m excited to announce the release today of several products: ASP.NET MVC 3 NuGet IIS Express 7.5 SQL Server Compact Edition 4 Web Deploy and Web Farm Framework 2.0 Orchard 1.0 WebMatrix 1.0 The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack. ASP.NET MVC 3 Today we are shipping the final release of ASP.NET MVC 3.  You can download and install ASP.NET MVC 3 here.  The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here. ASP.NET MVC 3 is a significant update that brings with it a bunch of great features.  Some of the improvements include: Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to continuing to support/enhance the existing .aspx view engine).  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type.  You can learn more about Razor from some of the blog posts I’ve done about it over the last 6 months Introducing Razor New @model keyword in Razor Layouts with Razor Server-Side Comments with Razor Razor’s @: and <text> syntax Implicit and Explicit code nuggets with Razor Layouts and Sections with Razor Today’s release supports full code intellisense support for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express. JavaScript Improvements ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities. The AJAX and Validation helpers in ASP.NET MVC 3 now use an Unobtrusive JavaScript based approach.  Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML 5 “data-“ attribute convention (which conveniently works on older browsers as well – including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries.  ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server.  This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends.  We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project).  Previous releases of ASP.NET MVC included the core jQuery library.  ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios).  We are also now shipping and including jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects). Improved Validation ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an onbtrusive javascript implementation).  
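To make the validation improvements concrete, here is a small illustrative model (the class and property names are invented for this example). With the unobtrusive client-side validation described above enabled, the same data annotation attributes drive validation on both the server and the client:

using System.ComponentModel.DataAnnotations;

public class RegistrationModel
{
    [Required]
    [StringLength(50)]
    public string UserName { get; set; }

    [Range(18, 120, ErrorMessage = "Age must be between 18 and 120.")]
    public int Age { get; set; }
}

No extra JavaScript needs to be written for this – the validation helpers emit the HTML5 data- attributes that the included jQuery Validate plugin picks up on the client.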
Today’s release also includes built-in support for Remote Validation - which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client. The validation features introduced within .NET 4’s System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3.  This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model.  ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4.  ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated.  This enables richer scenarios where you can validate the current value based on another property of the model.  We’ve shipped a built-in [Compare] validation attribute  with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values. You can use any data access API or technology with ASP.NET MVC.  This past year, though, we’ve worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications.  These two posts of mine cover the latest EF Code First preview and demonstrates how to use it with ASP.NET MVC 3 to enable easy editing of data (with end to end client+server validation support).  The final release of EF Code First will ship in the next few weeks. Today we are also publishing the first preview of a new MvcScaffolding project.  It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code-First (and is pluggable to support other data providers).  You can learn more about it – and install it via NuGet today - from Steve Sanderson’s MvcScaffolding blog post. Output Caching Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC V3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing.  This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server.  The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site.  It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I’ll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3’s new output caching support for partial page scenarios in the future. Better Dependency Injection ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers.  
You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects. Other Goodies ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner.  Here are just a few examples: Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates. Improved Add->View Scaffolding support that enables the generation of even cleaner view templates. New ViewBag property that uses .NET 4’s dynamic support to make it easy to pass late-bound data from Controllers to Views. Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app. New [AllowHtml] attribute that allows for more granular request validation when binding form posted data to models. Sessionless controller support that allows fine grained control over whether SessionState is enabled on a Controller. New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios. New Html.Raw() helper to indicate that output should not be HTML encoded. New Crypto helpers for salting and hashing passwords. And much, much more… Learn More about ASP.NET MVC 3 We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead.  Below are two good ASP.NET MVC 3 tutorials available on the site today: Build your First ASP.NET MVC 3 Application: VB and C# Building the ASP.NET MVC 3 Music Store We’ll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published. How to Upgrade Existing Projects ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3.  The new features in ASP.NET MVC 3 build on top of the foundational work we’ve already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you’ve acquired are all directly applicable with the MVC 3 release.  MVC 3 adds new features and capabilities – it doesn’t obsolete existing ones. You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes.  Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your  existing projects. Localized Builds Today’s ASP.NET MVC 3 release is available in English.  We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days.  I’ll blog pointers to the localized downloads once they are available. NuGet Today we are also shipping NuGet – a free, open source, package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries).  You can download and install it here. NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc) to package up their libraries and register them with an online gallery/catalog that is searchable.  
The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on. NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – and does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.  NuGet Gallery This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet.  The site also now allows developers to optionally submit new packages that they wish to share with others.  You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today.  We hope to have thousands there in the future. IIS Express 7.5 Today we are also shipping IIS Express 7.5.  IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios.  It works for both ASP.NET Web Forms and ASP.NET MVC project types. We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built-into Visual Studio today with the full power of IIS.  Specifically: It’s lightweight and easy to install (less than 5Mb download and a quick install) It does not require an administrator account to run/debug applications from Visual Studio It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules It supports and enables the same extensibility model and web.config file settings that IIS 7.x support It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all) It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk.  It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios.  You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server.  The standard IIS Express EULA now includes redistributable rights. Visual Studio 2010 SP1 adds support for IIS Express.  Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.  SQL Server Compact Edition 4 Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4).  SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. 
You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Tooling Support with VS 2010 SP1 Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET Projects.  Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.  Web Deploy and Web Farm Framework 2.0 Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2.  These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them: Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Visit the http://iis.net website to learn more and install them. Both are free. Orchard 1.0 Today we are also releasing Orchard v1.0.  Orchard is a free, open source, community based project.  It provides Content Management System (CMS) and Blogging System support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built-into Orchard).  Read these tutorials to learn more about how you can setup and manage your own Orchard site. Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage).  Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it.  WebMatrix 1.0 WebMatrix is a new, free, web development tool from Microsoft that provides a suite of technologies that make it easier to enable website development.  It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla).  Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites.  
WebMatrix includes IIS Express, SQL CE 4, and ASP.NET - providing an integrated web-server, database and programming framework combination.  It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers. You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer.  Visit http://microsoft.com/web to download and install it today. Summary I’m really excited about today’s releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better.  A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • ASP.NET MVC Paging/Sorting/Filtering a list using ModelMetadata

    - by rajbk
    This post looks at how to control paging, sorting and filtering when displaying a list of data by specifying attributes in your Model using the ASP.NET MVC framework and the excellent MVCContrib library. It also shows how to hide/show columns and control the formatting of data using attributes.  This uses the Northwind database. A sample project is attached at the end of this post. Let’s start by looking at a class called ProductViewModel. The properties in the class are decorated with attributes. The OrderBy attribute tells the system that the Model can be sorted using that property. The SearchFilter attribute tells the system that filtering is allowed on that property. Filtering type is set by the  FilterType enum which currently supports Equals and Contains. The ScaffoldColumn property specifies if a column is hidden or not The DisplayFormat specifies how the data is formatted. public class ProductViewModel { [OrderBy(IsDefault = true)] [ScaffoldColumn(false)] public int? ProductID { get; set; }   [SearchFilter(FilterType.Contains)] [OrderBy] [DisplayName("Product Name")] public string ProductName { get; set; }   [OrderBy] [DisplayName("Unit Price")] [DisplayFormat(DataFormatString = "{0:c}")] public System.Nullable<decimal> UnitPrice { get; set; }   [DisplayName("Category Name")] public string CategoryName { get; set; }   [SearchFilter] [ScaffoldColumn(false)] public int? CategoryID { get; set; }   [SearchFilter] [ScaffoldColumn(false)] public int? SupplierID { get; set; }   [OrderBy] public bool Discontinued { get; set; } } Before we explore the code further, lets look at the UI.  The UI has a section for filtering the data. The column headers with links are sortable. Paging is also supported with the help of a pager row. The pager is rendered using the MVCContrib Pager component. The data is displayed using a customized version of the MVCContrib Grid component. The customization was done in order for the Grid to be aware of the attributes mentioned above. Now, let’s look at what happens when we perform actions on this page. The diagram below shows the process: The form on the page has its method set to “GET” therefore we see all the parameters in the query string. The query string is shown in blue above. This query gets routed to an action called Index with parameters of type ProductViewModel and PageSortOptions. The parameters in the query string get mapped to the input parameters using model binding. The ProductView object created has the information needed to filter data while the PageAndSorting object is used for paging and sorting the data. The last block in the figure above shows how the filtered and paged list is created. We receive a product list from our product repository (which is of type IQueryable) and first filter it by calliing the AsFiltered extension method passing in the productFilters object and then call the AsPagination extension method passing in the pageSort object. The AsFiltered extension method looks at the type of the filter instance passed in. It skips properties in the instance that do not have the SearchFilter attribute. For properties that have the SearchFilter attribute, it adds filter expression trees to filter against the IQueryable data. The AsPagination extension method looks at the type of the IQueryable and ensures that the column being sorted on has the OrderBy attribute. If it does not find one, it looks for the default sort field [OrderBy(IsDefault = true)]. 
It is required that at least one attribute in your model has the [OrderBy(IsDefault = true)]. This because a person could be performing paging without specifying an order by column. As you may recall the LINQ Skip method now requires that you call an OrderBy method before it. Therefore we need a default order by column to perform paging. The extension method adds a order expressoin tree to the IQueryable and calls the MVCContrib AsPagination extension method to page the data. Implementation Notes Auto Postback The search filter region auto performs a get request anytime the dropdown selection is changed. This is implemented using the following jQuery snippet $(document).ready(function () { $("#productSearch").change(function () { this.submit(); }); }); Strongly Typed View The code used in the Action method is shown below: public ActionResult Index(ProductViewModel productFilters, PageSortOptions pageSortOptions) { var productPagedList = productRepository.GetProductsProjected().AsFiltered(productFilters).AsPagination(pageSortOptions);   var productViewFilterContainer = new ProductViewFilterContainer(); productViewFilterContainer.Fill(productFilters.CategoryID, productFilters.SupplierID, productFilters.ProductName);   var gridSortOptions = new GridSortOptions { Column = pageSortOptions.Column, Direction = pageSortOptions.Direction };   var productListContainer = new ProductListContainerModel { ProductPagedList = productPagedList, ProductViewFilterContainer = productViewFilterContainer, GridSortOptions = gridSortOptions };   return View(productListContainer); } As you see above, the object that is returned to the view is of type ProductListContainerModel. This contains all the information need for the view to render the Search filter section (including dropdowns),  the Html.Pager (MVCContrib) and the Html.Grid (from MVCContrib). It also stores the state of the search filters so that they can recreate themselves when the page reloads (Viewstate, I miss you! :0)  The class diagram for the container class is shown below.   Custom MVCContrib Grid The MVCContrib grid default behavior was overridden so that it would auto generate the columns and format the columns based on the metadata and also make it aware of our custom attributes (see MetaDataGridModel in the sample code). The Grid ensures that the ShowForDisplay on the column is set to true This can also be set by the ScaffoldColumn attribute ref: http://bradwilson.typepad.com/blog/2009/10/aspnet-mvc-2-templates-part-2-modelmetadata.html) Column headers are set using the DisplayName attribute Column sorting is set using the OrderBy attribute. The data is formatted using the DisplayFormat attribute. Generic Extension methods for Sorting and Filtering The extension method AsFiltered takes in an IQueryable<T> and uses expression trees to query against the IQueryable data. The query is constructed using the Model metadata and the properties of the T filter (productFilters in our case). Properties in the Model that do not have the SearchFilter attribute are skipped when creating the filter expression tree.  It returns an IQueryable<T>. The extension method AsPagination takes in an IQuerable<T> and first ensures that the column being sorted on has the OrderBy attribute. If not, we look for the default OrderBy column ([OrderBy(IsDefault = true)]). We then build an expression tree to sort on this column. We finally hand off the call to the MVCContrib AsPagination which returns an IPagination<T>. 
    The IPagination<T>, as you can see in the class diagram above, is passed to the view and used by the MVCContrib Grid and Pager components.

    Custom Provider
    To get the system to recognize our custom attributes, we create our MetadataProvider as mentioned in this article (http://bradwilson.typepad.com/blog/2010/01/why-you-dont-need-modelmetadataattributes.html):

    protected override ModelMetadata CreateMetadata(IEnumerable<Attribute> attributes, Type containerType, Func<object> modelAccessor, Type modelType, string propertyName)
    {
        ModelMetadata metadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName);

        SearchFilterAttribute searchFilterAttribute = attributes.OfType<SearchFilterAttribute>().FirstOrDefault();
        if (searchFilterAttribute != null)
        {
            metadata.AdditionalValues.Add(Globals.SearchFilterAttributeKey, searchFilterAttribute);
        }

        OrderByAttribute orderByAttribute = attributes.OfType<OrderByAttribute>().FirstOrDefault();
        if (orderByAttribute != null)
        {
            metadata.AdditionalValues.Add(Globals.OrderByAttributeKey, orderByAttribute);
        }

        return metadata;
    }

    We register our MetadataProvider in Global.asax.cs:

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();

        RegisterRoutes(RouteTable.Routes);

        ModelMetadataProviders.Current = new MvcFlan.QueryModelMetaDataProvider();
    }

    Bugs, comments and suggestions are welcome! You can download the sample code below. This code is purely experimental. Use at your own risk. Download Sample Code (VS 2010 RTM) MVCNorthwindSales.zip

    Read the article

  • MEF CompositionInitializer for WPF

    - by Reed
    The Managed Extensibility Framework is an amazingly useful addition to the .NET Framework.  I was very excited to see System.ComponentModel.Composition added to the core framework.  Personally, I feel that MEF is one tool I’ve always been missing in my .NET development. Unfortunately, one scenario where MEF tends to fall short of its full potential is Windows Presentation Foundation development.  In particular, there are many times when the XAML parser constructs objects in WPF development, which makes composition of those parts difficult.  The current release of MEF (Preview Release 9) addresses this for Silverlight developers via System.ComponentModel.Composition.CompositionInitializer.  However, there is no equivalent class for WPF developers.

    The CompositionInitializer class provides the means for an object to compose itself.  This is very useful with WPF and Silverlight development, since it allows a View, such as a UserControl, to be generated via the standard XAML parser, and still automatically pull in the appropriate ViewModel in an extensible manner.  Glenn Block has demonstrated the usage for Silverlight in detail, but the same issues apply in WPF. As an example, let’s take a look at a very simple case.  Take the following XAML for a Window:

    <Window x:Class="WpfApplication1.MainView"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="220" Width="300">
        <Grid>
            <TextBlock Text="{Binding TheText}" />
        </Grid>
    </Window>

    This does nothing but create a Window, add a simple TextBlock control, and use it to display the value of our “TheText” property in our DataContext class.  Since this is our main window, WPF will automatically construct and display this Window, so we need to handle constructing the DataContext and setting it ourselves. We could do this in code or in XAML, but in order to do it directly, we would need to hard code the ViewModel type directly into our XAML code, or we would need to construct the ViewModel class and set it in the code behind.  Both have disadvantages, and the disadvantages grow if we’re using MEF to compose our ViewModel.

    Ideally, we’d like to be able to have MEF construct our ViewModel for us.  This way, it can provide any construction requirements for our ViewModel via [ImportingConstructor], and it can handle fully composing the imported properties on our ViewModel.  CompositionInitializer allows this to occur. We use CompositionInitializer within our View’s constructor, and use it for self-composition of our View.  Using CompositionInitializer, we can modify our code behind to:

    public partial class MainView : Window
    {
        public MainView()
        {
            InitializeComponent();
            CompositionInitializer.SatisfyImports(this);
        }

        [Import("MainViewModel")]
        public object ViewModel
        {
            get { return this.DataContext; }
            set { this.DataContext = value; }
        }
    }

    We then can add an Export on our ViewModel class like so:

    [Export("MainViewModel")]
    public class MainViewModel
    {
        public string TheText
        {
            get { return "Hello World!"; }
        }
    }

    MEF will automatically compose our application, deferring our ViewModel injection into the DataContext of our View until runtime.  When we run this, we’ll see the text from our ViewModel displayed in the window. There are many other approaches for using MEF to wire up the extensible parts within your application, of course.
However, any time an object is going to be constructed by code outside of your control, CompositionInitializer allows us to continue to use MEF to satisfy the import requirements of that object. In order to use this from WPF, I’ve ported the code from MEF Preview 9 and Glenn Block’s (now obsolete) PartInitializer port to Windows Presentation Foundation.  There are some subtle changes from the Silverlight port, mainly to handle running in a desktop application context.  The default behavior of my port is to construct an AggregateCatalog containing a DirectoryCatalog set to the location of the entry assembly of the application.  In addition, if an “Extensions” folder exists under the entry assembly’s directory, a second DirectoryCatalog for that folder will be included.  This behavior can be overridden by specifying a CompositionContainer or one or more ComposablePartCatalogs to the System.ComponentModel.Composition.Hosting.CompositionHost static class prior to the first use of CompositionInitializer. Please download CompositionInitializer and CompositionHost for VS 2010 RC, and contact me with any feedback. Composition.Initialization.Desktop.zip Edit on 3/29: Glenn Block has since updated his version of CompositionInitializer (and ExportFactory<T>!), and made it available here: http://cid-f8b2fd72406fb218.skydrive.live.com/self.aspx/blog/Composition.Initialization.Desktop.zip This is a .NET 3.5 solution, and should soon be pushed to CodePlex, and made available on the main MEF site.
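    As a rough sketch, overriding those defaults from App.OnStartup could look like the following (this assumes my port keeps the Silverlight-style Initialize overloads; the catalogs used here are only placeholders):

    using System.ComponentModel.Composition.Hosting;
    using System.Windows;

    public partial class App : Application
    {
        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);

            // Replace the default entry-assembly DirectoryCatalog with explicit catalogs.
            var catalog = new AggregateCatalog(
                new AssemblyCatalog(typeof(App).Assembly),
                new DirectoryCatalog("Plugins"));

            // This must run before the first call to CompositionInitializer.SatisfyImports.
            CompositionHost.Initialize(catalog);
        }
    }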

    Read the article

  • Download the ‘Getting Started with Ubuntu 12.10' Manual for Free

    - by Asian Angel
    Today is the official release date for Ubuntu’s latest version, so why not download the manual to go with it? This free manual is available to view online or download as a 145-page PDF file, whichever best suits your needs. The home page for the manual will display a large Download button, but the best option is to click on the Alternative Download Options link. Clicking on the Alternative Download Options link will let you select the language version you want, choose a system version, and let you download the manual directly or view it online.

    Read the article

  • Kubuntu desktop icons disappearing on reboot

    - by 3mpty
    I recently installed Kubuntu 14.04. I have the desktop layout set as Folder View and everything is fine, but sometimes when I reboot it shows this: So all I can see is a scrollbar and no icons. If I go to Desktop Settings and switch to another layout and then switch again to Folder View everything is fine and the icons (and the "normal" desktop) reappear. This happens often when I reboot but not every time. Any ideas? Thank you very much.

    Read the article

  • “File does not exist” in Apache error log when mod_rewrite is used

    - by Nithin
    I am getting the below error in the server log when rewriting the URLs:

    [Fri Jan 25 11:32:57 2013] [error] [client ***IP***] File does not exist: /home/testserver/public_html/testing/flats-in-delhi-for-sale, referer: http://domain.in/testing/flats-in-delhi-for-sale/

    I searched everywhere, but have not found any solution! My .htaccess config is given below:

    Options +FollowSymLinks
    Options All -Indexes
    ErrorDocument 404 http://domain.in/testing/404.php
    RewriteEngine On
    #Category Link
    RewriteRule ^([a-zA-Z]+)-in-([a-zA-Z]+)-([a-zA-Z-]+)/?$ view-category.php?type=$1&dis=$2&cat=$3 [NC,L]
    #Single Property Link
    RewriteRule ^([a-zA-Z]+)-in-([a-zA-Z]+)-([a-zA-Z-]+)/([a-zA-Z0-9-]+)/?$ view-property.php?type=$1&district=$2&category=$3&title_alias=$4 [NC,L]

    I also found a similar old question, but no answer :( (http://webmasters.stackexchange.com/questions/16606/file-does-not-exist-in-apache-error-log) Thanks in advance for your help. PS: My site is working fine even though the Apache log is showing the error. Nithin

    Read the article

  • Pass a single boolean from an Android App to a libgdx game

    - by Doug Henning
    I'm writing an Android application that needs to pass a single boolean into an Android game that I am also writing. The idea is that the user does something in the App which will affect how the game operates. This is tricky with LIBGDX since I need to get the bool value into the Java files of the game, but of course, you can't call Android specific things from within LIBGDX's main Java files. I tried using an intent but of course the same problem persists. I can get the boolean into the MainActivity.java of the android output of the game, but can't pass it along any further since the android output and the main java files don't know about each other. I have seen a few tutorials that explain how to set up an interface in the LIBGDX java files that can call android things. This seems like wild overkill for what I want to do. I've been trying to use Android's Shared Preferences with LIBGDX's Gdx.app.getPreferences, but I can't make it work. Any help would be MUCH appreciated.

    I've set up two hello world applications. One is a standard Android app, with a single button that is supposed to write "true" into the shared preferences. The other is a standard LIBGDX hello world that is supposed to do nothing but check that bool when launched and if true display one image to the screen, if false, display a different one. Here's the relevant bit of the Android code:

    import android.preference.PreferenceManager;

    public void onClick(View view) {
        if (view == this.boolButton) {
            final String PREF_FILE_NAME = "myBool";
            SharedPreferences preferences = getSharedPreferences(PREF_FILE_NAME, MODE_WORLD_WRITEABLE);
            SharedPreferences.Editor editor = preferences.edit();
            editor.putBoolean("myBool", true);
            editor.commit();
        }
    }

    And here's the relevant bit of the code from the LIBGDX main file:

    Preferences prefs = Gdx.app.getPreferences("myBool");
    boolean switcher = prefs.getBoolean("myBool");
    if (switcher == true) {
        texture = new Texture(Gdx.files.internal("data/worked512.png"));
        prefs.putBoolean("myBool", false);
    } else {
        texture = new Texture(Gdx.files.internal("data/libgdx.png"));
    }

    Everything compiles fine, it just doesn't work. I've spent HOURS googling trying to find a way to pass this single boolean from android into a LIBGDX main and I'm totally stumped. Thanks for your help.

    Read the article

  • The entity type String is not part of the model for the current context error [migrated]

    - by Michael V
    I am getting the following error in my controller after the view submits the collection: The entity type String is not part of the model for the current context.

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.InvalidOperationException: The entity type String is not part of the model for the current context.
    Source Error:
    Line 51: foreach (var survey in mysurveys)
    Line 52: {
    Line 53:     db.Entry(survey).State = EntityState.Modified;
    Line 54:
    Line 55:     // db.Entry(survey).State = EntityState.Modified;

    Here is the code:

    [HttpPost]
    public ActionResult UpdateTest(FormCollection mysurveys)
    {
        System.Diagnostics.Debug.WriteLine("iam in test post" + mysurveys.Count);
        foreach (var survey in mysurveys)
        {
            db.Entry(survey).State = EntityState.Modified;
        }
        db.SaveChanges();
        return View(mysurveys);
    }

    Similar code with one record only (no foreach) works fine.

    Read the article

  • Entity Framework version 1 – Brief Synopsis and Tips – Part 1

    - by Rohit Gupta
    To do eager loading use projections (e.g. from c in context.Contacts select c, c.Addresses) or use Include Query Builder Methods (Include(“Addresses”)). If there is multi-level hierarchical data then to eager load all the relationships use Include Query Builder methods like customers.Include("Order.OrderDetail") to include Order and OrderDetail collections, or use customers.Include("Order.OrderDetail.Location") to include all Order, OrderDetail and Location collections with a single Include statement.
    ===========================================================================
    If the query uses Joins then the Include() Query Builder method will be ignored; use nested queries instead. If the query does projections then the Include() Query Builder method will be ignored. Use Address.ContactReference.Load() OR Contact.Addresses.Load() if you need to deferred-load a specific entity – this will result in extra round trips to the database. ObjectQuery<> cannot return anonymous types... it will return an ObjectQuery<DBDataRecord>. Only the Include method can be added to LINQ query methods. Any LINQ query method can be added to Query Builder methods. If you need to append a Query Builder method (other than Include) after a LINQ method then cast the IQueryable<Contact> to ObjectQuery<Contact> and then append the Query Builder method to it.
    ===========================================================================
    Query Builder methods are Select, Where, Include methods which use Entity SQL as parameters, e.g. "it.StartDate, it.EndDate". When Query Builder methods do projection they return ObjectQuery<DBDataRecord>, thus to iterate over this collection use contact.Item[“Name”].ToString(). When LINQ to Entities methods do projection, they return a collection of anonymous types --- thus the collection is strongly typed and supports IntelliSense. The EF Object Context can track changes only on Entities, not on anonymous types. If you use a Defining Query for an EntitySet then the EntitySet becomes read-only since a Defining Query is the same as a View (which is treated as read-only by default). However if you want to use this EntitySet for inserts/updates/deletes then we need to map stored procs (as created in the DB) to the insert/update/delete functions of the Entity in the Designer. You can use either the Execute method or the ToList() method to bind data to datasources/bindingsources. If you use the Execute method then remember that you can traverse through the ObjectResult<> collection (returned by Execute) only ONCE. In WPF use ObservableCollection to bind to data sources, for keeping track of changes and letting EF send updates to the DB automatically. Use extension methods to add logic to Entities, e.g. create extension methods for the EntityObject class. Or create a method in the ObjectContext partial class and pass the entity as a parameter, then call this method as desired from within each entity.
    ================================================================
    DefiningQueries and Stored Procedures: For Custom Entities, one can use a DefiningQuery or Stored Procedures. Thus the Custom Entity Collection will be populated using the DefiningQuery (of the EntitySet) or the Sproc. If you use a Sproc to populate the EntityCollection then the query execution is immediate; this execution happens on the Server side and any filters applied will be applied in the Client App.
    If we use a DefiningQuery then these queries are composable, meaning the filters (if applied to the EntitySet) will all be sent together as a single query to the DB, returning only filtered results. If the sproc returns results that cannot be mapped to an existing entity, then we first create the Entity/EntitySet in the CSDL using the Designer, then create a dummy Entity/EntitySet using XML in the SSDL. When creating an EntitySet in the SSDL for this dummy entity, use TSQL that does not return any results, but does return the relevant columns, e.g. select ContactID, FirstName, LastName from dbo.Contact where 1=2. Also ensure that the Entity created in the SSDL uses the SQL DataTypes and not .NET DataTypes. If you are unable to open the EDMX file in the designer then note the errors... they will give precise info on what is wrong.
    The third option is to simply create a Native Query in the SSDL using:

    <Function Name="PaymentsforContact" IsComposable="false">
      <CommandText>SELECT ActivityId, Activity AS ActivityName, ImagePath, Category FROM dbo.Activities</CommandText>
    </Function>

    Then map this Function to an existing Entity. This is a quick way to get a custom Entity which is a regular Entity with renamed columns or additional (computed) columns. The disadvantage to using this is that it will return all the rows from the defining query and any filter (if defined) will be applied only at the Client side (after getting all the rows from the DB). If you use a DefiningQuery instead then we can use it as a composable query.
    The fourth option (for mapping READ stored proc results to a non-existent Entity) is to create a View in the database which returns all the fields that the sproc also returns, then update the Model so that the model contains this View as an Entity. Then map the Read Sproc to this View Entity. The other option would be to simply create the View and remove the sproc altogether.
    ================================================================
    To execute a sproc that does not return an entity, use an EntityCommand to execute that proc. You cannot call a sproc FunctionImport that does not return Entities from code; the only way is to use SSDL function calls using EntityCommand. This changes with Entity Framework Version 4 where you can return Scalar Types, Complex Types, Entities or NonQuery.
    ================================================================
    When a UDF is created as a Function in the SSDL, we need to set the Name & IsComposable properties for the Function element. IsComposable is always false for sprocs; for UDFs set this to true. You cannot call a UDF "Function" from within code since you cannot import a UDF Function into the CSDL Model (with Version 1 of EF); only stored procedures can be imported and then mapped to an entity.
    ================================================================
    Entity Framework requires properties that are involved in association mappings to be mapped in all of the function mappings for the entity (Insert, Update and Delete). Because Payment has an association to Reservation, we need to pass both the paymentId and reservationId to the Delete sproc even though just the paymentId is the PK on the Payment table.
    ================================================================
    When mapping insert, update and delete procs to an Entity, ensure that either all three or none are mapped.
    Further, if you have a base class and derived class in the CSDL, then you must map (insert, update, delete) sprocs to all parent and child entities in the inheritance relationship. Note that this limitation (that base and derived entity methods must all be mapped) does not apply when you are mapping Read stored procedures....
    ================================================================
    You can write stored procedure SQL directly into the SSDL by creating a Function element in the SSDL; once created, you can map this Function to a CSDL Entity directly in the designer during Function Import.
    ================================================================
    You can do Entity Splitting such that one Entity maps to multiple tables in the DB. For example, the Customer Entity currently derives from the Contact Entity... in addition it also references the ContactPersonalInfo Entity. One can copy all properties from the ContactPersonalInfo Entity into the Customer Entity and then delete the ContactPersonalInfo entity; finally one needs to map the copied properties to the ContactPersonalInfo table in Table Mapping (by adding another table (ContactPersonalInfo) to the Table Mapping)... this is called Entity Splitting. Thus now when you insert a Customer record, it will automatically create SQL to insert records into the Contact, Customers and ContactPersonalInfo tables even though you have a single Entity called Customer in the CSDL.
    ===================================================================
    There is Table per Type Inheritance, where one EDM Entity can derive from another EDM entity and absorb the inherited entity's properties; for example in the Break Away Geek Adventures EDM, the Customer entity derives (inherits) from the Contact Entity and absorbs all the properties of the Contact entity. Thus when you create a Customer Entity in code and then call context.SaveChanges, the Object Context will first create the TSQL to insert into the Contact table followed by TSQL to insert into the Customer table.
    ===================================================================
    Then there is Table per Hierarchy Inheritance, where different types are created based on a condition (similar to applying a condition to filter an Entity to contain filtered records)... the difference being that the filter condition populates a new Entity Type (derived from the base Entity). In the BreakAway sample the example is the Lodging Entity, which is an Abstract Entity, and the Resort and NonResort Entities which derive from the Lodging Entity; records are filtered based on the value of the Resort Boolean field.
    ===================================================================
    Then there is Table per Concrete Type Hierarchy, where we create a concrete Entity for each table in the database. In the BreakAway sample there is an entity for the Reservation table and another Entity for the OldReservation table even though both tables contain the same number of fields.
    The OldReservation Entity can then inherit from the Reservation Entity, and we configure the OldReservation Entity to remove all Scalar Properties from the Entity (since it inherits the properties from Reservation and filters based on the ReservationDate field).
    ===================================================================
    Complex Types (Complex Properties)
    Entities in EF can also contain Complex Properties (in addition to Scalar Properties) and these Complex Properties reference a ComplexType (not an EntityType). DropDownList, ListBox, RadioButtonList, CheckBoxList and BulletedList are examples of list server controls (not data bound controls); these controls cannot use Complex properties during databinding, they need Scalar Properties. So if an Entity contains Complex properties and you need to bind those to list server controls, then use projections to return Scalar properties and bind them to the control (the disadvantage is that projected collections are not tracked by the Object Context and hence cannot persist changes to the projected collections bound to controls). ObjectDataSource and EntityDataSource do account for Complex properties, and one can bind entities with Complex Properties to data source controls and they will be tracked for changes... with no additional plumbing needed to persist changes to these collections bound to controls. So data bound controls like GridView and FormView need to use EntityDataSource or ObjectDataSource as a datasource for entities that contain Complex properties, so that changes to the datasource done using the GridView can be persisted to the DB (enabling the controls for updates)... if you cannot use the EntityDataSource you need to flatten the ComplexType Properties using projections. With EF Version 4, ComplexTypes are supported by the Designer and you can add/remove/compose Complex Types directly using the Designer.
    ===================================================================
    Conditional Mapping... is like Table per Hierarchy Inheritance, where Entities inherit from a base class and then use conditions to populate the EntitySet (called conditional mapping). Conditional Mapping has limitations since you can only use =, IS NULL and IS NOT NULL conditions to do conditional mapping. If you need more operators for filtering/mapping conditionally then use a QueryView (or possibly a Defining Query) to create a read-only entity. QueryViews are read-only by default... the EntitySet created by the QueryView is enabled for change tracking by the ObjectContext, however the ObjectContext cannot create insert/update/delete TSQL statements for these Entities when SaveChanges is called since it is a QueryView. One way to get around this limitation is to map stored procedures for the insert/update/delete operations in the Designer.
    ===================================================================
    Difference between QueryView and Defining Query: a QueryView is defined in the (MSL) Mapping file/section of the EDM XML, whereas the DefiningQuery is defined in the store schema (SSDL). A QueryView is written using Entity SQL and is thus database agnostic and can be used against any database/data layer. A DefiningQuery is written using the database language, i.e. TSQL or PSQL, thus you have more control.
    ===================================================================
    Performance: Lazy loading is deferred loading done automatically. Lazy loading is supported with EF version 4 and is on by default.
    If you need to turn it off then use context.ContextOptions.LazyLoadingEnabled = false. To improve performance, consider precompiling the ObjectQuery using the CompiledQuery.Compile method.
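    As a rough sketch, a precompiled query is declared once and reused (the BAEntities context, Reservations set and ReservationDate property below follow the BreakAway-style sample and are only illustrative):

    using System;
    using System.Data.Objects;
    using System.Linq;

    public static class CompiledQueries
    {
        // Compiled once; later executions skip the LINQ-to-Entities translation step.
        public static readonly Func<BAEntities, DateTime, IQueryable<Reservation>> RecentReservations =
            CompiledQuery.Compile((BAEntities context, DateTime cutoff) =>
                context.Reservations.Where(r => r.ReservationDate >= cutoff));
    }

    // Usage:
    // using (var context = new BAEntities())
    // {
    //     var reservations = CompiledQueries.RecentReservations(context, DateTime.Today.AddYears(-1)).ToList();
    // }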

    Read the article

  • No Preview Images in File Open Dialogs on Windows 7

    I've been updating some file uploader code in my photo album today, and while I was working with the uploader I noticed that the Silverlight File Open dialog that handles the file selections didn't ever allow me to see an image preview for image files. It sure would be nice if I could preview the images I'm about to upload before selecting them from a list. Here's what my list looked like: This is the Medium Icon view, but regardless of the views available, including Content view, only icons are...

    Read the article

< Previous Page | 359 360 361 362 363 364 365 366 367 368 369 370  | Next Page >