Search Results

Search found 8563 results on 343 pages for 'routed events'.

Page 156/343 | < Previous Page | 152 153 154 155 156 157 158 159 160 161 162 163  | Next Page >

  • ASP.NET MVC 3 Hosting :: MVC 2 Strongly Typed HTML Helper and Enhanced Validation Sample

    - by mbridge
    With the official release of ASP.NET MVC 2 RTM, I decided I would put together a quick sample of the enhanced HTML helpers and validation controls. I am going to use my sample event site, where I will have a form so a user can search for information about certain events. So when the Search page loads, the Search action is fired and returns my strongly typed model to the view.

        [HttpGet]
        public ViewResult Search()
        {
            IList<EventsModel> result = _eventsService.GetEventList();
            var viewModel = new EventSearchModel
                                {
                                    EventList = new SelectList(result, "EventCode", "EventName", "Select Event")
                                };
            return View(viewModel);
        }

    Nothing special here, although I did want to show how to load up a strongly typed drop down list, because that hung me up for a little bit. To do that, I am going to pass back a SelectList to the view, and my HTML helper should know how to load this. So let's take a look at the markup for the view.

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
            Inherits="System.Web.Mvc.ViewPage<EventsSample.Models.EventSearchModel>" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Search
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">

            <h2>Search for Events</h2>

            <% using (Html.BeginForm("Search","Events")) {%>
                <%= Html.ValidationSummary(true) %>

                <fieldset>
                    <legend>Fields</legend>

                    <div class="editor-label">
                        <%= Html.LabelFor(model => model.EventNumber) %>
                    </div>
                    <div class="editor-field">
                        <%= Html.TextBoxFor(model => model.EventNumber) %>
                        <%= Html.ValidationMessageFor(model => model.EventNumber) %>
                    </div>

                    <div class="editor-label">
                        <%= Html.LabelFor(model => model.GuestLastName) %>
                    </div>
                    <div class="editor-field">
                        <%= Html.TextBoxFor(model => model.GuestLastName) %>
                        <%= Html.ValidationMessageFor(model => model.GuestLastName) %>
                    </div>

                    <div class="editor-label">
                        <%= Html.LabelFor(model => model.EventName) %>
                    </div>
                    <div class="editor-field">
                        <%= Html.DropDownListFor(model => model.EventName, Model.EventList, "Select Event") %>
                        <%= Html.ValidationMessageFor(model => model.EventName) %>
                    </div>

                    <p>
                        <input type="submit" value="Save" />
                    </p>
                </fieldset>

            <% } %>

            <div>
                <%= Html.ActionLink("Back to List", "Index") %>
            </div>

        </asp:Content>

    A nice feature is the scaffolding that MVC has to generate code. I simply right-clicked inside my Search() action, inside the EventsController, and selected “Add View”. Then I selected the strongly typed object that I wanted to pass to the view and also selected that I wanted the view content to be “Edit”.

    With that, the aspx page was completely generated, although I did have to go back in and change the textbox for the event names to a drop down list of the names to select from. The new feature with MVC 2 is the strongly typed HTML helpers. So now my textboxes, drop down list, and validation helpers are all strongly typed to my model. This feature gives you the benefits of IntelliSense and also makes it easier to debug. “The Gu” has a great post about the feature in case you want more details. The DropDownListFor function to generate the drop down list was a little tricky for me. You first need to use a lambda expression to pass in the property you want the selected value assigned to in your model, and then you need to pass in the list directly from the model.

    Validations

    To validate the form, you can use the strongly typed validation HTML helpers, which will inspect your model and return errors if the validation fails. The definitions of these rules are set directly on the model itself, so let's take a look.

        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;

        namespace EventsSample.Models
        {
            public class EventSearchModel
            {
                [Required(ErrorMessage = "Please enter the event number.")]
                [RegularExpression(@"\w{6}",
                    ErrorMessage = "The Event Number must be 6 letters and/or numbers.")]
                public string EventNumber { get; set; }

                [Required(ErrorMessage = "Please enter the guest's last name.")]
                [RegularExpression(@"^[A-Za-zÀ-ÖØ-öø-ÿ1-9 '\-\.]{1,22}$",
                    ErrorMessage = "The guest's last name must be 1 to 22 characters.")]
                public string GuestLastName { get; set; }

                public string EventName { get; set; }
                public SelectList EventList { get; set; }
            }
        }

    Pretty cool! Okay, the only thing left to do is perform the validation in the POST action.

        [HttpPost]
        public ViewResult Search(EventSearchModel eventSearchModel)
        {
            if (ModelState.IsValid) return View("SearchResults");

            IList<EventsModel> result = _eventsService.GetEventList();
            eventSearchModel.EventList = new SelectList(result, "EventCode", "EventName");

            return View(eventSearchModel);
        }

    If the form entries are valid, here I am simply displaying the SearchResults view, but in a real world sample I would also go out and get the results first. You get the idea though. In my case, when the form is not valid, I also had to reload my SelectList with the event names before I loaded the page again. Remember, this is MVC, no ViewState here :) So that's it. Now my form is validating the data and displaying the error messages when validation fails.

    Read the article

  • [Apache] mod_rewrite www.site.com/dir/ --> www.site.com/dir/2009/

    - by Casey
    I'm having trouble with this rewrite. I've never really used mod_rewrite before and don't have much experience with regex. Any help is appreciated!

        <IfModule mod_rewrite.c>
            Options +FollowSymLinks
            RewriteEngine on

            # prevent nested looping
            RewriteCond %{ENV:REDIRECT_STATUS} ^$

            # re-route incoming requests
            RewriteRule ^(.*)$ %{REQUEST_URI}2009/$1 [L,NE]
        </IfModule>

    This partially works: http://www.site.com/dir/ is routed to http://www.site.com/dir/2009/, but a request like http://www.site.com/dir/css/theme.css fails. I'm hoping to rewrite all requests to the parent directory into the 2009 subdirectory, but I keep encountering infinite loops and server error messages. I haven't found any useful examples out there. I figured this would be a common rewrite... Thanks in advance!

    Read the article

  • Mounting a Microsoft Azure CloudDrive in a VMRole

    - by SeanBarlow
    Mounting a drive in a VMRole is a little more complicated than in a web or worker role. The web and worker roles offer OnStart and OnStop events, which you can use to mount or unmount your drives. The VMRole does not have these same events, so you have to provide another way for the drives to be mounted or unmounted. The problem I have run into is: what if you have multiple drives and you only want to mount certain drives? How do you let your user mount the drive? I am not going to go into details on what kind of GUI to present to the user; I have done this in a simple WPF application as well as a console application.

    We are going to need to get the storage account details. One thing to note: when you are mounting cloud drives you cannot use https and have to use http. We force the use of http by passing false when we create the CloudStorageAccount.

        StorageCredentialsAccountAndKey credentials = new StorageCredentialsAccountAndKey("AccountName", "AccountKey");
        CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, false);

    Next we need to get a reference to the container.

        var blobClient = storageAccount.CreateCloudBlobClient();
        var container = blobClient.GetContainerReference("ContainerName");

    Now we need to get a list of the drives in the container.

        var drives = container.ListBlobs();

    Now that we have a list of the drives in the container, we can let the user choose which drive they want to mount. I am just selecting the first drive in the list for the example and getting the Uri of the drive.

        var driveUri = drives.First().Uri;

    Now that we have the Uri, we need to get a reference to the drive.

        var drive = new CloudDrive(driveUri, storageAccount.Credentials);

    Now all that is left is to mount the drive.

        var driveLetter = drive.Mount(0, DriveMountOptions.None);

    To unmount the drive, all you have to do is call Unmount on the drive.

        drive.Unmount();

    You do need to make sure you unmount the drives when you are done with them. I have run into issues with drives being locked until the VMRole is rebooted. I have also managed to have a drive be permanently locked, and I was forced to delete it and upload it again. I have been unable to reproduce the permanent lock, but I am still trying. The CloudDrive class provides a handy method to retrieve all the mounted drives in the role.

        foreach (var drive in CloudDrive.GetMountedDrives())
        {
            var mountedDrive = Account.CreateCloudDrive(drive.Value.PathAndQuery);
            mountedDrive.Unmount();
        }
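    Putting those pieces together, here is a minimal consolidated sketch built only from the calls shown above (Windows Azure SDK 1.x StorageClient/CloudDrive APIs); the account name, key and container name are placeholders, and error handling is omitted.

        // Minimal sketch assembled from the snippets above. "AccountName", "AccountKey"
        // and "ContainerName" are placeholders for your own values.
        using System.Linq;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public static class VmRoleDriveHelper
        {
            public static string MountFirstDrive()
            {
                // Cloud drives require http, hence the 'false' for useHttps.
                var credentials = new StorageCredentialsAccountAndKey("AccountName", "AccountKey");
                var storageAccount = new CloudStorageAccount(credentials, false);

                // List the drive blobs in the container and pick the first one.
                var container = storageAccount.CreateCloudBlobClient().GetContainerReference("ContainerName");
                var driveUri = container.ListBlobs().First().Uri;

                // Mount the drive and return the drive letter assigned to it.
                var drive = new CloudDrive(driveUri, storageAccount.Credentials);
                return drive.Mount(0, DriveMountOptions.None);
            }

            public static void UnmountAll(CloudStorageAccount storageAccount)
            {
                // Unmount everything this role instance currently has mounted,
                // so drives are not left locked when the role goes away.
                foreach (var mounted in CloudDrive.GetMountedDrives())
                {
                    var drive = storageAccount.CreateCloudDrive(mounted.Value.PathAndQuery);
                    drive.Unmount();
                }
            }
        }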

    Read the article

  • Post SQL 2008 R2 Launch Thurs 15th London - UK SQL Server User Group is having a Social Event @ the

    - by tonyrogerson
    The UK SQL Server User Group is organising a Social event for SQL and SQL Server professionals, the event will be held after the SQL Server 2008 R2 launch event and is a short walk from that venue. See site for more information: http://sqlserverfaq.com/events/222/Social-for-SQL-and-SQL-Server-professionals-SQL-quiz-meet-your-peers-ask-the-group-Q-A.aspx We are putting some light bites on, if you are coming then do let us know through the site. Neil Hambly who is the London UK SQL Server User Group...(read more)

    Read the article

  • Anatomy of a .NET Assembly - CLR metadata 1

    - by Simon Cooper
    Before we look at the bytes comprising the CLR-specific data inside an assembly, we first need to understand the logical format of the metadata (for this post I'll only be looking at simple pure-IL assemblies; mixed-mode assemblies and other things complicate matters quite a bit).

    Metadata streams

    Most of the CLR-specific data inside an assembly is inside one of 5 streams, which are analogous to the sections in a PE file. The name of each section in a PE file starts with a ., and the name of each stream in the CLR metadata starts with a #. All but one of the streams are heaps, which store unstructured binary data. The predefined streams are:

    - #~ - Also called the metadata stream, this stream stores all the information on the types, methods, fields, properties and events in the assembly. Unlike the other streams, the metadata stream has predefined contents and structure.
    - #Strings - This heap is where all the namespace, type and member names are stored. It is referenced extensively from the #~ stream, as we'll see later.
    - #US - Also known as the user string heap, this stream stores all the strings used in code directly. All the strings you embed in your source code end up in here. This stream is only referenced from method bodies.
    - #GUID - This heap exclusively stores GUIDs used throughout the assembly.
    - #Blob - This heap is for storing pure binary data - method signatures, generic instantiations, that sort of thing.

    Items inside the heaps (#Strings, #US, #GUID and #Blob) are indexed using a simple binary offset from the start of the heap. At that offset is a coded integer giving the length of that item, then the item's bytes immediately follow. The #GUID stream is slightly different, in that GUIDs are all 16 bytes long, so a length isn't required.

    Metadata tables

    The #~ stream contains all the assembly metadata. The metadata is organised into 45 tables, which are binary arrays of predefined structures containing information on various aspects of the metadata. Each entry in a table is called a row, and the rows are simply concatenated together in the file on disk. For example, each row in the TypeRef table contains, in that order:

    1. A reference to where the type is defined (most of the time, a row in the AssemblyRef table).
    2. An offset into the #Strings heap with the name of the type.
    3. An offset into the #Strings heap with the namespace of the type.

    The important tables are (with their table number in hex):

    - 0x2: TypeDef, 0x4: FieldDef, 0x6: MethodDef, 0x14: EventDef, 0x17: PropertyDef - Contain basic information on all the types, fields, methods, events and properties defined in the assembly.
    - 0x1: TypeRef - The details of all the referenced types defined in other assemblies.
    - 0xa: MemberRef - The details of all the referenced members of types defined in other assemblies.
    - 0x9: InterfaceImpl - Links the types defined in the assembly with the interfaces that type implements.
    - 0xc: CustomAttribute - Contains information on all the attributes applied to elements in this assembly, from method parameters to the assembly itself.
    - 0x18: MethodSemantics - Links properties and events with the methods that comprise the get/set or add/remove methods of the property or event.
    - 0x1b: TypeSpec, 0x2b: MethodSpec - These tables provide instantiations of generic types and methods for each usage within the assembly.

    There are several ways to reference a single row within a table. The simplest is to specify the 1-based row index (RID). The indexes are 1-based so a value of 0 can represent 'null'. In this case, which table the row index refers to is inferred from the context. If the table can't be determined from the context, then a particular row is specified using a token. This is a 4-byte value with the most significant byte specifying the table, and the other 3 specifying the 1-based RID within that table. This is generally how a metadata table row is referenced from the instruction stream in method bodies. The third way is to use a coded token, which we will look at in the next post.

    So, back to the bytes

    Now that we've got a rough idea of how the metadata is logically arranged, we can look at the bytes comprising the start of the CLR data within an assembly. The first 8 bytes of the .text section are used by the CLR loader stub. After that, the CLR-specific data starts with the CLI header. I've highlighted the important bytes in the diagram. In order, they are:

    - The size of the header. As the header is a fixed size, this is always 0x48.
    - The CLR major version. This is always 2, even for .NET 4 assemblies.
    - The CLR minor version. This is always 5, even for .NET 4 assemblies, and seems to be ignored by the runtime.
    - The RVA and size of the metadata header. In the diagram, the RVA 0x20e4 corresponds to the file offset 0x2e4.
    - Various flags specifying if this assembly is pure-IL, whether it is strong name signed, and whether it should be run as 32-bit (this is how the CLR differentiates between x86 and AnyCPU assemblies).
    - A token pointing to the entrypoint of the assembly. In this case, 06 (the last byte) refers to the MethodDef table, and 01 00 00 refers to the first row in that table.
    - (after a gap) The RVA of the strong name signature hash, which comes straight after the CLI header. The RVA 0x2050 corresponds to file offset 0x250.

    The rest of the CLI header is mainly used in mixed-mode assemblies, and so is zeroed in this pure-IL assembly. After the CLI header comes the strong name hash, which is a SHA-1 hash of the assembly using the strong name key. After that come the bodies of all the methods in the assembly, concatenated together. Each method body starts off with a header, which I'll be looking at later. As you can see, this is a very small assembly with only 2 methods (an instance constructor and a Main method). After that, near the end of the .text section, comes the metadata, containing a metadata header and the 5 streams discussed above. We'll be looking at this in the next post.

    Conclusion

    The CLI header data doesn't have much to it, but we've covered some concepts that will be important in later posts - the logical structure of the CLR metadata and the overall layout of CLR data within the .text section. Next, I'll have a look at the contents of the #~ stream, and how the table data is arranged on disk.
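    As a small aside (not from the original post), the token layout described above is easy to decode by hand: the entrypoint token in the example, 0x06000001, splits into table 0x06 (MethodDef) and RID 1. A minimal C# sketch of that split:

        // Minimal sketch (not from the original post): split a 4-byte metadata token
        // into its table byte and 1-based row index (RID), as described above.
        using System;

        static class MetadataTokenDemo
        {
            static void Decode(uint token)
            {
                byte table = (byte)(token >> 24);   // most significant byte = table number
                uint rid = token & 0x00FFFFFF;      // remaining 3 bytes = 1-based RID

                Console.WriteLine("Table 0x{0:x2}, RID {1}", table, rid);
            }

            static void Main()
            {
                // The entrypoint token from the CLI header example: bytes 01 00 00 06 on disk,
                // read as the little-endian value 0x06000001.
                Decode(0x06000001);   // prints: Table 0x06, RID 1
            }
        }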

    Read the article

  • Linux policy routing - packets not coming back

    - by Bugsik
    I am trying to set up policy routing on my home server. My network looks like this:

        Host                  routed VPN gateway           Internet link through VPN
        192.168.0.35/24  ---> 192.168.0.5/24   --------->  192.168.0.1  DSL router
                              10.200.2.235/22  .........>  10.200.0.1   VPN server

    The traffic from 192.168.0.32/27 should be and is routed through VPN. I wanted to define some routing policies to route some traffic from 192.168.0.5 through VPN as well - for a start, from the user with uid 2000. Policy routing is done using the iptables mark target and ip rule fwmark.

    The problem: when connecting as user 2000 from 192.168.0.5, tcpdump shows outgoing packets, but nothing comes back. Traffic from 192.168.0.35 works fine (here I am not using fwmark but a src policy). Here is my VPN gateway setup:

        # uname -a
        Linux placebo 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:49:02 UTC 2012 i686 i686 i386 GNU/Linux
        # iptables -V
        iptables v1.4.12
        # ip -V
        ip utility, iproute2-ss111117

    IPtables rules (all policies in table filter are ACCEPT):

        # iptables -t mangle -nvL
        Chain PREROUTING (policy ACCEPT 770K packets, 314M bytes)
         pkts bytes target prot opt in  out  source     destination

        Chain INPUT (policy ACCEPT 767K packets, 312M bytes)
         pkts bytes target prot opt in  out  source     destination

        Chain FORWARD (policy ACCEPT 5520 packets, 1920K bytes)
         pkts bytes target prot opt in  out  source     destination

        Chain OUTPUT (policy ACCEPT 782K packets, 901M bytes)
         pkts bytes target prot opt in  out  source     destination
           74  4707 MARK   all  --  *   *    0.0.0.0/0  0.0.0.0/0    owner UID match 2000 MARK set 0x3

        Chain POSTROUTING (policy ACCEPT 788K packets, 903M bytes)
         pkts bytes target prot opt in  out  source     destination

        # iptables -t nat -nvL
        Chain PREROUTING (policy ACCEPT 996 packets, 51172 bytes)
         pkts bytes target prot opt in  out  source     destination

        Chain INPUT (policy ACCEPT 7 packets, 432 bytes)
         pkts bytes target prot opt in  out  source     destination

        Chain OUTPUT (policy ACCEPT 1364 packets, 112K bytes)
         pkts bytes target prot opt in  out  source     destination

        Chain POSTROUTING (policy ACCEPT 2302 packets, 160K bytes)
         pkts bytes target     prot opt in  out  source     destination
          119  7588 MASQUERADE all  --  *   vpn  0.0.0.0/0  0.0.0.0/0

    Routing:

        # ip addr show
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lan state UNKNOWN qlen 1000
            link/ether 00:40:63:f9:c3:8f brd ff:ff:ff:ff:ff:ff
               valid_lft forever preferred_lft forever
        3: lan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
            link/ether 00:40:63:f9:c3:8f brd ff:ff:ff:ff:ff:ff
            inet 192.168.0.5/24 brd 192.168.0.255 scope global lan
            inet6 fe80::240:63ff:fef9:c38f/64 scope link
               valid_lft forever preferred_lft forever
        4: vpn: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
            link/none
            inet 10.200.2.235/22 brd 10.200.3.255 scope global vpn

        # ip rule show
        0:      from all lookup local
        32764:  from all fwmark 0x3 lookup VPN
        32765:  from 192.168.0.32/27 lookup VPN
        32766:  from all lookup main
        32767:  from all lookup default

        # ip route show table VPN
        default via 10.200.0.1 dev vpn
        10.200.0.0/22 dev vpn proto kernel scope link src 10.200.2.235
        192.168.0.0/24 dev lan proto kernel scope link src 192.168.0.5

        # ip route show
        default via 192.168.0.1 dev lan metric 100
        10.200.0.0/22 dev vpn proto kernel scope link src 10.200.2.235
        192.168.0.0/24 dev lan proto kernel scope link src 192.168.0.5

    TCP dump showing no traffic coming back when the connection is made from 192.168.0.5 as user 2000:

        # tcpdump -i vpn
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on vpn, link-type RAW (Raw IP), capture size 65535 bytes

        ### Traffic from user 2000 on 192.168.0.5 ###
        10:19:05.629985 IP 10.200.2.235.37291 > 10.100-78-194.akamai.com.http: Flags [S], seq 2868799562, win 14600, options [mss 1460,sackOK,TS val 6887764 ecr 0,nop,wscale 4], length 0
        10:19:21.678001 IP 10.200.2.235.37291 > 10.100-78-194.akamai.com.http: Flags [S], seq 2868799562, win 14600, options [mss 1460,sackOK,TS val 6891776 ecr 0,nop,wscale 4], length 0

        ### Traffic from 192.168.0.35 ###
        10:23:12.066174 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [S], seq 2294159276, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 557451322 ecr 0,sackOK,eol], length 0
        10:23:12.265640 IP 10.100-78-194.akamai.com.http > 10.200.2.235.49247: Flags [S.], seq 2521908813, ack 2294159277, win 14480, options [mss 1367,sackOK,TS val 388565772 ecr 557451322,nop,wscale 1], length 0
        10:23:12.276573 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [.], ack 1, win 8214, options [nop,nop,TS val 557451534 ecr 388565772], length 0
        10:23:12.293030 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [P.], seq 1:480, ack 1, win 8214, options [nop,nop,TS val 557451552 ecr 388565772], length 479
        10:23:12.574773 IP 10.100-78-194.akamai.com.http > 10.200.2.235.49247: Flags [.], ack 480, win 7776, options [nop,nop,TS val 388566081 ecr 557451552], length 0

    Read the article

  • SQLRally and SQLRally - Session material

    - by Hugo Kornelis
    I had a great week last week. First at SQLRally Nordic, in Stockholm, where I presented a session on how improvements to the OVER clause can help you simplify queries in SQL Server 2012 enormously. And then I continued straight on into SQLRally Amsterdam, where I delivered a session on the performance implications of using user-defined functions in T-SQL. I understand that both events will make my slides and demo code downloadable from their website, but this may take a while. So those who do not...(read more)

    Read the article

  • Google Analytics and jQuery, happy together

    - by webbes
    Google Analytics is great out of the box already, but you can do much more than just registering your page loads. Especially with all these “Web 2.0” sites it can be convenient to not register page loads, but events! In this blog post I’ll show you how you can use jQuery in combination with Google Analytics to get a great insight on what actually happens on your website while you’re not looking!...(read more)

    Read the article

  • Huawei E3276 LTE uplink slow in the routing Ubuntu, but not with other devices in the LAN

    - by Mytomi
    I have a Huawei E3276 LTE dongle (12d1:14fe - 12d1:1506) and a problem with the upstream speed. The problem is present not only with Ubuntu 14.04 LTS (64 bit workstation, kernel 3.16), but also with Raspbian Jessie for Raspberry PI (kernel 3.14). Upstream always seems to be limited to 5 Mbit/s whenever I check the speed from the Linux computer that I use as an LTE router. The other computers in the LAN always get about 10-15 Mbit/s upstream, even though the traffic is routed through the same Linux computer suffering from the seemingly capped uplink. Downstream speed is always fine, 25 Mbit/s. I even installed Windows 7 on the same computer as Ubuntu, and the speeds are 25 Mbit/s down, 15 Mbit/s up. So the problem is not with the E3276 device itself or with the mobile subscription, but with the Huawei E3276's Linux compatibility. Maybe something in the kernel? I have made sure that the problem is not with the iptables rules: the speed does not noticeably increase when iptables is disabled. Turning off IPv4 forwarding does not improve the speed either. I'm not sure which settings and logs would help in debugging the situation. Please ask for more details if you have a clue what might be wrong. Thanks, Mytomi

    Read the article

  • links for 2011-01-06

    - by Bob Rhubart
    Coming to your town: Oracle Enterprise Cloud Summit - During these full-day events, cloud experts will share real-world best practices, reference architectures, detailed customer case studies, and more. Events scheduled in cities around the world. (tags: oracle otn cloud event)

    Webcast: Security and Compliance for Private Cloud Consolidation - Roxana Bradescu, Senior Director for Oracle Database Security Products, discusses Oracle Database Security Solutions to securely consolidate data and meet compliance requirements within private cloud computing environments. Thursday, January 13, 2011. 10am PST | 1pm EST (tags: oracle cloud security)

    Answering Questions about Mobile Devices | The AppsLab - "How do the numbers of Android and iOS users compare? How often are people switching? Where are all these BlackBerry and Nokia users? Do they plan to jump to Android or iOS? What about webOS? Is it relevant?" Some answers in this AppsLab survey. (tags: oracle otn enterprise2.0 mobilecomputing iphone blackberry android)

    Webcast: Achieve 24/7 Cloud Availability Without Expensive Redundancy - Ashish Ray and Matthew Baier discuss Oracle’s Maximum Availability Architecture and Oracle Database 11g. (tags: oracle cloud highavailability webcast)

    Converting a PV vm back into an HVM vm (Wim Coekaerts Blog) - "I wanted to convert one of my VMs that was based on a paravirt kernel into a vm that just boots as a regular hardware virt VM with a standard x86-64 kernel...It took me a little while to figure out the fastest way so now that I have it pretty much down I wanted to share the steps." - Wim Coekaerts (tags: oracle otn virtualization oraclevm)

    @OTN_Garage: Resources for VirtualBox 4.0 - Rick "@OTN_Garage" Ramsey shares links to several resources for those with a VirtualBox jones. (tags: oracle otn virtualization virtualbox)

    'Federal Service Bus' Helps Belgian Government Speak a Common Language - SOA in Action Blog - "The first SOA-enabled application was developed in less than two months and was fully operational in approximately 10 weeks. In addition, new FSB modules are reusable for other Belgian e-government applications, saving both time and taxpayer dollars." - Joe McKendrick (tags: soa oracle)

    Show Notes: Architects in the Cloud (ArchBeat Podcast) - The complete 4-part interview with Stephen G. Bennett and Archie Reed, the authors of "Silver Clouds, Dark Linings: A Concise Guide to Cloud Computing," is now available. (tags: oracle otn cloud podcast archbeat)

    Read the article

  • Pub banter - content strategy at the ballot box?

    - by Roger Hart
    Last night, I was challenged to explain (and defend) content strategy. Three sheets to the wind after a pub quiz, this is no simple task, but I hope I acquitted myself passably. I say "hope" because there was a really interesting question I couldn't answer to my own satisfaction. I wonder if any of you folks out there in the ethereal internet hive-mind can help me out?

    A friend - a rather concrete thinker who mathematically models complex biological systems for a living - pointed out that my examples were largely rooted in business-to-business web sales and support. He challenged me with: say you've got a political website, so your goal is to have somebody read it and vote for you - how do you measure the effectiveness of that content?

    Well, you would. umm. Oh dear. I guess what we're talking about here, to yank it back to my present comfort zone, is a sales process where your point of conversion is off the site. The political example is perhaps a little below the belt, since what you can and can't do, and what data you can and can't collect, is so restricted. You can't throw up a "How did you hear about this election?" questionnaire in the polling booth. Exit polls don't pull in your browsing history and site session information. Not everyone fatuously tweets and geo-tags each moment of their lives. Oh, and folks lie.

    The business example might be easier to attack. You could have, say, a site for a farm shop that only did over the counter sales. Either way, it's tricky. I fell back on some of the work I've done on usability testing and benchmarking documentation, and suggested similar, quick and dirty, small sample qualitative UX trials. I'm not wholly sure that was right. Any thoughts? How might we measure and curate for this kind of discontinuous conversion?

    Read the article

  • Securing User Account Details with MySQL

    - by Antoinette O'Sullivan
    Keeping user account details secure is always at the forefront of a Database Administrator's mind. However, users want to get up and running as soon as possible without complex login procedures. You can learn more about this and many other topics in the MySQL for Database Administrators course.

    For example, MySQL 5.6.6 introduced a new utility, mysql_config_editor, which makes secure access via MySQL client applications much easier to establish, while still providing a good measure of security. The mysql_config_editor stores a user's authentication details in an encrypted login file called mylogin.cnf. This login file is readable and writable for the user who invokes the utility, and invisible to everyone else. You can use it to collect all your hard-to-remember server locations and passwords, safe in the knowledge that your passwords are never invoked using clear text.

    The MySQL for Database Administrators course is a 5-day instructor-led course which is available as a:

    - Training-on-Demand: Start training within 24 hours of registration, following lecture material at your own pace through streaming video and booking time on a lab environment to suit your schedule.
    - Live-Virtual Event: Attend a live event from your own desk, choosing from a selection of events on the schedule to suit different timezones.
    - In-Class Event: Travel to an education center to attend this course.

    Below is a selection of the events already on the schedule.

        Location                   Date               Delivery Language
        Brisbane, Australia        18 August 2014     English
        Brussels, Belgium          25 August 2014     English
        Sao Paulo, Brazil          2 June 2014        Brazilian Portuguese
        Cairo, Egypt               28 September 2014  Arabic
        London, England            14 July 2014       English
        Belfast, Ireland           15 September 2014  English
        Dublin, Ireland            29 September 2014  English
        Rome, Italy                16 June 2014       Italian
        Seoul, Korea               9 June 2014        Korean
        Petaling Jaya, Malaysia    16 June 2014       English
        Utrecht, Netherlands       25 August 2014     English
        Edinburgh, Scotland        26 June 2014       English
        Madrid, Spain              6 October 2014     Spanish
        Tunis, Tunisia             27 October 2014    French
        Istanbul, Turkey           14 July 2014       Turkish

    To register for an event, request an additional event or learn more about the authentic MySQL curriculum, go to http://education.oracle.com/mysql. To read more about MySQL security, consult the MySQL Reference Manual - http://dev.mysql.com/doc/refman/5.6/en/security.html.

    Read the article

  • Sitronix ST9RM01 touchscreen not working

    - by Josh Kelley
    I have a Sitronix ST9RM01 touchscreen that I'm trying to get working with Ubuntu 12.04. The touchscreen is apparently recognized by Linux and X - the hid_multitouch module is loaded, and lsinput and xinput both list the touchscreen as an input device - but touching the screen does absolutely nothing, and xinput test shows no events. The same touchscreen works just fine in Windows. How can I troubleshoot from here? Any suggestions?

    Read the article

  • Capturing index operations using a DDL trigger

    - by AaronBertrand
    Today on twitter the following question came up on the #sqlhelp hash tag, from DaveH0ward: "Is there a DMV that can tell me the last time an index was rebuilt? (SQL 2008)" My initial response: I don't believe so, you'd have to be monitoring for that... perhaps a DDL trigger capturing ALTER_INDEX? Then I remembered that the default trace in SQL Server (as long as it is enabled) will capture these events. My follow-up response: you can get it from the default trace, blog post forthcoming. So here is...(read more)

    Read the article

  • Kauffman Foundation Selects Stackify to Present at Startup@Kauffman Demo Day

    - by Matt Watson
    Stackify will join fellow Kansas City startups to kick off Global Entrepreneurship Week.

    On Monday, November 12, Stackify, a provider of tools that improve developers’ ability to support, manage and monitor their enterprise applications, will pitch its technology at the Startup@Kauffman Demo Day in Kansas City, Mo. Hosted by the Ewing Marion Kauffman Foundation, the event will mark the start of Global Entrepreneurship Week, the world’s largest celebration of innovators and job creators who launch startups.

    Stackify was selected through a competitive process for a six-minute opportunity to pitch its new technology to investors at Demo Day. In his pitch, Stackify’s founder, Matt Watson, will discuss the current challenges DevOps teams face and reveal how Stackify is reinventing the way software developers provide application support.

    In October, Stackify had successful appearances at two similar startup events. At Tech Cocktail’s Kansas City Mixer, the company was named “Hottest Kansas City Startup,” and it won free hosting service after pitching its solution at St. Louis, Mo.’s Startup Connection.

    “With less than a month until our public launch, events like Demo Day are giving Stackify the support and positioning we need to change the development community,” said Watson. “As a serial technology entrepreneur, I appreciate the Kauffman Foundation’s support of startup companies like Stackify. We’re thrilled to participate in Demo Day and Global Entrepreneurship Week activities.”

    Scheduled to publicly launch in early December 2012, Stackify’s platform gives developers insights into their production applications, servers and databases. Stackify finally provides agile developers safe and secure remote access to look at log files, config files, server health and databases. This solution removes the bottleneck from managers and system administrators who, until now, are the only team members with access. Essentially, Stackify enables development teams to spend less time fixing bugs and more time creating products.

    Currently in beta, Stackify has already been named a “Company to Watch” by Software Development Times, which called the startup “the next big thing.” Developers can register for a free Stackify account on Stackify.com.

    ###

    Stackify: Founded in 2012, Stackify is a Kansas City-based software service provider that helps development teams troubleshoot application problems. Currently in beta, Stackify will be publicly available in December 2012, when agile developers will finally be able to provide agile support. The startup has already been recognized by Tech Cocktail as “Hottest Kansas City Startup” and was named a “Company to Watch” by Software Development Times. To learn more, visit http://www.stackify.com and follow @stackify on Twitter.

    Read the article

  • Using USB to Ethernet with Linux Ubuntu 12.04

    - by Sriram
    Being a newbie, please excuse me if the technical jargon used is not universally accepted :) I have a particular device (say device A) whose USB 2.0 driver is available from the Linux community. A Linux Ubuntu 12.04 based PC is able to detect that device via the available driver. My requirement is to ensure that the PC can exchange commands as well as data with device A over TCP/IP packets. In other words, instead of just a USB based driver, there should be a TCP/IP wrapper over the device's USB driver that still does the same job as the USB driver was doing before.

    I bought a USB (female) to RJ-45 adapter, connected device A's (male) USB to the USB female end of the adapter, and connected the Ethernet end to the router. The PC is also connected to the same router, so that both device A and the PC have IP addresses in the same subnet range. That way the packets produced by device A can be routed to the PC via some binding (not sure how I can achieve this, but that's the conceptual idea).

    Here are the issues I can see as of now:

    1) USB to RJ-45 is just a hardware signal conversion and not a NIC in itself, and hence no MAC/IP address is assigned. Can we bind a virtual NIC created in the PC with this connector?

    2) Are there any USB-to-IP command and data translation wrappers available? e.g. a command for device A on Ethernet is converted to a command for device A on USB, which is then acted upon by the device as a command from the USB driver.

    There is some missing link in my understanding, and hence it would be of great help if you can bounce off some ideas on how I can take this forward so that device A and the PC exchange data over IP. Thanks and Regards, Sriram

    Read the article

  • Rebuilding CoasterBuzz, Part III: The architecture using the "Web stack of love"

    - by Jeff
    This is the third post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch in the next month or two. More: Part I: Evolution, and death to WCF. Part II: Hot data objects.

    I finally hit a point in the re-do of CoasterBuzz where I feel like the major pieces are in place... rewritten, ported and what not, so that I can focus now on front-end design and more interesting creative problems. I've been asked on more than one occasion (OK, just twice) what's going on under the covers, so I figure this might be a good time to explain the overall architecture. As it turns out, I'm using a whole lot of the "Web stack of love," as Scott Hanselman likes to refer to it. Oh that Hanselman.

    First off, at the center of it all, is BizTalk. Just kidding. That's "enterprise architecture" humor, where every discussion starts with how they'll use BizTalk. Here are the bigger moving parts: It's fairly straightforward. A common library lives in a number of Web apps, all of which are (or will be) powered by ASP.NET MVC 4. They all talk to the same database. There is the main Web site, which also has the endpoint for the Silverlight-based Feed app. The cstr.bz site handles redirects, which are generated when news items are published and sent to Twitter. Facebook publishing is handled via the RSS Graffiti Facebook app. The API site handles requests from the Windows Phone app.

    The main site depends very heavily on POP Forums, the open source, MVC-based forum I maintain. It serves a number of functions, primarily handling users. These user objects serve in non-forum roles to handle things like news and database contributions, maintaining track records (coaster nerd for "list of rides I've been on") and, perhaps most importantly, paid club memberships.

    Before I get into more specifics, note that the "glue" for everything is Ninject, the dependency injection framework. I actually prefer StructureMap these days, but I started with Ninject in POP Forums a long time ago. POP Forums has a static class, PopForumsActivation, that news up an instance of the container, and you can call it from wherever. The downside is that the forums require Ninject in your MVC app as the default dependency resolver. At some point, I'll decouple it, but for now it's not in the way. In the general sense, the entire set of apps follows a repository-service-controller-view pattern. Repos just do data access, service classes do business logic, controllers compose and route, views view.

    The forum also provides Scoring Game functionality. The Scoring Game is a reasonably abstract framework to award users points based on certain actions, and then award achievements when a certain number of point events happen. For example, the forum already awards a point when someone plus-one's a post you made. You can set up an achievement that says, "Give the user an award when they've had 100 posts plus'd." It also does zero-point entries into the ledger, so if you make a post, you could award an achievement based on 100 posts made. Wiring the scoring game into CoasterBuzz functionality is just a matter of going to the Ninject container, getting an instance of the event publisher, and passing it events.

    Forum adapters were introduced into POP Forums a few versions ago, and they can intercept the model generated for forum topic lists and threads and designate an alternate view. These are used to make the "Day in Pictures" forum, where users can upload photos as frame-by-frame photo threads. Another adapter adds an association UI, so users can associate specific amusement parks with their trip report posts.

    The Silverlight-based Feed app talks to a simple JSON endpoint in the main app. This uses an underlying library I wrote ages ago, simply called Feeds, that aggregates event information. You inherit from a base class that creates instances of a publisher interface, and then use that class to send it an event type and any number of data fields. Feeds has two publishers: one is to the database, and that's used for the endpoint that talks to the Silverlight app. The second publisher publishes to Twitter, if the event is of the type "news." The wiring is a little strange, because for the new posts and topics events, I'm actually pulling the forum repository classes out of the Ninject container and replacing them with overridden methods to publish. I should probably be doing this at the service class level, but whatever. It's my mess.

    cstr.bz doesn't do anything interesting. It looks up the path, and if it has a match, does a 301 redirect to the long URL. The API site just serves up JSON for the Windows Phone app. The Windows Phone app is Silverlight, of course, and there isn't much to it. It does use the control toolkit, but beyond that, it relies on a simple class that creates a WebClient and calls the server for JSON to deserialize. The same class is now used by the Feed app, which used to use WCF. Simple is better.

    Data access in POP Forums is all straight SQL, because a lot of it was ported from the ASP.NET version. Most CoasterBuzz data access is handled by the Entity Framework, using the code-first model. The context class in this case does a lot of work to make sure that the table and key mapping works, since much of it breaks from the normal conventions of EF. One of the more powerful things you can do with EF, once you understand the little gotchas, is split tables into different entities. For example, a roller coaster photo has everything in the same row, including the metadata, the thumbnail bytes and the image itself. Obviously, if you want to get a list of photos to iterate over in a view, you don't want to get the image data. The use of navigation properties makes it easier to get just what you want.

    The front end includes Razor views in MVC, and jQuery is used for client-side goodness. I'm also using jQuery UI in a few places, for tabs, a dialog box and autocomplete. I'm also, tentatively, using jQuery Mobile. I've already ported most forum views to Mobile, but they need some work as v1.1 isn't finished yet. I'm not sure if I'll ship CoasterBuzz with mobile views or not yet. It's on the radar, but not something in my delivery criteria.

    That covers all of the big frameworks in play. Next time I hope to talk more about the front-end experience, which to me is where most of the fun is these days. Hoping to launch in the next month or two. Getting tired of looking at the old site!
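    As a rough illustration of the repository-service-controller layering and the Ninject wiring described above, here is a minimal sketch; the type names (IClubMembershipRepository, ClubMembershipService and so on) are hypothetical and are not taken from the actual CoasterBuzz or POP Forums code.

        // Hypothetical sketch of the repository-service-controller pattern wired up with
        // Ninject, as described in the post. None of these types come from CoasterBuzz itself.
        using Ninject;

        public interface IClubMembershipRepository
        {
            bool IsMemberActive(int userId);                 // repository: data access only
        }

        public class SqlClubMembershipRepository : IClubMembershipRepository
        {
            public bool IsMemberActive(int userId)
            {
                // A real implementation would query the shared database (EF or straight SQL).
                return true;
            }
        }

        public class ClubMembershipService
        {
            private readonly IClubMembershipRepository _repo;
            public ClubMembershipService(IClubMembershipRepository repo) { _repo = repo; }

            public bool CanAccessClubContent(int userId)     // service: business logic only
            {
                return _repo.IsMemberActive(userId);
            }
        }

        public static class CompositionRoot
        {
            public static IKernel BuildKernel()
            {
                // The container is the "glue": controllers ask for services,
                // services ask for repositories, Ninject supplies both.
                var kernel = new StandardKernel();
                kernel.Bind<IClubMembershipRepository>().To<SqlClubMembershipRepository>();
                kernel.Bind<ClubMembershipService>().ToSelf();
                return kernel;
            }
        }

    A controller would then simply take ClubMembershipService as a constructor parameter and let the MVC dependency resolver, backed by the same kernel, supply it.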

    Read the article

  • Penguins Converge on Austin Texas

    Blog of Helios: "While several cities and states across the US have hosted Linux events, Texas has yet to be home to efforts such as the Ohio LinuxFest, SCALE, or the Atlanta Linux Fest. That is, until now."

    Read the article

  • What are the basic features of an email module in a common web application?

    - by Coral Doe
    When developing an email module, what are the features to have in mind, besides actual email sending? I am talking about an email module that notifies users of events and periodically sends reports. The only other feature I have in mind is maintaining grey/black lists for users that do illegal operations in the system or any other things that may lead to email/domain/IP banning. Is there an etiquette for developing email modules? Are there some references of requirements for such modules?
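    As a rough way to pin down the feature set being asked about, here is a hypothetical C# sketch that restates the requirements mentioned above (event notifications, periodic reports, grey/black lists) as an interface; the names are illustrative only, not an established standard.

        // Hypothetical sketch only - these names are illustrative, not a standard API.
        // It simply restates the features mentioned in the question as an interface.
        using System;
        using System.Collections.Generic;

        public interface IEmailModule
        {
            // Core sending
            void Send(string to, string subject, string body);

            // Notify users when application events occur
            void NotifyOnEvent(string eventName, IEnumerable<string> recipients);

            // Periodically send summary reports
            void ScheduleReport(string reportName, TimeSpan interval, IEnumerable<string> recipients);

            // Grey/black lists for abusive users, to avoid email/domain/IP banning
            void AddToGreyList(string address);
            void AddToBlackList(string address);
            bool IsBlocked(string address);
        }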

    Read the article

  • SSIS and StreamInsight Working Together.

    I have been thinking a lot recently about what it would be like to have StreamInsight and SSIS working together. Well, the CAT team have produced a paper on some of our options here. Here are some of my thoughts.

    There is of course a slight mismatch in their types of usage. StreamInsight is an event stream processing engine capable of operating on new data in the sub-second timeframe. The engine allows you to do real-time analytics and take decisions on events that have potentially only just happened. SSIS on the other hand is a batch processing engine. In general I do not like having to invoke the same package more than once every 90 seconds or so, as it can start to get expensive. Usually when doing batch processing we have an hour or longer of grace before we have to move data from A -> B.

    StreamInsight operates on streams of data. Before anyone mentions it: yes, I know StreamInsight is equally adept at using the IEnumerable interface, but I would argue live streaming and real-time analytics is a primary goal of the product. SSIS does not have an "Always On" button.

    I do not like the idea of embedding StreamInsight inside SSIS using a transform, particularly. It means StreamInsight becomes a batch processing engine, because it can only operate when the SSIS package is running and SSIS is in charge of when that happens.

    If I am to have StreamInsight within SSIS, then I prefer to have StreamInsight on the adapters. This way you can force the adapters to stay open and introduce events into your pipeline. SSIS has a much richer set of transforms out of the box than StreamInsight. Although "Always On" was not a design goal of SSIS, I have used it like this and it works just fine.

    SSIS being called from within StreamInsight - now that excites me (see below).

    For a while now I have been thinking what it would be like to decouple the Data Flow task from the SSIS package and expose it as something with which you can interact. Anything could instantiate this version of a DFT, as it would expose one or more input interfaces and one or more output interfaces. I can imagine that this would be a big hit when moving to "The Cloud" as well. I could see the Data Flow task maybe being hosted in Azure AppFabric or some such layer. StreamInsight would be able to take advantage of this as well.

    I am interested to see where this goes and will be pressing for more meat around the subject when I visit Redmond soon.
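    To make that decoupling idea a little more concrete, here is a purely hypothetical C# sketch of what a directly invokable data flow component might look like; none of these interfaces exist in SSIS or StreamInsight, they only illustrate the "one or more inputs, one or more outputs" shape described above.

        // Purely hypothetical sketch - these interfaces are NOT part of SSIS or StreamInsight.
        // They only illustrate the idea of a Data Flow task decoupled from the package,
        // exposing inputs and outputs that any host (StreamInsight included) could drive.
        using System;

        public interface IDataFlowInput<TRow>
        {
            void Push(TRow row);               // a host pushes rows in (e.g. events from a stream)
            void Complete();                   // signal that no more rows will arrive
        }

        public interface IDataFlowOutput<TRow>
        {
            event Action<TRow> RowAvailable;   // transformed rows come back out
        }

        public interface IStandaloneDataFlow<TIn, TOut>
        {
            IDataFlowInput<TIn> Input { get; }
            IDataFlowOutput<TOut> Output { get; }
        }

        // A trivial pass-through implementation, standing in for a real transform pipeline.
        public class UpperCaseFlow : IStandaloneDataFlow<string, string>, IDataFlowInput<string>, IDataFlowOutput<string>
        {
            public IDataFlowInput<string> Input { get { return this; } }
            public IDataFlowOutput<string> Output { get { return this; } }

            public event Action<string> RowAvailable;

            public void Push(string row)
            {
                var handler = RowAvailable;
                if (handler != null) handler(row.ToUpperInvariant());
            }

            public void Complete() { }
        }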

    Read the article

  • Webcast: The Power to Translate is Now Inside Oracle WebCenter Sites

    - by kellsey.ruppel
    The Power to Translate is Now Inside Oracle WebCenter Sites

    You are invited to a special preview of the Lingotek Inside Oracle WebCenter Sites solution, which will be showcased at Collaborate in Las Vegas later in April. Register Now! Now it's easy to quickly translate your content directly from Oracle WebCenter Sites using the new Lingotek - Inside for Oracle WebCenter Sites integration. Your users will be able to access translated content, nominate content for translation, and even offer to translate content themselves.

    Lingotek - Inside Integration:
    - Content identified and seamlessly viewable within the Lingotek Workbench.
    - Translation completed by: machine and translation memory, community volunteers (crowdsourcing), professional translators.
    - Translated content automatically saved.

    Content within Oracle WebCenter Sites:
    - Related
    - Secured
    - Routed through workflows
    - Published to intranets, web sites, applications

    Oracle WebCenter Sites Web Experience Management enables marketers and business users to easily create and manage contextually relevant, social, and interactive online experiences across multiple channels on a global scale:
    - Drive customer acquisition, brand loyalty, and business success
    - Optimize customer engagement across Web, mobile, and social channels
    - Manage large-scale, multichannel global online presence with integration to enterprise applications

    Register Now! You'll hear from the experts how this can be done. Free 30 Minute Webinar. Date: Tues, Apr 17th. Time: 8:00am MST, 3pm GMT and 4pm CET.

    Win a Kindle Fire: register before April 6th for a chance to win an Amazon Kindle Fire!

    Presenter: Rob Vandenberg, President and CEO of Lingotek, drives the vision while leading the charge to change the future of translation. Rob is a well-known technology industry veteran, and his expertise and knowledge surrounding translation, localization, and internationalization materials, software products, and web content serves as an immeasurable asset to customers' needs and requirements. Rob is a frequent industry speaker and panelist.

    Presenter: Andrew Palmer, Oracle EMEA Alliances Director, WebCenter Sites.

    System Requirements: PC-based attendees require Windows® 7, Vista, XP or 2003 Server; Macintosh®-based attendees require Mac OS® X 10.5 or newer.

    Read the article

  • Some new free tools enter the SQL marketplace

    - by AaronBertrand
    A while back, I started collecting links for free SQL Server resources available to everyone in the community. I created a blog post called "Useful, free resources for SQL Server" to serve as a launching point for the links I'd been collecting. I'm in the process of going back and updating that post, but in the meantime, I wanted to highlight a couple of big events that happened in the past week. Atlantis Interactive: Last week Matt Whitfield (blog | twitter) announced that his company's commercial...(read more)

    Read the article

  • Zyxel p-2602HW-1DA - LAN to WAN routing problems

    - by Garrett
    Hi, I got a new router yesterday (due to a new internet supplier) and now all my requests for my own server (local LAN) are routed directly to the router instead of the server when using DNS. For example, I have a website www.mysite.org running on my server at home (local LAN). From work I can access it via www.mysite.org, which is great. But from home (local LAN) my requests for www.mysite.org get rerouted to the router's web admin interface. My last router didn't do this. My new router is a Zyxel P-2602HW-1DA; my old one was a LinkSys WRT-54GC V. 2.0. There's a rather weird WAN-LAN, WAN-WAN setup interface which I can't really comprehend yet, and the docs are rather vague. Has anyone had the same problem, and can anyone guide me to a solution? It would be nice not to have to write the IP address every time I need to access the server on the local LAN. :) Kind regards, Garrett

    Read the article

  • WCF Data Service Pipeline

    - by Daniel Cazzulino
    For documentation purposes, I just drew the following UML sequence diagrams for the “Astoria” pipeline, using Visual Studio 2010 Ultimate. For a single-entity (or non-batched) request, this is the sequence: [sequence diagram]. For a batch request, this is the sequence instead: [sequence diagram]. The DataService component is your own DataService<T>-derived class, and DataService.ProcessingPipeline refers to the pipeline events exposed by its ProcessingPipeline property.   /kzu
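    For reference, here is a minimal sketch (not from the post) of how the ProcessingPipeline events mentioned above can be hooked in a WCF Data Service; NorthwindEntities is a placeholder context name, and the logging is just illustrative.

        // Minimal sketch (not from the post): subscribing to the processing pipeline events
        // of a WCF Data Services ("Astoria") service. NorthwindEntities is a placeholder.
        using System.Data.Services;
        using System.Data.Services.Common;

        public class NorthwindService : DataService<NorthwindEntities>
        {
            public NorthwindService()
            {
                // ProcessingPipeline exposes events raised around each request and changeset.
                this.ProcessingPipeline.ProcessingRequest   += (s, e) => Log("ProcessingRequest");
                this.ProcessingPipeline.ProcessedRequest    += (s, e) => Log("ProcessedRequest");
                this.ProcessingPipeline.ProcessingChangeset += (s, e) => Log("ProcessingChangeset");
                this.ProcessingPipeline.ProcessedChangeset  += (s, e) => Log("ProcessedChangeset");
            }

            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }

            private static void Log(string stage)
            {
                System.Diagnostics.Trace.WriteLine("Pipeline stage: " + stage);
            }
        }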

    Read the article
