Search Results

Search found 41623 results on 1665 pages for 'type constraints'.


  • Doing TDD Silverlight 4 RC using Visual Studio 2010 RC

    - by user133992
    First, I am glad to see better TDD support in VS2010. Support for generating code stubs from my tests is OK - not as good as more mature TDD plug-ins, but a good start. I am looking for some Silverlight 4.0 TDD best practices. First question: does anyone have links or recommendations? I know the new Silverlight unit test capabilities are much better (Jeff Wilcox's MIX presentation).

    What I am focusing on right now is using TDD to develop pure Silverlight 4.0 Class Library projects - projects without a Silverlight UI project. I've been able to get it to work, but not as cleanly as it should be. I can create an empty VS project, add a Silverlight 4 Class Library project, add a Test Project (not a Silverlight Unit Test Project, but a plain Test Project), and add a simple test in the Test Project such as:

        namespace Calculator.Test
        {
            [TestClass]
            public class CalculatorTests
            {
                [TestMethod]
                public void CalulatorAddTest()
                {
                    Calc c = new Calc();
                    int expected = 10;
                    int actual = c.Add(6, 4);
                    Assert.AreEqual<int>(expected, actual);
                }
            }
        }

    Using the new Generate Type and Method from Test feature, VS generates the following code in the Silverlight project:

        namespace Calculator
        {
            public class Calc
            {
                public int Add(int p, int p_2)
                {
                    throw new NotImplementedException();
                }
            }
        }

    When I run the tests the first time, it says the target assembly is Silverlight and it is not able to run the test (not the exact text, but the same general idea). When I change the implementation to:

        namespace Calculator
        {
            public class Calc
            {
                public int Add(int p, int p_2)
                {
                    return p + p_2;
                }
            }
        }

    and re-run the test, it works fine and the test goes green. It also works for all other TDD code I generate afterwards. I also get a warning mark on the Test Project's reference to the Calculator Silverlight Class Library assembly.

    Second question: any comments or ideas on whether this is just a bug in VS2010 RC, or is Silverlight Class Library TDD not really supported? I have not created a Silverlight UI project or changed any build or debug settings, so I have no idea what is hosting the Silverlight DLL.

    Finally, some of the Silverlight class libraries I need to write will provide functionality that requires elevated out-of-browser (OOB) rights. Based on the above, it looks like I can use TDD Test Projects against regular Silverlight 4.0 Class Libraries, but I have no idea how I can TDD the elevated OOB functionality without also creating the UI component that gets installed. The UI piece is not really needed for the library development and gets in the way of what I actually want to TDD. I know I can (and will) mock some of that functionality, but at some point I will also need the real thing in my tests. Third question: any ideas how to TDD a Silverlight 4.0 Class Library project that requires OOB elevated rights? Thanks!
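    For reference, the kind of seam that makes the "mock some of that functionality" part workable is hiding the elevated OOB calls behind an interface, so the class library can be test-driven against a fake while only a thin adapter touches the real elevated API. This is a minimal sketch with hypothetical names (IFileAccess, SettingsLoader); whether System.IO.File is usable depends on the elevated-trust host:

        // Hypothetical seam: the library depends only on this interface.
        public interface IFileAccess
        {
            string ReadAllText(string path);
        }

        // Thin production adapter - the only type that needs elevated OOB rights.
        public class ElevatedFileAccess : IFileAccess
        {
            public string ReadAllText(string path)
            {
                // Assumes elevated trust; in-browser Silverlight would throw here.
                return System.IO.File.ReadAllText(path);
            }
        }

        // Test double for the plain Test Project - no elevation, no UI host needed.
        public class FakeFileAccess : IFileAccess
        {
            public string Contents { get; set; }
            public string ReadAllText(string path) { return Contents; }
        }

        // Library logic stays fully testable because it only sees the seam.
        public class SettingsLoader
        {
            private readonly IFileAccess _files;
            public SettingsLoader(IFileAccess files) { _files = files; }

            public string LoadGreeting(string path)
            {
                return _files.ReadAllText(path).Trim();
            }
        }

    With this split, only the small adapter still needs an installed OOB host for integration-style verification; everything else runs in the plain Test Project.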


  • Silverlight and Unexpected Font Sizes

    - by Eric J.
    Someone please teach me to fish here... I'm just learning Silverlight and have run into a few situations where the font size actually used is drastically different from what I would expect. There's probably something conceptual that I'm missing.

    Case A: In one instance, I have defined a user control that presents a Label to show text. If one clicks on the label, the label (that is in a stack panel, in the user control) is replaced with a TextBox. When used at the top of a page (as in the example below with lblName), the label text is very small (around 8 points). When clicked on, the text box that replaces the label uses the specified font size. That same user control, used in different parts of the app, uses the same font for Label and TextBox.

        <Grid x:Name="LayoutRoot" Background="White">
            <Grid.RowDefinitions>
                <RowDefinition Height="33" />
                <RowDefinition Height="267*" />
            </Grid.RowDefinitions>
            <StackPanel Height="Auto" HorizontalAlignment="Left" Name="stackPanel"
                        VerticalAlignment="Top" Width="Auto" Grid.Row="1" />
            <my:EditLabel Height="33" HorizontalAlignment="Left" x:Name="lblName"
                          VerticalAlignment="Top" Width="Auto"
                          FlexText="{Binding Name, Mode=TwoWay}" FontSize="20" MinHeight="24" />
        </Grid>

    Case B: I'm using the LiquidMenu.Menu control to pop up a menu when a button is pressed. The font looks huge compared to the rest of my page (maybe 36 points?). I tried forcing it to a very small size by explicitly setting it to 8pt, but that had no effect.

        <Grid x:Name="LayoutRoot" Background="{x:Null}">
            <StackPanel x:Name="labelStackPanel" Orientation="Horizontal">
                <TextBlock Height="24" HorizontalAlignment="Left" Name="labelText"
                           VerticalAlignment="Top" Width="200" Text="(Value Goes Here)" />
            </StackPanel>
            <liquidMenu:Menu x:Name="popupMenu" Canvas.Left="40" Canvas.Top="40"
                             ItemSelected="MenuList_ItemSelected" Visibility="Collapsed"
                             Height="Auto" FontSize="8">
                <liquidMenu:MenuItem ID="delete" Icon="Images/Delete10.png" Text="Delete" Shortcut="Del" />
                <liquidMenu:MenuItem ID="exclusive" Icon="" Text="Exclusive" Shortcut="Ctrl+E" />
                <liquidMenu:MenuItem ID="properties" Icon="" Text="Properties" Shortcut="Ctrl+P" />
            </liquidMenu:Menu>
        </Grid>

    Answers to these specific issues are great; a new way to think about this type of issue, so that I understand how to control font size, would be even better.


  • Facebook application using iframe on Facebook Developer Toolkit 3.0

    - by adveb
    Hey, I am trying to build a Facebook iframe application using the Facebook Developer Toolkit 3.01 (ASP.NET, C#). I am working from the iframe sample of the toolkit, which can be downloaded here: www.facebooktoolkit.codeplex.com/releases/view/39727

    This is my Facebook application, which is the same as the iframe sample: http://apps.facebook.com/alefbet/

    This is my code. It has two pages, a master page and Default.aspx, both the same as in the iframe sample.

    1) This is the master page:

        public partial class IFrameMaster : Facebook.Web.CanvasIFrameMasterPage
        {
            public IFrameMaster()
            {
                RequireLogin = true;
            }
        }

    2) This is Default.aspx:

        public partial class Default : System.Web.UI.Page
        {
            private const string SCRIPT_BLOCK_NAME = "dynamicScript";

            protected void Page_Load(object sender, EventArgs e)
            {
                if (IsPostBack)
                {
                    if (Master.Api.Users.HasAppPermission(Enums.ExtendedPermissions.email))
                    {
                        SendThankYouEmail();
                    }
                    Response.Redirect("ThankYou.aspx");
                }
                else
                {
                    if (Master.Api.Users.HasAppPermission(Enums.ExtendedPermissions.email))
                    {
                        emailPermissionPanel.Visible = false;
                    }
                    CreateScript();
                }
            }

            private void SendThankYouEmail()
            {
                var subject = "Thank you for telling us your favorite color";
                var body = "Thank you for telling us what your favorite color is. We hope you have enjoyed using this application. Encourage your friends to tell us their favorite color as well!";
                this.Master.Api.Notifications.SendEmail(
                    this.Master.Api.Session.UserId.ToString(), subject, body, string.Empty);
            }

            private void CreateScript()
            {
                var saveColorScript = @"
                    function saveColor(color) {
                        document.getElementById('" + colorInput.ClientID + @"').value = color;
                    }
                    function submitForm() {
                        document.getElementById('" + form.ClientID + @"').submit();
                    }
                ";

                if (!ClientScript.IsClientScriptBlockRegistered(SCRIPT_BLOCK_NAME))
                {
                    ClientScript.RegisterClientScriptBlock(this.GetType(), SCRIPT_BLOCK_NAME, saveColorScript);
                }
            }
        }

    My directory structure is:

    1) The master page is in the root.
    2) Default.aspx is in the root/alfbet directory.
    3) I also have xd_receiver.htm inside the root/channel directory, and inside the master page there is the following line:

        <script type="text/javascript">
            FB_RequireFeatures(["XFBML"], function() {
                FB.Facebook.init("c81f17ee4d4ffc5113c55f8b99fdcab5", "channel/xd_receiver.htm");
            });
        </script>

    The problem is that the application doesn't work: apps.facebook.com/alefbet/default.aspx

    Why doesn't it work? Please help me and others who have also run into this issue. I tried lots of things; one of them was to display the user ID. For that I put a label in Default.aspx and wrote:

        lblTest.Text = Master.Api.Users.GetInfo().uid.ToString();

    and it doesn't even get to this line. I know this because the label keeps displaying the word "label". Thank you very much.


  • Validating an XML document fragment against XML schema

    - by shylent
    Terribly sorry if I've failed to find a duplicate of this question. I have a certain document with a well-defined document structure, which I am expressing through an XML schema. That data structure is operated upon by a RESTful service, so various nodes and combinations of nodes (not the whole document, but fragments of it) are exposed as "resources". Naturally, I am doing my own validation of the actual data, but it makes sense to validate the incoming/outgoing data against the schema as well (before the fine-grained validation of the data). What I don't quite grasp is how to validate document fragments given the schema definition. Let me illustrate.

    Imagine the example document structure is:

        <doc-root>
            <el name="foo"/>
            <el name="bar"/>
        </doc-root>

    Rather a trivial data structure. The schema goes something like this:

        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
            <xsd:element name="doc-root">
                <xsd:complexType>
                    <xsd:sequence>
                        <xsd:element name="el" type="myCustomType" />
                    </xsd:sequence>
                </xsd:complexType>
            </xsd:element>
            <xsd:complexType name="myCustomType">
                <xsd:attribute name="name" use="required" />
            </xsd:complexType>
        </xsd:schema>

    Now, imagine I've just received a PUT request to update an 'el' object. Naturally, I would receive not the full document, nor any XML starting with 'doc-root' at its root, but the 'el' element itself. I would very much like to validate it against the existing schema, but just running it through a validating parser won't work, since the parser will expect a 'doc-root' at the root. So, again, the question is: how can one validate a document fragment against an existing schema? Or, perhaps, how can a schema be written to allow such an approach? Hope it made sense.
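    For illustration, .NET's System.Xml.Schema has a partial-validation mode that fits this scenario: XmlSchemaValidator can be initialized with a single named type from the compiled schema set and then fed a fragment. A minimal sketch, assuming the schema above is saved as doc.xsd (the file name and the hand-fed fragment <el name="foo"/> are just for the example):

        using System;
        using System.Xml;
        using System.Xml.Schema;

        class FragmentValidation
        {
            static void Main()
            {
                var schemas = new XmlSchemaSet();
                schemas.Add(null, "doc.xsd");   // assumed location of the schema above
                schemas.Compile();

                // Look up the named type the fragment should conform to.
                var elType = (XmlSchemaType)schemas.GlobalTypes[
                    new XmlQualifiedName("myCustomType")];

                var validator = new XmlSchemaValidator(
                    new NameTable(), schemas,
                    new XmlNamespaceManager(new NameTable()),
                    XmlSchemaValidationFlags.None);
                validator.ValidationEventHandler += (s, e) => Console.WriteLine(e.Message);

                // Partial validation: validate against one type, not the document root.
                validator.Initialize(elType);

                // Hand-feed the fragment <el name="foo"/> to the validator.
                validator.ValidateElement("el", "", null);
                validator.ValidateAttribute("name", "", "foo", null);
                validator.ValidateEndOfAttributes(null);
                validator.ValidateEndElement(null);
                validator.EndValidation();
            }
        }

    In practice the fragment would be walked with an XmlReader and forwarded node by node; the alternative hinted at in the question - writing the schema so that 'el' (or its type) is declared globally - achieves the same end without validator gymnastics.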


  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
    Many routines are parallelized because they are long-running processes. When writing an algorithm that will run for a long period of time, it's typically good practice to allow that routine to be cancelled. I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller's perspective. Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0.

    Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken. A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource. There are many goals which led to this design. For our purposes, we will focus on a couple of specific design decisions:

    - Cancellation is cooperative. A calling method can request a cancellation, but it's up to the processing routine to terminate - it is not forced.
    - Cancellation is consistent. A single method call requests a cancellation on every copied CancellationToken in the routine.

    Let's begin by looking at how we can cancel a PLINQ query. Suppose we wanted to provide the option to cancel our query from Part 6:

        double min = collection
            .AsParallel()
            .Min(item => item.PerformComputation());

    We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows:

        var cts = new CancellationTokenSource();

        // Pass cts here to a routine that could,
        // in parallel, request a cancellation
        try
        {
            double min = collection
                .AsParallel()
                .WithCancellation(cts.Token)
                .Min(item => item.PerformComputation());
        }
        catch (OperationCanceledException e)
        {
            // Query was cancelled before it finished
        }

    Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised. Be aware, however, that cancellation will not be instantaneous. When cts.Cancel() is called, the query will only stop after the currently executing item.PerformComputation() elements all finish processing. cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed. This goes back to the first goal I mentioned - cancellation is cooperative. Here, we're requesting the cancellation, but it's up to PLINQ to terminate.

    If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case:

        public void PerformComputation(CancellationToken token)
        {
            for (int i = 0; i < this.iterations; ++i)
            {
                // Add a check to see if we've been canceled
                // If a cancel was requested, we'll throw here
                token.ThrowIfCancellationRequested();

                // Do our processing now
                this.RunIteration(i);
            }
        }

    With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns. This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to:

        try
        {
            double min = collection
                .AsParallel()
                .WithCancellation(cts.Token)
                .Min(item => item.PerformComputation(cts.Token));
        }
        catch (OperationCanceledException e)
        {
            // Query was cancelled before it finished
        }

    PLINQ is very good about handling this exception, as well. There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently. PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack. This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException.

    The Task Parallel Library uses the same cancellation model, as well. Here, we supply our CancellationToken as part of the configuration. The ParallelOptions class contains a property for the CancellationToken. This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query. As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to:

        try
        {
            var cts = new CancellationTokenSource();
            var options = new ParallelOptions() { CancellationToken = cts.Token };

            Parallel.ForEach(customers, options, customer =>
            {
                // Run some process that takes some time...
                DateTime lastContact = theStore.GetLastContact(customer);
                TimeSpan timeSinceContact = DateTime.Now - lastContact;

                // Check for cancellation here
                options.CancellationToken.ThrowIfCancellationRequested();

                // If it's been more than two weeks, send an email, and update...
                if (timeSinceContact.Days > 14)
                {
                    theStore.EmailCustomer(customer);
                    customer.LastEmailContact = DateTime.Now;
                }
            });
        }
        catch (OperationCanceledException e)
        {
            // The loop was cancelled
        }

    Notice that here we use the same approach taken in PLINQ. The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine. The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario. This works because we're using the same CancellationToken provided in the ParallelOptions. If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.
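    As a concrete illustration of the "pass cts to a routine that could request a cancellation" comment above, here is a minimal sketch in which a second task triggers the cancellation; the two-second delay is arbitrary, purely for demonstration:

        var cts = new CancellationTokenSource();

        // Request cancellation from another thread after two seconds.
        Task.Factory.StartNew(() =>
        {
            Thread.Sleep(TimeSpan.FromSeconds(2));
            cts.Cancel();
        });

        try
        {
            double min = collection
                .AsParallel()
                .WithCancellation(cts.Token)
                .Min(item => item.PerformComputation(cts.Token));
        }
        catch (OperationCanceledException)
        {
            // Raised if the two seconds elapsed before the query finished.
        }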


  • ASP.NET 3.5 GridView - row editing - dynamic binding to a DropDownList

    - by marc_s
    This is driving me crazy :-) I'm trying to get an ASP.NET 3.5 GridView to show a selected value as a string when being displayed, and to show a DropDownList that lets me pick a value from a given list of options when being edited. Seems simple enough? My GridView looks like this (simplified):

        <asp:GridView ID="grvSecondaryLocations" runat="server" DataKeyNames="ID"
                      OnInit="grvSecondaryLocations_Init"
                      OnRowCommand="grvSecondaryLocations_RowCommand"
                      OnRowCancelingEdit="grvSecondaryLocations_RowCancelingEdit"
                      OnRowDeleting="grvSecondaryLocations_RowDeleting"
                      OnRowEditing="grvSecondaryLocations_RowEditing"
                      OnRowUpdating="grvSecondaryLocations_RowUpdating">
            <Columns>
                <asp:TemplateField>
                    <ItemTemplate>
                        <asp:Label ID="lblPbxTypeCaption" runat="server"
                                   Text='<%# Eval("PBXTypeCaptionValue") %>' />
                    </ItemTemplate>
                    <EditItemTemplate>
                        <asp:DropDownList ID="ddlPBXTypeNS" runat="server" Width="200px"
                                          DataTextField="CaptionValue" DataValueField="OID" />
                    </EditItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    The grid gets displayed OK when not in editing mode - the selected PBX type shows its value in the asp:Label control. No surprise there. I load the list of values for the DropDownList into a local member called _pbxTypes in the OnLoad event of the form. I verified this - it works, the values are there.

    Now my challenge is: when the grid goes into editing mode for a particular row, I need to bind the list of PBX's stored in _pbxTypes. Simple enough, I thought - just grab the DropDownList object in the RowEditing event and attach the list:

        protected void grvSecondaryLocations_RowEditing(object sender, GridViewEditEventArgs e)
        {
            grvSecondaryLocations.EditIndex = e.NewEditIndex;

            GridViewRow editingRow = grvSecondaryLocations.Rows[e.NewEditIndex];

            DropDownList ddlPbx = (editingRow.FindControl("ddlPBXTypeNS") as DropDownList);
            if (ddlPbx != null)
            {
                ddlPbx.DataSource = _pbxTypes;
                ddlPbx.DataBind();
            }
            .... (more stuff)
        }

    Trouble is - I never get anything back from the FindControl call; it seems the ddlPBXTypeNS control doesn't exist (or can't be found). What am I missing?? Must be something really stupid.... but so far, all my Googling, reading up on GridView controls, and asking buddies hasn't helped. Who can spot the missing link? ;-) Marc
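    One pattern that is commonly suggested for this situation (a sketch, not verified against this exact page): at the time RowEditing fires, the grid has not yet been re-bound, so the EditItemTemplate controls do not exist yet. Setting EditIndex, re-binding, and doing the FindControl in RowDataBound sidesteps that. BindGrid() below is a hypothetical helper that re-assigns the DataSource and calls DataBind(), and OnRowDataBound is assumed to be wired up on the grid:

        protected void grvSecondaryLocations_RowEditing(object sender, GridViewEditEventArgs e)
        {
            grvSecondaryLocations.EditIndex = e.NewEditIndex;
            BindGrid();   // hypothetical: re-assign DataSource and call DataBind()
        }

        protected void grvSecondaryLocations_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            // The edit row's template controls exist only while the grid is binding.
            if (e.Row.RowType == DataControlRowType.DataRow &&
                e.Row.RowIndex == grvSecondaryLocations.EditIndex)
            {
                DropDownList ddlPbx = e.Row.FindControl("ddlPBXTypeNS") as DropDownList;
                if (ddlPbx != null)
                {
                    ddlPbx.DataSource = _pbxTypes;
                    ddlPbx.DataBind();
                }
            }
        }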


  • Create a Bootable Ubuntu 9.10 USB Flash Drive

    - by Trevor Bekolay
    The Ubuntu Live CD isn't just useful for trying out Ubuntu before you install it; you can also use it to maintain and repair your Windows PC. Even if you have no intention of installing Linux, every Windows user should have a bootable Ubuntu USB drive on hand in case something goes wrong in Windows. Creating a bootable USB flash drive is surprisingly easy with a small self-contained application called UNetbootin. It will even download Ubuntu for you!

    Note: Ubuntu will take up approximately 700 MB on your flash drive, so choose a flash drive with at least 1 GB of free space, formatted as FAT32. This process should not remove any existing files on the flash drive, but to be safe you should back up the files on your flash drive.

    Put Ubuntu on your flash drive

    UNetbootin doesn't require installation; just download the application and run it. Select Ubuntu from the Distribution drop-down box, then 9.10_Live from the Version drop-down box. If you have a 64-bit machine, then select 9.10_Live_x64 for the Version. At the bottom of the screen, select the drive letter that corresponds to the USB drive that you want to put Ubuntu on. If you select USB Drive in the Type drop-down box, the only drive letters available will be USB flash drives.

    Click OK and UNetbootin will start doing its thing. First it will download the Ubuntu Live CD. Then, it will copy the files from the Ubuntu Live CD to your flash drive. The amount of time it takes will vary depending on your Internet speed, and when it's done, click on Exit. You're not planning on installing Ubuntu right now, so there's no need to reboot. If you look at the USB drive now, you should see a bunch of new files and folders. If you had files on the drive before, they should still be present. You're now ready to boot your computer into Ubuntu 9.10!

    How to boot into Ubuntu

    When the time comes that you have to boot into Ubuntu, or if you just want to test and make sure that your flash drive works properly, you will have to set your computer to boot off of the flash drive. The steps to do this will vary depending on your BIOS, which varies depending on your motherboard. To get detailed instructions on changing how your computer boots, search for your motherboard's manual (or your laptop's manual for a laptop). For general instructions, which will suffice for 99% of you, read on.

    Find the important keyboard keys

    When your computer boots up, a bunch of words and numbers flash across the screen, usually to be ignored. This time, you need to scan the boot-up screen for a few key words with some associated keys: Boot Menu and Setup. Typically, these will show up at the bottom of the screen. If your BIOS has a Boot Menu, then read on. Otherwise, skip to the Hard: Using Setup section.

    Easy: Using the Boot Menu

    If your BIOS offers a Boot Menu, then during the boot-up process, press the button associated with the Boot Menu. In our case, this is ESC. Our example Boot Menu doesn't have the ability to boot from USB, but your Boot Menu should have some options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others. Try the options that start with USB until you find one that works. Don't worry if it doesn't work - you can just restart and try again. Using the Boot Menu does not change the normal boot order on your system, so the next time you start up your computer it will boot from the hard drive as normal.

    Hard: Using Setup

    If your BIOS doesn't offer a Boot Menu, then you will have to change the boot order in Setup.

    Note: There are some options in BIOS Setup that can affect the stability of your machine. Take care to only change the boot order options.

    Press the button associated with Setup. In our case, this is F2. If your BIOS Setup has a Boot tab, then switch to it and change the order such that one of the USB options occurs first. There may be several USB options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others; try them out to see which one works for you. If your BIOS does not have a Boot tab, boot order is commonly found in Advanced CMOS Options. Note that this changes the boot order permanently until you change it back. If you plan on only plugging in a bootable flash drive when you want to boot from it, then you could leave the boot order as it is, but you may find it easier to switch the order back to the previous order when you reboot from Ubuntu.

    Booting into Ubuntu

    If you set the right boot option, then you should be greeted with the UNetbootin screen. Press Enter to start Ubuntu with the default options, or wait 10 seconds for this to happen automatically. Ubuntu will start loading. It should go straight to the desktop with no need for a username or password. And that's it! From this live desktop session, you can try out Ubuntu, and even install software that is not included in the live CD. Installed software will only last for the duration of your session - the next time you start up the live CD it will be back to its original state.

    Download UNetbootin from sourceforge.net


  • HttpContext.Items and Server.Transfer/Execute

    - by Rick Strahl
    A few days ago my buddy Ben Jones pointed out that he ran into a bug in the ScriptContainer control in the West Wind Web and Ajax Toolkit. The problem was basically that when a Server.Transfer call was applied, the script container (and also various ClientScriptProxy script embedding routines) would potentially fail to load the specified scripts. It turns out the problem is due to the fact that the various components in the toolkit use request-specific singletons via a Current property. I use a static Current property tied to a Context.Items[] entry to handle this type of operation, which looks something like this:

        /// <summary>
        /// Current instance of this class which should always be used to
        /// access this object. There are no public constructors to
        /// ensure the reference is used as a Singleton to further
        /// ensure that all scripts are written to the same clientscript
        /// manager.
        /// </summary>
        public static ClientScriptProxy Current
        {
            get
            {
                if (HttpContext.Current == null)
                    return new ClientScriptProxy();

                ClientScriptProxy proxy = null;

                if (HttpContext.Current.Items.Contains(STR_CONTEXTID))
                    proxy = HttpContext.Current.Items[STR_CONTEXTID] as ClientScriptProxy;
                else
                {
                    proxy = new ClientScriptProxy();
                    HttpContext.Current.Items[STR_CONTEXTID] = proxy;
                }

                return proxy;
            }
        }

    The proxy is attached to a Context.Items[] item, which makes the instance request-specific. This works perfectly fine in most situations EXCEPT when you're dealing with Server.Transfer/Execute requests. Server.Transfer doesn't cause Context.Items to be cleared, so both the current transferred request and the original request's Context.Items collection apply. For the ClientScriptProxy this causes a problem because script references are tracked on a per-request basis in Context.Items to check for script duplication. Once a script is rendered, an ID is written into the Context collection and it is considered 'rendered':

        // No dupes - ref script include only once
        if (HttpContext.Current.Items.Contains( STR_SCRIPTITEM_IDENTITIFIER + fileId ) )
            return;

        HttpContext.Current.Items.Add(STR_SCRIPTITEM_IDENTITIFIER + fileId, string.Empty);

    where the fileId is the script name or unique identifier. The problem is that on the transferred page the item will already exist in Context and so fail to render, because it thinks the script has already rendered based on the Context item. Bummer.

    The workaround for this is simple once you know what's going on, but in this case it was a bitch to track down because the context items are used in many places throughout this class. The trick is to determine when a request is transferred and then remove the specific keys. The first issue is to determine if a script is in a Transfer or Execute call:

        if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)

    Context.Handler is the original handler and CurrentHandler is the actual currently executing handler that is running when a Transfer/Execute is active. You can also use Context.PreviousHandler to get the last handler, and chain through the whole list of handlers applied if Transfer calls are nested (dog help us all for the person debugging that).

    For the ClientScriptProxy, the full logic to check for a transfer and remove the keys looks like this:

        /// <summary>
        /// Clears all the request specific context items which are script references
        /// and the script placement index.
        /// </summary>
        public void ClearContextItemsOnTransfer()
        {
            if (HttpContext.Current != null)
            {
                // Check for Server.Transfer/Execute calls - we need to clear out Context.Items
                if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)
                {
                    List<string> Keys = HttpContext.Current.Items.Keys
                        .Cast<string>()
                        .Where(s => s.StartsWith(STR_SCRIPTITEM_IDENTITIFIER) ||
                                    s == STR_ScriptResourceIndex)
                        .ToList();

                    foreach (string key in Keys)
                    {
                        HttpContext.Current.Items.Remove(key);
                    }
                }
            }
        }

    along with a small update to the Current property getter that sets a global flag to indicate whether the request was transferred:

        if (!proxy.IsTransferred &&
            HttpContext.Current.Handler != HttpContext.Current.CurrentHandler)
        {
            proxy.ClearContextItemsOnTransfer();
            proxy.IsTransferred = true;
        }

        return proxy;

    I know this is pretty ugly, but it works, and it's actually minimal fuss without affecting the behavior of the rest of the class. Ben had a different solution that involved explicitly clearing out the Context items and replacing the collection with a manually maintained list of items, which also works, but it required changes throughout the code. In hindsight, it would have been better to use a single object that encapsulates all the 'persisted' values and store that object in Context instead of all these individual small morsels. Hindsight is always 20/20 though :-}.

    If possible use Page.Items

    ClientScriptProxy is a generic component that can be used from anywhere in ASP.NET, so there are various methods that are not Page-specific on this component, which is why I used Context.Items rather than the Page.Items collection. Page.Items would be a better choice, since it sidesteps the above Server.Transfer nightmares: the Page is reloaded completely, so any new Page gets a new Items collection. No fuss there. So for the ScriptContainer control, which has to live on the page, the behavior is a little different. It is attached to Page.Items (since it's a control):

        /// <summary>
        /// Returns a current instance of this control if an instance
        /// is already loaded on the page. Otherwise a new instance is
        /// created, added to the Form and returned.
        ///
        /// It's important this function is not called too early in the
        /// page cycle - it should not be called before Page.OnInit().
        ///
        /// This property is the preferred way to get a reference to a
        /// ScriptContainer control that is either already on a page
        /// or needs to be created. Controls in particular should always
        /// use this property.
        /// </summary>
        public static ScriptContainer Current
        {
            get
            {
                // We need a context for this to work!
                if (HttpContext.Current == null)
                    return null;

                Page page = HttpContext.Current.CurrentHandler as Page;
                if (page == null)
                    throw new InvalidOperationException(
                        Resources.ERROR_ScriptContainer_OnlyWorks_With_PageBasedHandlers);

                ScriptContainer ctl = null;

                // Retrieve the current instance
                ctl = page.Items[STR_CONTEXTID] as ScriptContainer;
                if (ctl != null)
                    return ctl;

                ctl = new ScriptContainer();
                page.Form.Controls.Add(ctl);

                return ctl;
            }
        }

    The biggest issue with this approach is that you have to explicitly retrieve the page in the static Current property. Notice again the use of CurrentHandler (rather than Handler, which was my original implementation) to ensure you get the latest page, including the one that Server.Transfer fired.

    Server.Transfer and Server.Execute are Evil

    All that said - this fix is probably for the 2 people who are crazy enough to rely on Server.Transfer/Execute. :-} There are so many weird behavior problems with these commands that I avoid them at all costs. I don't think I have a single application that uses either of these commands…

    Related Resources: Full source of ClientScriptProxy.cs (repository) · Part of the West Wind Web Toolkit · Static Singletons for ASP.NET Controls

    Post © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET
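    To illustrate the hindsight note above, a rough sketch of the "single state object" approach, with made-up names (ClientScriptState, STR_STATEID); storing the owning handler alongside the state lets a Transfer/Execute reset everything in one place:

        // Hypothetical container for all per-request state the proxy tracks.
        internal class ClientScriptState
        {
            public HashSet<string> RenderedScriptIds = new HashSet<string>();
            public int ScriptResourceIndex;
            public IHttpHandler Owner;   // handler this state belongs to
        }

        private const string STR_STATEID = "__ClientScriptState";

        private static ClientScriptState GetOrCreateState()
        {
            HttpContext ctx = HttpContext.Current;
            var state = ctx.Items[STR_STATEID] as ClientScriptState;

            // Create fresh state on first use, or when a Transfer/Execute
            // has switched execution to a new handler.
            if (state == null || state.Owner != ctx.CurrentHandler)
            {
                state = new ClientScriptState { Owner = ctx.CurrentHandler };
                ctx.Items[STR_STATEID] = state;
            }
            return state;
        }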


  • Why can't I reinstall MySQL?

    - by Johannes Nielsen
    I've been looking all around the Internet for an answer but didn't find anything. I hope you can help me now. I have a server with MySQL. From one day to the next, MySQL didn't let me log in with my root password anymore (Access denied for user 'root'@'localhost' (using password: YES)). So I tried two ways to reset the password.

    No. 1: I typed

        shell> /etc/init.d/mysqld stop

    to stop MySQL. Then I restarted it, skipping the grant tables:

        shell> mysqld_safe --skip-grant-tables

    So I was able to log in as root and change the password using:

        mysql> UPDATE mysql.user SET Password = PASSWORD('MyNewPassword') WHERE User = 'root';
        mysql> FLUSH PRIVILEGES;

    I restarted MySQL and tried to log in as root with my new password - didn't work. So I tried the solution that's described here: http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html (I don't want to post it here because this post is already pretty long). That didn't work either. Actually, it made things worse, because since that day, every time I try to start MySQL, it doesn't even ask me for my password; instead I get:

        shell> ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    Well, I've looked up what it means and found that my mysqld.sock is missing. I tried to create it using touch, but MySQL can't start with that socket. Now I'm trying to reinstall MySQL, but every time I type in

        shell> apt-get --purge remove mysql-server mysql-common mysql-client

    in that or any other order, or any one of those three alone, I get:

        shell> Reading package lists... Done
        shell> Building dependency tree
        shell> Reading state information... Done
        shell> Package mysql-client is not installed, so not removed
        shell> Package mysql-server is not installed, so not removed
        shell> You might want to run 'apt-get -f install' to correct these:
        shell> The following packages have unmet dependencies:
        shell>  libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
        shell>  libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2)
        shell>  mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
        shell>  mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
        shell>  psa-firewall : Depends: plesk-core (>= 11.0.9) but it is not installable
        shell>                 Depends: mysql-server but it is not going to be installed
        shell>  psa-spamassassin : Depends: plesk-core (>= 11.0.9) but it is not installable
        shell>  psa-vpn : Depends: plesk-core (>= 11.0.9) but it is not installable
        shell>           Depends: plesk-base (>= 11.0.9) but it is not installable
        shell>           Depends: mysql-server but it is not going to be installed
        shell> E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    So I said to myself, "let's just remove those files with dependencies, too" (the psa stuff, since Plesk is virtual and can't be uninstalled)... Guess what happened:

        shell> Reading package lists... Done
        shell> Building dependency tree
        shell> Reading state information... Done
        shell> Package mysql-client is not installed, so not removed
        shell> Package mysql-server is not installed, so not removed
        shell> You might want to run 'apt-get -f install' to correct these:
        shell> The following packages have unmet dependencies:
        shell>  libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
        shell>  libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2)
        shell>  mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
        shell>  mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
        shell> E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Of course I tried apt-get -f install, too - many times, even. What am I doing wrong? No matter which other packages I include in apt-get --purge remove, I always get new dependencies. Do I have to delete every MySQL-related directory and file manually? Hope there's someone out there who can help me! Cheers!

    EDIT: After trying

        apt-get purge mysql-server mysql-common mysql-client libmysqlclient18 libmysqlclient18:i386 mysql-client-5.5 mysql-server-5.5 psa-firewall psa-spamassassin psa-vpn

    I got:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package mysql-client is not installed, so not removed
        Package mysql-server is not installed, so not removed
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         libdbd-mysql-perl : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
         libmyodbc : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
         libqt4-sql-mysql:i386 : Depends: libmysqlclient18:i386 (>= 5.5.13-1) but it is not going to be installed
         php5-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
         ruby-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    So I tried to remove all these and got:

        Building dependency tree
        Reading state information... Done
        Package mysql-client is not installed, so not removed
        Package mysql-server is not installed, so not removed
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         libmysql-ruby1.8 : Depends: ruby-mysql but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    And actually, I think removing that file too solved my problem :-S Next time I'll try everything before asking :D Thank you Eric for keeping me encouraged to just go on removing :D


  • build error with boost spirit grammar (boost 1.43 and g++ 4.4.1)

    - by lurscher
    I'm having issues getting a small spirit/qi grammar to compile. The build stack trace is fugly enough to not make any sense to me (despite some assertion_failed I could notice in there, but that didn't bring much information).

    The input grammar header, inputGrammar.h:

        #include <boost/config/warning_disable.hpp>
        #include <boost/spirit/include/qi.hpp>
        #include <boost/spirit/include/phoenix_core.hpp>
        #include <boost/spirit/include/phoenix_operator.hpp>
        #include <boost/spirit/include/phoenix_fusion.hpp>
        #include <boost/spirit/include/phoenix_stl.hpp>
        #include <boost/fusion/include/adapt_struct.hpp>
        #include <boost/variant/recursive_variant.hpp>
        #include <boost/foreach.hpp>

        #include <iostream>
        #include <fstream>
        #include <string>
        #include <vector>

        namespace sp = boost::spirit;
        namespace qi = boost::spirit::qi;
        using namespace boost::spirit::ascii;
        //using namespace boost::spirit::arg_names;
        namespace fusion = boost::fusion;
        namespace phoenix = boost::phoenix;
        using phoenix::at_c;
        using phoenix::push_back;

        template< typename Iterator, typename ExpressionAST >
        struct InputGrammar : qi::grammar<Iterator, ExpressionAST(), space_type>
        {
            InputGrammar() : InputGrammar::base_type( block )
            {
                tag = sp::lexeme[+(alpha) [sp::_val += sp::_1]]; //[+(char_ - '<') [_val += _1]];

                block = sp::lit("block") [ at_c<0>(sp::_val) = sp::_1 ]
                    >> "(" >> *instruction[ push_back( at_c<1>(sp::_val), sp::_1 ) ] >> ")";

                command = tag [ at_c<0>(sp::_val) = sp::_1 ]
                    >> "(" >> *instruction [ push_back( at_c<1>(sp::_val), sp::_1 ) ] >> ")";

                instruction = ( command | tag ) [sp::_val = sp::_1];
            }

            qi::rule< Iterator, std::string(), space_type > tag;
            qi::rule< Iterator, ExpressionAST(), space_type > block;
            qi::rule< Iterator, ExpressionAST(), space_type > function_def;
            qi::rule< Iterator, ExpressionAST(), space_type > command;
            qi::rule< Iterator, ExpressionAST(), space_type > instruction;
        };

    The test build program (it seems the build fails at qi::phrase_parse; I am using boost 1.43 and g++ 4.4.1):

        #include <iostream>
        #include <string>
        #include <vector>

        using namespace std;

        //my grammar
        #include <InputGrammar.h>

        struct MockExpressionNode
        {
            std::string name;
            std::vector< MockExpressionNode > operands;

            typedef std::vector< MockExpressionNode >::iterator iterator;
            typedef std::vector< MockExpressionNode >::const_iterator const_iterator;

            iterator begin() { return operands.begin(); }
            const_iterator begin() const { return operands.begin(); }
            iterator end() { return operands.end(); }
            const_iterator end() const { return operands.end(); }

            bool is_leaf() const { return ( operands.begin() == operands.end() ); }
        };

        BOOST_FUSION_ADAPT_STRUCT(
            MockExpressionNode,
            (std::string, name)
            (std::vector<MockExpressionNode>, operands)
        )

        int const tabsize = 4;

        void tab(int indent)
        {
            for (int i = 0; i < indent; ++i)
                std::cout << ' ';
        }

        template< typename ExpressionNode >
        struct ExpressionNodePrinter
        {
            ExpressionNodePrinter(int indent = 0) : indent(indent) { }

            void operator()(ExpressionNode const& node) const
            {
                cout << " tag: " << node.name << endl;
                for (int i = 0; i < node.operands.size(); i++)
                {
                    tab(indent);
                    cout << " arg " << i << ": ";
                    ExpressionNodePrinter(indent + 2)(node.operands[i]);
                    cout << endl;
                }
            }

            int indent;
        };

        int test()
        {
            MockExpressionNode root;

            InputGrammar< string::const_iterator, MockExpressionNode > g();

            std::string litA = "litA";
            std::string litB = "litB";
            std::string litC = "litC";
            std::string litD = "litD";
            std::string litE = "litE";
            std::string litF = "litF";
            std::string source = litA+"( "+litB+" ,"+litC+" , "+
                litD+" ( "+litE+", "+litF+" ) "+
                " )";

            string::const_iterator iter = source.begin();
            string::const_iterator end = source.end();

            bool r = qi::phrase_parse( iter, end, g, root, space );

            ExpressionNodePrinter< MockExpressionNode > np;
            np( root );
        };

        int main()
        {
            test();
        }

    Finally, the build error is the following:

/usr/bin/make -f nbproject/Makefile-linux_amd64_devel.mk SUBPROJECTS= .build-conf
make[1]: se ingresa al directorio `/home/mineq/NetBeansProjects/InputParserTests'
/usr/bin/make -f nbproject/Makefile-linux_amd64_devel.mk dist/linux_amd64_devel/GNU-Linux-x86/vpuinputparsertests
make[2]: se ingresa al directorio `/home/mineq/NetBeansProjects/InputParserTests'
mkdir -p build/linux_amd64_devel/GNU-Linux-x86
rm -f build/linux_amd64_devel/GNU-Linux-x86/tests_main.o.d
g++ `llvm-config --cxxflags` `pkg-config --cflags unittest-cpp` `pkg-config --cflags boost-1.43` `pkg-config --cflags boost-coroutines` -c -g -I../InputParser -MMD -MP -MF build/linux_amd64_devel/GNU-Linux-x86/tests_main.o.d -o build/linux_amd64_devel/GNU-Linux-x86/tests_main.o tests_main.cpp
from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/auto.hpp:16, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi.hpp:15, from /home/mineq/third_party/boost_1_43_0/boost/spirit/include/qi.hpp:16, from ../InputParser/InputGrammar.h:12, from tests_main.cpp:14:
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp: In function ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]’:
In file included from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/detail/parse_auto.hpp:14,
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’
tests_main.cpp:206: instantiated from here
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:99: error: no matching function for call to ‘assertion_failed(mpl_::failed************ (boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal,
boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]::error_invalid_expression::************)(InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode> (*)()))’ /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’ tests_main.cpp:206: instantiated from here /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:100: error: no matching function for call to ‘assertion_failed(mpl_::failed************ (boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]::error_invalid_expression::************)(MockExpressionNode))’ from /home/mineq/third_party/boost_1_43_0/boost/proto/proto.hpp:12, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/support/meta_compiler.hpp:17, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/meta_compiler.hpp:14, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/action/action.hpp:14, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/action.hpp:14, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi.hpp:14, from /home/mineq/third_party/boost_1_43_0/boost/spirit/include/qi.hpp:16, from ../InputParser/InputGrammar.h:12, from tests_main.cpp:14: /home/mineq/third_party/boost_1_43_0/boost/proto/detail/expr0.hpp: At global scope: /home/mineq/third_party/boost_1_43_0/boost/proto/proto_fwd.hpp: In instantiation of ‘boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()>, 0l>’: In file included from /home/mineq/third_party/boost_1_43_0/boost/proto/core.hpp:13, /home/mineq/third_party/boost_1_43_0/boost/utility/enable_if.hpp:59: instantiated from ‘boost::disable_if<boost::proto::result_of::is_expr<boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()>, 0l>, void>, void>’ /home/mineq/third_party/boost_1_43_0/boost/spirit/home/support/meta_compiler.hpp:200: instantiated from ‘boost::spirit::result_of::compile<boost::spirit::qi::domain, InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, 
std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), boost::fusion::unused_type, void>’ /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:107: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]’ /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’ tests_main.cpp:206: instantiated from here /home/mineq/third_party/boost_1_43_0/boost/proto/detail/expr0.hpp:64: error: field ‘boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()>, 0l>::child0’ invalidly declared function type from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/auto.hpp:16, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi.hpp:15, from /home/mineq/third_party/boost_1_43_0/boost/spirit/include/qi.hpp:16, from ../InputParser/InputGrammar.h:12, from tests_main.cpp:14: /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp: In function ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]’: In file included from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/detail/parse_auto.hpp:14, /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’ tests_main.cpp:206: instantiated from here 
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:107: error: request for member ‘parse’ in ‘boost::spirit::compile [with Domain = boost::spirit::qi::domain, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()](((InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode> (&)())((InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode> (*)())expr)))’, which is of non-class type ‘InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()’ from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/auto.hpp:15, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi.hpp:15, from /home/mineq/third_party/boost_1_43_0/boost/spirit/include/qi.hpp:16, from ../InputParser/InputGrammar.h:12, from tests_main.cpp:14: /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/skip_over.hpp: In function ‘void boost::spirit::qi::skip_over(Iterator&, const Iterator&, const T&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, T = boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]::skipper_type]’: In file included from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/auto/auto.hpp:19, /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:112: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]’ /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’ tests_main.cpp:206: instantiated from here 
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/skip_over.hpp:27: error: ‘const struct MockExpressionNode’ has no member named ‘parse’ make[2]: *** [build/linux_amd64_devel/GNU-Linux-x86/tests_main.o] Error 1 make[2]: se sale del directorio `/home/mineq/NetBeansProjects/InputParserTests' make[1]: *** [.build-conf] Error 2 make[1]: se sale del directorio `/home/mineq/NetBeansProjects/InputParserTests' make: *** [.build-impl] Error 2 BUILD FAILED (exit value 2, total time: 1m 48s)
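    Reading the diagnostics: every instantiation line shows Expr = InputGrammar<..., MockExpressionNode>() with trailing parentheses, i.e. a function type. That is C++'s "most vexing parse": a declaration like InputGrammar<...> g(); declares a function, not an object, which is what the "invalidly declared function type" and "non-class type" errors are complaining about. The log also shows Skipper = MockExpressionNode and Attr = space_type, i.e. the skipper and attribute arguments to phrase_parse appear swapped, hence "'const struct MockExpressionNode' has no member named 'parse'". A minimal sketch of the corrected call, assuming the grammar and node types named in the log (the actual test code is not shown, so the variable names here are illustrative):

    #include <boost/spirit/include/qi.hpp>
    #include <string>
    #include "InputGrammar.h"   // assumed, per the include chain in the log

    bool parse_input(const std::string& input, MockExpressionNode& result)
    {
        typedef std::string::const_iterator It;

        InputGrammar<It, MockExpressionNode> g;   // no "()" -- with parentheses
                                                  // this declares a function

        It first = input.begin(), last = input.end();

        // phrase_parse(first, last, grammar, skipper, attribute):
        // the skipper (space) comes before the attribute (result)
        return boost::spirit::qi::phrase_parse(
            first, last, g, boost::spirit::ascii::space, result);
    }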

    Read the article

  • Parallelism in .NET – Part 14, The Different Forms of Task

    - by Reed
    Before discussing Task creation and actual usage in concurrent environments, I will briefly expand upon my introduction of the Task class and provide a short explanation of the distinct forms of Task.  The Task Parallel Library includes four distinct, though related, variations on the Task class. In my introduction to the Task class, I focused on the most basic version of Task.  This version of Task, the standard Task class, is most often used with an Action delegate.  This allows you to implement each task within the task decomposition as a single delegate. Typically, when using the new threading constructs in .NET 4 and the Task Parallel Library, we use lambda expressions to define anonymous methods.  The advantage of using a lambda expression is that it allows the Action delegate to directly use variables in the calling scope.  This eliminates the need to make separate Task classes for Action<T>, Action<T1,T2>, and all of the other Action<…> delegate types.  As an example, suppose we wanted to make a Task to handle the "Show Splash" task from our earlier decomposition.  Even if this task required parameters, such as a message to display, we could still use an Action delegate specified via a lambda:

    // Store this as a local variable
    string messageForSplashScreen = GetSplashScreenMessage();

    // Create our task
    Task showSplashTask = new Task( () =>
    {
        // We can use variables in our outer scope,
        // as well as methods scoped to our class!
        this.DisplaySplashScreen(messageForSplashScreen);
    });

    This provides a huge amount of flexibility.  We can use this single form of task for any task which performs an operation, provided the only information we need to track is whether the task has completed successfully or not.  This leads to my first observation: Use a Task with a System.Action delegate for any task for which no result is generated. This observation leads to an obvious corollary: we also need a way to define a task which generates a result.  The Task Parallel Library provides this via the Task<TResult> class. Task<TResult> subclasses the standard Task class, providing one additional feature – the ability to return a value back to the user of the task.  This is done by switching from providing an Action delegate to providing a Func<TResult> delegate.  If we decompose our problem, and we realize we have one task where its result is required by a future operation, this can be handled via Task<TResult>.  For example, suppose we want to make a task for our "Check for Update" task, we could do:

    Task<bool> checkForUpdateTask = new Task<bool>( () =>
    {
        return this.CheckWebsiteForUpdate();
    });

    Later, we would start this task, and perform some other work.
    At any point in the future, we could get the value from the Task<TResult>.Result property, which will cause our thread to block until the task has finished processing:

    // This uses Task<bool> checkForUpdateTask generated above...

    // Start the task, typically on a background thread
    checkForUpdateTask.Start();

    // Do some other work on our current thread
    this.DoSomeWork();

    // Discover, from our background task, whether an update is available
    // This will block until our task completes
    bool updateAvailable = checkForUpdateTask.Result;

    This leads me to my second observation: Use a Task<TResult> with a System.Func<TResult> delegate for any task which generates a result. Task and Task<TResult> provide a much cleaner alternative to the previous Asynchronous Programming design patterns in the .NET framework.  Instead of trying to implement IAsyncResult, and providing BeginXXX() and EndXXX() methods, implementing an asynchronous programming API can be as simple as creating a method that returns a Task or Task<TResult>.  The client side of the pattern also is dramatically simplified – the client can call a method, then either choose to call task.Wait() or use task.Result when it needs to wait for the operation's completion. While this provides a much cleaner model for future APIs, there is quite a bit of infrastructure built around the current Asynchronous Programming design patterns.  In order to provide a model to work with existing APIs, two other forms of Task exist.  There is a constructor for Task which takes an Action<Object> and a state parameter.  In addition, there is a constructor for creating a Task<TResult> which takes a Func<Object, TResult> as well as a state parameter.  When using these constructors, the state parameter is stored in the Task.AsyncState property. While these two overloads exist, and are usable directly, I strongly recommend avoiding this for new development.  The two forms of Task which take an object state parameter exist primarily for interoperability with traditional .NET Asynchronous Programming methodologies.  Using lambda expressions to capture variables from the scope of the creator is a much cleaner approach than using the untyped state parameters, since lambda expressions provide full type safety without introducing new variables.
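    For reference, the state-based overload mentioned above can be sketched as follows. The Task(Action<object>, object) constructor and the Task.AsyncState property are the real API; the values used here are illustrative only:

    // A minimal sketch of the legacy state-parameter overload -- shown only for
    // interop scenarios; prefer the lambda-based forms above for new code.
    object stateBag = "splash message";        // hypothetical state object

    Task legacyTask = new Task(state =>
    {
        // 'state' is the object passed as the second constructor argument
        Console.WriteLine("State was: {0}", state);
    }, stateBag);

    legacyTask.Start();

    // The same state object is also exposed through the task itself:
    Console.WriteLine(legacyTask.AsyncState); // prints "splash message"
    legacyTask.Wait();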

    Read the article

  • CodePlex Daily Summary for Friday, February 19, 2010

    CodePlex Daily Summary for Friday, February 19, 2010New ProjectsApplication Management Library: Application Management makes your application life easier. It will automatic do memory management, handle and log unhandled exceptions, profiling y...Audio Service - Play Wave Files From Windows Service: This is a windows service that Check a registry key, when the key is updated with a new wave file path the service plays the wave file.Aviamodels: 3d drawing AviamodelsControl of payment proofs program for Greek citizens: This is a program that is used for Greek citizens who want to keep track of their payment proofs.Cover Creator: Cover Creator gives you the possibility to create and print CD covers. Content of CD is taken from http://www.freedb.org/ or can be added/modyfied ...DevBoard: DevBoard is a webbased scrum tool that helps developers/team get a clear overview of the project progress. It's developed in C# and silverlight.Flex AdventureWorks: The is mostly a skunk-works application to help me get acclimated to CodePlex. The long term goal is to integrate a Flex UI with the AdventureWor...GRE Wordlist: An intuitive and customizable word list for GRE aspirants. Developed in Java using a word list similar to Barron's.Indexer: A desktop file Index and Search tool which allows you to choose a list of folders to index, and then search on later. It is based on Lucene.net an...Project Management Office (PMO) for SharePoint: Sample web part for the Code Mastery event in Boston, February 11, 2010.Restart SQL Audit Policy and Job: Resolve SQL 2008 Audit Network Connectivity Issue.Rounded Corners / DIV Container: The RoundedDiv round corners container is a skin-able, CSS compliant UI control. Select which corners should be rounded, collapse and expand the c...Silverlight Google Search Application: The Silverlight Google Search Application uses Google Search API and behaves like Internet Search Application with option to preview desired page i...Weather Forecast Control: MyWeather forecast control pulls up to date weather forecast information from The Weather Channel for your website.New ReleasesApplication Management Library: ApplicationManagement v1.0: First ReleaseAudio Service - Play Wave Files From Windows Service: Audio Service v1.0: This is a working version of the Audio Service. Please use as you need to.AutoMapper: 1.0.1 for Silverlight 3.0 Alpha: AutoMapper for Silverlight 3.0. Features not supported: IDataReader mapping IListSource mapping All other features are supported.Buzz Dot Net: Buzz Dot Net v.1.10219: Buzz Dot Net Library (Parser & Objects) + WPF Example (using MVVM & Threading)Canvas VSDOC Intellisense: V 1.0.0.0a: This release contains two JavaScript files: canvas-utils.js (can be referenced in both runtime and development environment) canvas-vsdoc.js (must ...Control of payment proofs program for Greek citizens: Payment Proofs: source codeCourier: Beta 2: Added Rx Framework support and re-factored how message registration and un-registration works Blog post explaining the updates and re-factoring c...Cover Creator: Initial release: This is initial stable release. For now only in Polish language.Employee Scheduler: Employee Scheduler 2.2: Small Bug found. Small total hour calculation bug. 
See http://employeescheduler.codeplex.com/WorkItem/View.aspx?WorkItemId=6059 Extract the files...EnhSim: Release v1.9.7.1: Release v1.9.7.1Implemented Dislodged Foreign Object trinket Whispering Fanged Skull now also procs off Flame shock dots You can toggle bloodlust o...Extend SmallBasic: Teaching Extensions v.007: added SimpleSquareTest added Tortoise.Approve() for virtual proctor how to use virtual proctor: change the path in the proctor.txt file (located i...FolderSize: FolderSize.Win32.1.0.1.0: FolderSize.Win32.1.0.1.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...GLB Virtual Player Builder: v0.4.0 Beta: Allows for user to import and use archetypes for building players. The archetypes are contained in the file "archetypes.xml". This file is editab...Google Map WebPart from SharePoint List: GMap Stable Release: GMap Stable ReleaseHenge3D Physics Library for XNA: Henge3D Source (2010-02 R2): Fixed a build error related to an assembly attribute in XBOX 360 builds. Tweaked the controls in the sample when targeting the 360. Reduced the...Indexer: Beta Release 1: Just the initial/rough cut.NukeCS: NukeCS 5.2.3 Source Code: update version to 5.2.3ODOS: ODOS STABLE 1.5.0: Thank you for your patience while we develop this version. Not that much has been added, though. Just doing some sub-conscious stuff to make life...PoshBoard: PoshBoard 3.0 Beta 1: Welcome to the first beta release of PoshBoard 3.0 ! IMPORTANT WARNING : this release is absolutly not feature complete and is error-prone. Okay, ...Restart SQL Audit Policy and Job: Restart SQL 2008 Audit Policy and Job: This folder contains three pieces of source code: Server Audit Status (Started).xml - Import this on-schedule policy into your server's Policy-Ba...SAL- Self Artificial Learning: Artificial Learning 2AQV Working Proof Of Concept: This is the Simulation proof of concept version that comes after the 1aq version. AQ stands for Anwering Questions.SharePoint 2010 Word Automation: SP 2010 Word Automation - Workflow Actions 1.1: This release includes two new custom workflow activities for SharePoint designer Convert Folder Convert Library More information about these new...SharePoint Outlook Connector: Version 1.0.1.1: Exception Logging has been improved.Sharpy: Sharpy 1.2 Alpha: This is the third Sharpy release. A change has been made to allow overriding the master page from the controller. The release contains the single ...Silverlight Google Search Application: SL Google Search App Alpha: This is just a first alpha version of the application, as it looks like when I uploaded it to CodePlex. The application works, requires Silverlight...Starter Kit Mytrip.Mvc.Entity: Mytrip.Mvc.Entity 1.0 RC: EF Membership UserManager FileManager Localization Captcha ClientValidation Theme CrossBrowser VS 2010 RC MVC 2 RC db MSSQL2008thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.6): This release is an update for WSCF.blue V1. Below are the bug fixes made since the V1.0.5 release: The data contract type filter was not including...TS3QueryLib.Net: TS3QueryLib.Net Version 0.18.13.0: Changelog Added overloads to all methods of QueryRunenr class handling permission tasks to allow passing of permission name instead of permissionid...Umbraco CMS: Umbraco 4.1 Beta 2: This is the second beta of Umbraco 4.1. Umbraco 4.1 is more advanced than ever, yet faster, lighter and simpler to use than ever. 
We, on behalf of...VCC: Latest build, v2.1.30218.0: Automatic drop of latest buildZack's Fiasco - Code Generated DAL: v1.2.4: Enhancements: SQL Server CRUD Stored Procedures added option for USE <db> added option to create or not create INSERT sprocs added option to cr...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Image Resizer Powertoy Clone for WindowsASP.NETMicrosoft SQL Server Community & SamplesDotNetNuke® Community EditionMost Active ProjectsRawrSharpyDinnerNow.netBlogEngine.NETjQuery Library for SharePoint Web ServicesNB_Store - Free DotNetNuke Ecommerce Catalog Modulepatterns & practices – Enterprise LibraryPHPExcelFacebook Developer ToolkitFluent Ribbon Control Suite

    Read the article

  • Creating an AJAX Accordion Menu

    - by jaullo
    Introduction: AJAX is a powerful addition to ASP.NET that provides new functionality in a simple and agile way. This post is dedicated to creating an accordion-type menu with AJAX. About the Control: The basic idea of this control is to provide a series of panels and to show and hide information inside those panels. Its use is very simple: we set each panel inside the accordion control, give each panel a Header, and, of course, set the content of each panel. To use the accordion control, you need the AJAX Control Toolkit. Know the basic properties of the accordion control: Before starting to develop an accordion control, we have to know its basic properties. Other accordion properties: FramesPerSecond - Number of frames per second used in the transition animations. RequireOpenedPane - Prevent closing the currently opened pane when its header is clicked (which ensures one pane is always open). The default value is true. SuppressHeaderPostbacks - Prevent the client-side click handlers of elements inside a header from firing (this is especially useful when you want to include hyperlinks in your headers for accessibility). DataSource - The data source to use. DataBind() must be called. DataSourceID - The ID of the data source to use. DataMember - The member to bind to when using a DataSourceID.

    AJAX Accordion Control Extender DataSource: The Accordion control extender of the AJAX Control Toolkit can also be used as a data-bound control. You can bind data retrieved from the database to the Accordion control. The Accordion control has properties such as DataSource and DataSourceID (which we saw above) that can be used to bind the data. A HeaderTemplate can be used to display the header or title for each pane generated by the Accordion control, a click on which will open or close the ContentTemplate generated by binding the data with the Accordion extender. When a DataSource is passed to the Accordion control, the DataBind method must also be called. The Accordion control bound with data auto-generates the expand/collapse panes along with their headers. This code represents the basic steps to bind the Accordion to a DataSource:

    Public Sub getCategories()
        Dim sqlConn As New SqlConnection(conString)
        sqlConn.Open()
        Dim sqlSelect As New SqlCommand("SELECT * FROM Categories", sqlConn)
        sqlSelect.CommandType = System.Data.CommandType.Text
        Dim sqlAdapter As New SqlDataAdapter(sqlSelect)
        Dim myDataset As New DataSet()
        sqlAdapter.Fill(myDataset)
        sqlConn.Close()
        Accordion1.DataSource = myDataset.Tables(0).DefaultView
        Accordion1.DataBind()
    End Sub

    Protected Sub Accordion1_ItemDataBound(sender As Object, _
        e As AjaxControlToolkit.AccordionItemEventArgs)
        If e.ItemType = AjaxControlToolkit.AccordionItemType.Content Then
            Dim sqlConn As New SqlConnection(conString)
            sqlConn.Open()
            Dim sqlSelect As New SqlCommand("SELECT productName " & _
                "FROM Products where categoryID = '" + _
                DirectCast(e.AccordionItem.FindControl("txt_categoryID"), _
                HiddenField).Value + "'", sqlConn)
            sqlSelect.CommandType = System.Data.CommandType.Text
            Dim sqlAdapter As New SqlDataAdapter(sqlSelect)
            Dim myDataset As New DataSet()
            sqlAdapter.Fill(myDataset)
            sqlConn.Close()
            Dim grd As New GridView()
            grd = DirectCast(e.AccordionItem.FindControl("GridView1"), GridView)
            grd.DataSource = myDataset
            grd.DataBind()
        End If
    End Sub

    In the above code we do two things: first, we run a SQL SELECT against the database to retrieve all rows from the Categories table; this data is used to generate the accordion's headers. Second, in the ItemDataBound handler we query the products for each category and bind them to the GridView inside the corresponding pane.
    <asp:ScriptManager ID="ScriptManager1" runat="server">
    </asp:ScriptManager>
    <ajaxToolkit:Accordion ID="Accordion1" runat="server" TransitionDuration="100"
        FramesPerSecond="200" FadeTransitions="true" RequireOpenedPane="false"
        OnItemDataBound="Accordion1_ItemDataBound" ContentCssClass="acc-content"
        HeaderCssClass="acc-header" HeaderSelectedCssClass="acc-selected">
        <HeaderTemplate>
            <%#DataBinder.Eval(Container.DataItem,"categoryName") %>
        </HeaderTemplate>
        <ContentTemplate>
            <asp:HiddenField ID="txt_categoryID" runat="server"
                Value='<%#DataBinder.Eval(Container.DataItem,"categoryID") %>' />
            <asp:GridView ID="GridView1" runat="server" RowStyle-BackColor="#ededed"
                RowStyle-HorizontalAlign="Left" AutoGenerateColumns="false"
                GridLines="None" CellPadding="2" CellSpacing="2" Width="300px">
                <Columns>
                    <asp:TemplateField HeaderStyle-HorizontalAlign="Left"
                        HeaderText="Product Name" HeaderStyle-BackColor="#d1d1d1"
                        HeaderStyle-ForeColor="#777777">
                        <ItemTemplate>
                            <%#DataBinder.Eval(Container.DataItem,"productName") %>
                        </ItemTemplate>
                    </asp:TemplateField>
                </Columns>
            </asp:GridView>
        </ContentTemplate>
    </ajaxToolkit:Accordion>

    Here, we use <%#DataBinder.Eval(Container.DataItem,"categoryName") %> to bind the accordion header to categoryName, so one header is generated for each element found in the database.

    Creating a basic accordion control: As we know, to use any of the AJAX components there must be a registered ScriptManager on our page, which is responsible for managing our controls. So the first thing we will do is create our ScriptManager:

    <asp:ScriptManager ID="ScriptManager1" runat="server"></asp:ScriptManager>

    Then we define our accordion element and establish some basic properties:

    <cc1:Accordion ID="AccordionCtrl" runat="server" SelectedIndex="0"
        HeaderCssClass="accordionHeader" ContentCssClass="accordionContent"
        AutoSize="None" FadeTransitions="true" TransitionDuration="250"
        FramesPerSecond="40">

    Inside it we must declare the accordion's panes; these panes will contain the information or links that we want to show:

    <Panes>
        <cc1:AccordionPane ID="AccordionPane0" runat="server">
            <Header>Matenimiento</Header>
            <Content>
                <li><a href="mypagina.aspx">My página de prueba</a></li>
            </Content>
        </cc1:AccordionPane>

    To finish, we close the panes collection and the accordion itself:

    </Panes>
    </cc1:Accordion>

    The complete example should look like this:

    <asp:ScriptManager ID="ScriptManager1" runat="server"></asp:ScriptManager>
    <cc1:Accordion ID="AccordionCtrl" runat="server" SelectedIndex="0"
        HeaderCssClass="accordionHeader" ContentCssClass="accordionContent"
        AutoSize="None" FadeTransitions="true" TransitionDuration="250"
        FramesPerSecond="40">
        <Panes>
            <cc1:AccordionPane ID="AccordionPane0" runat="server">
                <Header>Matenimiento</Header>
                <Content>
                    <li><a href="mypagina.aspx">My página de prueba</a></li>
                </Content>
            </cc1:AccordionPane>
        </Panes>
    </cc1:Accordion>

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market-leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements.

    General Enhancements
    This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities: Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control. Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate whether the appropriate patches, privileges, setups, and profile options have been configured. This reduces the setup and configuration time needed to get up and operational.

    Lifecycle Automation Enhancements
    Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.

    Change Management: Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate whether the appropriate patches, privileges, setups, and profile options have been configured, reducing the setup and configuration time needed to get up and operational.

    Customization Manager: Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system. Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings & E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his/her own mappings. Users can use public mappings, but cannot edit or change settings. Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page. Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages. Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction.
    OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances. Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files), providing better granularity when managing PLL objects. Enhanced Standard Checker: Provides a greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.). HTML Package Readme: The package Readme is in HTML format and includes the file listing. Advanced Package Search Capabilities: The ability to utilize more criteria within the advanced package search (that is, Public, Last Updated By, File Source Mapping, and E-Business Suite Mapping). Enhanced Package Build Notifications: More detailed information on the results of a package build process, and better, more detailed troubleshooting guidance in the event of build failures.

    Patch Manager: Staged Patches: Ability to run Patch Manager with no external internet access. Customers can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. This supports highly secured production environments that prohibit external internet connections. Support for Superseded Patches: An automatic check for superseded patches allows users to easily add them into the Patch Run, for more comprehensive and correct Patch Runs. This removes many manual and laborious tasks and frees up Apps DBAs for higher value-added work. Automatic Primary Node Identification: Users can now specify which is the "primary node" (that is, which node hosts the Shared APPL_TOP) during the Patch Run interview process; available for Release 12 only.

    Setup Manager: Preview Extract Results: Ability to execute an extract in "proof mode" and examine the query results to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering, this feature can provide better and more accurate fine-tuning of extracts. Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. This leverages the Setup Manager repository to access extracts that have been uploaded, allowing customers to reuse uploaded extracts to provision new instances. Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. This leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data, allowing customers to reuse existing extracts to provision new instances and enabling comparative historical reporting of identical APIs executed at different times. Support for BR100 Formats: Setup Manager can now automatically produce reports in the BR100 format, giving native support for this industry-standard format. Concurrent Manager API Support: General Foundation now provides an API for management of Concurrent Manager configuration data, with the ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; there is no need to redefine the Concurrent Managers.

    User Experience Management Enhancements
    Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques.
    This latest release of the management suite includes numerous improvements in real user monitoring support. KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values (by default, two decimal places). It is now also possible to view KPI numerator and denominator information, and to have it available for export. Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note that this is only available for XPath-based (not literal) message contents. Data Export: The enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but within the Reporter database (it is possible to configure an alternative database for its storage), and access to the export data is through SQL. With this enhancement, it is now easier than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources. SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes. Performance Improvements: The dashboard facility has been enhanced to support the parallel loading of items; in the case of dashboards containing large numbers of items, this can result in a significant performance improvement. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility (the default is the last hour), and performance has been improved when querying the all sessions group.

    Technical Prerequisites, Download and Installation Instructions
    The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress:
    * Oracle Solaris on SPARC (64-bit) (9, 10)
    * HP-UX Itanium (11.23, 11.31)
    * HP-UX PA-RISC (64-bit) (11.23, 11.31)
    * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • Random movement of wandering monsters in x & y axis in LibGDX

    - by Vishal Kumar
    I am making a simple top-down RPG game in LibGDX. What I want is for the enemies to wander here and there along the x and y axes at certain intervals, so that it looks natural that they are guarding something. I spent several hours on this but could not achieve what I want. After a long time of coding, I came up with the code below. What I observed is that when enemies come to the start or end of the map's x or y range, they start flickering at random intervals; sometimes they move nicely, sometimes they flicker for a long time.

    public class Enemy extends Sprite {
        public float MAX_VELOCITY = 0.05f;
        public static final int MOVING_LEFT = 0;
        public static final int MOVING_RIGHT = 1;
        public static final int MOVING_UP = 2;
        public static final int MOVING_DOWN = 3;
        public static final int HORIZONTAL_GUARD = 0;
        public static final int VERTICAL_GUARD = 1;
        public static final int RANDOM_GUARD = 2;
        private float startTime = System.nanoTime();
        private static float SECONDS_TIME = 0;
        private boolean randomDecider;
        public int enemyType;
        public static final float width = 30 * World.WORLD_UNIT;
        public static final float height = 32 * World.WORLD_UNIT;
        public int state;
        public float stateTime;
        public boolean visible;
        public boolean dead;

        public Enemy(float x, float y, int enemyType) {
            super(x, y);
            state = MOVING_LEFT;
            this.enemyType = enemyType;
            stateTime = 0;
            visible = true;
            dead = false;
            boolean random = Math.random() >= 0.5f ? true : false;
            if (enemyType == HORIZONTAL_GUARD) {
                if (random) velocity.x = -MAX_VELOCITY;
                else velocity.x = MAX_VELOCITY;
            }
            if (enemyType == VERTICAL_GUARD) {
                if (random) velocity.y = -MAX_VELOCITY;
                else velocity.y = MAX_VELOCITY;
            }
            if (enemyType == RANDOM_GUARD) {
                //if(random) velocity.x = -MAX_VELOCITY;
                //else velocity.y = MAX_VELOCITY;
            }
        }

        public void update(Enemy e, float deltaTime) {
            super.update(deltaTime);
            e.stateTime += deltaTime;
            e.position.add(velocity);
            // This updates the animation for the enemy's movement direction.
            updateDirections();
            // The movement methods are called depending on the type of the enemy.
            if (e.enemyType == HORIZONTAL_GUARD) guardHorizontally();
            if (e.enemyType == VERTICAL_GUARD) guardVertically();
            if (e.enemyType == RANDOM_GUARD) guardRandomly();
            //quadrantMovement(e, deltaTime);
        }

        private void guardHorizontally() {
            if (position.x <= 0) {
                velocity.x = MAX_VELOCITY;
                velocity.y = 0;
            } else if (position.x >= World.mapWidth - width) {
                velocity.x = -MAX_VELOCITY;
                velocity.y = 0;
            }
        }

        private void guardVertically() {
            if (position.y <= 0) {
                velocity.y = MAX_VELOCITY;
                velocity.x = 0;
            } else if (position.y >= World.mapHeight - height) {
                velocity.y = -MAX_VELOCITY;
                velocity.x = 0;
            }
        }

        private void guardRandomly() {
            if (System.nanoTime() - startTime >= 1000000000) {
                SECONDS_TIME++;
                if (SECONDS_TIME % 5 == 0)
                    randomDecider = Math.random() >= 0.5f ? true : false;
                if (SECONDS_TIME >= 30)
                    SECONDS_TIME = 0;
                startTime = System.nanoTime();
            }
            if (SECONDS_TIME <= 30) {
                if (randomDecider && position.x >= 0)
                    velocity.x = -MAX_VELOCITY;
                else {
                    if (position.x < World.mapWidth - width)
                        velocity.x = MAX_VELOCITY;
                    else
                        velocity.x = -MAX_VELOCITY;
                }
                velocity.y = 0;
            } else {
                if (randomDecider && position.y > 0)
                    velocity.y = -MAX_VELOCITY;
                else
                    velocity.y = MAX_VELOCITY;
                velocity.x = 0;
            }
            /*
            // This is essential so as to keep the enemies inside the boundary of the map
            if (position.x <= 0) {
                velocity.x = MAX_VELOCITY;
                //velocity.y= 0;
            } else if (position.x >= World.mapWidth - width) {
                velocity.x = -MAX_VELOCITY;
                //velocity.y= 0;
            } else if (position.y <= 0) {
                velocity.y = MAX_VELOCITY;
                //velocity.x= 0;
            } else if (position.y >= World.mapHeight - height) {
                velocity.y = -MAX_VELOCITY;
                //velocity.x= 0;
            }
            */
        }

        private void updateDirections() {
            if (velocity.x > 0) state = MOVING_RIGHT;
            else if (velocity.x < 0) state = MOVING_LEFT;
            else if (velocity.y > 0) state = MOVING_UP;
            else if (velocity.y < 0) state = MOVING_DOWN;
        }

        public Rectangle getBounds() {
            return new Rectangle(position.x, position.y, width, height);
        }

        private void quadrantMovement(Enemy e, float deltaTime) {
            int temp = e.getEnemyQuadrant(e.position.x, e.position.y);
            boolean random = Math.random() >= 0.5f ? true : false;
            switch (temp) {
                case 1: velocity.x = MAX_VELOCITY; break;
                case 2: velocity.x = MAX_VELOCITY; break;
                case 3: velocity.x = -MAX_VELOCITY; break;
                case 4: velocity.x = -MAX_VELOCITY; break;
                default:
                    if (random) velocity.x = MAX_VELOCITY;
                    else velocity.y = -MAX_VELOCITY;
            }
        }

        public float getDistanceFromPoint(float p1, float p2) {
            Vector2 v1 = new Vector2(p1, p2);
            return position.dst(v1);
        }

        private int getEnemyQuadrant(float x, float y) {
            Rectangle enemyQuad = new Rectangle(x, y, 30, 32);
            if (ScreenQuadrants.getQuad1().contains(enemyQuad)) return 1;
            if (ScreenQuadrants.getQuad2().contains(enemyQuad)) return 2;
            if (ScreenQuadrants.getQuad3().contains(enemyQuad)) return 3;
            if (ScreenQuadrants.getQuad4().contains(enemyQuad)) return 4;
            return 0;
        }
    }

    Is there a better way of doing this? I am new to game development and shall be very grateful for any help or reference.
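    One common pattern for this kind of wandering, sketched below as members of the question's Enemy class (position, velocity, MAX_VELOCITY, World.mapWidth/mapHeight, width and height are assumed to exist as above), is to pick a new random direction only on a fixed timer and to clamp the position at the map edges, flipping the velocity exactly once on contact. The per-frame re-deciding in guardRandomly() is what produces the flicker, because the timer branch and the boundary checks fight each other every frame:

    import java.util.Random;

    // Minimal sketch of timer-based wandering with edge clamping.
    private final Random rng = new Random();
    private float directionTimer = 0f;

    private void wander(float deltaTime) {
        directionTimer -= deltaTime;
        if (directionTimer <= 0f) {
            // Pick a new axis-aligned direction every 2-5 seconds.
            directionTimer = 2f + rng.nextFloat() * 3f;
            int dir = rng.nextInt(4);
            velocity.x = (dir == 0) ? MAX_VELOCITY : (dir == 1) ? -MAX_VELOCITY : 0f;
            velocity.y = (dir == 2) ? MAX_VELOCITY : (dir == 3) ? -MAX_VELOCITY : 0f;
        }

        // Clamp to the map and bounce exactly once instead of re-deciding
        // the direction every frame at the boundary.
        if (position.x < 0) {
            position.x = 0;
            velocity.x = MAX_VELOCITY;
        }
        if (position.x > World.mapWidth - width) {
            position.x = World.mapWidth - width;
            velocity.x = -MAX_VELOCITY;
        }
        if (position.y < 0) {
            position.y = 0;
            velocity.y = MAX_VELOCITY;
        }
        if (position.y > World.mapHeight - height) {
            position.y = World.mapHeight - height;
            velocity.y = -MAX_VELOCITY;
        }
    }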

    Read the article

  • ASP.NET MVC Html.ValidationSummary(true) does not display model errors

    - by msi
    I have a problem with Html.ValidationSummary. I don't want to display property errors in the ValidationSummary, but when I set Html.ValidationSummary(true) it does not display error messages from ModelState. When some exception occurs in the controller action at the line MembersManager.RegisterMember(member); the catch section adds an error to the ModelState: ModelState.AddModelError("error", ex.Message); But the ValidationSummary does not display this error message. When I set Html.ValidationSummary(false) all messages are displayed, but then I also get the property errors I don't want. How can I fix this problem? Here is the code I'm using:

    Model:

    public class Member
    {
        [Required(ErrorMessage = "*")]
        [DisplayName("Login:")]
        public string Login { get; set; }

        [Required(ErrorMessage = "*")]
        [DataType(DataType.Password)]
        [DisplayName("Password:")]
        public string Password { get; set; }

        [Required(ErrorMessage = "*")]
        [DataType(DataType.Password)]
        [DisplayName("Confirm Password:")]
        public string ConfirmPassword { get; set; }
    }

    Controller:

    [HttpPost]
    public ActionResult Register(Member member)
    {
        try
        {
            if (!ModelState.IsValid)
                return View();

            MembersManager.RegisterMember(member);
        }
        catch (Exception ex)
        {
            ModelState.AddModelError("error", ex.Message);
            return View(member);
        }
    }

    View:

    <% using (Html.BeginForm("Register", "Members", FormMethod.Post,
        new { enctype = "multipart/form-data" })) {%>
        <p>
            <%= Html.LabelFor(model => model.Login)%>
            <%= Html.TextBoxFor(model => model.Login)%>
            <%= Html.ValidationMessageFor(model => model.Login)%>
        </p>
        <p>
            <%= Html.LabelFor(model => model.Password)%>
            <%= Html.PasswordFor(model => model.Password)%>
            <%= Html.ValidationMessageFor(model => model.Password)%>
        </p>
        <p>
            <%= Html.LabelFor(model => model.ConfirmPassword)%>
            <%= Html.PasswordFor(model => model.ConfirmPassword)%>
            <%= Html.ValidationMessageFor(model => model.ConfirmPassword)%>
        </p>
        <div>
            <input type="submit" value="Create" />
        </div>
        <%= Html.ValidationSummary(true)%>
    <% } %>
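    For reference: ValidationSummary(true) renders only model-level errors, i.e. entries added to ModelState with an empty key; a key like "error" is treated as a property name and filtered out. A minimal sketch of the usual fix, changing only the catch block above:

    catch (Exception ex)
    {
        // Model-level errors must use an empty key to survive
        // Html.ValidationSummary(excludePropertyErrors: true)
        ModelState.AddModelError(string.Empty, ex.Message);
        return View(member);
    }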

    Read the article

  • System.UriFormatException: Invalid URI: The hostname could not be parsed.

    - by Shane
    All of a sudden I'm getting the following error on my website. It doesn't access a database; it's just a simple website using .NET 2.0. I did recently apply the available Windows Server 2003 service packs. Could that have changed things? I should add that the error randomly comes and goes, and has been doing so today and yesterday. I leave it for 5 minutes and the error is gone.

    Server Error in '/' Application. Invalid URI: The hostname could not be parsed.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.UriFormatException: Invalid URI: The hostname could not be parsed.
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:
    [UriFormatException: Invalid URI: The hostname could not be parsed.]
    System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind) +5367536
    System.Uri.CreateUri(Uri baseUri, String relativeUri, Boolean dontEscape) +31
    System.Uri..ctor(Uri baseUri, String relativeUri) +34
    System.Net.HttpWebRequest.CheckResubmit(Exception& e) +5300867
    [WebException: Cannot handle redirect from HTTP/HTTPS protocols to other dissimilar ones.]
    System.Net.HttpWebRequest.GetResponse() +5314029
    System.Xml.XmlDownloadManager.GetNonFileStream(Uri uri, ICredentials credentials) +69
    System.Xml.XmlDownloadManager.GetStream(Uri uri, ICredentials credentials) +3929371
    System.Xml.XmlUrlResolver.GetEntity(Uri absoluteUri, String role, Type ofObjectToReturn) +54
    System.Xml.XmlTextReaderImpl.OpenUrlDelegate(Object xmlResolver) +74
    System.Threading.CompressedStack.runTryCode(Object userData) +70
    System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) +0
    System.Threading.CompressedStack.Run(CompressedStack compressedStack, ContextCallback callback, Object state) +108
    System.Xml.XmlTextReaderImpl.OpenUrl() +186
    System.Xml.XmlTextReaderImpl.Read() +208
    System.Xml.XmlLoader.Load(XmlDocument doc, XmlReader reader, Boolean preserveWhitespace) +112
    System.Xml.XmlDocument.Load(XmlReader reader) +108
    System.Web.UI.WebControls.XmlDataSource.PopulateXmlDocument(XmlDocument document, CacheDependency& dataCacheDependency, CacheDependency& transformCacheDependency) +303
    System.Web.UI.WebControls.XmlDataSource.GetXmlDocument() +153
    System.Web.UI.WebControls.XmlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments) +29
    System.Web.UI.WebControls.BaseDataList.GetData() +39
    System.Web.UI.WebControls.DataList.CreateControlHierarchy(Boolean useDataSource) +264
    System.Web.UI.WebControls.BaseDataList.OnDataBinding(EventArgs e) +55
    System.Web.UI.WebControls.BaseDataList.DataBind() +75
    System.Web.UI.WebControls.BaseDataList.EnsureDataBound() +55
    System.Web.UI.WebControls.BaseDataList.CreateChildControls() +65
    System.Web.UI.Control.EnsureChildControls() +97
    System.Web.UI.Control.PreRenderRecursiveInternal() +53
    System.Web.UI.Control.PreRenderRecursiveInternal() +202
    System.Web.UI.Control.PreRenderRecursiveInternal() +202
    System.Web.UI.Control.PreRenderRecursiveInternal() +202
    System.Web.UI.Control.PreRenderRecursiveInternal() +202
    System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +4588

    Read the article

  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
    A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET applications, from WebForms apps to HttpModules and HttpHandlers that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:

    /// <summary>
    /// Determines if GZip is supported
    /// </summary>
    /// <returns></returns>
    public static bool IsGZipSupported()
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (!string.IsNullOrEmpty(AcceptEncoding) &&
            (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
            return true;
        return false;
    }

    /// <summary>
    /// Sets up the current page or handler to use GZip through a Response.Filter
    /// IMPORTANT:
    /// You have to call this method before any output is generated!
    /// </summary>
    public static void GZipEncodePage()
    {
        HttpResponse Response = HttpContext.Current.Response;
        if (IsGZipSupported())
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (AcceptEncoding.Contains("gzip"))
            {
                Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                      System.IO.Compression.CompressionMode.Compress);
                Response.Headers.Remove("Content-Encoding");
                Response.AppendHeader("Content-Encoding", "gzip");
            }
            else
            {
                Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                      System.IO.Compression.CompressionMode.Compress);
                Response.Headers.Remove("Content-Encoding");
                Response.AppendHeader("Content-Encoding", "deflate");
            }
        }

        // Allow proxy servers to cache encoded and unencoded versions separately.
        // Note: Vary must name the *request* header the response varies by, so this
        // should be Accept-Encoding rather than Content-Encoding.
        Response.AppendHeader("Vary", "Accept-Encoding");
    }

    The first method checks whether the client sending the request includes the accept-encoding for either gzip or deflate, and if it does it returns true. The second function uses IsGZipSupported() to decide whether it should encode content and uses a Response.Filter to do its job. Basically, response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with, but the two .NET filter streams for GZip and Deflate compression make this a snap to implement. In my old code, and even now in MVC, I can always do:

    public ActionResult List(string keyword=null, int category=0)
    {
        WebUtils.GZipEncodePage();
        …
    }

    to encode my content. And that works just fine.

    The proper way: Create an ActionFilterAttribute
    However, in MVC this sort of thing is typically better handled by an ActionFilter which can be applied with an attribute. So to be all prim and proper I created a CompressContentAttribute ActionFilter that incorporates those two helper methods and looks like this:

    /// <summary>
    /// Attribute that can be added to controller methods to force content
    /// to be GZip encoded if the client supports it
    /// </summary>
    public class CompressContentAttribute : ActionFilterAttribute
    {
        /// <summary>
        /// Override to compress the content that is generated by
        /// an action method.
        /// </summary>
        /// <param name="filterContext"></param>
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            GZipEncodePage();
        }

        // IsGZipSupported() and GZipEncodePage() are the same two static helpers
        // shown above, copied into the attribute class so it is self-contained.
    }

    It's basically the same code wrapped into an ActionFilter attribute, which intercepts MVC requests to controller methods and lets you hook up logic before and after the methods have executed. Here I want to override OnActionExecuting(), which fires before the controller action is executed. With the CompressContentAttribute created, it can now be applied either to the controller as a whole:

    [CompressContent]
    public class ClassifiedsController : ClassifiedsBaseController
    {
        …
    }

    or to one of the action methods:

    [CompressContent]
    public ActionResult List(string keyword=null, int category=0)
    {
        …
    }

    The former applies compression to every action method, while the latter is selective and only applies it to the individual action method. Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content and that's where others are likely to expect to set options like this. In fact, you have a bit more control with the utility function because you can conditionally apply it in code, but this is actually much less likely in MVC applications than old WebForms apps since controller methods tend to be more focused.

    Compression Caveats
    Http compression is very cool and pretty easy to implement in ASP.NET, but you have to be careful with it, especially if your content might get transformed or redirected inside of ASP.NET. A good example is if an error occurs while a compression filter is applied: ASP.NET errors don't clear the filter, but do clear the Response headers, which results in some nasty garbage because the compressed content no longer matches the headers. Another issue is caching, which has to account for all the possible compressed and non-compressed ways that the content is served. Basically, compressed content and caching don't mix well. I wrote about several of these issues in an old blog post and I recommend you take a quick peek before diving into making every bit of output GZip encoded.
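    One way to handle the error caveat above, sketched here as a Global.asax handler rather than taken from the article: when an error fires, drop the compression filter so the error page isn't written through a stream the headers no longer describe.

    // Global.asax.cs -- sketch; assumes the filter was attached by GZipEncodePage()
    protected void Application_Error(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        app.Context.Response.Filter = null;  // reset so the error page renders cleanly
    }

    And if the project is on ASP.NET MVC 3 or later, the attribute can also be registered globally instead of per controller or action:

    // Application_Start() -- sketch, MVC 3+ only
    GlobalFilters.Filters.Add(new CompressContentAttribute());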
    None of these are show stoppers, but you have to be aware of the issues. Related Posts: GZip Compression with ASP.NET Content · ASP.NET GZip Encoding Caveats. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC.

    Read the article

  • Conversation as User Assistance

    - by ultan o'broin
    Applications User Experience members (Erika Web, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address, "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further.

    Anne Gentle (left) with Applications User Experience Senior Director Laurie Pattison

    In Anne's book, Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include: writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help; the Microsoft Development Network Community Center; the former Sun (now Oracle) OpenDS wiki, NetBeans Ruby, and other community approaches that engage diverse audiences using screencasts, wikis, and blogs; Cisco's customer support wiki and EMC's community, as well as Symantec's and Intuit's approaches; and the efforts of Ubuntu, Mozilla, and the FLOSS community generally.

    Adobe writer Bob Bringhurst's blog

    Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes reach out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. It is largely a case now of "inventing the job while you're doing it, instead of being hired for it," Anne said.
    There is less emphasis on formal titles. Anne mentions her own title of "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish." Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, and rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization! To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support. Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model. The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford to ignore the model's possibilities. The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced.
It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • FullCalendar events from asp.net ASHX page not displaying

    - by Steve Howard
    Hi, I have been trying to add some events to fullCalendar using a call to an ASHX page, with the following code. Page script:

    <script type="text/javascript">
        $(document).ready(function() {
            $('#calendar').fullCalendar({
                header: {
                    left: 'prev,next today',
                    center: 'title',
                    right: 'month, agendaWeek,agendaDay'
                },
                events: 'FullCalendarEvents.ashx'
            })
        });

    C# code:

    public class EventsData
    {
        public int id { get; set; }
        public string title { get; set; }
        public string start { get; set; }
        public string end { get; set; }
        public string url { get; set; }
        public int accountId { get; set; }
    }

    public class FullCalendarEvents : IHttpHandler
    {
        private static List<EventsData> testEventsData = new List<EventsData>
        {
            new EventsData {accountId = 0, title = "test 1", start = DateTime.Now.ToString("yyyy-MM-dd"), id = 0},
            new EventsData {accountId = 1, title = "test 2", start = DateTime.Now.AddHours(2).ToString("yyyy-MM-dd"), id = 2}
        };

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "application/json.";
            context.Response.Write(GetEventData());
        }

        private string GetEventData()
        {
            List<EventsData> ed = testEventsData;
            StringBuilder sb = new StringBuilder();
            sb.Append("[");
            foreach (var data in ed)
            {
                sb.Append("{");
                sb.Append(string.Format("id: {0},", data.id));
                sb.Append(string.Format("title:'{0}',", data.title));
                sb.Append(string.Format("start: '{0}',", data.start));
                sb.Append("allDay: false");
                sb.Append("},");
            }
            sb.Remove(sb.Length - 1, 1);
            sb.Append("]");
            return sb.ToString();
        }
    }

    The ASHX page gets called and returns the following data:

    [{id: 0,title:'test 1',start: '2010-06-07',allDay: false},{id: 2,title:'test 2',start: '2010-06-07',allDay: false}]

    The call to the ASHX page does not display any results, but if I paste the returned values directly into the events option it displays correctly. I have been trying to get this code to work for a day now and I can't see why the events are not getting set. Any help or advice on how I can get this to work would be appreciated. Steve
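    Two details worth checking in the handler above, sketched against the same class names: the content type has a stray trailing dot ("application/json."), and hand-built JSON is fragile compared to a serializer. Assuming .NET 3.5 or later, JavaScriptSerializer (System.Web.Script.Serialization) can emit the array directly:

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/json";  // no trailing dot

        // Serializes the list to a proper JSON array with quoted property names.
        var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
        context.Response.Write(serializer.Serialize(testEventsData));
    }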

    Read the article

  • NHibernate.NHibernateException: Unable to locate row for retrieval of generated properties: [MyNames

    - by Brad Heller
    It looks like all of my mappings are compiling correctly, and I'm able to get a valid session from the session factory. However, when I try ISession.SaveOrUpdate(obj), I get this exception. Can anyone please help point me in the right direction?

        private Configuration configuration;
        protected Configuration Configuration
        {
            get
            {
                configuration = configuration ?? GetNewConfiguration();
                return configuration;
            }
        }

        protected ISession GetNewSession()
        {
            var sessionFactory = Configuration.BuildSessionFactory();
            var session = sessionFactory.OpenSession();
            return session;
        }

        [TestMethod]
        public void TestSessionSave()
        {
            var company = new Company();
            company.Name = "Test Company 1";
            DateTime savedAt = DateTime.Now;
            using (var session = GetNewSession())
            {
                try
                {
                    session.SaveOrUpdate(company);
                }
                catch (Exception e)
                {
                    throw e;
                }
            }
            Assert.IsTrue((company.CreationDate != null && company.CreationDate > savedAt), "Company was saved, creation date was set.");
        }

    For those that might be interested, here is my mapping for this class:

        <!-- Company -->
        <class name="MyNamespace.Company,MyLibrary" table="Companies">
            <id name="Id" column="Id">
                <generator class="native" />
            </id>
            <property name="ExternalId" column="GUID" generated="insert" />
            <property name="Name" column="Name" type="string" />
            <property name="CreationDate" column="CreationDate" generated="insert" />
            <property name="UpdatedDate" column="UpdatedDate" generated="always" />
        </class>
        <!-- End Company -->

    Finally, here is my config -- I'm just connecting to a SQL Server CE instance for testing purposes:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
            <session-factory name="">
                <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
                <property name="connection.driver_class">NHibernate.Driver.SqlServerCeDriver</property>
                <property name="connection.connection_string">Data Source="D:\Build\MyProject\Source\MyProject.UnitTests\MyProject.TestDatabase.sdf"</property>
                <property name="dialect">NHibernate.Dialect.MsSqlCeDialect</property>
                <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>
            </session-factory>
        </hibernate-configuration>

    Thanks!
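    One avenue worth exploring, sketched below as an assumption rather than a confirmed fix: generated="insert" and generated="always" tell NHibernate that the database itself populates those columns (via defaults or triggers), so after the INSERT it issues a SELECT to read them back; "unable to locate row" means that SELECT found nothing. SQL Server Compact does not support triggers, so unless column defaults fill in GUID, CreationDate, and UpdatedDate, the read-back has nothing to find. A minimal sketch that drops the generated attributes and sets the values in code, inside a transaction:

        // Mapping sketch: plain properties, nothing database-generated
        //   <property name="ExternalId" column="GUID" />
        //   <property name="CreationDate" column="CreationDate" />
        //   <property name="UpdatedDate" column="UpdatedDate" />

        var company = new Company
        {
            Name = "Test Company 1",
            ExternalId = Guid.NewGuid(),  // assumes the property is a Guid
            CreationDate = DateTime.Now,
            UpdatedDate = DateTime.Now
        };

        using (var session = GetNewSession())
        using (var tx = session.BeginTransaction())
        {
            session.SaveOrUpdate(company);
            tx.Commit(); // flushes the INSERT; no generated-property SELECT required
        }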

    Read the article

  • ActionScript/Flex ArrayCollection of Number objects to Java Collection<Long> using BlazeDS

    - by Justin
    Hello, I am using Flex 3 and making a call through a RemoteObject to a Java 1.6 method, exposed with BlazeDS and Spring 2.5.5 integration over a SecureAMFChannel. The ActionScript is as follows (this code is an example of the real thing, which is on a separate dev network):

        import com.adobe.cairngorm.business.ServiceLocator;
        import mx.collections.ArrayCollection;
        import mx.rpc.remoting.RemoteObject;
        import mx.rpc.IResponder;

        public class MyClass implements IResponder
        {
            private var service:RemoteObject = ServiceLocator.getInstance().getRemoteObject("myService");

            [ArrayElementType("Number")]
            private var myArray:ArrayCollection;

            public function MyClass()
            {
                var id1:Number = 1;
                var id2:Number = 2;
                var id3:Number = 3;
                myArray = new ArrayCollection([id1, id2, id3]);
                getData(myArray);
            }

            public function getData(myArrayParam:ArrayCollection):void
            {
                var token:AsyncToken = service.getData(myArrayParam);
                token.addResponder(this.responder); // Assume a responder implementation exists and works
            }
        }

    This will make a call, once created, to the service Java class exposed through BlazeDS (assume the mechanics work, because they do for all other calls not involving Collection parameters). My Java service class looks like this:

        public class MyService
        {
            public Collection<DataObjectPOJO> getData(Collection<Long> myArrayParam)
            {
                // The following line is never executed and throws an exception
                for (Long l : myArrayParam)
                {
                    System.out.println(l);
                }
            }
        }

    The exception thrown is a ClassCastException saying that a java.lang.Integer cannot be cast to a java.lang.Long. I worked around this by looping through the collection as Object, checking whether each element is an Integer, casting it, calling .longValue() on it, and adding the result to a temporary ArrayList. Yuk. The big problem is that my application is supposed to handle records in the billions from a DB, and the id will overflow the 2.147 billion limit of an integer. I would love to have BlazeDS, or the JavaAdapter in it, translate the ActionScript Number to a Long as specified in the method signature. I hate that, even though I use the generic, the underlying element type of the collection is an Integer. If this was straight Java, it wouldn't compile. Any ideas are appreciated. Solutions are even better! :)

    Read the article

  • DropDownList and SelectListItem Array Item Updates in MVC

    - by Rick Strahl
    So I ran into an interesting behavior today as I deployed my first MVC 4 app tonight. I have a list form with a filter drop-down that allows selection of categories. This list is static and rarely changes, so rather than loading the items from the database each time, I load them once and cache the actual SelectListItem[] array in a static property. However, when we put the site online tonight we immediately noticed that the drop-down list was coming up with pre-set values that changed randomly. It didn't take me long to trace this back to the cached SelectListItem[] array. Clearly the list was getting updated - apparently through the model binding process in the selection postback.

    To clarify the scenario, here's the drop-down list definition in the Razor view:

        @Html.DropDownListFor(mod => mod.QueryParameters.Category, Model.CategoryList, "All Categories")

    where Model.CategoryList gets set with:

        [HttpPost]
        [CompressContent]
        public ActionResult List(MessageListViewModel model)
        {
            InitializeViewModel(model);

            busEntry entryBus = new busEntry();
            var entries = entryBus.GetEntryList(model.QueryParameters);

            model.Entries = entries;
            model.DisplayMode = ApplicationDisplayModes.Standard;
            model.CategoryList = AppUtils.GetCachedCategoryList();

            return View(model);
        }

    The AppUtils.GetCachedCategoryList() method gets the cached list, or loads it on first access. The code that loads the list is housed in a Web utility class. The method looks like this:

        /// <summary>
        /// Returns a static category list that is cached
        /// </summary>
        /// <returns></returns>
        public static SelectListItem[] GetCachedCategoryList()
        {
            if (_CategoryList != null)
                return _CategoryList;

            lock (_SyncLock)
            {
                if (_CategoryList != null)
                    return _CategoryList;

                var catBus = new busCategory();
                var categories = catBus.GetCategories().ToList();

                // Turn list into a SelectItem list
                var catList = categories
                    .Select(cat => new SelectListItem()
                    {
                        Text = cat.Name,
                        Value = cat.Id.ToString()
                    })
                    .ToList();

                catList.Insert(0, new SelectListItem()
                {
                    Value = ((int)SpecialCategories.AllCategoriesButRealEstate).ToString(),
                    Text = "All Categories except Real Estate"
                });
                catList.Insert(1, new SelectListItem()
                {
                    Value = "-1",
                    Text = "--------------------------------"
                });

                _CategoryList = catList.ToArray();
            }

            return _CategoryList;
        }

        private static SelectListItem[] _CategoryList;

    This seemed normal enough to me - I've been doing stuff like this forever, caching smallish lists in memory to avoid an extra trip to the database. This list is used in various places throughout the application - for the list display, and also when adding new items, setting up notifications, and so on.

    Watch that ModelBinder!

    However, it turns out that this code is clearly causing a problem. It appears that the model binder on the [HttpPost] method is actually updating the list that's bound to - changing the actual entry items in the list and setting their Selected values. If you look at the code above, I'm not setting the SelectListItem.Selected value anywhere - the only place this value can get set is through model binding. Sure enough, when stepping through the code I see that when an item is selected, the actual model - model.CategoryList[x].Selected - reflects that.

    This is bad on several levels: First, it's obviously affecting the application behavior - nobody wants to see their drop-down list values jump all over the place randomly.
    But it's also a problem because the array is getting updated by multiple ASP.NET threads, which would likely lead to odd crashes from time to time. Not good!

    In retrospect, the model binding behavior makes perfect sense. The actual items and the Selected property are the ModelBinder's way of keeping track of one or more selected values. So while I assumed the list to be read-only, the ModelBinder is actually updating it on a postback, producing the rather surprising results. I totally missed this during testing, and it's another one of those little "Did you know?" moments.

    So, is there a way around this? Yes, but it's maybe not quite obvious. I can't change the behavior of the ModelBinder, but I can certainly change the way the list is generated. Rather than returning the cached list, I can return a brand new list cloned from the cached items, like this:

        /// <summary>
        /// Returns a static category list that is cached
        /// </summary>
        /// <returns></returns>
        public static SelectListItem[] GetCachedCategoryList()
        {
            if (_CategoryList != null)
            {
                // Have to create new instances via projection
                // to avoid ModelBinding updates to affect this
                // globally
                return _CategoryList
                    .Select(cat => new SelectListItem()
                    {
                        Value = cat.Value,
                        Text = cat.Text
                    })
                    .ToArray();
            }
            …
        }

    The key is that newly created SelectListItem instances are returned, not just filtered instances from the original list. The key here is 'new instances', so that the model binding updates do not touch the static cached array. The code above uses LINQ and a projection into new SelectListItem instances to create an array of fresh instances. And this code works correctly - no more cross-talk between users.

    Unfortunately this code is also less efficient - it has to re-select the items and uses extra memory for the new array. Knowing what I know now, I probably would not have cached the list and would just take the hit of reading from the database. If there is even a possibility of thread clashes, I'm very wary of creating code like this. But since the method already exists and handles this load in one place, this fix was easy enough to put in. Live and learn. It's little things like this that can cause some interesting head-scratchers sometimes…

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in MVC  ASP.NET  .NET
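    To see the cross-talk in isolation, here is a minimal sketch (not from the original post; SelectListItem is the System.Web.Mvc type) showing how two "requests" sharing the same cached array leak state into each other:

        using System;
        using System.Web.Mvc;

        class Program
        {
            static readonly SelectListItem[] Cached =
            {
                new SelectListItem { Value = "1", Text = "Category 1" },
                new SelectListItem { Value = "2", Text = "Category 2" }
            };

            static void Main()
            {
                // "Request A" binds a selection - effectively what the
                // ModelBinder does to the item instances it is handed
                var requestA = Cached;
                requestA[0].Selected = true;

                // "Request B" receives the same instances, so the
                // selection from request A leaks through
                var requestB = Cached;
                Console.WriteLine(requestB[0].Selected); // True - shared state
            }
        }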

    Read the article

  • IE7 not digesting JSON: "parse error" [resolved]

    - by Kenny Leu
    While trying to GET a JSON, my callback function is NOT firing:

        $.ajax({
            type: "GET",
            dataType: 'json',
            url: myLocalURL,
            data: myData,
            success: function(returned_data) { alert('success'); }
        });

    The strangest part is that my JSON(s) validate on JSONLint, yet this ONLY fails on IE7... it works in Safari, Chrome, and all versions of Firefox (EDIT: and even in IE8). If I use 'error', then it reports "parseError"... even though the JSON validates! Is there anything I'm missing? Does IE7 not process certain characters or data structures (my data doesn't have anything non-alphanumeric, but it DOES have nested JSONs)? I have used tons of other AJAX calls that all work (even in IE7), with the exception of THIS call. An example data return (EDIT: this is a structurally complete example - it is only missing a few second-tier fields, but follows this exact hierarchy) is:

        {
            "question":{
                "question_id":"19",
                "question_text":"testing",
                "other_crap":"none"
            },
            "timestamp":{
                "response":"answer",
                "response_text":"the text here"
            }
        }

    I am completely at a loss. Hopefully someone has some insight into what's going on... thank you!

    EDIT: Here's a copy of the SIMPLEST case of dummy data that I'm using... it still doesn't work in IE7.

        {
            "question":{
                "question_id":"20",
                "question_text":"testing :",
                "adverse_party":"none",
                "juris":"California",
                "recipients":"Carl Chan"
            }
        }

    EDIT 2: I am starting to doubt that it is a JSON issue... but I have NO idea what else it could be. Here are some other resources I've found that could be the cause, but they don't seem to work either: http://firelitdesign.blogspot.com/2009/07/jquerys-getjson.html (Django uses Unicode by default, so I don't think this is causing it). Anybody have any other ideas?

    ANSWER: I finally managed to figure it out... mostly via tedious trial and error. I want to thank everyone for their suggestions... as soon as I have 15 rep, I'll upvote you, I promise. :) There was basically no way that you guys could have figured it out, because the issue turned out to be a strange bug between IE7 and Django (my research didn't bring up any similar issues). We were basically using the Django template language to generate our JSON... and in the midst of this particular JSON, we were using custom template tags:

        {% load customfilter %}
        {
            "question":{
                "question_id":"{{question.id}}",
                "question_text":"{{question.question_text|customfilterhere}}"
            }
        }

    As soon as I deleted anything related to the customfilter, IE7 was able to parse the JSON perfectly! We still don't have a workaround yet, but at least we now know what's causing it. Has anyone seen any similar issues? Once again, thank you everyone for your contributions.

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands, through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller.

    To send data through SPI, we’ll just need to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13. The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before:

        public class ShiftRegister74HC595
        {
            protected SPI Spi;

            public ShiftRegister74HC595(Cpu.Pin latchPin)
                : this(latchPin, SPI.SPI_module.SPI1)
            {
            }

            public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule)
            {
                var spiConfig = new SPI.Configuration(
                    SPI_mod: spiModule,
                    ChipSelect_Port: latchPin,
                    ChipSelect_ActiveState: false,
                    ChipSelect_SetupTime: 0,
                    ChipSelect_HoldTime: 0,
                    Clock_IdleState: false,
                    Clock_Edge: true,
                    Clock_RateKHz: 1000
                );
                Spi = new SPI(spiConfig);
            }

            public void Write(byte buffer)
            {
                Spi.Write(new[] { buffer });
            }
        }

    All we have to do here is configure SPI. The Write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher.

    The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image:

        foreach (var bitmap in matrix.MatrixBitmap)
        {
            matrix.OnRow(row, bitmap, true);
            matrix.OnRow(row, bitmap, false);
            row++;
        }

    Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on:

        public LedMatrix Initialize()
        {
            DisplayThread = new Thread(() => DoDisplay(this));
            DisplayThread.Start();
            return this;
        }

    I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method.
    For the method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a lambda, currying it with the current instance. This way, everything remains strongly-typed and there’s no casting to do. Note that this method would extend perfectly to several parameters.

    Of note as well is the return value of Initialize, a common technique to add some fluency to the API, enabling the matrix to be instantiated and initialized in a single line:

        using (var matrix = new LedMS88SR74HC595().Initialize())

    The “using” in the previous line is there because we have implemented IDisposable, so that the matrix kills the thread and clears the display when the user code is done with it:

        public void Dispose()
        {
            Clear();
            DisplayThread.Abort();
        }

    Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model:

        matrix.Set(someimage);
        while (button.Read())
        {
            Thread.Sleep(10);
        }

    Here, the call into Set returns immediately, and from the moment the bitmap is set, the background display thread constantly continues refreshing, no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread, knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh?

    Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure, you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display.

    All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button:

        var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled);
        using (var matrix = new LedMS88SR74HC595().Initialize())
        {
            // Oh, prototype is so sad!
            var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 };
            DisplayAndWait(sad, matrix, button);

            // Let's make it smile!
            var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 };
            DisplayAndWait(smile, matrix, button);
        }

    And here is a video of the prototype running (video: "The prototype in action"). I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line.

    Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my readers guess where we’re going with all that?
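    The listing above calls a DisplayAndWait helper that the post doesn’t show; here is a minimal sketch of what it might look like, assuming the Set method and button wiring described earlier (the helper itself is hypothetical, not from the original post):

        public static void DisplayAndWait(byte[] bitmap, LedMatrix matrix, InputPort button)
        {
            // The background display thread keeps refreshing whatever was last set
            matrix.Set(bitmap);

            // Wait for a press; the on-board switch reads true until pressed
            while (button.Read())
            {
                Thread.Sleep(10);
            }

            // Wait for the release so the next call starts from a clean state
            while (!button.Read())
            {
                Thread.Sleep(10);
            }
        }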
The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip

    Read the article
