Search Results

Search found 5718 results on 229 pages for 'apply'.


  • Add KO "data-bind" attribute on $(document).ready

    - by M.Babcock
    Preface: I've rarely ever been a JS developer and this is my first attempt at doing something with Knockout.js. The question to follow likely illustrates both points.

    Background: I have a fairly complex MVC3 application that I'm trying to get to work with KO (v2.0.0.0). My MVC app is designed to generically control which fields appear in the view (and how they are added to the view). It makes use of partial views to decide what to draw in the view based on the user's permissions (if the user is in group A then show control A, if the user is in group B then show control B, or possibly if the user is in group A don't include the control at all). Also, my model is very flat, so I'm not sure the built-in ability to apply my ViewModel to a specific portion of the view will help. My solution to this problem is to provide an action in my controller that responds with a JSON object containing the jQuery selector and the content to assign to the "data-bind" attribute, and then to bind the ViewModel to the View in the $(document).ready event using the values provided.

    Failed proof-of-concept: My first attempt at proving that this works doesn't actually seem to work, and by "doesn't work" I mean it just doesn't bind the values at all (as can be seen in this jsfiddle). I've tried it with the applyBindings inside of the ready event and not, but it doesn't seem to make any difference.

    Question: What am I doing wrong? Or is this just not something that can work with KO (though I've seen at least one example online doing the same thing and it supposedly works)? Like I said in the preface, I've only ever pretended to be a JS developer (though I've generally gotten it to work in the past), so I'm at a loss where to start trying to figure out what I'm doing wrong. Hopefully this isn't a real noob question.
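    A minimal sketch of the intended flow, assuming the action returns an array of { selector, binding } pairs (both names are hypothetical) and that a viewModel object is already in scope. The important detail is that ko.applyBindings only reads the data-bind attributes that exist at the moment it is called, so it has to run after all of them have been set:

        $(document).ready(function () {
            // Hypothetical endpoint returning e.g. [{ "selector": "#firstName", "binding": "value: firstName" }, ...]
            $.getJSON('/Bindings/ForCurrentUser', function (bindings) {
                $.each(bindings, function (i, b) {
                    $(b.selector).attr('data-bind', b.binding);   // decorate the existing markup
                });
                ko.applyBindings(viewModel);                      // only now hand the page over to Knockout
            });
        });

    If applyBindings runs before the AJAX callback has added the attributes (a common failure mode in this kind of setup), Knockout finds nothing to bind and silently does nothing.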


  • Draw a column graph with no space between columns

    - by Andrew Shepherd
    I am using the WPF toolkit, and am trying to render a graph that looks like a histogram. In particular, I want each column to be right up against each other column. There should be no gaps between columns. There are a number of components that you apply when creating a column graph. (See example XAML below). Does anybody know if there is a property you can set on one of the elements which refers to the width of the white space between columns? <charting:Chart Height="600" Width="Auto" HorizontalAlignment="Stretch" Name="MyChart" Title="Column Graph" LegendTitle="Legend"> <charting:ColumnSeries Name="theColumnSeries" Title="Series A" IndependentValueBinding="{Binding Path=Name}" DependentValueBinding="{Binding Path=Population}" Margin="0" > </charting:ColumnSeries> <charting:Chart.Axes> <charting:LinearAxis Orientation="Y" Minimum="200000" Maximum="2500000" ShowGridLines="True" /> <charting:CategoryAxis Name="chartCategoryAxis" /> </charting:Chart.Axes> </charting:Chart>


  • What is your best-practice advice on implementing SQL stored procedures (in a C# winforms application)?

    - by JYelton
    I have read these very good questions on SO about SQL stored procedures: When should you use stored procedures? and Are Stored Procedures more efficient, in general, than inline statements on modern RDBMS’s? I am a beginner on integrating .NET/SQL though I have used basic SQL functionality for more than a decade in other environments. It's time to advance with regards to organization and deployment. I am using .NET C# 3.5, Visual Studio 2008 and SQL Server 2008; though this question can be regarded as language- and database- agnostic, meaning that it could easily apply to other environments that use stored procedures and a relational database. Given that I have an application with inline SQL queries, and I am interested in converting to stored procedures for organizational and performance purposes, what are your recommendations for doing so? Here are some additional questions in my mind related to this subject that may help shape the answers: Should I create the stored procedures in SQL using SQL Management Studio and simply re-create the database when it is installed for a client? Am I better off creating all of the stored procedures in my application, inside of a database initialization method? It seems logical to assume that creating stored procedures must follow the creation of tables in a new installation. My database initialization method creates new tables and inserts some default data. My plan is to create stored procedures following that step, but I am beginning to think there might be a better way to set up a database from scratch (such as in the installer of the program). Thoughts on this are appreciated. I have a variety of queries throughout the application. Some queries are incredibly simple (SELECT id FROM table) and others are extremely long and complex, performing several joins and accepting approximately 80 parameters. Should I replace all queries with stored procedures, or only those that might benefit from doing so? Finally, as this topic obviously requires some research and education, can you recommend an article, book, or tutorial that covers the nuances of using stored procedures instead of direct statements?
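    Whatever deployment strategy is chosen, the call sites change very little; below is a hedged ADO.NET sketch of invoking a stored procedure (the procedure, parameter, and method names are made up for illustration). Converting a call site is mostly a matter of swapping the command text and command type, which is one reason many teams only bother for the long, parameter-heavy queries.

        using System.Data;
        using System.Data.SqlClient;

        // Inside your data-access class:
        public static DataTable RunCustomerReport(string connectionString, int customerId)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("usp_GetCustomerById", conn))   // hypothetical procedure name
            {
                cmd.CommandType = CommandType.StoredProcedure;              // instead of CommandType.Text + inline SQL
                cmd.Parameters.Add("@Id", SqlDbType.Int).Value = customerId;

                var table = new DataTable();
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    table.Load(reader);                                     // DataTable.Load consumes the reader
                }
                return table;
            }
        }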


  • Working with Resources in WPF

    - by Coesy
    I am wanting to use the example from http://blogs.microsoft.co.il/blogs/tomershamam/archive/2008/09/22/lt-howto-gt-replace-listview-columns-with-rows-lt-howto-gt.aspx However I don't want to put this into the App.xaml code as this will apply to ALL gridviews, how do I apply this example to a select few gridviews in the application? The Resources look like this <Style TargetType="{x:Type GridViewHeaderRowPresenter}"> <Setter Property="Height" Value="80" /> <Setter Property="LayoutTransform"> <Setter.Value> <TransformGroup> <RotateTransform Angle="-90" /> <ScaleTransform ScaleY="-1" /> </TransformGroup> </Setter.Value> </Setter> </Style> <Style TargetType="{x:Type GridViewRowPresenter}"> <Setter Property="LayoutTransform"> <Setter.Value> <TransformGroup> <RotateTransform Angle="-90" /> <ScaleTransform ScaleY="-1" /> </TransformGroup> </Setter.Value> </Setter> </Style> <LinearGradientBrush x:Key="GridViewColumnHeaderBackground" EndPoint="0,1" StartPoint="0,0"> <GradientStop Color="#FFFFFFFF" Offset="0"/> <GradientStop Color="#FFFFFFFF" Offset="0.4091"/> <GradientStop Color="#FFF7F8F9" Offset="1"/> </LinearGradientBrush> <LinearGradientBrush x:Key="GridViewColumnHeaderBorderBackground" EndPoint="0,1" StartPoint="0,0"> <GradientStop Color="#FFF2F2F2" Offset="0"/> <GradientStop Color="#FFD5D5D5" Offset="1"/> </LinearGradientBrush> <LinearGradientBrush x:Key="GridViewColumnHeaderHoverBackground" EndPoint="0,1" StartPoint="0,0"> <GradientStop Color="#FFBDEDFF" Offset="0"/> <GradientStop Color="#FFB7E7FB" Offset="1"/> </LinearGradientBrush> <LinearGradientBrush x:Key="GridViewColumnHeaderPressBackground" EndPoint="0,1" StartPoint="0,0"> <GradientStop Color="#FF8DD6F7" Offset="0"/> <GradientStop Color="#FF8AD1F5" Offset="1"/> </LinearGradientBrush> <Style x:Key="GridViewColumnHeaderGripper" TargetType="{x:Type Thumb}"> <Setter Property="Canvas.Right" Value="-9"/> <Setter Property="Width" Value="18"/> <Setter Property="Height" Value="{Binding Path=ActualHeight, RelativeSource={RelativeSource TemplatedParent}}"/> <Setter Property="Padding" Value="0"/> <Setter Property="Background" Value="{StaticResource GridViewColumnHeaderBorderBackground}"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type Thumb}"> <Border Background="Transparent" Padding="{TemplateBinding Padding}"> <Rectangle Fill="{TemplateBinding Background}" HorizontalAlignment="Center" Width="1"/> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> <Style TargetType="{x:Type GridViewColumnHeader}"> <Setter Property="HorizontalContentAlignment" Value="Center"/> <Setter Property="VerticalContentAlignment" Value="Center"/> <Setter Property="Background" Value="{StaticResource GridViewColumnHeaderBackground}"/> <Setter Property="BorderBrush" Value="{StaticResource GridViewColumnHeaderBorderBackground}"/> <Setter Property="BorderThickness" Value="0"/> <Setter Property="Padding" Value="2,0,2,0"/> <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type GridViewColumnHeader}"> <Grid SnapsToDevicePixels="true"> <Border x:Name="HeaderBorder" Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="0,1,0,1"> <Grid> <Grid.RowDefinitions> <RowDefinition MaxHeight="7"/> <RowDefinition/> </Grid.RowDefinitions> <Rectangle Fill="#FFE3F7FF" x:Name="UpperHighlight" Visibility="Collapsed"/> <Border Grid.RowSpan="2" Padding="{TemplateBinding 
Padding}"> <ContentPresenter HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" Margin="0,0,0,1" x:Name="HeaderContent" VerticalAlignment="{TemplateBinding VerticalContentAlignment}" SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}" RecognizesAccessKey="True"> <ContentPresenter.LayoutTransform> <TransformGroup> <ScaleTransform ScaleY="-1" /> <RotateTransform Angle="90" /> </TransformGroup> </ContentPresenter.LayoutTransform> </ContentPresenter> </Border> </Grid> </Border> <Border Margin="1,1,0,0" x:Name="HeaderHoverBorder" BorderThickness="1,0,1,1"/> <Border Margin="1,0,0,1" x:Name="HeaderPressBorder" BorderThickness="1,1,1,0"/> <Canvas> <Thumb x:Name="PART_HeaderGripper" Style="{StaticResource GridViewColumnHeaderGripper}"/> </Canvas> </Grid> <ControlTemplate.Triggers> <Trigger Property="IsMouseOver" Value="true"> <Setter Property="Background" TargetName="HeaderBorder" Value="{StaticResource GridViewColumnHeaderHoverBackground}"/> <Setter Property="BorderBrush" TargetName="HeaderHoverBorder" Value="#FF88CBEB"/> <Setter Property="Visibility" TargetName="UpperHighlight" Value="Visible"/> <Setter Property="Background" TargetName="PART_HeaderGripper" Value="Transparent"/> </Trigger> <Trigger Property="IsPressed" Value="true"> <Setter Property="Background" TargetName="HeaderBorder" Value="{StaticResource GridViewColumnHeaderPressBackground}"/> <Setter Property="BorderBrush" TargetName="HeaderHoverBorder" Value="#FF95DAF9"/> <Setter Property="BorderBrush" TargetName="HeaderPressBorder" Value="#FF7A9EB1"/> <Setter Property="Visibility" TargetName="UpperHighlight" Value="Visible"/> <Setter Property="Fill" TargetName="UpperHighlight" Value="#FFBCE4F9"/> <Setter Property="Visibility" TargetName="PART_HeaderGripper" Value="Hidden"/> <Setter Property="Margin" TargetName="HeaderContent" Value="1,1,0,0"/> </Trigger> <Trigger Property="Height" Value="Auto"> <Setter Property="MinHeight" Value="20"/> </Trigger> <Trigger Property="IsEnabled" Value="false"> <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.GrayTextBrushKey}}"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> <Style.Triggers> <Trigger Property="Role" Value="Floating"> <Setter Property="Opacity" Value="0.4082"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type GridViewColumnHeader}"> <Canvas x:Name="PART_FloatingHeaderCanvas"> <Rectangle Fill="#FF000000" Width="{TemplateBinding ActualWidth}" Height="{TemplateBinding ActualHeight}" Opacity="0.4697"/> </Canvas> </ControlTemplate> </Setter.Value> </Setter> </Trigger> <Trigger Property="Role" Value="Padding"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type GridViewColumnHeader}"> <Border x:Name="HeaderBorder" Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="0,1,0,1"/> <ControlTemplate.Triggers> <Trigger Property="Height" Value="Auto"> <Setter Property="MinHeight" Value="20"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Trigger> </Style.Triggers> </Style> I have tried creating a usercontrol and sticking that lot in the UserControl.Resources section but it didn't work, I can only get this example to work if i put them into the Application.Resources section which i obviously don't want. Help!! :-)


  • Multiple SFINAE rules

    - by Fred
    Hi everyone, After reading the answer to this question, I learned that SFINAE can be used to choose between two functions based on whether the class has a certain member function. It's the equivalent of the following, just that each branch in the if statement is split into an overloaded function: template<typename T> void Func(T& arg) { if(HAS_MEMBER_FUNCTION_X(T)) arg.X(); else //Do something else because T doesn't have X() } becomes template<typename T> void Func(T &arg, int_to_type<true>); //T has X() template<typename T> void Func(T &arg, int_to_type<false>); //T does not have X() I was wondering if it was possible to extend SFINAE to do multiple rules. Something that would be the equivalent of this: template<typename T> void Func(T& arg) { if(HAS_MEMBER_FUNCTION_X(T)) //See if T has a member function X arg.X(); else if(POINTER_DERIVED_FROM_CLASS_A(T)) //See if T is a pointer to a class derived from class A arg->A_Function(); else if(DERIVED_FROM_CLASS_B(T)) //See if T derives from class B arg.B_Function(); else if(IS_TEMPLATE_CLASS_C(T)) //See if T is class C<U> where U could be anything arg.C_Function(); else if(IS_POD(T)) //See if T is a POD type //Do something with a POD type else //Do something else because none of the above rules apply } Is something like this possible? Thank you.
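    Yes; the usual trick is to give each overload a mutually exclusive enable_if condition so that exactly one survives substitution. A rough sketch using C++11 <type_traits> follows (the original question predates C++11, but Boost's enable_if and is_base_of work the same way); has_member_X is a hand-rolled detector and B is a stand-in for "class B":

        #include <type_traits>

        struct B { void B_Function() {} };                 // stand-in base class for the sketch

        // Detect whether T has a member function X (simplified: fails for overloaded X).
        template <typename T>
        class has_member_X {
            template <typename U> static char test(decltype(&U::X));
            template <typename U> static long test(...);
        public:
            static const bool value = sizeof(test<T>(0)) == sizeof(char);
        };

        // Rule 1: T has X()
        template <typename T>
        typename std::enable_if<has_member_X<T>::value>::type
        Func(T& arg) { arg.X(); }

        // Rule 2: no X(), but T derives from B
        template <typename T>
        typename std::enable_if<!has_member_X<T>::value &&
                                std::is_base_of<B, T>::value>::type
        Func(T& arg) { arg.B_Function(); }

        // Fallback: none of the rules apply
        template <typename T>
        typename std::enable_if<!has_member_X<T>::value &&
                                !std::is_base_of<B, T>::value>::type
        Func(T& arg) { /* do something else */ }

    Each additional rule (pointer-to-derived, POD, C<U> detection) becomes another overload whose condition also negates all the earlier ones; if two conditions can be true at once the call becomes ambiguous.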


  • WCF Message Debugging on WebHttpBehavior

    - by Programming Hero
    I've created a custom binding in WCF for a custom MessageEncoder to allow messages to be written as XML using a wider range of encodings than WCF supports out of the box. The encoder appears to be working and I am able to send and receive messages, but I want to verify that the XML message being written is exactly as required by the service I am trying to consume. I've turned on message logging for WCF using the diagnostic trace listeners to output the messages sent and received over the wire to a log file. Unfortunately, for calls using my encoder, the message body is logged as "... stream ...".

    EDIT: I don't think it's anything to do with my custom encoding. I have experimented with my custom binding a little, switching to using the built-in text encoding and http transport. I still don't get a message body logged in the message trace.

    EDIT2: Having done further investigation, the issue looks to be related not to the custom binding, but to the custom behaviour. I'm applying the <webHttp/> behaviour. Once this is specified (along with manual addressing), this tracing behaviour shows up. Is this a known issue with WebHttpBehavior? Am I still barking up the wrong tree?
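    For reference, a hedged sketch of the logging configuration this relies on; logEntireMessage="true" and the service-level switch are the settings worth double-checking, because without them the trace records only headers (or a placeholder in place of the body). The listener name and file path are placeholders:

        <system.serviceModel>
          <diagnostics>
            <messageLogging logEntireMessage="true"
                            logMessagesAtServiceLevel="true"
                            logMessagesAtTransportLevel="true"
                            maxMessagesToLog="1000" />
          </diagnostics>
        </system.serviceModel>
        <system.diagnostics>
          <sources>
            <source name="System.ServiceModel.MessageLogging">
              <listeners>
                <add name="messages"
                     type="System.Diagnostics.XmlWriterTraceListener"
                     initializeData="c:\logs\messages.svclog" />
              </listeners>
            </source>
          </sources>
        </system.diagnostics>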


  • Integrating a Custom Compiler with the Visual Studio IDE

    - by M.A. Hanin
    Background: I want to create a custom VB compiler, extending the "original" compiler, to handle my custom compile-time attributes.

    Question: after I've created my custom compiler and I've got an executable file capable of compiling VB code via the standard command-line interface, how do I integrate this compiler with the Visual Studio IDE, such that pressing "compile" or "build" will make use of my compiler instead of the default compiler?

    EDIT: (Correct me if I'm wrong.) From the reactions here, I see this question is a bit shocking, so I shall further explain my needs and background. .NET provides us with a great mechanism called Attributes. As far as I understand, for attributes to apply their intended behavior to the attributed element (assembly, module, class, method, etc.), they must be reflected upon. So the real trick here is reflecting and applying behavior at the right spot. Let's take Serialization for example: We decorate a class with the Serializable attribute. We then pass an instance of the class to the formatter's Serialize method. The formatter reflects upon the instance, checking if it has the Serializable attribute, and acting accordingly. Now, if we examine the Synchronization, Flags, Obsolete and CLSCompliant attributes, then the real question is: who reflects upon them? At least in some cases, it has to be the compiler (and/or IDE). Therefore, it seems that if I wish to create custom attributes that change an element's behavior regardless of any specific consumer, I must extend the compiler to reflect upon them at compilation. Of course, these are not my personal insights: the book "Applied .NET Attributes" provides a complete example of creating a custom attribute and a custom C# compiler to reflect upon that attribute at compilation (the example is used to implement "java-style checked exceptions").


  • How do you marshall a parameterized type with JAX-WS / JAXB?

    - by LES2
    Consider the following classes (please assume public getter and setter methods for the private fields). // contains a bunch of properties public abstract class Person { private String name; } // adds some properties specific to teachers public class Teacher extends Person { private int salary; } // adds some properties specific to students public class Student extends Person { private String course; } // adds some properties that apply to an entire group of people public class Result<T extends Person> { private List<T> group; private String city; // ... } We might have the following web service implementation annotated as follows: @WebService public class PersonService { @WebMethod public Result<Teacher> getTeachers() { ... } @WebMethod public Result<Student> getStudents() { ... } } The problem is that JAXB appears to marshall the Result object as a Result<Person> instead of the concrete type. So the Result returned by getTeachers() is serialized as containing a List<Person> instead of List<Teacher>, and the same for getStudents(), mutatis mutandis. Is this the expected behavior? Do I need to use @XmlSeeAlso on Person? Thanks! LES
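    Generics are erased before JAXB ever sees them, so the marshaller only knows about the declared Person type in the List<T> field. Annotating the base class is the usual fix; it makes the concrete type show up as an xsi:type attribute on each list entry rather than changing the element name. A hedged sketch (getters and setters elided in the original are shown here for completeness):

        import javax.xml.bind.annotation.XmlSeeAlso;

        // Tells JAXB about the subclasses it cannot discover through the erased List<T> field,
        // so Teacher/Student instances are marshalled with xsi:type instead of as plain Person.
        @XmlSeeAlso({Teacher.class, Student.class})
        public abstract class Person {
            private String name;

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }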


  • Programmatically Set Proxy Address, Port, User, Password throught Windows Registry

    - by Fábio Antunes
    I'm writing a small C# application that will use Internet Explorer to interact with a couple of websites, with help from WatiN. However, it will also need to use a proxy from time to time. I came across Programmatically Set Browser Proxy Settings in C#, but this only enables me to enter a proxy address, and I also need to enter a proxy username and password. How can I do that?

    Note: It doesn't matter if a solution changes the entire system's Internet settings. However, I would prefer to change only the IE proxy settings (for any connection). The solution has to work with IE8 and Windows XP SP3 or higher. I'd like to have the possibility to read the proxy settings first, so that later I can undo my changes.

    EDIT: With the help of the Windows Registry accessible through Microsoft.Win32.RegistryKey, I was able to apply a proxy with something like this: RegistryKey registry = Registry.CurrentUser.OpenSubKey("Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings", true); registry.SetValue("ProxyEnable", 1); registry.SetValue("ProxyServer", "127.0.0.1:8080"); But how can I specify a username and a password to log in at the proxy server? I also noticed that IE doesn't refresh the proxy details for its connections once the registry has changed. How can I tell IE to refresh its connection settings from the registry? Thanks
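    Two hedged notes. First, there is no documented value under that registry key for proxy credentials; for a proxy that demands authentication, IE normally answers the 407 challenge itself (prompting or using saved Windows credentials), and code-level work usually goes through WebProxy.Credentials rather than the registry. Second, running IE instances do not watch the registry; WinINet has to be told that the settings changed, which is commonly done with a small P/Invoke like the sketch below (the constant values are the standard wininet.dll ones):

        using System;
        using System.Runtime.InteropServices;

        static class WinInetRefresh
        {
            private const int INTERNET_OPTION_SETTINGS_CHANGED = 39;
            private const int INTERNET_OPTION_REFRESH = 37;

            [DllImport("wininet.dll", SetLastError = true)]
            private static extern bool InternetSetOption(IntPtr hInternet, int option,
                                                         IntPtr buffer, int bufferLength);

            // Call after writing ProxyEnable/ProxyServer so existing sessions pick up the change.
            public static void Apply()
            {
                InternetSetOption(IntPtr.Zero, INTERNET_OPTION_SETTINGS_CHANGED, IntPtr.Zero, 0);
                InternetSetOption(IntPtr.Zero, INTERNET_OPTION_REFRESH, IntPtr.Zero, 0);
            }
        }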


  • Can a WCF Service provide publish/subscribe activity to a Linux-based C++ client application?

    - by Jeremy Roddingham
    I have a WCF service written to provide certain functionality to intranet-based clients. This is easy when a client is running Windows. I want to make the same functionality that is available to my Windows clients available to my Linux clients. My questions are: How can I communicate with a Linux C++-based client in a situation that needs callback operations (for publish/subscribe)? I am aware of using SOAP over the HTTPBinding, but is that the only way (it does not support callbacks, I believe)? Would the same apply if I were using TCPBinding on the service side? Currently, the service is set up using TCP, but what are my options for the Linux client communication? I read somewhere that messages can also be sent (via web services, I believe) in XML rather than SOAP? Which would be the better approach, and how do I determine that? I am trying to understand the options I would have for a WCF data service if I wanted to communicate with it from a Linux client. I appreciate all your help. Thank You, Jeremy


  • Replacing a Namespace with XSLT

    - by er4z0r
    Hi I want to work around a 'bug' in certain RSS-feeds, which use an incorrect namespace for the mediaRSS module. I tried to do it by manipulating the DOM programmatically, but using XSLT seems more flexible to me. Example: <media:thumbnail xmlns:media="http://search.yahoo.com/mrss" url="http://www.suedkurier.de/storage/pic/dpa/infoline/brennpunkte/4311018_0_merkelxI_24280028_original.large-4-3-800-199-0-3131-2202.jpg" /> <media:thumbnail url="http://www.suedkurier.de/storage/pic/dpa/infoline/brennpunkte/4311018_0_merkelxI_24280028_original.large-4-3-800-199-0-3131-2202.jpg" /> Where the namespace must be http://search.yahoo.com/mrss/ (mind the slash). This is my stylesheet: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="//*[namespace-uri()='http://search.yahoo.com/mrss']"> <xsl:element name="{local-name()}" namespace="http://search.yahoo.com/mrss/" > <xsl:apply-templates select="@*|*|text()" /> </xsl:element> </xsl:template> </xsl:stylesheet> Unfortunately the result of the transformation is an invalid XML and my RSS-Parser (ROME Library) does not parse the feed anymore: java.lang.IllegalStateException: Root element not set at org.jdom.Document.getRootElement(Document.java:218) at com.sun.syndication.io.impl.RSS090Parser.isMyType(RSS090Parser.java:58) at com.sun.syndication.io.impl.FeedParsers.getParserFor(FeedParsers.java:72) at com.sun.syndication.io.WireFeedInput.build(WireFeedInput.java:273) at com.sun.syndication.io.WireFeedInput.build(WireFeedInput.java:251) ... 8 more What is wrong with my stylesheet?
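    The stylesheet only has a template for elements in the bad namespace, so the built-in rules drop the surrounding rss/channel elements and only text nodes plus the rewritten media:* elements reach the output, with no root element; that matches the ROME error. Adding an identity template alongside the namespace-fixing one should copy the rest of the feed through unchanged. A hedged sketch:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

          <!-- Identity rule: copy everything that no other template claims -->
          <xsl:template match="@*|node()">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>

          <!-- Rebuild elements from the slash-less namespace in the correct one -->
          <xsl:template match="*[namespace-uri()='http://search.yahoo.com/mrss']">
            <xsl:element name="media:{local-name()}" namespace="http://search.yahoo.com/mrss/">
              <xsl:apply-templates select="@*|node()"/>
            </xsl:element>
          </xsl:template>

        </xsl:stylesheet>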


  • How to use jQuery + thickbox to open a sub form, and pass its value back to the parent form

    - by Curie
    Like this example. aa.jsp (parent form): <head> <script language="javascript" type="text/javascript" src="<%=basePath%>js/jquery-1.7.2.min.js"></script> <script language="javascript" type="text/javascript" src="<%=basePath%>js/thickbox.js"></script> <link rel="stylesheet" type="text/css" href="<%=basePath%>js/thickbox.css"> </head> <body> <a href="bb.jsp?TB_iframe=true&placeValuesBeforeTB_=savedValues&height=400&width=170" title="input value" class="thickbox"> <input type="text" name="txtA" id="txtA"/> </a> </body> I include the jQuery and thickbox scripts in the head and apply them; clicking the txtA textbox opens the sub page. bb.jsp (sub form): <body> <input type="text" name="txtB" id="txtB"> </body> There is one textbox, txtB, in the sub page. How can I type a value into textbox txtB of bb.jsp and pass that value back to the parent page aa.jsp, so that it is displayed in textbox txtA on the parent page?
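    A hedged sketch of the usual approach: the thickbox content is loaded in an iframe, so the opener's DOM is reachable through window.parent. Here bb.jsp gets a hypothetical OK button whose handler copies the value across and closes the popup with thickbox's tb_remove(); this assumes jQuery is also included on bb.jsp and that both pages share the same origin.

        <!-- bb.jsp -->
        <body>
            <input type="text" name="txtB" id="txtB">
            <input type="button" id="btnOk" value="OK">
            <script type="text/javascript">
                $(function () {
                    $('#btnOk').click(function () {
                        var value = $('#txtB').val();
                        // write into the textbox that lives in the parent page (aa.jsp)
                        $('#txtA', window.parent.document).val(value);
                        // close the thickbox overlay in the parent window
                        window.parent.tb_remove();
                    });
                });
            </script>
        </body>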


  • Oracle performance problems with large batch of XSL operations

    - by FrustratedWithFormsDesigner
    I have a system that is performing many XSL transformations on XMLType objects. The problem is that the system gradually slows down over time, and sometimes crashes when it runs out of memory. It seems that the slowdown (and possibly the memory crash) is around the dbms_xslprocessor.processXSL function call, which gradually takes longer and longer to complete. The code looks like this: v_doc dbms_xmldom.DOMDocument; v_transformer dbms_xmldom.DOMDocument; v_XSLprocessor dbms_xslprocessor.Processor; v_stylesheet dbms_xslprocessor.Stylesheet; v_clob clob; ... transformer := PKG_STUFF.getXSL(); v_transformer := dbms_xmldom.newDOMDocument(transformer); v_XSLprocessor := Dbms_Xslprocessor.newProcessor; v_stylesheet := dbms_xslprocessor.newStylesheet(v_transformer, ''); ... for source_data in (select id from source_tbl) loop begin v_doc := PKG_CONVERT.convert(in_id => source_data.id); --start time of operation v_begin_op_time := dbms_utility.get_time; --reset the CLOB v_clob := ' '; --Apply XSL Transform dbms_xslprocessor.processXSL(p => v_XSLprocessor, ss => v_stylesheet, xmldoc => v_Doc, cl => v_clob); v_doc := dbms_xmldom.newDOMDocument(XMLType(v_clob)); --end time v_end_op_time := dbms_utility.get_time; --calculate duration v_time_taken := (((v_end_op_time - v_begin_op_time))); --log the duration PKG_LOG.log_message('Time taken to transform XML: '||v_time_taken); ... ... DBMS_XMLDOM.freeDocument(v_Doc); DBMS_LOB.freetemporary(lob_loc => v_clob); end loop; The time taken to transform the XML is slowly creeping up (I suppose it might also be the call to dbms_xmldom.newDOMDocument, but I had thought that to be fairly straightforward). I have no idea why.... :( (Oracle 10g)
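    One thing worth checking (a hedged suggestion, not a diagnosis): each iteration creates two DOM documents, the one returned by PKG_CONVERT.convert and the one rebuilt from the CLOB, but only one freeDocument call runs per loop, so DOM handles can accumulate in the session and make every later call slower. A sketch of the extra cleanup, assuming nothing else holds on to the intermediate document:

        -- inside the loop: free the document that processXSL consumed before overwriting v_doc
        dbms_xmldom.freeDocument(v_doc);
        v_doc := dbms_xmldom.newDOMDocument(XMLType(v_clob));

        -- after the loop: release the processor, stylesheet and transformer document once
        dbms_xslprocessor.freeStylesheet(v_stylesheet);
        dbms_xslprocessor.freeProcessor(v_XSLprocessor);
        dbms_xmldom.freeDocument(v_transformer);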


  • Will fixed-point arithmetic be worth my trouble?

    - by Thomas
    I'm working on a fluid dynamics Navier-Stokes solver that should run in real time. Hence, performance is important. Right now, I'm looking at a number of tight loops that each account for a significant fraction of the execution time: there is no single bottleneck. Most of these loops do some floating-point arithmetic, but there's a lot of branching in between. The floating-point operations are mostly limited to additions, subtractions, multiplications, divisions and comparisons. All this is done using 32-bit floats. My target platform is x86 with at least SSE1 instructions. (I've verified in the assembler output that the compiler indeed generates SSE instructions.) Most of the floating-point values that I'm working with have a reasonably small upper bound, and precision for near-zero values isn't very important. So the thought occurred to me: maybe switching to fixed-point arithmetic could speed things up? I know the only way to be really sure is to measure it, that might take days, so I'd like to know the odds of success beforehand. Fixed-point was all the rage back in the days of Doom, but I'm not sure where it stands anno 2010. Considering how much silicon is nowadays pumped into floating-point performance, is there a chance that fixed-point arithmetic will still give me a significant speed boost? Does anyone have any real-world experience that may apply to my situation?
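    To make the comparison concrete, this is roughly what the inner-loop arithmetic would look like in a Q16.16 fixed-point representation (a sketch, not a recommendation); note that multiplication and division need a 64-bit intermediate and a shift, which is part of why fixed-point rarely beats SSE single-precision floats on current x86 hardware:

        #include <cstdint>

        // Q16.16 fixed point: 16 integer bits, 16 fractional bits.
        typedef int32_t fixed;
        static const int FRAC_BITS = 16;

        inline fixed to_fixed(float f)        { return (fixed)(f * (1 << FRAC_BITS)); }
        inline float to_float(fixed x)        { return (float)x / (1 << FRAC_BITS); }
        inline fixed fx_add(fixed a, fixed b) { return a + b; }   // same cost as an integer add
        inline fixed fx_mul(fixed a, fixed b) { return (fixed)(((int64_t)a * b) >> FRAC_BITS); }
        inline fixed fx_div(fixed a, fixed b) { return (fixed)(((int64_t)a << FRAC_BITS) / b); }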


  • Getting browser to make an AJAX call ASAP, while page is still loading

    - by Chris
    I'm looking for tips on how to get the browser to kick off an AJAX call as soon as possible into processing a web page, ideally before the page is fully downloaded. Here's my approximate motivation: I have a web page that takes, say, 5 seconds to load. It calls a web service that takes, say, 10 seconds to load. If loading the page and calling the web service happened sequentially, the user would have to wait 15 seconds to see all the information. However, if I can get the web service call started before the 5 second page load is complete, then at least some of the work can happen in parallel. Ideally I'd like as much of the work to happen in parallel as possible. My initial theory was that I should place the AJAX-calling javascript as high up as possible in the web page HTML source (being mindful of putting it after the jquery.js include, because I'm making the call using jquery ajax). I'm also being mindful not to wrap the AJAX call in a jquery ready event handler. (I mention this because ready events are popular in a lot of jquery example code.) However, the AJAX call still doesn't seem to get kicked off as early as I'm hoping (at least as judged by the Google Chrome "Timeline" feature), so I'm wondering what other considerations apply. One thing that might potentially be detrimental is that the AJAX call is back to the same web server that's serving the original web page, so I might be in danger of hitting a browser limit on the # of HTTP connections back to that one machine? (The HTML page loads a number of images, css files, etc.)
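    For what it's worth, a minimal sketch of the "fire it from an early inline script" approach, with the endpoint and element names made up; the request goes out as soon as the parser reaches the inline block, and a small rendezvous function lets whichever finishes last (page or service) do the rendering:

        <head>
          <script src="jquery.js"></script>
          <script>
            // Deliberately NOT wrapped in $(document).ready - start the call immediately.
            var serviceData = null;
            $.ajax({
              url: '/slow-service',            // hypothetical endpoint
              dataType: 'json',
              success: function (data) { serviceData = data; render(); }
            });
            function render() {
              // Only render once both the target element and the data exist.
              if (serviceData && document.getElementById('results')) {
                // ...inject serviceData into #results (hypothetical element)...
              }
            }
          </script>
        </head>
        <body>
          <!-- ...the slow-to-download page content, including <div id="results"></div>... -->
          <script>$(document).ready(render);</script>
        </body>

    The connection-limit worry is real, especially for older browsers (historically two, later six connections per host), so queued images and CSS from the same host can delay the XHR even when the script runs early.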


  • How to enable OutputCache with an IHttpHandler

    - by Joseph Kingry
    I have an IHttpHandler that I would like to hook into the OutputCache support so I can offload cached data to the IIS kernel. I know MVC must do this somehow, I found this in OutputCacheAttribute: public override void OnResultExecuting(ResultExecutingContext filterContext) { if (filterContext == null) { throw new ArgumentNullException("filterContext"); } // we need to call ProcessRequest() since there's no other way to set the Page.Response intrinsic OutputCachedPage page = new OutputCachedPage(_cacheSettings); page.ProcessRequest(HttpContext.Current); } private sealed class OutputCachedPage : Page { private OutputCacheParameters _cacheSettings; public OutputCachedPage(OutputCacheParameters cacheSettings) { // Tracing requires Page IDs to be unique. ID = Guid.NewGuid().ToString(); _cacheSettings = cacheSettings; } protected override void FrameworkInitialize() { // when you put the <%@ OutputCache %> directive on a page, the generated code calls InitOutputCache() from here base.FrameworkInitialize(); InitOutputCache(_cacheSettings); } } But not sure how to apply this to an IHttpHandler. Tried something like this, but of course this doesn't work: public class CacheTest : IHttpHandler { public void ProcessRequest(HttpContext context) { OutputCacheParameters p = new OutputCacheParameters { Duration = 3600, Enabled = true, VaryByParam = "none", Location = OutputCacheLocation.Server }; OutputCachedPage page = new OutputCachedPage(p); page.ProcessRequest(context); context.Response.ContentType = "text/plain"; context.Response.Write(DateTime.Now.ToString()); context.Response.End(); } public bool IsReusable { get { return true; } } }
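    Going through a Page is one route, but a handler can also request output caching directly through the response's cache policy. A hedged sketch follows; this enables ASP.NET's server-side output cache, and whether the http.sys kernel cache also picks the response up depends on additional IIS-level rules (anonymous GET request, no response modifications, and so on):

        public class CacheTest : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // Roughly equivalent to Duration=3600, VaryByParam="none", Location=Server
                context.Response.Cache.SetCacheability(HttpCacheability.Server);
                context.Response.Cache.SetExpires(DateTime.Now.AddSeconds(3600));
                context.Response.Cache.SetValidUntilExpires(true);
                context.Response.Cache.VaryByParams.IgnoreParams = true;

                context.Response.ContentType = "text/plain";
                context.Response.Write(DateTime.Now.ToString());
            }

            public bool IsReusable { get { return true; } }
        }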


  • Fluent config not generating mapping files

    - by rboarman
    Hello, I am trying to get Fluent nHibernate to generate mappings so I can take a look at the files and the sql. My code is based on this post and on what I can glean from the documentation. http://stackoverflow.com/questions/1375146/fluent-mapping-entities-and-classmaps-in-different-assemblies I am using the latest code from git. Here’s my config code: Configuration cfg = new Configuration(); var ft = Fluently.Configure(cfg); //DbConnection by fluent ft.Database ( MsSqlConfiguration .MsSql2008 .ConnectionString("……") .ShowSql() .UseReflectionOptimizer() ); //get mapping files. ft.Mappings(m => { //set up the mapping locations m.FluentMappings.AddFromAssemblyOf<Entity>() .ExportTo(@"C:\temp"); m.Apply(cfg); }); I also tried: var sessionFactory = Fluently.Configure() .Database(MsSqlConfiguration .MsSql2008 .ShowSql() .ConnectionString(“……")) .Mappings(p => p.FluentMappings .AddFromAssemblyOf<Entity>() .ExportTo(@"c:\temp\")) .BuildSessionFactory(); I have verified that the connection string is correct. The issue is that no mapping files show up in the ExportTo folder and no sql code shows up in the output window or in the log file. No errors or exceptions are generated either. I have no idea where to go from here. Thank you in advance. Rick
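    A hedged observation: ExportTo only writes .hbm.xml files when the fluent mappings are actually compiled, which happens when the configuration is built, and only for ClassMap<> types found in the given assembly. So the ExportTo call needs to sit in a chain that ends in BuildConfiguration() or BuildSessionFactory(), roughly like this (EntityMap stands in for one of your ClassMap classes):

        var config = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString("...")         // same connection string as before
                .ShowSql())
            .Mappings(m => m.FluentMappings
                .AddFromAssemblyOf<EntityMap>()  // the assembly that contains the ClassMap<> types
                .ExportTo(@"C:\temp"))
            .BuildConfiguration();               // nothing is exported until this (or BuildSessionFactory) runs

    ShowSql, in turn, only produces output once queries are actually executed through a session built from this configuration.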


  • Flush kernel's TCP buffer with `MSG_MORE`-flagged packets

    - by timn
    send()'s man page reveals the MSG_MORE flag, which is said to act like TCP_CORK. I have a wrapper function around send(): int SocketConnection_Write(SocketConnection *this, void *buf, int len) { errno = 0; int sent = send(this->fd, buf, len, MSG_NOSIGNAL); if (errno == EPIPE || errno == ENOTCONN) { throw(exc, &SocketConnection_NotConnectedException); } else if (errno == ECONNRESET) { throw(exc, &SocketConnection_ConnectionResetException); } else if (sent != len) { throw(exc, &SocketConnection_LengthMismatchException); } return sent; } Assuming I want to use the kernel buffer, I could go with TCP_CORK, enabling it whenever necessary and then disabling it to flush the buffer. On the other hand, that requires an additional system call. Thus, the usage of MSG_MORE seems more appropriate to me. I'd simply change the above send() line to: int sent = send(this->fd, buf, len, MSG_NOSIGNAL | MSG_MORE); According to lwn.net, packets will be flushed automatically if they are large enough: If an application sets that option on a socket, the kernel will not send out short packets. Instead, it will wait until enough data has shown up to fill a maximum-size packet, then send it. When TCP_CORK is turned off, any remaining data will go out on the wire. But this section only refers to TCP_CORK. Now, what is the proper way to flush MSG_MORE packets? I can only think of two possibilities: Call send() with an empty buffer and without MSG_MORE being set Re-apply the TCP_CORK option as described on this page Unfortunately the whole topic is very poorly documented and I couldn't find much on the Internet. I am also wondering how to check that everything works as expected? Obviously running the server through `strace` is not an option. So the simplest way would be to use `netcat` and then look at its `strace` output? Or will the kernel handle traffic transmitted over a loopback interface differently?
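    Of the two options, re-applying TCP_CORK is the documented one: clearing the option pushes out whatever partial segment the kernel is holding, and it can then be set again for the next batch of MSG_MORE sends. A hedged C sketch:

        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>

        /* Flush data held back by MSG_MORE/TCP_CORK on a connected TCP socket.
         * Turning the cork off forces any partial segment onto the wire;
         * turning it back on resumes batching for subsequent MSG_MORE sends. */
        static void tcp_flush(int fd)
        {
            int off = 0, on = 1;
            setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof off);
            setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof on);
        }

    The simplest alternative is to send the final chunk of a logical message without the MSG_MORE flag, which ends the batching for that message on its own. As for verification, a packet capture (tcpdump or Wireshark on the loopback interface) shows the actual segment boundaries, which strace on either end cannot.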


  • How do I create Document Fragments with Nokogiri?

    - by viatropos
    I have an html document like this: <div class="something"> <textarea name="another"/> <div class="nested"> <label>Nested Label</label> <input name="nested_input"/> </div> </div> I have gone through and modified some of the html tree by building it into a Nokogiri::HTML::Document like so: html = Nokogiri::HTML(IO.read("test.html")) html.children.each do ... Now I want to be able to extract the nested part into a document so I can apply a stylesheet to it, or so I can manipulate it as if it were a Rails partial. Something like this: fragment = Nokogiri::HTML(html.xpath("//div[@class='nested']").first) Is there a way to do that? Such a way that when I output it, it doesn't wrap it in <html> tags and turn it into an HTML document; I just want HTML, no document. Is this possible?
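    A hedged sketch using Nokogiri's fragment API, which parses markup without adding the <html>/<body> wrapper:

        require 'nokogiri'

        html   = Nokogiri::HTML(IO.read("test.html"))
        nested = html.at_xpath("//div[@class='nested']")

        # Build a fragment from the node's own markup; no surrounding document is created.
        fragment = Nokogiri::HTML::DocumentFragment.parse(nested.to_html)

        puts fragment.to_html   # => just the <div class="nested">...</div> markup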


  • Bind postback data from a strong type view of type List<T>

    - by Robert Koritnik
    I have a strongly typed view of type List<List<MyViewModelClass>>. The outer list will always have two lists of List<MyViewModelClass>. For each of the two outer lists I want to display a group of checkboxes. Each set can have an arbitrary number of choices. My view model class looks similar to this: public class MyViewModelClass { public Area Area { get; set; } public bool IsGeneric { get; set; } public string Code { get; set; } public bool IsChecked { get; set; } } So the final view will look something like: Please select those that apply: First set of choices: x Option 1 x Option 2 x Option 3 etc. Second set of choices: x Second Option 1 x Second Option 2 x Second Option 3 x Second Option 4 etc. Checkboxes should display MyViewModelClass.Area.Name, and their value should be related to MyViewModelClass.Area.Id. Checked state is of course related to MyViewModelClass.IsChecked. Question: I wonder how I should use the Html.CheckBox() or Html.CheckBoxFor() helper to display my checkboxes? I have to get these values back to the server on a postback of course. I would like to have my controller action like one of these: public ActionResult ConsumeSelections(List<List<MyViewModelClass>> data) { // process data } public ActionResult ConsumeSelections(List<MyViewModelClass> first, List<MyViewModelClass> second) { // process data } If it makes things simpler, I could make a separate view model type like: public class Options { public List<MyViewModelClass> First { get; set; } public List<MyViewModelClass> Second { get; set; } } as well as changing my first version of the controller action to: public ActionResult ConsumeSelections(Options data) { // process data }


  • Automatic Adjusting Range Table

    - by Bradford
    I have a table with a range start date, a range end date, and a few other additional columns. On input of a new record, I want to automatically adjust any overlapping date ranges (shrinking them to allow for the new input). I also want to ensure that no overlapping records can accidentally be inserted into this table. I'm using Oracle and Java for my application code. How should I enforce the prevention of overlapping date ranges and also allow for automatically adjusting overlapping ranges? Should I create an AFTER INSERT trigger, with a dbms_lock to serialize access, to prevent the overlapping data? Then in Java, apply the logic to auto-adjust everything? Or should that part be in PL/SQL in a stored procedure call? This is something that we need for a couple of other tables so it'd be nice to abstract. If anyone has something like this already written, please share :) I did find this reference: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:474221407101 Here's an example of how each of the 4 overlapping cases should be handled for adjustment on insert: = Example 1 = In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y') Input (20, 50, 'A') Gives (0, 10, 'X') **(20, 50, 'A') **(51, 100, 'Z') (200, 500, 'Y') = Example 2 = In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y') Input (40, 80, 'A') Gives (0, 10, 'X') **(30, 39, 'Z') **(40, 80, 'A') **(81, 100, 'Z') (200, 500, 'Y') = Example 3 = In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y') Input (50, 120, 'A') Gives (0, 10, 'X') **(30, 49, 'Z') **(50, 120, 'A') (200, 500, 'Y') = Example 4 = In DB (Start, End, Value): (0, 10, 'X') **(30, 100, 'Z') (200, 500, 'Y') Input (20, 120, 'A') Gives (0, 10, 'X') **(20, 120, 'A') (200, 500, 'Y') The algorithm is as follows: given range = g; input range = i; output range set = o if i.start <= g.start if i.end >= g.end o_1 = i else o_1 = i o_2 = (i.end + 1, g.end) else if i.end >= g.end o_1 = (g.start, i.start - 1) o_2 = i else o_1 = (g.start, i.start - 1) o_2 = i o_3 = (i.end + 1, g.end)


  • Is there a more useful explanation for UITableViewStylePlain?

    - by mystify
    From the docs: In the plain style, section headers and footers float above the content if the part of a complete section is visible. A table view can have an index that appears as a bar on the right hand side of the table (for example, "a" through "z"). You can touch a particular label to jump to the target section. I find that very hard to grasp. First, this part: "if the part of a complete section is visible". What do they mean by this? It reads like a paradox. Which one is it? A) The table must be exactly the height of that section. If I have 5 rows, and each row is 50px high, I must make it 5*50 high. The full section must be visible on the screen. Otherwise, if I have 100 rows but my table view is only 400 high, this will not apply and nothing will float above my content. Sounds wrong. B) It doesn't matter how high my table view actually is. The header and footer float above the content and I can scroll the section. Makes more sense, but it is completely at odds with that confusing sentence: 'if the part of a complete section is visible'. Can anyone explain it better than they did?


  • Error CS0117: Namespace.A does not contain definition for Interface..

    - by SnOrfus
    I'm getting the error: 'Namespace.A' does not contain a definition for 'MyObjectInterface' and no extension method 'MyObjectInterface' accepting a first argument of type ... I've looked at this and this and neither seems to apply. The code looks like: public abstract class Base { public IObject MyObjectInterface { get; set; } } public class A : Base { /**/ } public class Implementation { public void Method() { Base obj = new A(); obj.MyObjectInterface = /* something */; // Error here } } IObject is defined in a separate assembly, but: IObject is in a separate assembly/namespace Base and A are in the same assembly/namespace each with correct using directives Implementation is in a third separate assembly namespace, also with correct using directives. Casting to A before trying to set MyObjectInterface doesn't work Specifically, I'm trying to set the value of MyObjectInterface to a mock object (though, I created a fake instead to no avail) I've tried everything I can think of. Please help before I lose more hair. edit I can't reproduce the error by creating a test app either, which is why I'm here and why I'm frustrated. @Reed Copsey: /* something */ is either an NUnit.DynamicMock(IMailer).MockInstance or a Fake object I created that inherits from IObject and just returns canned values. @Preet Sangha: I checked and no other assembly that is referenced has a definition for an IObject (specifically, it's called an IMailer). Thing is that intellisense picks up the Property, but when I compile, I get CS0117. I can even 'Go To Definition' in the implementation, and it takes me to where I defined it.


  • Archive log transfer from Oracle 9i to Oracle 10g

    - by Jamie Love
    Hi all, I have a situation where I need to transfer Oracle 9i archive logs to an Oracle 10g database, from where they are to be mined by a log-miner and then used by an Oracle streams capture/apply processes. (Oracle 9 archive logs can be read by the Oracle 10 logminer - I can manually copy the archive logs across, manually register them and have them mined, captured then applied). The difficulty is that the way Oracle does archive log transfer changed quite a bit between 9i and 10g and setting up the 9i database to transfer to the remote machine like so: log_archive_dest_state_2 = enable log_archive_dest_2 = "service=OTHERMACHINE arch optional" no longer works. I get this in the 9i logs: *** 2009-05-22 04:03:44.149 RFS network connection lost at host 'OTHERMACHINE' Error 3113 attaching RFS server to standby instance at host 'OTHERMACHINE' Error 3113 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'OTHERMACHINE' Heartbeat failed to connect to standby 'OTHERMACHINE'. Error is 3113. *** 2009-05-22 04:03:44.150 kcrrfail: dest:2 err:3113 force:0 ORA-03113: end-of-file on communication channel And in the 10g log I get: Fri May 22 04:07:42 2009 WARNING: inbound connection timed out (ORA-3136) My question is: Does anyone know how I could configure my 9i or 10g server such that the 10g server will accept the 9i connection in such a way that I can transfer the 9i archive logs to the 10g server. It would be a bonus if the archive logs would be automatically registered in the 10g server. Note I have not set up a full DataGuard configuration here and the 10g database is not a secondary server. Thanks for any suggestions. Edit Note that I can log on to the 10g server from the 9i server via sqlplus, so connectivity is not the problem Edit 2 After a large amount of time searching for a solution, I've finally decided that such a mechanism doesn't work, and that a non-Oracle method of transferring archive logs from 9i to 10g will need to be used (e.g. rsync).


  • C# using consts in static classes

    - by NickLarsen
    I was plugging away on an open source project this past weekend when I ran into a bit of code that confused me enough to look up the usage in the C# specification. The code in question is as follows: internal static class SomeStaticClass { private const int CommonlyUsedValue = 42; internal static string UseCommonlyUsedValue(...) { // some code value = CommonlyUsedValue + ...; return value.ToString(); } } I was caught off guard because this appears to be a non-static field being used by a static function, which somehow compiled just fine in a static class! The specification states (§10.4): A constant-declaration may include a set of attributes (§17), a new modifier (§10.3.4), and a valid combination of the four access modifiers (§10.3.5). The attributes and modifiers apply to all of the members declared by the constant-declaration. Even though constants are considered static members, a constant-declaration neither requires nor allows a static modifier. It is an error for the same modifier to appear multiple times in a constant declaration. So now it makes a little more sense because constants are considered static members, but the rest of the sentence is a bit surprising to me. Why is it that a constant-declaration neither requires nor allows a static modifier? Admittedly I did not know the spec well enough for this to immediately make sense in the first place, but why was the decision made to not force constants to use the static modifier if they are considered constants? Looking at the last sentence in that paragraph, I cannot figure out if it is regarding the previous statement directly and there is some implicit static modifier on constants to begin with, or if it stands on its own as another rule for constants. Can anyone help me clear this up?

