Search Results

Search found 9371 results on 375 pages for 'existing'.


  • How do I create a thread-safe write-once read-many value in Java?

    - by Software Monkey
    This is a problem I encounter frequently in working with more complex systems and which I have never figured out a good way to solve. It usually involves variations on the theme of a shared object whose construction and initialization are necessarily two distinct steps. This is generally because of architectural requirements, similar to applets, so answers that suggest I consolidate construction and initialization are not useful. By way of example, let's say I have a class that is structured to fit into an application framework like so:

        public class MyClass {
            private /*ideally-final*/ SomeObject someObject;

            MyClass() { someObject=null; }

            public void startup() {
                someObject=new SomeObject(...arguments from environment which are not available until startup is called...);
            }

            public void shutdown() {
                someObject=null; // this is not necessary, I am just expressing the intended scope of someObject explicitly
            }
        }

    I can't make someObject final since it can't be set until startup() is invoked. But I would really like it to reflect its write-once semantics and be able to directly access it from multiple threads, preferably avoiding synchronization. The idea being to express and enforce a degree of finalness, I conjecture that I could create a generic container, like so:

        public class WoRmObject<T> {
            private T object;

            WoRmObject() { object=null; }

            public WoRmObject set(T val) { object=val; return this; }
            public T get()               { return object; }
        }

    and then in MyClass, above, do:

        private final WoRmObject<SomeObject> someObject;

        MyClass() { someObject=new WoRmObject<SomeObject>(); }

        public void startup() {
            someObject.set(new SomeObject(...arguments from environment which are not available until startup is called...));
        }

    Which raises some questions for me: Is there a better way, or an existing Java class (it would have to be available in Java 4)? Is this thread-safe, provided that no other thread accesses someObject.get() until after its set() has been called? The other threads will only invoke methods on MyClass between startup() and shutdown() - the framework guarantees this. Given the completely unsynchronized WoRmObject container, is it ever possible under the JMM to see a value of object which is neither null nor a reference to a SomeObject? In other words, does the JMM always guarantee that no thread can observe the memory of an object to be whatever values happened to be on the heap when the object was allocated?
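    One approach that is often suggested for this kind of write-once holder - sketched below, and not taken from the question - is to publish the value through a volatile field. Note the assumptions: the sketch relies on generics and on the volatile publication guarantee of the Java 5 memory model, so it does not meet the Java 1.4 constraint above; under the old JMM a synchronized getter/setter (or the framework's own happens-before guarantee between startup() and the worker threads) would be needed instead.

        // Sketch only: assumes Java 5+ (generics, revised memory model).
        public final class WormHolder<T> {
            private volatile T value;

            public void set(T val) {
                if (value != null) {
                    throw new IllegalStateException("value already set");
                }
                value = val; // volatile write publishes the fully constructed object
            }

            public T get() {
                return value; // null until set() has completed
            }
        }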


  • Unable to get index from jQuery UI slider range

    - by Phil.Wheeler
    I'm having a hell of a time trying to get (what I thought was) a simple index from a collection of multiple sliders. The HTML is as follows:

        <div id="left-values" class="line">
            <span id="l1" style="padding: 0 1.8em;">0</span>
            <span id="l2" style="padding: 0 1.8em;">0</span>
            <span id="l3" style="padding: 0 1.8em;">0</span>
            <span id="l4" style="padding: 0 1.8em;">0</span>
            <span id="l5" style="padding: 0 1.8em;">0</span>
            <span id="l6" style="padding: 0 1.8em;">0</span>
            <span id="l7" style="padding: 0 1.8em;">0</span>
            <span id="l8" style="padding: 0 1.8em;">0</span>
        </div>

    And the jQuery code is:

        // setup audiometry sliders
        $("#eq > span").each(function (e) {
            // read initial values from markup and remove that
            var value = parseInt($(this).text());
            // var index = $(this).index; <- this didn't work.
            $(this).empty();
            $(this).slider({
                value: value,
                slide: function (event, ui) {
                    //console.log($(this).attr('id')); <- neither did this.
                    //console.log(index);
                    $('#left-values span:first').text(ui.value);
                }
            })
        });

    The problem is that jQuery UI - when creating a slider - replaces the existing HTML with its own markup. This includes any ID values and, for whatever reason, I can't get the index for a given slider to surface either. So I'm running out of ideas.
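    One likely fix, sketched against the markup above: .each() already passes the zero-based index as the first argument to its callback, and the slide handler can capture that index in a closure, so it no longer matters that jQuery UI replaces the slider's markup and IDs.

        // Sketch only - selectors taken from the question's code.
        $("#eq > span").each(function (i) {
            var value = parseInt($(this).text(), 10);
            $(this).empty().slider({
                value: value,
                slide: function (event, ui) {
                    // i is the position of the slider whose handle moved
                    $("#left-values span").eq(i).text(ui.value);
                }
            });
        });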


  • How do I create/use a Fluent NHibernate convention to automap UInt32 properties to an SQL Server 2008 database?

    - by dommer
    I'm trying to use a convention to map UInt32 properties to a SQL Server 2008 database. I don't seem to be able to create a solution based on existing web sources, due to updates in the way Fluent NHibernate works - i.e. examples are out of date. I'm trying to have NHibernate generate the schema (via ExposeConfiguration). I'm happy to have NHibernate map it to anything sensible (e.g. bigint). Here's my code as it currently stands (which, when I try to expose the schema, fails due to SQL Server not supporting UInt32). Apologies for the code being a little long, but I'm not 100% sure what is relevant to the problem, so I'm erring on the side of caution. Most of it is based on this post. The error reported is: System.ArgumentException : Dialect does not support DbType.UInt32 I think I'll need a relatively comprehensive example, as I don't seem to be able to pull the pieces together into a working solution, at present. FluentConfiguration configuration = Fluently.Configure() .Database(MsSqlConfiguration.MsSql2008 .ConnectionString(connectionString)) .Mappings(mapping => mapping.AutoMappings.Add( AutoMap.AssemblyOf<Product>() .Conventions.Add<UInt32UserTypeConvention>())); configuration.ExposeConfiguration(x => new SchemaExport(x).Create(false, true)); namespace NHibernateTest { public class UInt32UserTypeConvention : UserTypeConvention<UInt32UserType> { // Empty. } } namespace NHibernateTest { public class UInt32UserType : IUserType { // Public properties. public bool IsMutable { get { return false; } } public Type ReturnedType { get { return typeof(UInt32); } } public SqlType[] SqlTypes { get { return new SqlType[] { SqlTypeFactory.Int32 }; } } // Public methods. public object Assemble(object cached, object owner) { return cached; } public object DeepCopy(object value) { return value; } public object Disassemble(object value) { return value; } public new bool Equals(object x, object y) { return (x != null && x.Equals(y)); } public int GetHashCode(object x) { return x.GetHashCode(); } public object NullSafeGet(IDataReader rs, string[] names, object owner) { int? i = (int?)NHibernateUtil.Int32.NullSafeGet(rs, names[0]); return (UInt32?)i; } public void NullSafeSet(IDbCommand cmd, object value, int index) { UInt32? u = (UInt32?)value; int? i = (Int32?)u; NHibernateUtil.Int32.NullSafeSet(cmd, i, index); } public object Replace(object original, object target, object owner) { return original; } } }


  • basic operations for modifying a source document with XSLT

    - by SpliFF
    All the tutorials and examples I've found of XSLT processing seem to assume your destination will be a significantly different format/structure to your source and that you know the structure of the source in advance. I'm struggling with finding out how to perform simple "in-place" modifications to an HTML document without knowing anything else about its existing structure. Could somebody show me a clear example that, given an arbitrary unknown HTML source, will:

    1.) delete the classname 'foo' from all divs
    2.) delete a node if it's empty (i.e. <p></p>)
    3.) delete a <p> node if its first child is <br>
    4.) add newattr="newvalue" to all H1
    5.) replace 'heading' in text nodes with 'title'
    6.) wrap all <u> tags in <b> tags (i.e. <u>foo</u> -> <b><u>foo</u></b>)
    7.) output the transformed document without changing anything else

    The above examples are the primary types of transform I wish to accomplish. Understanding how to do the above will go a long way towards helping me build more complex transforms. To help clarify/test the examples, here is a sample source and output; however, I must reiterate that I want to work with arbitrary samples without rewriting the XSLT for each source:

        <!doctype html>
        <html>
        <body>
          <h1>heading</h1>
          <p></p>
          <p><br>line</p>
          <div class="foo bar"><u>baz</u></div>
          <p>untouched</p>
        </body>
        </html>

    output:

        <!doctype html>
        <html>
        <body>
          <h1 newattr="newvalue">title</h1>
          <div class="bar"><b><u>baz</u></b></div>
          <p>untouched</p>
        </body>
        </html>
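    For reference, the usual starting point for in-place edits like these is the identity transform, with one extra template per change. The sketch below covers only two of the seven operations (the others follow the same override pattern) and assumes the input has already been parsed as well-formed XML/XHTML:

        <!-- Partial sketch: identity transform plus two example overrides. -->
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

          <!-- copy everything unchanged by default -->
          <xsl:template match="@*|node()">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>

          <!-- 4.) add newattr="newvalue" to all h1 elements -->
          <xsl:template match="h1">
            <xsl:copy>
              <xsl:attribute name="newattr">newvalue</xsl:attribute>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>

          <!-- 2.) drop p elements that have no content -->
          <xsl:template match="p[not(node())]"/>

        </xsl:stylesheet>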


  • How to dynamically change the content of a facet in a custom component?

    - by romaintaz
    Hello, Let's consider that I want to extend an existing JSF component, such as the <rich:datatable/>. My main requirement is to dynamically modify the content of a <f:facet>, to change its content. What is the best way to achieve that? Or where is the best place in the code to achieve that? In my faces-config.xml, I have the following declaration: <faces-config> ... <component> <component-type>my.component.dataTable</component-type> <component-class>my.project.component.table.MyHtmlDataTable</component-class> </component> ... <render-kit> <render-kit-id>HTML_BASIC</render-kit-id> <renderer> <component-family>org.richfaces.DataTable</component-family> <renderer-type>my.renderkit.dataTable</renderer-type> <renderer-class>my.project.component.table.MyDataTableRenderer</renderer-class> </renderer> ... Also, my my-project.taglib.xml file (as I use Facelets) looks like: <facelet-taglib> <namespace>http://my.project/jsf</namespace> <tag> <tag-name>dataTable</tag-name> <component> <component-type>my.component.dataTable</component-type> <renderer-type>my.renderkit.dataTable</renderer-type> </component> </tag> So as you can see, I have two classes in my project for my custom datatable: MyHtmlDataTable and MyDataTableRenderer. One of my idea is to modify the content of the <f:facet> directly in the doEncodeBegin() method of my renderer. This is working (in fact almost working), but I don't really think that's the better place to achieve my modification. What do you think? Technical information: JSF 1.2, Facelets, Richfaces 3.3.2, Java 1.6
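    One avenue worth exploring (a sketch under assumptions, not a verified answer): facets are exposed as a mutable Map on UIComponent, so the renderer can swap a facet's content just before encoding - for example from the doEncodeBegin() mentioned above. The helper below is hypothetical; only getFacets() and the standard HtmlOutputText component are taken from the JSF API.

        // Hypothetical helper, callable from the renderer's doEncodeBegin().
        private void replaceHeaderFacet(UIComponent table) {
            HtmlOutputText newHeader = new HtmlOutputText();
            newHeader.setValue("Replaced header content");
            table.getFacets().put("header", newHeader);
        }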


  • What classes should I map against with NHibernate?

    - by apollodude217
    Currently, we use NHibernate to map business objects to database tables. Said business objects enforce business rules: The set accessors will throw an exception on the spot if the contract for that property is violated. Also, the properties enforce relationships with other objects (sometimes bidirectional!). Well, whenever NHibernate loads an object from the database (e.g. when ISession.Get(id) is called), the set accessors of the mapped properties are used to put the data into the object. What's good is that the middle tier of the application enforces business logic. What's bad is that the database does not. Sometimes crap finds its way into the database. If crap is loaded into the application, it bails (throws an exception). Sometimes it clearly should bail because it cannot do anything, but what if it can continue working? E.g., an admin tool that gathers real-time reports runs a high risk of failing unnecessarily instead of allowing an admin to even fix a (potential) problem. I don't have an example on me right now, but in some instances, letting NHibernate use the "front door" properties that also enforce relationships (especially bidi) leads to bugs. What are the best solutions? Currently, I will, on a per-property basis, create a "back door" just for NHibernate: public virtual int Blah {get {return _Blah;} set {/*enforces BR's*/}} protected virtual int _Blah {get {return blah;} set {blah = value;}} private int blah; I showed the above in C# 2 (no default properties) to demonstrate how this gets us basically 3 layers of, or views, to blah!!! While this certainly works, it does not seem ideal as it requires the BL to provide one (public) interface for the app-at-large, and another (protected) interface for the data access layer. There is an additional problem: To my knowledge, NHibernate does not give you a way to distinguish between the name of the property in the BL and the name of the property in the entity model (i.e. the name you use when you query, e.g. via HQL--whenever you give NHibernate the name (string) of a property). This becomes a problem when, at first, the BR's for some property Blah are no problem, so you refer to it in your O/R mapping... but then later, you have to add some BR's that do become a problem, so then you have to change your O/R mapping to use a new _Blah property, which breaks all existing queries using "Blah" (common problem with programming against strings). Has anyone solved these problems?!
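    One option worth knowing about for the "back door" problem (a sketch, with the property name reused from the question): NHibernate's access strategies let the mapping bypass the public setter entirely, so hydration writes the private field while queries keep using the public property name.

        <!-- Sketch only: maps property "Blah" but reads/writes the camel-cased
             backing field ("blah") instead of going through the setter. -->
        <property name="Blah" access="field.camelcase" />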


  • Simplifying const Overloading?

    - by templatetypedef
    Hello all- I've been teaching a C++ programming class for many years now and one of the trickiest things to explain to students is const overloading. I commonly use the example of a vector-like class and its operator[] function:

        template <typename T>
        class Vector {
        public:
            T& operator[] (size_t index);
            const T& operator[] (size_t index) const;
        };

    I have little to no trouble explaining why it is that two versions of the operator[] function are needed, but in trying to explain how to unify the two implementations together I often find myself wasting a lot of time with language arcana. The problem is that the only good, reliable way that I know how to implement one of these functions in terms of the other is with the const_cast/static_cast trick:

        template <typename T>
        const T& Vector<T>::operator[] (size_t index) const {
            /* ... your implementation here ... */
        }

        template <typename T>
        T& Vector<T>::operator[] (size_t index) {
            return const_cast<T&>(static_cast<const Vector&>(*this)[index]);
        }

    The problem with this setup is that it's extremely tricky to explain and not at all intuitively obvious. When you explain it as "cast to const, then call the const version, then strip off constness" it's a little easier to understand, but the actual syntax is frightening. Explaining what const_cast is, why it's appropriate here, and why it's almost universally inappropriate elsewhere usually takes me five to ten minutes of lecture time, and making sense of this whole expression often requires more effort than the difference between const T* and T* const. I feel that students need to know about const-overloading and how to do it without needlessly duplicating the code in the two functions, but this trick seems a bit excessive in an introductory C++ programming course. My question is this - is there a simpler way to implement const-overloaded functions in terms of one another? Or is there a simpler way of explaining this existing trick to students? Thanks so much!
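    One alternative that is sometimes easier to teach (a sketch only, assuming C++14 and - purely for illustration - std::vector storage, since the original class leaves its storage unspecified): forward both overloads to a private static helper template, which deduces the constness of *this instead of casting it away.

        #include <cstddef>
        #include <vector>

        template <typename T>
        class Vector {
        public:
            T& operator[](std::size_t index)             { return element(*this, index); }
            const T& operator[](std::size_t index) const { return element(*this, index); }

        private:
            // Self deduces as Vector<T> or const Vector<T>, so the return type
            // is T& or const T& accordingly - no const_cast involved.
            template <typename Self>
            static decltype(auto) element(Self& self, std::size_t index) {
                return self.data[index];
            }

            std::vector<T> data;
        };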


  • ajaxSubmit and Other Code. Can someone help me determine what this code is doing?

    - by Matt Dawdy
    I've inherited some code that I need to debug. It isn't working at present. My task is to get it to work. No other requirements have been given to me. No, this isn't homework, this is a maintenance nightmare job. ASP.NET (framework 3.5), C#, jQuery 1.4.2. This project makes heavy use of jQuery and AJAX. There is a drop down on a page that, when an item is chosen, is supposed to add that item (it's a user) to an object in the database. To accomplish this, the previous programmer first, on page load, dynamically loads the entire page through AJAX. To do this, he's got 5 divs, and each one is loaded from a jQuery call to a different full page in the website. Somehow, the HTML and BODY and all the other stuff is stripped out and the contents of the div are loaded with the content of the aspx page. Which seems incredibly wrong to me, since it relies on the browser to magically strip out html, head, body, form tags and merge with the existing html head body form tags. Also, as the "content" page is returned as a string, the previous programmer has this code running on it before it is appended to the div:

        function CleanupResponseText(responseText, uniqueName) {
            responseText = responseText.replace("theForm.submit();", "SubmitSubForm(theForm, $(theForm).parent());");
            responseText = responseText.replace(new RegExp("theForm", "g"), uniqueName);
            responseText = responseText.replace(new RegExp("doPostBack", "g"), "doPostBack" + uniqueName);
            return responseText;
        }

    When the dropdown itself fires its onchange event, here is the code that gets fired:

        function SubmitSubForm(form, container) {
            //ShowLoading(container);
            $(form).ajaxSubmit(
                {
                    url: $(form).attr("action"),
                    success: function(responseText) {
                        $(container).html(CleanupResponseText(responseText, form.id));
                        $("form", container).css("margin-top", "0").css("padding-top", "0");
                        //HideLoading(container);
                    }
                }
            );
        }

    This blows up in IE, with the message that "Microsoft JScript runtime error: Object doesn't support this property or method" -- which, I think, has to be that the $(form).ajaxSubmit method doesn't exist. What is this code really trying to do? I am so turned around right now that I think my only option is to scrap everything and start over. But I'd rather not do that unless necessary. Is this code good? Is it working against .NET, and is that why we are having issues?
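    One thing worth checking before rewriting anything: ajaxSubmit() is not part of jQuery core - it comes from the jQuery Form plugin - so "Object doesn't support this property or method" is consistent with that plugin simply not being loaded on the page. A sketch of the expected include order (the script paths are hypothetical):

        <script src="/Scripts/jquery-1.4.2.min.js" type="text/javascript"></script>
        <!-- the jQuery Form plugin must be loaded for $(form).ajaxSubmit() to exist -->
        <script src="/Scripts/jquery.form.js" type="text/javascript"></script>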


  • SQL Server architecture guidance

    - by Liam
    Hi, We are designing a new version of our existing product on a new schema. It's an internal web application with possibly 100 concurrent users (max). This will run on a SQL Server 2008 database. One of the discussion items recently is whether we should have a single database or split the database, for performance reasons, across 2 separate databases. The database could grow anywhere from 50-100GB over 5 years. We are developers and not DBAs, so it would be nice to get some general guidance. [I know the answer is not simple as it depends on the schema, archiving policy, amount of data etc.]

    Option 1: Single Main Database [This is my preferred option]. The plan would be to have all the tables in a single database and possibly to use file groups and partitioning to separate the data across multiple disks if required [use schemas if appropriate]. This should deal with the performance concerns. One of the comments wrt this was that a single server instance would still be processing this data, so there would still be a processing bottleneck. For reporting we could have a separate reporting DB, but this is still being discussed.

    Option 2: Split the database into 2 separate databases.
    DB1 - Customers, Accounts, Customer resources etc.
    DB2 - This would contain the bulk of the data [i.e. vehicle tracking data, financial transaction tables etc]. These tables would typically contain a lot of data. [It could reside on a separate server if required.]
    This plan would involve keeping the main data in a smaller database [DB1] and retaining the [mainly] read-only transaction type data in a separate DB [DB2]. The UI would mainly read from DB1 and thus be more responsive. [I'm aware that this option makes it harder for Referential Integrity to be enforced.]

    Points for consideration: As we are at the design stage, we can at least make proper use of indexes to deal with performance issues, so that's why option 1 is attractive to me and it's more of a standard approach. For both options we are considering implementing an archiving database. Apologies for the long question. In summary the question is 1 DB or 2? Thanks in advance, Liam
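    To make the filegroup idea in Option 1 concrete, here is a minimal T-SQL sketch (the database, file, and table names are hypothetical): the large, mostly read-only tables go on their own filegroup and disk while everything stays in one database, so referential integrity is unaffected.

        ALTER DATABASE FleetDb ADD FILEGROUP TrackingData;

        ALTER DATABASE FleetDb
            ADD FILE (NAME = TrackingData1,
                      FILENAME = 'E:\SqlData\FleetDb_Tracking.ndf')
            TO FILEGROUP TrackingData;

        -- Large, append-mostly table placed on the dedicated filegroup
        CREATE TABLE dbo.VehicleTracking
        (
            VehicleId  INT          NOT NULL,
            RecordedAt DATETIME2    NOT NULL,
            Latitude   DECIMAL(9,6) NOT NULL,
            Longitude  DECIMAL(9,6) NOT NULL
        ) ON TrackingData;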


  • How can a C/C++ program put itself into background?

    - by Larry Gritz
    What's the best way for a running C or C++ program that's been launched from the command line to put itself into the background, equivalent to if the user had launched from the unix shell with '&' at the end of the command? (But the user didn't.) It's a GUI app and doesn't need any shell I/O, so there's no reason to tie up the shell after launch. But I want a shell command launch to be auto-backgrounded without the '&' (or on Windows). Ideally, I want a solution that would work on any of Linux, OS X, and Windows. (Or separate solutions that I can select with #ifdef.) It's ok to assume that this should be done right at the beginning of execution, as opposed to somewhere in the middle. One solution is to have the main program be a script that launches the real binary, carefully putting it into the background. But it seems unsatisfying to need these coupled shell/binary pairs. Another solution is to immediately launch another executed version (with 'system' or CreateProcess), with the same command line arguments, but putting the child in the background and then having the parent exit. But this seems clunky compared to the process putting itself into background.

    Edited after a few answers: Yes, a fork() (or system(), or CreateProcess on Windows) is one way to sort of do this, that I hinted at in my original question. But all of these solutions make a SECOND process that is backgrounded, and then terminate the original process. I was wondering if there was a way to put the EXISTING process into the background. One difference is that if the app was launched from a script that recorded its process id (perhaps for later killing or other purpose), the newly forked or created process will have a different id and so will not be controllable by any launching script, if you see what I'm getting at.

    Edit #2: fork() isn't a good solution for OS X, where the man page for 'fork' says that it's unsafe if certain frameworks or libraries are being used. I tried it, and my app complains loudly at runtime: "The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec()." I was intrigued by daemon(), but when I tried it on OS X, it gave the same error message, so I assume that it's just a fancy wrapper for fork() and has the same restrictions. Excuse the OS X centrism, it just happens to be the system in front of me at the moment. But I am indeed looking for a solution to all three platforms.
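    For completeness, here is what the fork-based approach the discussion keeps circling back to looks like on POSIX systems - a minimal sketch only, and subject to the objections already raised above (the surviving process has a new pid, and OS X objects to continuing after fork() without exec()):

        /* Sketch: detach from the launching shell by forking, exiting the
         * parent, and starting a new session in the child (POSIX only). */
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        static void background_self(void)
        {
            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
                exit(EXIT_FAILURE);
            }
            if (pid > 0) {
                exit(EXIT_SUCCESS);   /* parent exits, shell prompt returns */
            }
            if (setsid() < 0) {       /* child detaches from the terminal */
                perror("setsid");
            }
        }

        int main(void)
        {
            background_self();
            /* ... GUI event loop would run here ... */
            return 0;
        }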


  • gcc optimization? bug? and its practical implication to project

    - by kumar_m_kiran
    Hi All, My questions are divided into three parts.

    Question 1: Consider the below code,

        #include <iostream>
        using namespace std;

        int main( int argc, char *argv[])
        {
            const int v = 50;
            int i = 0X7FFFFFFF;

            cout<<(i + v)<<endl;

            if ( i + v < i )
            {
                cout<<"Number is negative"<<endl;
            }
            else
            {
                cout<<"Number is positive"<<endl;
            }

            return 0;
        }

    No specific compiler optimisation options (-O flags) are used; the basic compilation command g++ -o test main.cpp is used to form the executable. This seemingly very simple code has odd behaviour in SUSE 64 bit OS, gcc version 4.1.2. The expected output is "Number is negative"; instead, only in SUSE 64 bit OS, the output is "Number is positive". After some amount of analysis and doing a 'disass' of the code, I find that the compiler optimises in the below format - since i is the same on both sides of the comparison and cannot be changed in the same expression, remove 'i' from the equation. Now the comparison leads to if ( v < 0 ), where v is a constant positive, so during compilation itself the else part's cout function address is added to the register. No cmp/jmp instructions can be found. I see that the behaviour is only in gcc 4.1.2 SUSE 10. When tried in AIX 5.1/5.3 and HP IA64, the result is as expected. Is the above optimisation valid? Or, is using the overflow mechanism for int not a valid use case?

    Question 2: Now when I change the conditional statement from if (i + v < i) to if ( (i + v) < i ), even then the behaviour is the same. This at least I would personally disagree with: since additional braces are provided, I expect the compiler to create a temporary built-in type variable and then compare, thus nullifying the optimisation.

    Question 3: Suppose I have a huge code base, and I migrate my compiler version; such a bug/optimisation can cause havoc in my system behaviour. Of course, from a business perspective, it is very ineffective to test all lines of code again just because of a compiler upgrade. I think for all practical purposes, these kinds of errors are very difficult to catch (during upgrade) and will invariably be leaked to the production site. Can anyone suggest any possible way to ensure that these kinds of bugs/optimizations do not have any impact on my existing system/code base?

    PS: When the const for v is removed from the code, then the optimization is not done by the compiler. I believe it is perfectly fine to use the overflow mechanism to find if the variable is from MAX - 50 value (in my case).
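    Background that explains what was observed, plus a sketch: signed integer overflow is undefined behaviour in C++, so gcc is allowed to assume i + v never wraps and fold the comparison away (gcc's -fwrapv flag changes that assumption). Below is a hedged example of an overflow test that stays within defined behaviour by comparing against the limit instead of relying on wraparound:

        #include <iostream>
        #include <limits>

        int main() {
            const int v = 50;
            int i = std::numeric_limits<int>::max();

            // Well-defined: check the headroom before adding.
            if (i > std::numeric_limits<int>::max() - v) {
                std::cout << "i + v would overflow" << std::endl;
            } else {
                std::cout << "Number is positive" << std::endl;
            }
            return 0;
        }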


  • NHibernate and objects with value-semantics

    - by Groo
    Problem: If I pass a class with value semantics (Equals method overridden) to NHibernate, NHibernate tries to save it to db even though it just saved an entity equal by value (but not by reference) to the database. What am I doing wrong? Here is a simplified example model for my problem: Let's say I have a Person entity and a City entity. One thread (producer) is creating new Person objects which belong to a specific existing City, and another thread (consumer) is saving them to a repository (using NHibernate as DAL). Since there is lot of objects being flushed at a time, I am using Guid.Comb id's to ensure that each insert is made using a single SQL command. City is an object with value-type semantics (equal by name only -- for this example purposes only): public class City : IEquatable<City> { public virtual Guid Id { get; private set; } public virtual string Name { get; set; } public virtual bool Equals(City other) { if (other == null) return false; return this.Name == other.Name; } public override bool Equals(object obj) { return Equals(obj as City); } public override int GetHashCode() { return this.Name.GetHashCode(); } } Fluent NH mapping is something like: public class PersonMap : ClassMap<Person> { public PersonMap() { Id(x => x.Id) .GeneratedBy.GuidComb(); References(x => x.City) .Cascade.SaveUpdate(); } } public class CityMap : ClassMap<City> { public CityMap() { Id(x => x.Id) .GeneratedBy.GuidComb(); Map(x => x.Name); } } Right now (with my current NHibernate mapping config), my consumer thread maintains a dictionary of cities and replaces their references in incoming person objects (otherwise NHibernate will see a new, non-cached City object and try to save it as well), and I need to do it for every produced Person object. Since I have implemented City class to behave like a value type, I hoped that NHibernate would compare Cities by value and not try to save them each time -- i.e. I would only need to do a lookup once per session and not care about them anymore. Is this possible, and if yes, what am I doing wrong here?


  • Infinite Refresh Loop in Firefox 3.0

    - by Martin Gordon
    I'm having a strange issue with my Javascript in Firefox 3.0.x. In Firefox 3.0.12, the page constantly reloads as soon as the list body is loaded. Neither Firefox 3.5, Safari 4 nor Chrome 5 (all on Mac) experiences this issue. EDIT: I've created an isolated example rather than pulling this from my existing code.

    test.js

        function welcomeIndexOnLoad() {
            $("#options a").live('click', function () {
                optionClicked($(this), "get_list_body.html");
                return false;
            });

            $(document).ready(function() {
                optionClicked(null, "get_list_body.html");
            });
        }

        function optionClicked(sender, URL) {
            queryString = "";
            if (sender != null) {
                queryString = $(sender).attr("rel");
            }

            $("#list_body").load(URL + "?" + queryString, function(resp, status, AJAXReq) {
                console.log(resp);
                console.log("" + status);
                location.hash = queryString;
            });
        }

    test.html

        <html>
        <head>
            <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.js"></script>
            <script type="text/javascript" src="test.js"></script>
            <script>
                welcomeIndexOnLoad();
            </script>
        </head>
        <body>
            <div id="container">
                Outside of list body.
                <div id="list_body">
                </div>
            </div>
        </body>
        </html>

    get_list_body.html

        <h3>
        <div id="options">
            <a href="#" rel="change_list">Change List</a>
        </div>
        <ul>
            <li>li</li>
        </ul>

    jQuery line 5252 (an xhr.send() call) shows up in the console as soon as the page reloads:

        xhr.send( type === "POST" || type === "PUT" || type === "DELETE" ? s.data : null );
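    One guess worth testing (not a verified diagnosis): on the initial load, sender is null, so queryString is "" and the success callback assigns an empty string to location.hash; some older browsers treat that assignment as navigation, which would re-trigger the load. Guarding the assignment rules this in or out quickly:

        // Sketch only: skip the hash assignment when there is nothing to set.
        if (queryString) {
            location.hash = queryString;
        }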


  • How to assign WPF resources to other resource tags

    - by Tom
    This is quite obscure, I may just be missing something extremely simple. Scenario 1 Lets say I create a gradient brush, like this in my <Window.Resources> section: <LinearGradientBrush x:Key="GridRowSelectedBackBrushGradient" StartPoint="0,0" EndPoint="0,1"> <GradientStop Color="#404040" Offset="0.0" /> <GradientStop Color="#404040" Offset="0.5" /> <GradientStop Color="#000000" Offset="0.6" /> <GradientStop Color="#000000" Offset="1.0" /> </LinearGradientBrush> Then much later on, I want to override the HighlightBrushKey for a DataGrid. I have basically done it like this (horrible); <LinearGradientBrush x:Key="{x:Static SystemColors.HighlightBrushKey}" GradientStops="{Binding Source={StaticResource GridRowSelectedBackBrushGradient}, Path=GradientStops}" StartPoint="{Binding Source={StaticResource GridRowSelectedBackBrushGradient}, Path=StartPoint}" EndPoint="{Binding Source={StaticResource GridRowSelectedBackBrushGradient}, Path=EndPoint}" /> This is obviously not the most slick way of referencing a resource. I also came up with the following problem, which is almost identical. Scenario 2 Say I created two colors in my <Window.Resources> markup, like so: <SolidColorBrush x:Key="DataGridRowBackgroundBrush" Color="#EAF2FB" /> <SolidColorBrush x:Key="DataGridRowBackgroundAltBrush" Color="#FFFFFF" /> Then later on, I want to supply them in an Array, which feeds the ConverterParameter on a Binding so I can supply the custom Converter with my static resource instances: <Setter Property="Background"> <Setter.Value> <Binding RelativeSource="{RelativeSource Mode=Self}" Converter="{StaticResource BackgroundBrushConverter}"> <Binding.ConverterParameter> <x:Array Type="{x:Type Brush}"> <SolidColorBrush Color="{Binding Source={StaticResource DataGridRowBackgroundBrush}, Path=Color}" /> <SolidColorBrush Color="{Binding Source={StaticResource DataGridRowBackgroundAltBrush}, Path=Color}" /> </x:Array> </Binding.ConverterParameter> </Binding> </Setter.Value> </Setter> What I've done is attempt to rereference an existing resource, but in my efforts I've actually recreated the resource, and bound the properties so they match. Again, this is not ideal. Because I've now hit this problem at least twice, is there a better way? Thanks, Tom


  • Which MS technologies would be suited for a data intensive application?

    - by steve.tse
    I'm a junior VB.net developer with little application design knowledge. I've been reading a lot of material online regarding different design patterns, frameworks, and methodologies. It's become a bit confusing for me. Right now I'm trying to decide on what language would be best suited to convert an existing VB6 application (with SQL server backend). I need to update the UI and add more user functionality and reporting capabilities. Initially I was thinking of using WPF and attempting the MVVM model for this big project. Reports would be generated from SSRS. A peer suggested using ASP.net and I don't have enough experience to determine what would be better. The senior programmers here are stuck on using VB6 and don't have any input on what to use. They are encouraging me to use the latest technologies. This application would be for ~20 users in a central location. Ideally I would stick to a Microsoft .net language. The current interface is similar to a datagrid table where the user would click in to see the detail of each record. They would need to have multiple records open at any given time. I look forward to all the advice I can get.

    EDIT 2010/04/22 2:47 PM EST

    What is your audience? Internal clients within an intranet.
    How complex are the interactions you expect to implement? Not very... displaying data from SQL server to the UI. Allow user updates to said data. Typically just one user modifying a record.
    Do you require near real-time data updates? No.
    How often do you expect to update the application after the first release? Twice/year.
    Do you expect a well-defined set of client platforms? Yes, Windows XP environment, potentially upgrading to Win7. Currently in IE6, moving to IE7 or 8 within a couple of months.
    Do users need access from anywhere? No, just from their PC.


  • Add class to elements which already have a class

    - by bwstud
    I have a group of divs which I'm dynamically generating when a button is clicked with the class, "brick". This gives them dimension and starting position of top: 0. I'm trying to get them to animate to the bottom of the view using a css transition with a second class assignment which gives them a bottom position: 0;. Can't figure out the syntax for adding a second class to elements with a pre-existing class. On inspection they only show the original class of, "brick". HTML <!DOCTYPE html> <html> <head> <script src="http://code.jquery.com/jquery-2.1.0.min.js"></script> <meta charset="utf-8"> <title>JS Bin</title> </head> <body> <div id="container"> <div id="button" >Click Me</div> </div> </body> </html> CSS #container { width: 100%; height: 100vh; padding: 10vmax; } #button { position: fixed; } .brick { position: relative; top: 0; height: 10vmax; width: 20vmax; background: white; margin: 0; padding: 0; transition: all 1s; } .drop { transition: all 1s; bottom 0; } The offending JS: var brickCount = function() { var count = prompt("How many boxes you lookin' for?"); for(var i=0; i < count; i++) { var newBrick = document.createElement("div"); newBrick.className="brick"; document.querySelector("#container") .appendChild(newBrick); } }; var getBricks = function(){ document.getElementByClass("brick"); }; var changeColor = function(){ getBricks.style.backgroundColor = '#'+Math.floor(Math.random()*16777215).toString(16); }; var addDrop = function() { getBricks.brick = "getBricks.brick" + " drop"; }; var multiple = function() { brickCount(); getBricks(); changeColor(); addDrop(); }; document.getElementById("button").onclick = function() {multiple();}; Thanks!
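    For the class-adding part specifically, a small sketch (jQuery is already loaded on this page): addClass() appends to whatever classes an element already has, so the spawned divs keep "brick" and gain "drop", with no manual juggling of className strings. The selector below assumes the dynamically created bricks described above.

        var addDrop = function () {
            $(".brick").addClass("drop");   // element now has class="brick drop"
        };

        var changeColor = function () {
            $(".brick").css("background-color",
                "#" + Math.floor(Math.random() * 16777215).toString(16));
        };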


  • How can I execute an insert with data from a repeater-generated form whose data source is SQL?

    - by Duke
    I'm storing multilingual data in a database whose model is language normalized (like this). For this particular problem the key for the table in question consists of a value entered by the user and a language from the language table. I'd like to dynamically generate a form with input fields for all available languages. The user inputs a key value then goes down a list of field sets filling out the information in each language. In this case there are two fields for every language, a name and a value (the value is language dependent.) I have all existing information displayed on the page with a gridview, below which I have a formview that is always in insert mode allowing the user to enter new data. Within the formview I have a repeater with an SQLDataSource that gets a list of available languages: <asp:Repeater ID="SessionLocaleRepeater" runat="server" DataSourceID="LocaleSQLDataSource" EnableViewState="false"> <ItemTemplate> <tr> <th scope="row"><%# DataBinder.Eval(Container.DataItem, "LocaleName") %></th> <td>Name:</td> <td><asp:TextBox ID="TextBox1" runat="server" Text="" /></td> <td>Number:</td> <td><asp:TextBox ID="TextBox2" runat="server" Text="" /></td> </tr> </ItemTemplate> </asp:Repeater> I figured that in order to insert this data I'd have to execute my sql server insert stored procedure for each item in the repeater; I am trying to use the formview inserting event. The problem is that the repeater isn't databound to the SQLDataSource until after the formview inserting event (inserting event is in PostBackEvent and databind is in PreRender), which means the controls and data are not available when the inserting event is fired. I tried databinding the repeater during the formview inserting event; the controls were available but the data was not. Would this have something to do with how/when the viewstate information is re-added to the controls? From what I've read, Viewstate is one of the first things to be restored. Given the order of events how can I get the data I need for the insert? I'm open to other solutions to creating dynamic input controls, but they will have to query the database to determine how many sets of controls to create.


  • SVG via dynamic XML+XSL

    - by Daniel
    This is a bit of a vague notion which I have been running over in my head, and which I am very curious if there is an elegant method of solving. Perhaps it should be taken as a thought experiment. Imagine you have an XML schema with a corresponding XSL transform, which renders the XML as SVG in the browser. The XSL generates SVG with appropriate Javascript handlers that, ultimately, implement editing-like functionality such that properties of the objects or their locations on the SVG canvas can be edited by the user. For instance, an element can be dragged from one location to another. Now, this isn't particularly difficult - the drag/drop example is simply a matter of changing the (x,y) coordinates of the SVG object, or a resize operation would be a simple matter of changing its width or height. But is there an elegant way to have Javascript work on the DOM of the source XML document instead of the rendered SVG? Why, you ask? Well, imagine you have very complex XSL transforms, where the modification of one property results in complex changes to the SVG. You want to maintain simplicity in your Javascript code, but also a simple way to persist the modified XML back to the server. Some possibilities of how this may function: After modification of the source DOM, simply re-run the XSL transform and replace the original. Downside: brute force, potentially expensive operation. Create id/class naming conventions in the source and target XML/SVG so elements can be related back to each other, and do an XSL transform on only a subset of the new DOM. In other words, modify temporary DOM, apply XSL to it, remove changed elements from SVG, and insert the new one. Downside: May not be possible to apply XSL to temporary in-browser DOMs(?). Also, perhaps a bit convoluted or ugly to maintain. I think that it may be possible to come up with a framework that handles the second scenario, but the challenge would be making it lightweight and not heavily tied to the actual XML schema. Any ideas or other possibilities? Or is there maybe an existing method of doing this which I'm not aware of? UPDATE: To clarify, as I mentioned in a comment below, this aids in separating the draw code from the edit code. For a more concrete example of how this is useful, imagine an element which determines how it is drawn dependent on the value of a property of an adjacent element. It's better to condense that logic directly in the draw code instead of also duplicating it in the edit code.


  • are C functions declared in <c____> headers guaranteed to be in the global namespace as well as std?

    - by Evan Teran
    So this is something that I've always wondered but was never quite sure about. So it is strictly a matter of curiosity, not a real problem. As far as I understand, what you do something like #include <cstdlib> everything (except macros of course) are declared in the std:: namespace. Every implementation that I've ever seen does this by doing something like the following: #include <stdlib.h> namespace std { using ::abort; // etc.... } Which of course has the effect of things being in both the global namespace and std. Is this behavior guaranteed? Or is it possible that an implementation could put these things in std but not in the global namespace? The only way I can think of to do that would be to have your libstdc++ implement every c function itself placing them in std directly instead of just including the existing libc headers (because there is no mechanism to remove something from a namespace). Which is of course a lot of effort with little to no benefit. The essence of my question is, is the following program strictly conforming and guaranteed to work? #include <cstdio> int main() { ::printf("hello world\n"); } EDIT: The closest I've found is this (17.4.1.2p4): Except as noted in clauses 18 through 27, the contents of each header cname shall be the same as that of the corresponding header name.h, as specified in ISO/IEC 9899:1990 Programming Languages C (Clause 7), or ISO/IEC:1990 Programming Languages—C AMENDMENT 1: C Integrity, (Clause 7), as appropriate, as if by inclusion. In the C + + Standard Library, however, the declarations and definitions (except for names which are defined as macros in C) are within namespace scope (3.3.5) of the namespace std. which to be honest I could interpret either way. "the contents of each header cname shall be the same as that of the corresponding header name.h, as specified in ISO/IEC 9899:1990 Programming Languages C" tells me that they may be required in the global namespace, but "In the C + + Standard Library, however, the declarations and definitions (except for names which are defined as macros in C) are within namespace scope (3.3.5) of the namespace std." says they are in std (but doesn't specify any other scoped they are in).
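    For what it's worth, the conservative reading of 17.4.1.2 quoted above leads to this portable spelling - a sketch of the safe form rather than a ruling on the standard's intent:

        // Guaranteed either way: use the std-qualified name from <cstdio>,
        // or include <stdio.h> if the global ::printf is wanted.
        #include <cstdio>

        int main() {
            std::printf("hello world\n");
        }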


  • How do I map a one-to-one value type association in a joined-subclass?

    - by David Rubin
    I've got a class hierarchy mapped using table-per-subclass, and it's been working out great: class BasicReport { ... } class SpecificReport : BasicReport { ... } With mappings: <class name="BasicReport" table="reports"> <id name="Id" column="id">...</id> <!-- some common properties --> </class> <joined-subclass name="SpecificReport" table="specificReports" extends="BasicReport"> <key column="id"/> <!-- some special properties --> </joined-subclass> So far, so good. The problem I'm struggling with is how to add a property to one of my subclasses that's both a value type for which I have an IUserType implemented and also mapped via an association: class OtherReport : BasicReport { public SpecialValue V { get; set; } } class SpecialValueUserType : IUserType { ... } What I'd like to do is: <joined-subclass name="OtherReport" table="otherReports" extends="BasicReport"> <key column="id"/> <join table="rptValues" fetch="join"> <key column="rptId"/> <property name="V" column="value" type="SpecialValueUserType"/> </join> </joined-subclass> This accurately reflects the intent, and the pre-existing database schema I'm tied to: the SpecialValue instance is a property of the OtherReport, but is stored in a separate table ("rptValues"). Unfortunately, it seems as though I can't do this, because <join> elements can't be used in <joined-subclass> mappings. <one-to-one> would require creating a class mapping for SpecialValue, which doesn't make any sense given that SpecialValue is just a meaningful scalar. So what can I do? Do I have any options? Right now I'm playing a game with sets: class OtherReport : BasicReport { public SpecialValue V { get { return _values.Count() > 0 ? _values.First() : null; } set { _values.Clear(); _values.Add(value); } } private ICollection<SpecialValue> _values; } With mapping: <joined-subclass name="OtherReport" table="otherReports" extends="BasicReport"> <key column="id"/> <set name="_values" access="field" table="rptValues" cascade="all-delete-orphan"> <key column="rptId" /> <element column="value" type="SpecialValueUserType"/> </set> </joined-subclass> Thanks in advance for the help! I've been banging my head into my desk for several days now.


  • FILE_NOT_FOUND when trying to open COM port C++

    - by Moutabreath
    I am trying to open a com port for reading and writing using C++ but I can't seem to pass the first stage of actually opening it. I get an INVALID_HANDLE_VALUE on the handle with GetLastError FILE_NOT_FOUND. I have searched around the web for a couple of days I'm fresh out of ideas. I have searched through all the questions regarding COM on this website too. I have scanned through the existing ports (or so I believe) to get the name of the port right. I also tried combinations of _T("COM1") with the slashes, without the slashes, with colon, without colon and without the _T I'm using windows 7 on 64 bit machine. this is the code i got I'll be glad for any input on this void SendToCom(char* data, int len) { DWORD cbNeeded = 0; DWORD dwPorts = 0; EnumPorts(NULL, 1, NULL, 0, &cbNeeded, &dwPorts); //What will be the return value BOOL bSuccess = FALSE; LPCSTR COM1 ; BYTE* pPorts = static_cast<BYTE*>(malloc(cbNeeded)); bSuccess = EnumPorts(NULL, 1, pPorts, cbNeeded, &cbNeeded, &dwPorts); if (bSuccess){ PORT_INFO_1* pPortInfo = reinterpret_cast<PORT_INFO_1*>(pPorts); for (DWORD i=0; i<dwPorts; i++) { //If it looks like "COMX" then size_t nLen = _tcslen(pPortInfo->pName); if (nLen > 3) { if ((_tcsnicmp(pPortInfo->pName, _T("COM"), 3) == 0) ){ COM1 =pPortInfo->pName; //COM1 ="\\\\.\\COM1"; HANDLE m_hCommPort = CreateFile( COM1 , GENERIC_READ|GENERIC_WRITE, // access ( read and write) 0, // (share) 0:cannot share the COM port NULL, // security (None) OPEN_EXISTING, // creation : open_existing FILE_FLAG_OVERLAPPED, // we want overlapped operation NULL // no templates file for COM port... ); if (m_hCommPort==INVALID_HANDLE_VALUE) { DWORD err = GetLastError(); if (err == ERROR_FILE_NOT_FOUND) { MessageBox(hWnd,"ERROR_FILE_NOT_FOUND",NULL,MB_ABORTRETRYIGNORE); } else if(err == ERROR_INVALID_NAME) { MessageBox(hWnd,"ERROR_INVALID_NAME",NULL,MB_ABORTRETRYIGNORE); } else { MessageBox(hWnd,"unkown error",NULL,MB_ABORTRETRYIGNORE); } } else{ WriteAndReadPort(m_hCommPort,data); } } pPortInfo++; } } } }
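    One guess worth ruling out (a sketch, not a verified fix): the port names returned by EnumPorts may carry decoration (such as a trailing colon) that CreateFile will not open directly, whereas the \\.\ device form of the name is the documented way to open COM ports and is required for COM10 and above. Hard-coding it briefly isolates whether the name is the problem:

        // Sketch only: open COM1 via the \\.\ device path.
        HANDLE hPort = CreateFile(_T("\\\\.\\COM1"),
                                  GENERIC_READ | GENERIC_WRITE,
                                  0,                    // not shared
                                  NULL,                 // default security
                                  OPEN_EXISTING,
                                  FILE_FLAG_OVERLAPPED,
                                  NULL);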


  • SQL query - choosing 'last updated' record in a group, better db design?

    - by Jimmy
    Hi, Let's say I have a MySQL database with 3 tables:

    table 1: Persons, with 1 column ID (int)
    table 2: Newsletters, with 1 column ID (int)
    table 3: Subscriptions, with columns Person_ID (int), Newsletter_ID (int), Subscribed (bool), Updated (Datetime)

    Subscriptions.Person_ID points to a Person, and Subscriptions.Newsletter_ID points to a Newsletter. Thus, each person may have 0 or more subscriptions to 0 or more newsletters at once. The table Subscriptions will also store the entire history of each person's subscriptions to each newsletter. If a particular Person_ID-Newsletter_ID pair doesn't have a row in the Subscriptions table, then it's equivalent to that pair having a subscription status of 'false'. Here is a sample dataset:

        Persons
        ID
        1
        2
        3

        Newsletters
        ID
        4
        5
        6

        Subscriptions
        Person_ID  Newsletter_ID  Subscribed  Updated
        2          4              true        2010-05-01
        3          4              true        2010-05-01
        3          5              true        2010-05-10
        3          4              false       2010-05-15

    Thus, as of 2010-05-16, Person 1 has no subscription, Person 2 has a subscription to Newsletter 4, and Person 3 has a subscription to Newsletter 5. Person 3 had a subscription to Newsletter 4 for a while, but not anymore. I'm trying to do 2 kinds of query:

    1. A query that shows everyone's active subscriptions as of query time (we can assume that Updated will never be in the future -- thus, this means returning the record with the latest 'Updated' value for each Person_ID-Newsletter_ID pair, as long as Subscribed is true; if the latest record for a Person_ID-Newsletter_ID pair has a Subscribed status of false, then I don't want that record returned).

    2. A query that returns all active subscriptions for a specific newsletter - same qualification as in 1. regarding records with 'false' in the Subscribed column.

    I don't use SQL/databases often enough to tell if this design is good, or if the SQL queries needed would be slow on a database with, say, 1M records in the Subscriptions table. I was using the visual query builder tool in Visual Studio 2010 but I can't even get the query to return the latest updated record for each Person_ID-Newsletter_ID pair. Is it possible to come up with SQL queries that don't involve using subqueries (presumably because they would become too slow with a larger data set)? If not, would it be a better design to have a separate Subscriptions_History table, and every time a subscription status for a Person_ID-Newsletter_ID pair is added to Subscriptions, any existing record for that pair is moved to Subscriptions_History (that way the Subscriptions table only ever contains the latest status update for any Person_ID-Newsletter_ID pair)? I'm using .net on Windows, so would it be easier (or the same, or harder) to do this kind of queries using Linq? Entity Framework? Thanks!
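    A sketch of query 1 against the tables above (MySQL syntax; this is the usual "latest row per group" join, though it does still involve a derived table):

        SELECT s.Person_ID, s.Newsletter_ID, s.Updated
        FROM Subscriptions s
        JOIN (SELECT Person_ID, Newsletter_ID, MAX(Updated) AS LastUpdated
              FROM Subscriptions
              GROUP BY Person_ID, Newsletter_ID) latest
          ON  latest.Person_ID     = s.Person_ID
          AND latest.Newsletter_ID = s.Newsletter_ID
          AND latest.LastUpdated   = s.Updated
        WHERE s.Subscribed = TRUE;

    Query 2 would add a filter such as "AND s.Newsletter_ID = ?" for the newsletter in question.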


  • Workaround for stopping propagation with live()?

    - by bobsoap
    I've run into the problem that has been addressed here without a workaround: I can't use stopPropagation() on dynamically spawned elements. I've tried creating a condition to exclude a click within the dimensions of the spawned element, but that doesn't seem to work at all. Here is what I got: 1) a large background element ("canvas") that is activated to be "sensitive to clicks on it" by a button 2) the canvas, if activated, catches all clicks on it and spawns a small child form ("child") within it 3) the child is positioned relative to the mouse click position. If the mouse click was on the right half of the canvas, the child will be positioned 200 pixels to the left of that spot. (On the right if the click was in the left half) 4) every new click on the canvas removes the existing child (if any) and spawns a new child at the new position (relative to the click) The problem: Since the spawned child element is on top of the canvas, a click on it counts as a click on the canvas. Even if the child is outside of the boundaries of the canvas, clicking on it will trigger the action as described in 4) again . This shouldn't happen. =========== CODE: The button to activate the canvas: $('a#activate').click(function(event){ event.preventDefault(); canvasActive(); }); I referenced the above to show you that the canvas click-catching is happening in a function. Not sure if this is relevant... This is the function that catches clicks on the canvas: function canvasActive() { $('#canvas').click(function(e){ e.preventDefault(); //get click position relative to canvas posClick = { x : Math.round(e.pageX - $(this).offset().left), y : Math.round(e.pageY - $(this).offset().top) }; //calculate child position if(posClick.x <= $canvas.outerWidth(false)/2) { posChild = { x: posClick.x + 200, //if dot is on the left side of canvas y: posClick.y }; } else { posChild = { x: posClick.x - 600, //if dot is on the right y: posClick.y }; } $(this).append(markup); //markup is just the HTML for the child }); } I left out the unimportant stuff. The question is: How can I prevent a click inside of a spawned child from executing the function? I tried getting the child's dimensions and doing something like "if posClick is within this range, don't do anything" - but I can't seem to get it right. Perhaps someone has come across this dilemma before. Any help is appreciated.
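    A sketch of the usual workaround (the child selector below is hypothetical, since the spawned markup isn't shown): instead of trying to stop propagation on the dynamically spawned child, have the canvas handler inspect e.target and bail out when the click originated inside an existing child form.

        $('#canvas').click(function (e) {
            // Ignore clicks that landed on (or inside) the spawned child.
            if ($(e.target).closest('.child-form').length) {
                return;
            }
            // ...remove any existing child and spawn a new one here...
        });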


  • How to Transfer Large File from MS Word Add-In (VBA) to Web Server?

    - by Ian Robinson
    Overview I have a Microsoft Word Add-In, written in VBA (Visual Basic for Applications), that compresses a document and all of it's related contents (embedded media) into a zip archive. After creating the zip archive it then turns the file into a byte array and posts it to an ASMX web service. This mostly works. Issues The main issue I have is transferring large files to the web site. I can successfully upload a file that is around 40MB, but not one that is 140MB (timeout/general failure). A secondary issue is that building the byte array in the VBScript Word Add-In can fail by running out of memory on the client machine if the zip archive is too large. Potential Solutions I am considering the following options and am looking for feedback on either option or any other suggestions. Option One Opening a file stream on the client (MS Word VBA) and reading one "chunk" at a time and transmitting to ASMX web service which assembles the "chunks" into a file on the server. This has the benefit of not adding any additional dependencies or components to the application, I would only be modifying existing functionality. (Fewer dependencies is better as this solution should work in a variety of server environments and be relatively easy to set up.) Question: Are there examples of doing this or any recommended techniques (either on the client in VBA or in the web service in C#/VB.NET)? Option Two I understand WCF may provide a solution to the issue of transferring large files by "chunking" or streaming data. However, I am not very familiar with WCF, and am not sure what exactly it is capable of or if I can communicate with a WCF service from VBA. This has the downside of adding another dependency (.NET 3.0). But if using WCF is definitely a better solution I may not mind taking that dependency. Questions: Does WCF reliably support large file transfers of this nature? If so, what does this involve? Any resources or examples? Are you able to call a WCF service from VBA? Any examples?

