Search Results

Search found 15798 results on 632 pages for 'authentication required'.

Page 542 of 632

  • JSP includes and MVC pattern

    - by xingyu
    I am new to JSP/Servlets/MVC and am writing a JSP page (using Servlets and the MVC pattern) that displays information about recipes, and I want users to be able to "comment" on it too. So for the Servlet, on doGet(), it grabs all the required info into a Model POJO and forwards the request on to a JSP View for rendering. That is working just fine. I'd like the "comment" part to be a separate JSP, so on RecipeView.jsp I can use a <jsp:include> to separate these views out. So I've made that, but am now a little stuck. The form in CommentOnRecipe.jsp posts to a CommentAction servlet that handles the recording of the comment just fine. So when I reload the Recipe page, I can see the comment I just made. I'd like to:
      1. Reload the page automatically after commenting (no AJAX for now).
      2. Block the user from making more than one comment on each Recipe over a 1-day timeframe (via a Cookie). So I store a cookie indicating the product ID whenever the user makes a comment, so we can check this later? How would that work in an MVC context?
      3. Show a message to the user that they have already commented on the Recipe when they visit one which they have commented on.
    I'm confused about using beans/including JSPs etc. to achieve this. I know in ASP.NET land it would be a UserControl that I would place on a page, or in ASP.NET MVC a PartialView of some sort. I'm just confused with the way this works in a JSP/Servlets/MVC context.
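
    A rough sketch of the cookie part only, as an illustration - the servlet, parameter and cookie names here (CommentAction, recipeId, commented_<id>) are invented for the example:

      import java.io.IOException;
      import javax.servlet.ServletException;
      import javax.servlet.http.*;

      public class CommentAction extends HttpServlet {
          @Override
          protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                  throws ServletException, IOException {
              String recipeId = req.getParameter("recipeId");
              String cookieName = "commented_" + recipeId;

              // Has this browser already commented on this recipe today?
              boolean alreadyCommented = false;
              Cookie[] cookies = req.getCookies();
              if (cookies != null) {
                  for (Cookie c : cookies) {
                      if (cookieName.equals(c.getName())) {
                          alreadyCommented = true;
                      }
                  }
              }

              if (!alreadyCommented) {
                  // ... save the comment here ...
                  Cookie marker = new Cookie(cookieName, "1");
                  marker.setMaxAge(60 * 60 * 24);   // one day
                  resp.addCookie(marker);
              }

              // Redirect back so the recipe page reloads and re-renders its model;
              // the recipe servlet can do the same cookie check and put a flag such
              // as "alreadyCommented" into the request for the JSP to display.
              resp.sendRedirect(req.getContextPath() + "/recipe?id=" + recipeId);
          }
      }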

    Read the article

  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exes, dlls) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines of people in our group. I was successful on two machines and the solution failed to build on my machine. The build on my machine encountered two problems:
      1. During a simple build, creation of the biggest static library (about 522 MB in debug mode) would fail with the message "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879".
      2. A full solution rebuild creates this library, however when it comes to linking the library to the main .exe file, devenv.exe spawns link.exe which consumes about 80 MB of physical memory and 250 MB of virtual and spawns another link.exe, which does the same. This goes on until the system runs out of memory.
    On PCs of my colleagues where a successful build could be performed, there is only one link.exe process which uses all the memory required for linking (about 500 MB physical). There is plenty of hard drive space on my machine and the file system is NTFS. All three of our systems are similar - Core2Quad processors, 4 GB of RAM, Windows XP SP3. We are using Visual Studio installed from the same source. I tried using different RAM and a different CPU, using a dedicated graphics adapter to eliminate the possibility of video memory sharing influencing the build, putting the solution files in a different location, using different versions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86 and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106; none of the cases apply to me except for maybe "out of heap space", however I am not sure how I should fight this. The only idea that I have left is reinstalling the OS, however I am not sure that it would help and I am not sure that my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks

    Read the article

  • Why Is the sender type null when dealing with events

    - by ChloeRadshaw
    From CLR via C#:

    Note: A lot of people wonder why the event pattern requires the sender parameter to always be of type Object. After all, since the MailManager will be the only type raising an event with a NewMailEventArgs object, it makes more sense for the callback method to be prototyped like this: void MethodName(MailManager sender, NewMailEventArgs e);. The pattern requires the sender parameter to be of type Object mostly because of inheritance. What if MailManager were used as a base class for SmtpMailManager? In this case, the callback method should have the sender parameter prototyped as SmtpMailManager instead of MailManager, but this can't happen because SmtpMailManager just inherited the NewMail event. So the code that was expecting SmtpMailManager to raise the event would still have to cast the sender argument to SmtpMailManager. In other words, the cast is still required, so the sender parameter might as well be typed as Object. The next reason for typing the sender parameter as Object is just flexibility. It allows the delegate to be used by multiple types that offer an event that passes a NewMailEventArgs object. For example, a PopMailManager class could use the delegate even if this class were not derived from MailManager.

    I simply cannot understand why the sender is an object - why can it not be made generic, so that most of the time we do not need to do these casts?
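
    For reference, a hedged sketch of the standard pattern the book describes next to the kind of "generified" handler the question is asking about - the TypedEventHandler name is invented here, it is not a framework type:

      using System;

      // The standard pattern: sender is weakly typed as object.
      public class NewMailEventArgs : EventArgs { }

      public class MailManager
      {
          public event EventHandler<NewMailEventArgs> NewMail;

          protected virtual void OnNewMail(NewMailEventArgs e)
          {
              EventHandler<NewMailEventArgs> handler = NewMail;
              if (handler != null)
                  handler(this, e);   // subscribers receive 'this' as object
          }
      }

      // What a "generified" delegate could look like: the sender type becomes a
      // second type parameter, so subscribers would not need to cast it.
      public delegate void TypedEventHandler<TSender, TEventArgs>(TSender sender, TEventArgs e)
          where TEventArgs : EventArgs;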

    Read the article

  • PHP OO vs Procedural with AJAX

    - by vener
    I currently have an AJAX-heavy (almost everything) intranet webapp for a business. It is highly modularized (components and modules a la Joomla), with plenty of folders and files: ~80-100 different viewing pages (each very unique in its own sense) at last count, and likely to increase in the near future. I based the design around commands and screens: the client requests a command, sends the required data and receives the data that is displayed via JavaScript on the screen. That said, there are generally two types of files: a display file with HTML, JavaScript, and a little PHP for templating, and a PHP backend file with a single switch statement with actions such as save, update and delete, and maybe other functions. There is very little code reuse. Recently, I have been adding a server-side undo function that requires me to reuse some code. So I took the chance to try out OOP, but I notice that some functions are so simple that creating a class, retrieving all the data and then updating all the related rows in the database seems like overkill for a simple action, and speed is quite critical. Also I noticed there is only one class per file - so what if the entire PHP file is a class? So, between creating a class and methods, and using global variables and functions: which is faster?

    Read the article

  • how to seamlessly integrate subversion and git?

    - by mattv
    I'm looking for tips on how to seamlessly integrate subversion and git, for deploying web sites by a small team of web developers. We each have our own development versions of our sites on our local machines. We also have dev, staging, and live servers. As our team has grown, we haven't updated our revision control and deployment strategies accordingly. We had all been checking into the trunk of a shared Subversion repository. Both the dev & staging servers ran from a checkout of the trunk, so updating them involved running "svn update" while the live server ran as an export from trunk which required an "svn export" to get the latest code. In either case, we would often update just certain files by updating or exporting just those files or directories. That worked okay when there was just one or two developers. However, a big downside was that we couldn't point to an individual tag that represented what was currently on live at any given time. In keeping with corporate policy, we'd like to continue to use Subversion to store what we're now calling our "production branch," which will be what goes onto staging and live. However, we would like to use Git on our local and development sites. We especially like the idea of easier merges and being able to "cherry pick" updates that need to go live. We had initially planned on using git-svn, but it doesn't seem to work well in a shared environment such as our dev or staging servers. Anyone else doing something like this? What's the best way to make it work? Or are we making it more difficult than it should be?
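
    In case it helps frame the question, a rough sketch of the single "bridge clone" arrangement that is often suggested for mixing a shared Subversion trunk with day-to-day Git work - repository URLs and directory names below are placeholders, and this is only a sketch of the idea, not a recommendation:

      # One-time setup of a "bridge" clone that knows both repositories.
      git svn clone https://svn.example.com/repo/trunk svn-bridge
      cd svn-bridge
      git remote add origin git@git.example.com:site.git
      git push origin master

      # Routine sync, run only from this bridge clone:
      git pull --rebase origin master   # bring in the team's Git commits
      git svn rebase                    # bring in new Subversion revisions
      git svn dcommit                   # replay the Git commits onto the Subversion trunk
      # Note: dcommit rewrites the commits it sends (adding git-svn-id lines), which
      # is one reason a git-svn clone is awkward to share between several people.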

    Read the article

  • Ruby 1.9: turn these 4 arrays into hash of key/value pairs

    - by randombits
    I have four arrays that are coming in from the client. Let's say that there is an array of names, birth dates, favorite colors and locations. The idea is I want a hash later where each name will have a hash with its respective attributes. Example data coming from the client:

      [name0, name1, name2, name3]
      [loc0, loc1]
      [favcolor0, favcolor1]
      [bd0, bd1, bd2, bd3, bd4, bd5]

    Output I'd like to achieve:

      name0 => { location => loc0, favcolor => favcolor0, bd => bd0 }
      name1 => { location => loc1, favcolor => favcolor1, bd => bd1 }
      name2 => { location => nil,  favcolor => nil,       bd => bd2 }
      name3 => { location => nil,  favcolor => nil,       bd => bd3 }

    I want to have an array at the end of the day where I can iterate and work on each particular person hash. There need not be an equivalent number of values in each array - names are required, and I might receive 5 of them but only 3 birth dates, 2 favorite colors and 1 location. Every missing value will result in a nil. How does one make that kind of data structure with Ruby 1.9?
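
    A minimal sketch of one way to build this, assuming the four arrays arrive as names, locations, favcolors and bds (variable names invented for the example):

      names     = ["name0", "name1", "name2", "name3"]
      locations = ["loc0", "loc1"]
      favcolors = ["favcolor0", "favcolor1"]
      bds       = ["bd0", "bd1", "bd2", "bd3", "bd4", "bd5"]

      # Indexing past the end of the shorter arrays returns nil,
      # which gives the desired padding for the missing values.
      people = names.each_with_index.map do |name, i|
        [name, { location: locations[i], favcolor: favcolors[i], bd: bds[i] }]
      end

      result = Hash[people]
      result.each do |name, attrs|
        # work on each person's hash here
        puts "#{name}: #{attrs.inspect}"
      end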

    Read the article

  • Excel VBA: Error Handling with Case Statement

    - by AME
    I am trying to validate a file that is uploaded by the user using the code below. The error handler checks the top row of the uploaded file for three specific column names. If one or more of the column names is not present, the program should return a prompt to the user notifying them which column(s) are missing from the file that they uploaded, and then close the file. There are a couple of issues with my current VBA code that I am seeking help with:
      1. The prompt doesn't tell the user which column(s) are missing.
      2. The error handler is triggered even when all required columns are present in the uploaded file.
    Code:

      Sub getworkbook()
          ' Get workbook...
          Dim ws As Worksheet
          Dim filter As String
          Dim targetWorkbook As Workbook, wb As Workbook
          Dim Ret As Variant

          Set targetWorkbook = Application.ActiveWorkbook

          ' get the customer workbook
          filter = ".xlsx,.xls"
          caption = "Please select an input file "
          Ret = Application.GetOpenFilename(filter, , caption)
          If Ret = False Then Exit Sub
          Set wb = Workbooks.Open(Ret)

          On Error GoTo ErrorLine:

          'Check for columns
          var1 = ActiveSheet.Range("1:1").Find("variable1", LookIn:=xlValues, LookAt:=xlWhole, MatchCase:=True).Column
          var2 = ActiveSheet.Range("1:1").Find("variable2", LookIn:=xlValues, LookAt:=xlWhole, MatchCase:=True).Column
          var3 = ActiveSheet.Range("1:1").Find("variable3", LookIn:=xlValues, LookAt:=xlWhole, MatchCase:=True).Column

      ErrorLine:
          MsgBox ("The selected file is missing a key data column, please upload a correctly formatted file.")

          If Error = True Then ActiveWorkSheet.Close

          wb.Sheets(1).Move Before:=targetWorkbook.Sheets("Worksheet2")
          ActiveSheet.Name = "DATA"
      End Sub
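
    One possible direction, sketched only: instead of relying on the error handler, test each Find for Nothing and collect the missing names, so the message can list them. This assumes it is placed inside the Sub after wb has been opened; the column names are the placeholders from the question:

      Dim missing As String
      Dim headers As Range, hit As Range
      Dim colNames As Variant, i As Integer

      Set headers = wb.Sheets(1).Range("1:1")
      colNames = Array("variable1", "variable2", "variable3")

      For i = LBound(colNames) To UBound(colNames)
          ' Find returns Nothing when the header is absent - no error is raised.
          Set hit = headers.Find(colNames(i), LookIn:=xlValues, LookAt:=xlWhole, MatchCase:=True)
          If hit Is Nothing Then missing = missing & vbCrLf & colNames(i)
      Next i

      If Len(missing) > 0 Then
          MsgBox "The selected file is missing the following column(s):" & missing
          wb.Close SaveChanges:=False
          Exit Sub
      End If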

    Read the article

  • JavaScript help needed - which variable is returned empty?

    - by mathew
    Hi, I would like to know how to add an error check to the code below... I mean, how do I check whether this code returns empty or not? If it returns empty I would like to show a "Not Found" message. How do I do that?

      google.load('search', '1');

      var blogSearch;

      function searchComplete() {
        // Check that we got results
        document.getElementById('content').innerHTML = '';
        if (blogSearch.results && blogSearch.results.length > 0) {
          for (var i = 0; i < blogSearch.results.length; i++) {
            // Create HTML elements for search results
            var p = document.createElement('p');
            var a = document.createElement('a');
            a.href = blogSearch.results[i].postUrl;
            a.innerHTML = blogSearch.results[i].title;
            // Append search results to the HTML nodes
            p.appendChild(a);
            document.body.appendChild(p);
          }
        }
      }

      function onLoad() {
        // Create a BlogSearch instance.
        blogSearch = new google.search.BlogSearch();
        // Set searchComplete as the callback function when a search is complete. The
        // blogSearch object will have results in it.
        blogSearch.setSearchCompleteCallback(this, searchComplete, null);
        // Set a site restriction
        blogSearch.setSiteRestriction('blogspot.com');
        // Execute search query
        blogSearch.execute('1974 Chevrolet Caprice');
        // Include the required Google branding
        google.search.Search.getBranding('branding');
      }

      // Set a callback to call your code when the page loads
      google.setOnLoadCallback(onLoad);
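
    A hedged sketch of the empty-result branch - only the else clause is new; everything else mirrors the callback from the question:

      function searchComplete() {
        var content = document.getElementById('content');
        content.innerHTML = '';
        if (blogSearch.results && blogSearch.results.length > 0) {
          // ... build the <p><a> elements exactly as in the question ...
        } else {
          // No results came back (or the results array is missing): tell the user.
          content.innerHTML = 'Not Found';
        }
      }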

    Read the article

  • Which pattern to use for logging? Dependency Injection or Service Locator?

    - by andlju
    Consider this scenario. I have some business logic that now and then will be required to write to a log.

      interface ILogger
      {
          void Log(string stuff);
      }

      interface IDependency
      {
          string GetInfo();
      }

      class MyBusinessObject
      {
          private IDependency _dependency;

          public MyBusinessObject(IDependency dependency)
          {
              _dependency = dependency;
          }

          public string DoSomething(string input)
          {
              // Process input
              var info = _dependency.GetInfo();
              var intermediateResult = PerformInterestingStuff(input, info);
              if (intermediateResult == "SomethingWeNeedToLog")
              {
                  // How do I get to the ILogger-interface?
              }
              var result = PerformSomethingElse(intermediateResult);
              return result;
          }
      }

    How would you get the ILogger interface? I see two main possibilities:
      1. Pass it in using Dependency Injection on the constructor.
      2. Get it via a singleton Service Locator.
    Which method would you prefer, and why? Or is there an even better pattern?
    Update: Note that I don't need to log ALL method calls. I only want to log a few (rare) events that may or may not occur within my method.
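
    A minimal sketch of the constructor-injection option, purely as an illustration; it reuses the question's class and its undeclared helper methods as-is:

      class MyBusinessObject
      {
          private readonly IDependency _dependency;
          private readonly ILogger _logger;

          // The logger arrives the same way as any other dependency.
          public MyBusinessObject(IDependency dependency, ILogger logger)
          {
              _dependency = dependency;
              _logger = logger;
          }

          public string DoSomething(string input)
          {
              var info = _dependency.GetInfo();
              var intermediateResult = PerformInterestingStuff(input, info);
              if (intermediateResult == "SomethingWeNeedToLog")
              {
                  _logger.Log("Interesting intermediate result: " + intermediateResult);
              }
              return PerformSomethingElse(intermediateResult);
          }
      }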

    Read the article

  • C++: Constructor/destructor unresolved when not inline?

    - by Anamon
    In a plugin-based C++ project, I have a TmpClass that is used to exchange data between the main application and the plugins. The respective TmpClass.h is therefore included in the abstract plugin interface class, which is included by the main application project and implemented by each plugin. As the plugins work on STL vectors of TmpClass instances, there needs to be a default constructor and destructor for TmpClass. I had declared these in TmpClass.h:

      class TmpClass {
          TmpClass();
          ~TmpClass();
      };

    and implemented them in TmpClass.cpp:

      TmpClass::~TmpClass() {}
      TmpClass::TmpClass() {}

    However, when compiling plugins this leads to the linker complaining about two unresolved externals - the default constructor and destructor of TmpClass as required by the std::vector<TmpClass> template instantiation - even though all other functions I declare in TmpClass.h and implement in TmpClass.cpp work. As soon as I remove the (empty) default constructor and destructor from the .cpp file and inline them into the class declaration in the .h file, the plugins compile and work. Why is it that the default constructor and destructor have to be inline for this code to compile? Why does it even matter? (I'm using MSVC++ 8.)

    Read the article

  • A basic issue in implementing validations through properties? Please guide me.

    - by haansi
    Hello, thanks for your attention and time. I want to implement validation in the setter of a property. Here is an issue where your expert help is required, please. I have an idea of how I will do the validation before setting the value, but I am not sure what to do if the passed value is not correct. Just not setting it is not an acceptable solution, as I want to return an appropriate message to the user (in a label in a web form). My example code is:

      private int id;
      public int Id
      {
          get { return id; }
          set
          {
              bool result = IsNumber(value);
              if (result == false)
              {
                  // What to do if the passed data is not valid?
                  // How to give an appropriate message to the user about what is wrong?
              }
              id = value;
          }
      }

    A thought was to return a value, but that is not allowed in a setter. Throwing an error does not look good either, as generally we avoid throwing custom errors. Please guide and help me. Thanks in anticipation, haansi
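
    One hedged sketch of an alternative: keep the property simple and expose a separate validation step that the page can call, so the web form decides what to show in the label. The class, method and rule below are invented for the example:

      public class Customer
      {
          public int Id { get; private set; }

          // Returns true when the value was accepted; otherwise 'error' explains why,
          // so the calling page can show it in a label instead of catching exceptions.
          public bool TrySetId(int value, out string error)
          {
              if (value <= 0)
              {
                  error = "Id must be a positive number.";
                  return false;
              }
              error = null;
              Id = value;
              return true;
          }
      }

    Usage from the page could then look like: string error; if (!customer.TrySetId(input, out error)) lblMessage.Text = error;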

    Read the article

  • C++ private inheritance and static members/types

    - by WearyMonkey
    I am trying to stop a class from being able to convert its 'this' pointer into a pointer to one of its interfaces. I do this by using private inheritance via a middle proxy class. The problem is that private inheritance makes all public static members and types of the base class inaccessible to all classes below the inheriting class in the hierarchy.

      class Base {
      public:
          enum Enum { value };
      };

      class Middle : private Base {
      };

      class Child : public Middle {
      public:
          void Method() {
              Base::Enum e = Base::value; // doesn't compile BAD!
              Base* base = this;          // doesn't compile GOOD!
          }
      };

    I've tried this in both VS2008 (the required version) and VS2010; neither works. Can anyone think of a workaround? Or a different approach to stopping the conversion? Also, I am curious about the behavior: is it just a side effect of the compiler implementation, or is it by design? If by design, then why? I always thought of private inheritance as meaning that nobody knows Middle inherits from Base. However, the exhibited behavior implies private inheritance means a lot more than that - in fact, Child has less access to Base than any namespace not in the class hierarchy!
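
    For what it's worth, a hedged sketch of one workaround idea: inside Child the unqualified name Base finds the (now inaccessible) injected-class-name, so qualifying from the global namespace may sidestep that lookup. This is only a sketch to try against the compilers in question, not a verified fix:

      class Child : public Middle {
      public:
          void Method() {
              ::Base::Enum e = ::Base::value;  // fully qualified: names the namespace-scope class directly
              // Base* base = this;            // still (intentionally) rejected
              (void)e;
          }
      };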

    Read the article

  • How is the C++ synthesized move constructor affected by volatile and virtual members?

    - by user1827766
    Look at the following code:

      struct node {
          node();
          //node(const node&);  //#1
          //node(node&&);       //#2
          virtual               //#3
          ~node();
          node* volatile        //#4
              next;
      };

      main() {
          node m(node());       //#5
          node n=node();        //#6
      }

    When compiled with gcc-4.6.1 it produces the following error:

      g++ -g --std=c++0x -c -o node.o node.cc
      node.cc: In constructor 'node::node(node&&)':
      node.cc:3:8: error: expression 'node::next' has side-effects
      node.cc: In function 'int main()':
      node.cc:18:14: note: synthesized method 'node::node(node&&)' first required here

    As I understand it, the compiler fails to create a default move or copy constructor on line #6; if I uncomment either line #1 or #2 it compiles fine, that is clear. The code compiles fine without the c++0x option, so the error is related to the default move constructor. However, what in the node class prevents the default move constructor from being created? If I comment out either of lines #3 or #4 (i.e. make the destructor non-virtual or make the data member non-volatile) it compiles again, so is it the combination of these two that makes it not compile? Another puzzle: line #5 does not cause a compilation error - what is different from line #6? Is this all specific to gcc, or to gcc-4.6.1?

    Read the article

  • Adding cancel ability and exception handling to async code.

    - by Rob
    I have this sample code for async operations (copied from the interwebs):

      public class LongRunningTask
      {
          public LongRunningTask()
          {
              //do nowt
          }

          public int FetchInt()
          {
              Thread.Sleep(2000);
              return 5;
          }
      }

      public delegate TOutput SomeMethod<TOutput>();

      public class GoodPerformance
      {
          public void BeginFetchInt()
          {
              LongRunningTask lr = new LongRunningTask();
              SomeMethod<int> method = new SomeMethod<int>(lr.FetchInt);
              // method is state object used to transfer result
              // of long running operation
              method.BeginInvoke(EndFetchInt, method);
          }

          public void EndFetchInt(IAsyncResult result)
          {
              SomeMethod<int> method = result.AsyncState as SomeMethod<int>;
              Value = method.EndInvoke(result);
          }

          public int Value { get; set; }
      }

    Other async approaches I tried required the async page attribute; they also seemed to cancel if other page elements were actioned on (a button clicked), while this approach just seemed to work. I'd like to add cancellation and exception handling for the LongRunningTask class, but don't, erm, really know how.
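
    A hedged sketch of one direction: with delegate-based BeginInvoke/EndInvoke, an exception thrown inside FetchInt resurfaces from EndInvoke, so it can be caught there, and cancellation has to be cooperative (a flag the long-running work checks). The Cancel method and loop below are invented for the example:

      public class LongRunningTask
      {
          private volatile bool _cancelRequested;

          public void Cancel() { _cancelRequested = true; }

          public int FetchInt()
          {
              for (int i = 0; i < 20; i++)
              {
                  if (_cancelRequested)
                      throw new OperationCanceledException();
                  Thread.Sleep(100);   // simulated work, checked in small slices
              }
              return 5;
          }
      }

      // In GoodPerformance, the completion callback becomes:
      public void EndFetchInt(IAsyncResult result)
      {
          SomeMethod<int> method = (SomeMethod<int>)result.AsyncState;
          try
          {
              Value = method.EndInvoke(result);   // exceptions from FetchInt are re-thrown here
          }
          catch (OperationCanceledException)
          {
              // cancelled - leave Value unchanged or record the cancellation
          }
          catch (Exception)
          {
              // log / surface the failure to the page
          }
      }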

    Read the article

  • i18n - What are some naming conventions to use in creating language files?

    - by John Himmelman
    I'm developing a CMS that requires i18n support. The translation strings are stored as an array in a language file (i.e. en.php). Are there any naming conventions for this? How can I improve on the sample language file below?

      // General
      'general.title' => 'CMS - USA / English',
      'general.save' => 'Save',
      'general.choose_category' => 'Choose category',
      'general.add' => 'Add',
      'general.continue' => 'Continue',
      'general.finish' => 'Finish',

      // Navigation
      'nav.categories' => 'Categories',
      'nav.products' => 'Products',
      'nav.collections' => 'Collections',
      'nav.styles' => 'Styles',
      'nav.experts' => 'Experts',
      'nav.shareyourstory' => 'Share Your Story',

      // Products
      'cms.products' => 'Products',
      'cms.add_product' => 'Add Product',

      // Categories
      'cms.categories' => 'Categories',
      'cms.add_category' => 'Add Category',

      // Collections
      'cms.collections' => 'Collections',
      'cms.add_collections' => 'Add Collection',

      // Stylists
      'cms.styles' => 'Stylists',
      'cms.add_style' => 'Add Style',
      'cms.add_a_style' => 'Add a style',

      // Share your story
      'cms.share_your_story' => 'Share Your Story',

      // Styles
      'cms.add_style' => 'Add Style',

    Read the article

  • Time taken for memcpy decreases after certain point

    - by tss
    I have code which increases the size of a memory block (identified by a pointer) exponentially. Instead of realloc, I use malloc followed by memcpy. Something like this:

      int size=5, newsize;
      int *c = malloc(size*sizeof(int));
      int *temp;

      while(1)
      {
          newsize = 2*size;
          //begin time
          temp = malloc(newsize*sizeof(int));
          memcpy(temp, c, size*sizeof(int));
          //end time
          //print time in milliseconds
          c = temp;
          size = newsize;
      }

    Thus the number of bytes getting copied increases exponentially. The time required for this also increases almost linearly with the size. However, after a certain point the time taken abruptly drops to a very small value and then remains constant. I recorded times for similar code, copying data of my own type:

      5 -> 10             2 ms
      10 -> 20            2 ms
      ...
      2560 -> 5120        5 ms
      ...
      20480 -> 40960     30 ms
      40960 -> 91920     58 ms
      367680 -> 735360    2 ms
      735360 -> 1470720   2 ms
      1470720 -> 2941440  2 ms

    What is the reason for this drop in time? Does a more optimal memcpy get called when the size is large?

    Read the article

  • What is the return type of my linq query?

    - by Ulhas Tuscano
    I have two tables, A & B. I can fire LINQ queries and get the required data for the individual tables, as I know what each of the tables will return, as shown in the example. But when I join both tables I am not aware of the return type of the LINQ query. This problem can be solved by creating a class which holds ID, Name and Address properties, but then every time before writing a join query I would have to create a class depending on the return type, which is not a convenient way. Is there any other method available to achieve this?

      private IList<A> GetA()
      {
          var query = from a in objA
                      select a;
          return query.ToList();
      }

      private IList<B> GetB()
      {
          var query = from b in objB
                      select b;
          return query.ToList();
      }

      private IList<**returnType**?> GetJoinAAndB()
      {
          var query = from a in objA
                      join b in objB on a.ID equals b.AID
                      select new { a.ID, a.Name, b.Address };
          return query.ToList();
      }
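
    A hedged sketch of the usual workaround - project into a small named type so the joining method has a concrete return type. PersonAddressDto is an invented name; objA and objB are taken from the question:

      public class PersonAddressDto
      {
          public int ID { get; set; }
          public string Name { get; set; }
          public string Address { get; set; }
      }

      private IList<PersonAddressDto> GetJoinAAndB()
      {
          var query = from a in objA
                      join b in objB on a.ID equals b.AID
                      select new PersonAddressDto
                      {
                          ID = a.ID,
                          Name = a.Name,
                          Address = b.Address
                      };
          return query.ToList();
      }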

    Read the article

  • Hovering a div shows hidden div - jquery to prototype conversion

    - by phil
    I may get killed for this, but I have been trying for a few days to use Prototype to show a hidden div when hovering over another div. I have this working fine in jQuery but I could use some help porting it over to Prototype. The code sample:

      <script type="text/javascript">
      $(document).ready(function(){
          $(".recent-question").hover(function(){
              $(this).find(".interact").fadeIn(2.0);
          }, function(){
              $(this).find(".interact").fadeOut(2.0);
          });
      });
      </script>

      <div class="recent-question">
          <img src="images/new/img-sample.gif" alt="" width="70" height="60" />
          <div class="question-text">
              <h3>Heading</h3>
              <p><a href="#">Yadda Yadda Yadda</a></p>
          </div>
          <div class="interact" style="display:none;">
              <ul>
                  <li><a href="#">Choice1</a></li>
                  <li><a href="#">Choice2</a></li>
                  <li><a href="#">Choice3</a></li>
              </ul>
          </div>
      </div>

    So basically, when I hover over a recent-question div I would like the div.interact to fade in, or appear at all. The above code is for jQuery but I am required to use Prototype for this project. Any help converting would be greatly appreciated. Thanks!
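
    A hedged sketch of roughly equivalent Prototype code, assuming script.aculo.us is available for the fade effects and a Prototype version that supports mouseenter/mouseleave (1.7+); treat it as a starting point only:

      document.observe('dom:loaded', function() {
          $$('.recent-question').each(function(question) {
              var interact = question.down('.interact');
              if (!interact) return;
              question.observe('mouseenter', function() {
                  Effect.Appear(interact, { duration: 2.0 });
              });
              question.observe('mouseleave', function() {
                  Effect.Fade(interact, { duration: 2.0 });
              });
          });
      });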

    Read the article

  • Preventing a child process (HandbrakeCLI) from causing the parent script to exit

    - by Chris
    I have a batch conversion script to turn .mkvs of various dimensions into ipod/iphone sized .mp4s, cropping/scaling to suit. Determining the original dimensions, required crop, output file is all working fine. However, on successful completion of the first conversion, HandbrakeCLI causes the parent script to exit. Why would this be? And how can I stop it? The code, as it currently stands:

      #!/bin/bash
      find . -name "*.mkv" | while read FILE
      do
          # What would the output file be?
          DST=../Touch/$(dirname "$FILE")
          MKV=$(basename "$FILE")
          MP4=${MKV%%.mkv}.mp4

          # If it already exists, don't overwrite it
          if [ -e "$DST/$MP4" ]
          then
              echo "NOT overwriting $DST/$MP4"
          else
              # Stuff to determine dimensions/cropping removed for brevity
              HandbrakeCLI --preset "iPhone & iPod Touch" --vb 900 --crop $crop -i "$FILE" -o "$DST/$MP4" > /dev/null 2>&1
              if [ $? != 0 ]
              then
                  echo "$FILE had problems" >> errors.log
              fi
          fi
      done

    I have additionally tried it with a trap, but that didn't change the behaviour (although the last trap did fire):

      trap "echo Handbrake SIGINT-d" SIGINT
      trap "echo Handbrake SIGTERM-d" SIGTERM
      trap "echo Handbrake EXIT-d" EXIT
      trap "echo Handbrake 0-d" 0
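
    A hedged guess at the cause, sketched below: HandbrakeCLI reads from stdin, and inside a "while read" loop stdin is the pipe from find, so the child can swallow the remaining file names and end the loop after the first conversion. Redirecting its stdin away from the pipe is one way to test that theory:

      # Same invocation as in the script, but with stdin redirected so HandbrakeCLI
      # cannot consume the file names that the 'while read FILE' loop still needs.
      HandbrakeCLI --preset "iPhone & iPod Touch" --vb 900 --crop $crop \
          -i "$FILE" -o "$DST/$MP4" < /dev/null > /dev/null 2>&1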

    Read the article

  • How do I efficiently parse a CSV file in Perl?

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl and am looking to make things more efficient. My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal since at least two passes over the data are required (once to split by lines, then once again for each line). This is a very large file, so cutting processing in half would be a significant improvement to the entire application. My question is: what is the most time-efficient means of parsing a large CSV file using only built-in tools?

    Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). Also, I don't want to get into parallel processing, although it might work effectively.

    Edit: it can only involve built-in tools that ship with Perl 5.8. For bureaucratic reasons, I cannot use any third-party modules (even if hosted on CPAN).

    Another edit: let's assume that our solution is only allowed to deal with the file data once it is entirely loaded into memory.

    Yet another edit: I just grasped how stupid this question is. Sorry for wasting your time. Voting to close.
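
    For reference, a minimal sketch of the usual one-pass, line-at-a-time pattern with only core Perl; it reads from disk rather than from a pre-loaded string, assumes no quoted or embedded commas (per the question's constraints), and the file path is a placeholder:

      #!/usr/bin/perl
      use strict;
      use warnings;

      my $file = 'data.csv';           # hypothetical input path
      open my $fh, '<', $file or die "Cannot open $file: $!";

      while (my $line = <$fh>) {       # one pass: read and split each line as it arrives
          chomp $line;
          my @fields = split /,/, $line, -1;   # -1 keeps trailing empty fields
          # ... process @fields here ...
      }

      close $fh;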

    Read the article

  • Having to insert a record, then update the same record warrants 1:1 relationship design?

    - by dianovich
    Let's say an Order has many Line items and we're storing the total cost of an order (based on the sum of prices on order lines) in the orders table.

      --------------
      orders
      --------------
      id
      ref
      total_cost
      --------------

      --------------
      lines
      --------------
      id
      order_id
      price
      --------------

    In a simple application, the order and lines are created during the same step of the checkout process. So this means:

      INSERT INTO orders ....
      -- Get ID of inserted order record
      INSERT INTO lines VALUES(null, order_id, ...), ...

    where we get the order ID after creating the order record. The problem I'm having is trying to figure out the best way to store the total cost of an order. I don't want to have to:
      1. create an order
      2. create lines on the order
      3. calculate the cost of the order based on its lines
      4. then update the record created in step 1 in the orders table
    This would mean a nullable total_cost field on orders, for starters... My solution thus far is to have an order_totals table with a 1:1 relationship to the orders table. But I think it's redundant. Ideally, since everything required to calculate total costs (the lines on an order) is in the database, I would work out the value every time I need it, but this is very expensive. What are your thoughts?
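
    For comparison, a hedged sketch of the compute-on-read option the question mentions, using the table and column names from the question (whether this is actually "very expensive" depends on indexing and data volume):

      -- Total cost of one order, derived from its lines instead of stored on orders.
      SELECT o.id,
             o.ref,
             COALESCE(SUM(l.price), 0) AS total_cost
      FROM orders o
      LEFT JOIN lines l ON l.order_id = o.id
      WHERE o.id = 123          -- hypothetical order id
      GROUP BY o.id, o.ref;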

    Read the article

  • ASP ListView - Eval() as formatted number, Bind() as unformatted?

    - by chucknelson
    I have an ASP ListView and a very simple requirement to display numbers formatted with a comma (12,123), while they need to bind to the database without formatting (12123). I am using a standard setup - a ListView with a data source attached, using Bind(). I converted from some older code, so I'm not using ASP.NET controls, just form inputs... but I don't think it matters for this:

      <asp:SqlDataSource ID="MySqlDataSource" runat="server"
          ConnectionString='<%$ ConnectionStrings:ConnectionString1 %>'
          SelectCommand="SELECT NUMSTR FROM MY_TABLE WHERE ID = @ID"
          UpdateCommand="UPDATE MY_TABLE SET NUMSTR = @NUMSTR WHERE ID = @ID">
      </asp:SqlDataSource>

      <asp:ListView ID="MyListView" runat="server" DataSourceID="MySqlDataSource">
          <LayoutTemplate>
              <div id="itemplaceholder" runat="server"></div>
          </LayoutTemplate>
          <ItemTemplate>
              <input type="text" name="NUMSTR" ID="NUMSTR" runat="server" value='<%#Bind("NUMSTR")%>' />
              <asp:Button ID="UpdateButton" runat="server" Text="Update" Commandname="Update" />
          </ItemTemplate>
      </asp:ListView>

    In the example above, NUMSTR is a number, but it is stored as a string in a SQL Server 2008 database. I'm also using the ItemTemplate as both the read and edit template, to save on duplicate HTML. In the example, I only get the unformatted number. If I convert the field to an integer (via the SELECT) and use a format string like Bind("NUMSTR", "{0:###,###}"), it writes the formatted number to the database, and then fails when it tries to read it again (it can't convert with the comma in there). Is there any elegant/simple solution to this? It's so easy to get the two-way binding going, and I would think there has to be a way to easily format things as well... Oh, and I'm trying to avoid the standard ItemTemplate and EditItemTemplate approach, just for the sheer amount of markup required for that. Thanks!

    Read the article

  • Custom bean instantiation logic in Spring MVC

    - by Michal Bachman
    I have a Spring MVC application trying to use a rich domain model, with the following mapping in the Controller class:

      @RequestMapping(value = "/entity", method = RequestMethod.POST)
      public String create(@Valid Entity entity, BindingResult result, ModelMap modelMap) {
          if (entity == null)
              throw new IllegalArgumentException("An entity is required");
          if (result.hasErrors()) {
              modelMap.addAttribute("entity", entity);
              return "entity/create";
          }
          entity.persist();
          return "redirect:/entity/" + entity.getId();
      }

    Before this method gets executed, Spring uses BeanUtils to instantiate a new Entity and populate its fields. It uses this:

      ...
      ReflectionUtils.makeAccessible(ctor);
      return ctor.newInstance(args);

    Here's the problem: my entities are Spring-managed beans. The reason for this is to inject DAOs on them. Instead of calling new, I use EntityFactory.createEntity(). When they're retrieved from the database, I have an interceptor that overrides the public Object instantiate(String entityName, EntityMode entityMode, Serializable id) method and hooks the factories into that. So the last piece of the puzzle missing here is: how do I force Spring to use the factory rather than its own BeanUtils reflective approach? Any suggestions for a clean solution? Thanks very much in advance.
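
    One hedged sketch of a common approach: give the controller a @ModelAttribute method that builds the instance via the factory, so Spring binds the request parameters onto that object instead of constructing one reflectively. EntityFactory is the question's own factory; the controller and method names are invented:

      @Controller
      public class EntityController {

          // Runs before the @RequestMapping methods; the returned object is what
          // Spring populates from the request and passes in as 'entity'.
          @ModelAttribute("entity")
          public Entity createEntityViaFactory() {
              return EntityFactory.createEntity();
          }

          @RequestMapping(value = "/entity", method = RequestMethod.POST)
          public String create(@Valid @ModelAttribute("entity") Entity entity,
                               BindingResult result, ModelMap modelMap) {
              if (result.hasErrors()) {
                  modelMap.addAttribute("entity", entity);
                  return "entity/create";
              }
              entity.persist();
              return "redirect:/entity/" + entity.getId();
          }
      }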

    Read the article

  • calling constructor of the class in the destructor of the same class

    - by dicaprio
    Experts! I know this question is one of the lousy ones, but still I dared to open my mind, hoping I would learn from all of you. I was trying some examples as part of my routine and did this horrible thing: I called the constructor of the class from the destructor of the same class. I don't really know if this is ever required in real programming; I can't think of any real scenarios where we really need to call functions/the constructor in our destructor. Usually, a destructor is meant for cleaning up. If my understanding is correct, why doesn't the compiler complain? Is this because it is valid for some good reasons? If so, what are they? I tried the Sun Forte, g++ and VC++ compilers and none of them complain about it.

      using namespace std;

      class test {
      public:
          test() {
              cout << "CTOR" << endl;
          }
          ~test() {
              cout << "DTOR" << endl;
              test();
          }
      };

    Read the article

  • Why would one want to use the public constructors on Boolean and similar immutable classes?

    - by Robert J. Walker
    (For the purposes of this question, let us assume that one is intentionally not using auto(un)boxing, either because one is writing pre-Java 1.5 code, or because one feels that autounboxing makes it too easy to create NullPointerExceptions.)

    Take Boolean, for example. The documentation for the Boolean(boolean) constructor says: "Note: It is rarely appropriate to use this constructor. Unless a new instance is required, the static factory valueOf(boolean) is generally a better choice. It is likely to yield significantly better space and time performance." My question is, why would you ever want to get a new instance in the first place? It seems like things would be simpler if constructors like that were private. For example, if they were, you could write this with no danger (even if myBoolean were null):

      if (myBoolean == Boolean.TRUE)

    It'd be safe because all true Booleans would be references to Boolean.TRUE and all false Booleans would be references to Boolean.FALSE. But because the constructors are public, someone may have used them, which means that you have to write this instead:

      if (Boolean.TRUE.equals(myBoolean))

    But where it really gets bad is when you want to check two Booleans for equality. Something like this:

      if (myBooleanA == myBooleanB)

    ...becomes this:

      if ( (myBooleanA == null && myBooleanB == null)
        || (myBooleanA != null && myBooleanA.equals(myBooleanB)) )

    I can't think of any reason to have separate instances of these objects which is more compelling than not having to do the nonsense above. What say you?
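
    A small illustrative sketch of the identity difference the question hinges on; the valueOf caching behaviour is documented, nothing else is assumed:

      public class BooleanIdentityDemo {
          public static void main(String[] args) {
              Boolean cachedTrue = Boolean.valueOf(true);   // returns the shared Boolean.TRUE
              Boolean freshTrue  = new Boolean(true);       // a brand-new instance

              System.out.println(cachedTrue == Boolean.TRUE);       // true
              System.out.println(freshTrue == Boolean.TRUE);        // false - different object
              System.out.println(freshTrue.equals(Boolean.TRUE));   // true  - same value
          }
      }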

    Read the article
