Search Results

Search found 21028 results on 842 pages for 'single player'.

Page 698 of 842

  • Advice on improving programming skills, learning capabilities?

    - by anonymous-coward1234
    Hi all, after 2.5 years of professional Java programming I still have problems that make my job difficult and, more importantly - more often than I would like to admit - not enjoyable. I would like to ask more experienced people for advice on ways to overcome them. These are the problems I have:

    - I do not absorb new knowledge easily. Even when I understand something, after a couple of days I forget even basic stuff. Other co-workers, even with the same working experience, easily put things into "context" when reading about new technologies, and are able to compare them in "real time" with similar technologies they have already used.
    - I always try to address every issue in whatever I am doing in one go, which results in me trying to resolve too many problems at the same time and completely losing control. I find it difficult to decide which single problem I should address first, and even when I do, I find myself throwing away code I wrote because I started with the wrong issue.
    - As far as architecture and data modeling are concerned, I have difficulty deciding which objects to create, with what hierarchy, interfaces, abstractions, etc.

    I imagine that - to a certain degree - these things come with experience. But after 2.5 years of Java programming I would expect to have come much farther than I have, both in terms of absorption and experience. Is there a way to improve my learning speed? Any books, methods, or advice are welcome.

    Read the article

  • Comparing values from a string in a MySQL query

    - by bellesebastien
    I'm having some trouble comparing values found in VARCHAR fields. I have a table of products, and each product has a volume. I store the volume in a VARCHAR field and it is usually a single number (30, 40, 200, ...), but some products have multiple volumes, stored separated by semicolons, like 30;60;80. I know that storing multiple volumes like that is not recommended, but I have to work with the data as it is. I'm trying to implement a search-by-volume function for the products, and I also want to display the products whose volume is greater than or equal to the one searched for. This is not a problem for products with a single volume, but it is for the multiple-volume products. Maybe an example will make things clearer: let's say a product has 30;40;70;80 in its volume field. If someone searches for a volume, say 50, I want that product to be displayed. To do this I was thinking of writing my own custom MySQL function (I've never done this before), but maybe someone can offer a different solution. I apologize for my poor English, but I hope I made my question clear. Thanks.
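
    One way to do this without writing a stored function is to split the semicolon list inside the query itself. A minimal sketch, assuming a products table with an id column and the volume column described above, and at most ten volumes per row (adjust the numbers table to the real maximum):

        -- The derived table of numbers 1..10 indexes into the semicolon list; the nested
        -- SUBSTRING_INDEX calls extract the n-th volume, so a row matches as soon as any
        -- of its volumes is >= the searched value (50 in this example).
        SELECT DISTINCT p.id, p.volume
        FROM products p
        JOIN (SELECT 1 AS n UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5
              UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 10) AS nums
          ON nums.n <= 1 + LENGTH(p.volume) - LENGTH(REPLACE(p.volume, ';', ''))
        WHERE CAST(SUBSTRING_INDEX(SUBSTRING_INDEX(p.volume, ';', nums.n), ';', -1) AS UNSIGNED) >= 50;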

    Read the article

  • Use $_FILES on a page called by .ajax

    - by RachelD
    I have two .php pages that I'm working with. index.php has a file upload form that posts back to index.php, and I can access $_FILES on index.php after submitting the form with no problem. My issue is that after the form submit and the page load, I want to use jQuery's .ajax to call another .php file so that file can open the CSV, process some of the rows, and return the results to the ajax call. The ajax code then displays the results and calls itself recursively to process the next batch of rows. Basically I want to process the CSV in chunks (put the rows in the DB, etc.) and display progress for the user between chunks. I'm doing it this way because the files are 400,000+ rows and the user doesn't want to wait the 10+ minutes it takes to process them all. I don't want to move (save) this file, because I only need to process it and throw it away, and if a user closes the page while it's being processed, the file won't get thrown away. I could use a cron script, but I don't want to. What I would really like to do is pass the (single) $_FILES entry through .ajax, or save it in $_POST or $_SESSION to use on the second page. Is there any hope for my cause? Here's the ajax code if that helps:

        function processCSV(startIndex, length) {
            $.ajax({
                url: "ajax-targets/process-csv.php",
                dataType: "json",
                type: "POST",
                data: { startIndex: startIndex, length: length },
                timeout: 60000, // 1000 = 1 sec
                success: function(data) {
                    // jQuery to display the rows from the CSV
                    var newStart = startIndex + length;
                    if (newStart <= data['csvNumRows']) {
                        processCSV(newStart, length);
                    }
                }
            });
        }
        processCSV(1, 2);

    P.S. I did try this: Passing $_FILES or $_POST to a new page with PHP, but it's not working for me. SOS.
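
    For what it's worth, a sketch of what the second page could look like. It assumes the upload is copied somewhere durable during the upload request - PHP deletes the $_FILES tmp file as soon as that request ends, so it cannot be read from a later ajax request without being persisted first - and that index.php stashed the path in the session. All names here are illustrative, not from the original code:

        <?php
        // ajax-targets/process-csv.php (sketch)
        // Assumes index.php did, while handling the upload:
        //   $tmp = tempnam(sys_get_temp_dir(), 'csv');
        //   move_uploaded_file($_FILES['csv']['tmp_name'], $tmp);
        //   $_SESSION['csv_path'] = $tmp;
        session_start();

        $startIndex = (int) $_POST['startIndex'];
        $length     = (int) $_POST['length'];

        $file = new SplFileObject($_SESSION['csv_path']);
        $file->setFlags(SplFileObject::READ_CSV);

        // Count the rows once and cache the result so the client knows when to stop.
        if (!isset($_SESSION['csv_num_rows'])) {
            $file->seek(PHP_INT_MAX);
            $_SESSION['csv_num_rows'] = $file->key() + 1;
        }

        $rows = array();
        $file->seek($startIndex);
        for ($i = 0; $i < $length && !$file->eof(); $i++) {
            $rows[] = $file->current();   // insert into the DB / transform here
            $file->next();
        }

        header('Content-Type: application/json');
        echo json_encode(array('rows' => $rows, 'csvNumRows' => $_SESSION['csv_num_rows']));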

    Read the article

  • How can I display multiple django modelformset forms in a grouped fieldsets?

    - by JT
    I have a problem with needing to provide multiple model-backed forms on the same page. I understand how to do this with single forms, i.e. just create both forms, name them differently, and use the appropriate names in the template. How exactly do you expand that solution to work with modelformsets? The wrinkle, of course, is that the corresponding forms must be rendered together in the same fieldset. For example, I want my template to produce something like this:

        <fieldset>
            <label for="id_base-0-desc">Home Base Description:</label>
            <input id="id_base-0-desc" type="text" name="base-0-desc" maxlength="100" />
            <label for="id_likes-0-icecream">Want ice cream?</label>
            <input type="checkbox" name="likes-0-icecream" id="id_likes-0-icecream" />
        </fieldset>
        <fieldset>
            <label for="id_base-1-desc">Home Base Description:</label>
            <input id="id_base-1-desc" type="text" name="base-1-desc" maxlength="100" />
            <label for="id_likes-1-icecream">Want ice cream?</label>
            <input type="checkbox" name="likes-1-icecream" id="id_likes-1-icecream" />
        </fieldset>

    I am using a loop like this to process the results (after form validation):

        base_models = base_formset.save(commit=False)
        like_models = like_formset.save(commit=False)
        for base_model, like_model in map(None, base_models, like_models):

    which works as I'd expect (I'm using map because the number of forms can differ). The problem is that I can't figure out a way to do the same pairing with the templating engine. It does work if I lay out all the base forms first and then all the likes forms afterwards, but that doesn't meet the layout requirements. EDIT: Updated the problem statement to be clearer about what exactly I'm processing (models, not forms, in the for loop).
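
    One way to get that paired layout is to zip the two formsets' forms in the view and loop over the pairs in the template; the formsets still need their management forms rendered once each. A minimal sketch with illustrative names (only the field names desc and icecream are taken from the markup above):

        # views.py (sketch)
        from django.shortcuts import render

        def edit(request):
            # ... build base_formset and like_formset exactly as before ...
            # zip() pairs form 0 with form 0, form 1 with form 1, and so on;
            # itertools.izip_longest is an option if the counts can differ.
            form_pairs = zip(base_formset.forms, like_formset.forms)
            return render(request, "edit.html", {
                "base_formset": base_formset,
                "like_formset": like_formset,
                "form_pairs": form_pairs,
            })

    and in the template:

        {{ base_formset.management_form }}
        {{ like_formset.management_form }}
        {% for base_form, like_form in form_pairs %}
        <fieldset>
            {{ base_form.desc.label_tag }} {{ base_form.desc }}
            {{ like_form.icecream.label_tag }} {{ like_form.icecream }}
        </fieldset>
        {% endfor %}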

    Read the article

  • EPIPE blocks server

    - by timn
    I have written a single-threaded asynchronous server in C running on Linux: the socket is non-blocking and for polling I am using epoll. Benchmarks show that the server performs fine, and according to Valgrind there are no memory leaks or other problems. The only problem is that when a write() call is interrupted (because the client closed the connection), the server receives a SIGPIPE. I am triggering the interruption artificially by running the benchmarking utility "siege" with the parameter -b. It makes lots of requests in a row, which all work perfectly. Then I press CTRL-C and restart "siege". Sometimes I am "lucky" and the server does not manage to send the full response because the client's fd is no longer valid; as expected, errno is set to EPIPE. I handle this situation, execute close() on the fd, and then free the memory related to the connection. Now the problem is that the server blocks and does not answer properly any more. Here is the strace output:

        accept(3, {sa_family=AF_INET, sin_port=htons(50611), sin_addr=inet_addr("127.0.0.1")}, [16]) = 5
        fcntl64(5, F_GETFD) = 0
        fcntl64(5, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
        epoll_ctl(4, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLET, {u32=158310248, u64=158310248}}) = 0
        epoll_wait(4, {{EPOLLIN, {u32=158310248, u64=158310248}}}, 128, -1) = 1
        read(5, "GET /user/register HTTP/1.1\r\nHos"..., 4096) = 161
        write(5, "HTTP/1.1 200 OK\r\nContent-Type: t"..., 106) = 106      <<<<<
        write(5, "00001000\r\n", 10) = -1 EPIPE (Broken pipe)              <<<<< why did the previous write() work fine but not this one?
        --- SIGPIPE (Broken pipe) @ 0 (0) ---

    As you can see, the client establishes a new connection, which is accepted and added to the epoll set. epoll_wait() signals that the client sent data (EPOLLIN). The request is parsed and a response is composed. Sending the headers works fine, but when it comes to the body, write() results in EPIPE. It is not a bug in "siege": after this happens, the server blocks any incoming connections, no matter which client they come from.
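
    One thing worth ruling out, offered as a hedged suggestion rather than a diagnosis of the blocking itself: the default disposition of SIGPIPE terminates the process, so a server that wants to handle EPIPE from the return value normally disables the signal, either globally or per call. A minimal sketch:

        #include <signal.h>
        #include <sys/types.h>
        #include <sys/socket.h>

        /* Call once at startup: with SIGPIPE ignored, a write() to a dead peer
         * simply returns -1 with errno == EPIPE instead of raising the signal. */
        static void ignore_sigpipe(void)
        {
            signal(SIGPIPE, SIG_IGN);
        }

        /* Alternatively, suppress the signal for a single call by replacing
         * write(fd, buf, len) with send() and the MSG_NOSIGNAL flag. */
        static ssize_t write_nosig(int fd, const void *buf, size_t len)
        {
            return send(fd, buf, len, MSG_NOSIGNAL);
        }

    This does not by itself explain the blocked accept loop, but it takes the signal out of the picture so the EPIPE cleanup path can be debugged in isolation.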

    Read the article

  • SqlBackup from my client app

    - by Robbert Dam
    I have a client app that runs on several machines and connects to a single SQL database. In the client app, the user can choose between a local SQL Server CE database and a remote SQL Server database. There is also a box where a backup location can be set. For the backup of the remote SQL database I use the following code:

        var bdi = new BackupDeviceItem(backupFile, DeviceType.File);
        var backup = new Backup { Database = "AppDb", Initialize = true };
        backup.Devices.Add(bdi);
        var server = new Server(connection);
        backup.SqlBackup(server);

    When developing the application, I wasn't aware that the given backupFile is written on the machine where the SQL Server instance runs, not on the client machine. What I want is to "dump" all the data from the database to a local file. (Why? Because users may enter a network location in the "backup location" box, and backing up SQL Server directly to a network location fails, so in that case I need to write locally first and then copy to the network.) Is there an alternative to the method above?

    Read the article

  • Perf4J Logging Config Help

    - by manyxcxi
    I currently have a long-running process that I am trying to analyze with Perf4J. It writes its results in CSV format to its own log file, using an AsyncCoalescingStatisticsAppender together with a file appender that has a StatisticsCsvLayout. My question is: when I try to use the --graph option from the command line (using the perf4j jar), it isn't populating the data points - it isn't populating anything. Are my appenders set up incorrectly? The log file contains hundreds (sometimes thousands) of data points for about 10 different tag names.

        <appender name="perfAppender" class="org.apache.log4j.FileAppender">
            <param name="File" value="perfStats.log"/>
            <layout class="org.perf4j.log4j.StatisticsCsvLayout">
            </layout>
        </appender>

        <appender name="CoalescingStatistics" class="org.perf4j.log4j.AsyncCoalescingStatisticsAppender">
            <!-- The TimeSlice option is used to determine the time window for which all received
                 StopWatch logs are aggregated to create a single GroupedTimingStatistics log.
                 Here we set it to 10 seconds, overriding the default of 30000 ms -->
            <param name="TimeSlice" value="10000"/>
            <appender-ref ref="ConsoleAppender"/>
            <appender-ref ref="CompositeRollingFileAppender"/>
            <appender-ref ref="perfAppender"/>
        </appender>

    Read the article

  • JSP Document/JSPX: what determines how tabs/space/linebreaks are removed in the output?

    - by NoozNooz42
    I've got a "JSP Document" ("JSP in XML") nicely formatted and when the webpage is generated and sent to the user, some linebreaks are removed. Now the really weird part: apparently the "main" .jsp always gets all its linebreak removed but for any subsequent .jsp included from the main .jsp, linebreaks seems to be randomly removed (some are there, others aren't). For example, if I'm looking at the webpage served from Firefox and ask to "view source", I get to see what is generated. So, what determines when/how linebreaks are kept/removed? This is just an example I made up... Can you force a .jsp to serve this: <body><div id="page"><div id="header"><div class="title">... or this: <body> <div id="page"> <div id="header"> <div class="title">... ? I take it that linebreaks are removed to save on bandwidth, but what if I want to keep them? And what if I want to keep the same XML indentation as in my .jsp file? Is this doable? EDIT Following skaffman's advice, I took a look at the generated .java files and the "main" one doesn't have lots of out.write but not a single one writing tabs nor newlines. Contrary to that file, all the ones that I'm including from that main .jsp have lots of lines like: out.write("\t...\n"); So I guess my question stays exactly the same: what determines how tabs/space/linebreaks are included/removed in the output?

    Read the article

  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite that runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, and for each one we will store the following information:

    - test ID (a GUID)
    - test name
    - test description
    - status (running, done, waiting to be run)
    - progress (%)
    - start time of the test
    - end time of the test
    - test result
    - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand), and each of the machines (say, 50 of them) runs a service that checks the database and figures out whether it's time to start a new automated test on that machine. How should I organize my SQL tables to store all this information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and recreate the table with more columns), how should I proceed? Should the new attributes just go in a different table? I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I store the screenshots in their own table to simplify the replication? Thanks!
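
    A minimal sketch of one possible layout, following the reasoning above (all names and types are illustrative): a plain column-per-attribute table for the runs, plus a separate table for the frequently rewritten screenshot so it can be excluded from replication, or allowed to lag, without touching the main table. Adding attributes later is just ALTER TABLE ... ADD COLUMN; existing rows read NULL for the new column, so the old format keeps working.

        CREATE TABLE test_run (
            test_id     uuid PRIMARY KEY,
            name        text NOT NULL,
            description text,
            status      text NOT NULL CHECK (status IN ('waiting', 'running', 'done')),
            progress    integer CHECK (progress BETWEEN 0 AND 100),
            started_at  timestamptz,
            finished_at timestamptz,
            result      text
        );

        -- Rewritten every 30 seconds, so it lives apart from the run metadata.
        CREATE TABLE test_run_screenshot (
            test_id     uuid PRIMARY KEY REFERENCES test_run (test_id),
            captured_at timestamptz NOT NULL DEFAULT now(),
            screenshot  bytea
        );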

    Read the article

  • How to best show progress info when using ADO.NET?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations - specifically, when inserting or updating data that may be on the order of hundreds of KB or several MB. Currently I'm using in-memory DataTables and DataRows, which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress information to show the user. I have no idea how much data is passing over the network to the remote DB, or how far along it is. Basically, all I know is when Update returns, at which point it is assumed complete (barring any errors or exceptions). So all I can show is 0%, then a pause, then 100%. I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could even estimate the size of each DataRow based on the data type of each column - using sizeof for value types like int, and checking the length of things like strings or byte arrays. With that I could probably determine an estimated total transfer size before updating, but I'm still stuck without any progress information once Update is called on the TableAdapter. Am I stuck with an indeterminate progress bar or a wait cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser's file download progress bar), could I at least know when each DataRow/DataTable finishes, or something similar? How do you best show this kind of progress info using ADO.NET?
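
    One option that stays inside ADO.NET, offered as a sketch rather than a recommendation: the underlying SqlDataAdapter raises a RowUpdated event after each row is sent, which gives per-row (not per-byte) progress during the Update call. Typed TableAdapters keep their adapter in a protected Adapter property, so it can be exposed from a partial class; the helper below simply assumes you have a handle on it.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class AdapterProgress
        {
            // Reports 0-100 as changed rows are pushed to the server.
            public static void UpdateWithProgress(SqlDataAdapter adapter, DataTable table,
                                                  Action<int> reportPercent)
            {
                int total = 0;
                foreach (DataRow row in table.Rows)
                    if (row.RowState != DataRowState.Unchanged) total++;

                int done = 0;
                SqlRowUpdatedEventHandler onRowUpdated = (sender, e) =>
                {
                    done++;
                    reportPercent(total == 0 ? 100 : done * 100 / total);
                };

                adapter.RowUpdated += onRowUpdated;
                try
                {
                    adapter.Update(table);   // fires RowUpdated once per inserted/updated/deleted row
                }
                finally
                {
                    adapter.RowUpdated -= onRowUpdated;
                }
            }
        }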

    Read the article

  • How do you think while formulating SQL queries? Is it experience or a concept?

    - by Shantanu Gupta
    I have been working on SQL Server and front-end coding, and I have usually had trouble formulating queries. I understand most of the SQL concepts needed to formulate queries, but whenever some new piece of functionality comes up that could be done with a SQL query, I usually fail to work it out. I am very comfortable with SELECT queries using joins and the like, but when it comes to DML operations I usually struggle. For every query I have never written before, I feel uncomfortable while creating it, and whenever I go for an interview I run into this same problem. Is there some concept behind the approach to formulating SQL queries? For example, suppose I need to write a query for this: a table contains a single column with duplicate records, and I need to remove the duplicates. I know I can find the solution to this very easily by Googling, but I want to know how everyone else arrives at the desired result. Is it a case of "practice makes perfect" - once you have done it, next time you will be able to formulate it - or is there some logic or concept behind it? I could have had my answer to the above problem simply by posting it on Stack Overflow, and I would have had an answer within 5 to 10 minutes, but I want to know the reasoning. How do you work on any new kind of query? Is it mostly a matter of experience, or the application of concepts? Whenever I learn something new in coding, I try to use it wherever I can, but here the scenario seems different - maybe I am lacking some concepts.
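
    For what it's worth, the concrete example in the question has a fairly mechanical answer once the pattern is known. A sketch in T-SQL, with an assumed table and column name, using ROW_NUMBER() to keep one copy of each value and delete the rest:

        -- Assumed: dbo.Items(val) holds duplicate values and has no key column.
        WITH numbered AS (
            SELECT val,
                   ROW_NUMBER() OVER (PARTITION BY val ORDER BY val) AS rn
            FROM dbo.Items
        )
        DELETE FROM numbered
        WHERE rn > 1;   -- rn = 1 is the copy that survives

    Much of the "concept" here is knowing a handful of such building blocks (window functions, EXISTS, derived tables) and recognizing which one a problem maps onto; that recognition does largely come with practice.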

    Read the article

  • NSTableView Drag and Drop not working

    - by macatomy
    Hi, I'm trying to set up very basic drag and drop for my NSTableView. The table view has a single column (with a custom cell). The column is bound to an NSArrayController, and the array controller's content array is bound to an NSArray on my controller object. The data displays fine in the table. I connected the dataSource and delegate outlets of the table view to my controller object, and then implemented these methods:

        - (BOOL)tableView:(NSTableView *)aTableView writeRowsWithIndexes:(NSIndexSet *)rowIndexes toPasteboard:(NSPasteboard *)pboard
        {
            NSLog(@"dragging");
            return YES;
        }

        - (NSDragOperation)tableView:(NSTableView *)tv validateDrop:(id <NSDraggingInfo>)info proposedRow:(NSInteger)row proposedDropOperation:(NSTableViewDropOperation)op
        {
            return NSDragOperationEvery;
        }

        - (BOOL)tableView:(NSTableView *)aTableView acceptDrop:(id <NSDraggingInfo>)info row:(NSInteger)row dropOperation:(NSTableViewDropOperation)operation
        {
            return YES;
        }

    I also registered the drag types in -awakeFromNib:

        #define MyDragType @"MyDragType"

        - (void)awakeFromNib
        {
            [super awakeFromNib];
            [_myTable registerForDraggedTypes:[NSArray arrayWithObjects:MyDragType, nil]];
        }

    The problem is that -tableView:writeRowsWithIndexes:toPasteboard: is never called. I've looked at a bunch of examples and I can't figure out what I'm doing wrong. Could the problem be that I'm using a custom cell? Is there something I'm supposed to override in the cell subclass to enable this functionality? EDIT: Confirmed - switching the custom cell for a regular NSTextFieldCell made dragging work. Now, how do I make drag and drop work with my custom cell?

    Read the article

  • Prevent empty tooltips at a wpf datagrid

    - by TheCalendarProgrammer
    I am working on a calendar program that consists mainly of a WPF DataGrid. Since there is not always enough space to display all the entries of a day (a day is a DataGridCell), a tooltip with all of the day's entries should appear on mouse-over. This works so far with the code snippet shown below. And now the (little) problem: if there are no entries for a day, no tooltip should pop up; with the code below, an empty tooltip pops up.

        <DataGridTemplateColumn x:Name="Entry" IsReadOnly="True">
            <DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <Grid>
                        <TextBlock Text="{Binding EntryText}"
                                   Foreground="{Binding EntryForeground}"
                                   FontWeight="{Binding EntryFontWeight}">
                        </TextBlock>
                        <TextBlock Text="{Binding RightAlignedText}" Foreground="Gray" Background="Transparent">
                            <TextBlock.ToolTip>
                                <TextBlock Text="{Binding AllEntriesText}"/>
                            </TextBlock.ToolTip>
                        </TextBlock>
                    </Grid>
                </DataTemplate>
            </DataGridTemplateColumn.CellTemplate>
        </DataGridTemplateColumn>

    The data binding is done via myCalDataGrid.ItemsSource = _listOfDays; in code-behind, where a 'Day' is the view model for a single calendar row.
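
    One way to suppress the empty tooltip without touching code-behind is a style trigger on the inner TextBlock that nulls the ToolTip when there is nothing to show. A sketch, assuming AllEntriesText is an empty string (rather than null) on days without entries:

        <TextBlock Text="{Binding RightAlignedText}" Foreground="Gray" Background="Transparent">
            <TextBlock.Style>
                <Style TargetType="TextBlock">
                    <Setter Property="ToolTip">
                        <Setter.Value>
                            <TextBlock Text="{Binding AllEntriesText}"/>
                        </Setter.Value>
                    </Setter>
                    <Style.Triggers>
                        <!-- No entries: remove the tooltip entirely instead of showing an empty one. -->
                        <DataTrigger Binding="{Binding AllEntriesText}" Value="">
                            <Setter Property="ToolTip" Value="{x:Null}"/>
                        </DataTrigger>
                    </Style.Triggers>
                </Style>
            </TextBlock.Style>
        </TextBlock>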

    Read the article

  • wordpress conditional statements

    - by codedude
    I am using this code in WordPress to display different content when different pages are loaded. I have 5 pages on the site, called Home, Bio, Work, Contact and Notes. The Notes page is being used as the blog. Here is the code I am using:

        <?php if (is_page('contact')) { ?>
            get in touch with me
        <?php } elseif (is_single()) { ?>
            a note from me
        <?php } elseif (is_page('notes')) { ?>
            the notes of me
        <?php } else { ?>
            the <?php the_title(); ?> of me
        <?php } ?>

    So on the contact page it displays "get in touch with me", and on a single blog post page it displays "a note from me". Here is where I have the problem: the next branch should display "the notes of me" on the Notes page, but this does not happen. Instead it shows the default content from the "else" branch. Any idea why this is happening?
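
    A likely explanation, though it depends on the Reading settings: when a static page is assigned as the posts page, WordPress treats it as the blog index, so is_home() is true there and is_page('notes') never matches. A sketch of the adjusted conditional:

        <?php if (is_page('contact')) { ?>
            get in touch with me
        <?php } elseif (is_single()) { ?>
            a note from me
        <?php } elseif (is_home()) { // the posts page; is_page('notes') does not match here ?>
            the notes of me
        <?php } else { ?>
            the <?php the_title(); ?> of me
        <?php } ?>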

    Read the article

  • Passing custom Python objects to nosetests

    - by Rob
    I am attempting to reorganize our test libraries for automation, and nose seems really promising. My question is: what is the best strategy for passing Python objects into nose tests? Our tests are organized in a testlib with a bunch of modules that exercise different types of request operations, something like this:

        testlib
        \-testmoda
        \-testmodb
        \-testmodc

    In some cases a test module (e.g. testmoda) is nothing but test_something(), test_something2() functions, while in other cases we have a TestModB class in testmodb with test_anotherthing1(), test_anotherthing2() methods. The cool thing is that nose easily finds both. Most of those test functions are request-factory stuff that can easily share a single connection to our server farm, so we do a lot of test_something1(cnn), TestModB.test_anotherthing2(cnn), etc. Currently we don't use nose; instead we have a hodge-podge of homegrown driver scripts with hard-coded lists of tests to execute. Each of those driver scripts creates its own connection object, and maintaining the scripts and the connection minutiae is painful. I'd like to take full advantage of nose's beautiful discovery functionality while passing in a connection object of my choosing. Thanks in advance! Rob. P.S. The connection objects are not pickleable. :(
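
    One nose-friendly way to share a non-pickleable connection is to build it in a module-level (or package-level) fixture instead of passing it in from a driver script: nose runs setup_module/teardown_module automatically, and the tests just use the shared object. A minimal sketch - the connection factory is a made-up stand-in for whatever really talks to the server farm:

        # testlib/testmoda.py (sketch)
        from mylib.connections import make_connection   # hypothetical helper

        cnn = None

        def setup_module():
            """Run once by nose before any test in this module."""
            global cnn
            cnn = make_connection()

        def teardown_module():
            """Run once after the last test in this module."""
            cnn.close()

        def test_something1():
            assert cnn.request("ping") == "pong"

        def test_something2():
            assert cnn.request("status") is not None

    A package-wide fixture (setup_package in testlib/__init__.py) works the same way if all modules should share one connection.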

    Read the article

  • Multicore solr on Ubuntu 10.04 working for anyone?

    - by coleifer
    Following the instructions from the two sites below, I've installed Tomcat 6 and Solr 1.4:

        http://gist.github.com/204638
        https://wiki.fourkitchens.com/display/TECH/Solr+1.4+on+Ubuntu+9.10+and+CentOS+5

    I have successfully got it up and running with multicore support on a server running 9.04, but on the 10.04 box I can't seem to get it to work. I am able to reach localhost:xxxx/solr/ on the 10.04 box and see a single link to the Solr Admin, but following the link takes me to a 404 page with the following output:

        /solr/admin/
        HTTP Status 404 - missing core name in path
        The requested resource (missing core name in path) is not available

    I am also unable to access /solr/site1/ as I would expect - it similarly returns a 404.

        <!-- from /var/solr/solr.xml; the site dirs exist -->
        <cores adminPath="/admin/cores">
            <core name="site1" instanceDir="site1" />
            <core name="site2" instanceDir="site2" />
        </cores>

        <!-- from /etc/tomcat6/Catalina/localhost/solr.xml -->
        <Context docBase="/var/solr/solr.war" debug="0" privileged="true" allowLinking="true" crossContext="true">
            <Environment name="solr/home" type="java.lang.String" value="/var/solr" override="true" />
        </Context>

    Read the article

  • Ember model is gone when I use the renderTemplate hook

    - by Mickael Caruso
    I have a single template - editPerson.hbs:

        <form role="form">
            FirstName: {{input type="text" value=model.firstName}} <br/>
            LastName: {{input type="text" value=model.lastName}}
        </form>

    I want to render this template both when the user edits an existing person and when they create a new one, so I set up routes:

        App.Router.map(function() {
            this.route("createPerson", { path: "/person/new" });
            this.route("editPerson", { path: "/person/:id" });
            // other routes not shown for brevity
        });

    and define two route classes, one for create and one for edit:

        App.CreatePersonRoute = Ember.Route.extend({
            renderTemplate: function() {
                this.render("editPerson", { controller: "editPerson" });
            },
            model: function() {
                return { firstName: "John", lastName: "Smith" };
            }
        });

        App.EditPersonRoute = Ember.Route.extend({
            model: function(id) {
                return { firstName: "John Existing", lastName: "Smith Existing" };
            }
        });

    The models are hard-coded for now. I'm concerned about the createPerson route: I'm telling it to render the editPerson template and to use the editPerson controller (which I don't show because I don't think it matters - but I did create one). When I use renderTemplate, I lose the model "John Smith", which in turn is not displayed by the editPerson template on the page. Why? I "fixed" this by creating a separate, identical (to editPerson.hbs) createPerson.hbs and removing the renderTemplate hook in CreatePersonRoute. That works as expected, but I find it troubling to keep two identical templates for the edit and create cases. I have looked everywhere for how to do this properly and found no answers.
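
    A hedged guess at what is happening, since it depends on the Ember version in use: this.render(..., { controller: "editPerson" }) swaps in the editPerson controller, but nothing ever hands that controller the route's model, so the template binds against an empty controller. One sketch of wiring it up without duplicating the template:

        App.CreatePersonRoute = Ember.Route.extend({
            model: function() {
                return { firstName: "John", lastName: "Smith" };
            },

            // Give the controller the template will actually use the route's model.
            setupController: function(controller, model) {
                this.controllerFor("editPerson").set("model", model);
            },

            renderTemplate: function() {
                this.render("editPerson", { controller: "editPerson" });
            }
        });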

    Read the article

  • Do servlet containers prevent web applications from causing each other interference and how do they do it?

    - by chrisbunney
    I know that a servlet container, such as Apache Tomcat, runs in a single JVM instance, which means all of its servlets run in the same process. I also know that the architecture of a servlet container gives each web application its own context, which suggests it is isolated from other web applications. Accepting that each web application is isolated, I would expect that you could create two copies of an identical web application, change the names and context paths of each (as well as any other relevant configuration), and run them in parallel without one affecting the other. The answers to this question appear to support this view. However, a colleague disagrees, based on their experience of attempting just that: they took a web application and tried to run two separate instances (with different names, etc.) in the same servlet container, and the two instances conflicted with each other (I can't elaborate further, as I wasn't involved in that work). Based on this, they argue that since the web applications run in the same process space, they can't be isolated, and things such as static class attributes end up being inadvertently shared. This answer appears to suggest the same thing. The two views don't seem to be compatible, so I ask you: do servlet containers prevent web applications deployed to the same container from conflicting with each other? If yes, how do they do it? If no, why does the interference occur? And finally, under what circumstances could separate web applications conflict and interfere with each other - perhaps scenarios involving resources on the file system, native code, or database connections?
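
    A concrete illustration of where the line usually sits, hedged because shared-library setups vary by container: each web application gets its own webapp class loader, so a class packaged inside each WAR is loaded separately per application and its static state is not shared; put the same class on the container's common/shared class path instead and it is loaded only once, so every application sees the same statics.

        // If this class ships inside WEB-INF/classes (or WEB-INF/lib) of each WAR,
        // every webapp gets its own copy of `count`. If it sits on the container's
        // shared class path, all webapps increment the same counter.
        public class HitCounter {
            private static int count;

            public static synchronized int increment() {
                return ++count;
            }
        }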

    Read the article

  • Use the repository pattern when using PLINQO generated data?

    - by Chad
    I'm "upgrading" an MVC app. Previously, the DAL was a part of the Model, as a series of repositories (based on the entity name) using standard LINQ to SQL queries. Now, it's a separate project and is generated using PLINQO. Since PLINQO generates query extensions based on the properties of the entity, I started using them directly in my controller... and eliminated the repositories all together. It's working fine, this is more a question to draw upon your experience, should I continue down this path or should I rebuild the repositories (using PLINQO as the DAL within the repository files)? One benefit of just using the PLINQO generated data context is that when I need DB access, I just make one reference to the the data context. Under the repository pattern, I had to reference each repository when I needed data access, sometimes needing to reference multiple repositories on a single controller. The big benefit I saw on the repositories, were aptly named query methods (i.e. FindAllProductsByCategoryId(int id), etc...). With the PLINQO code, it's _db.Product.ByCatId(int id) - which isn't too bad either. I like both, but where it gets "harrier" is when the query uses predicates. I can roll that up into the repository query method. But on the PLINQO code, it would be something like _db.Product.Where(x = x.CatId == 1 && x.OrderId == 1); I'm not so sure I like having code like that in my controllers. Whats your take on this?

    Read the article

  • Code Golf: Shortest Turing-complete interpreter.

    - by ilya n.
    I've just tried to create the smallest possible language interpreter. Would you like to join in and try? Rules of the game:

    - You should specify the programming language you're interpreting. If it's a language you invented, it should come with a list of commands in the comments.
    - Your code should start with an example program and data assigned to your code and data variables.
    - Your code should end with output of the result. It's preferable to have debug statements at every intermediate step.
    - Your code should be runnable as written. You can assume the data are 0s and 1s (int, string or boolean, your choice) and the output is a single bit.
    - The language should be Turing-complete, in the sense that for any algorithm written for a standard model (such as a Turing machine, Markov algorithms, or a similar model of your choice) it is reasonably obvious, or explained, how to write a program that performs the algorithm when executed by your interpreter.
    - The length of the code is defined as the length of the code after removing the input part, the output part, debug statements and unnecessary whitespace. Please add the resulting code and its length to your post.
    - You can't use functions that make the compiler execute code for you, such as eval(), exec() or similar.

    This is a Community Wiki, meaning neither the question nor the answers get reputation points from votes. But vote anyway!

    Read the article

  • Using Templated Helpers in MVC 2.0: how can I use the name of the property that I'm rendering inside the template?

    - by Andrey Tagaew
    Hi. I'm reviewing the new features of ASP.NET MVC 2.0. During the review I found the templated helpers really interesting: as described, the primary reason for using them is to provide a common way to render certain data types. Now I want to use this approach in my project for the DateTime data type. My project was written for MVC 1.0, so generating the edit box currently looks like this:

        <%= Html.TextBox("BirthDate", Model.BirthDate, new { maxlength = 10, size = 10, @class = "BirthDate-date" }) %>
        <script type="text/javascript">
            $(document).ready(function() {
                $(".BirthDate-date").datepicker({
                    showOn: 'button',
                    buttonImage: '<%= Url.Content("~/images/i_calendar.gif") %>',
                    buttonImageOnly: true
                });
            });
        </script>

    Now I want to use a templated helper, so that I get the above output once I write:

        <%= Html.EditorFor(f => f.BirthDate) %>

    Following the manual, I created a DateTime.ascx partial view inside the Shared/EditorTemplates folder, put the code above in it, and got stuck on one problem: how can I get the name of the property that I'm rendering with the templated helper? As you can see from my example, I really need it, since I use the property name for the input's name (the value sent in the POST request) and to generate the class name for the JS calendar. I tried removing my partial view and letting MVC generate its default behaviour; here is what it produced:

        <input type="text" value="04/29/2010" name="LoanApplicationDays" id="LoanApplicationDays" class="text-box single-line">

    As you can see, it used the name of the property for the "name" and "id" attributes. This lets me presume that the templated helper knows the name of the property, so there should be some way to use it in a custom implementation. Thanks for your help!
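
    A hedged sketch of what Shared/EditorTemplates/DateTime.ascx could look like: inside an editor template, metadata about the property being rendered is available through ViewData.ModelMetadata, and a TextBox rendered with an empty name picks up the field prefix (i.e. the property name) automatically, so nothing has to be passed in by hand. Treat this as a sketch to verify against MVC 2, not a definitive implementation:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<DateTime>" %>
        <%
            // "BirthDate", "LoanApplicationDays", ... whichever property is being rendered.
            var propertyName = ViewData.ModelMetadata.PropertyName;
        %>
        <%= Html.TextBox("", Model.ToShortDateString(),
                new { maxlength = 10, size = 10, @class = propertyName + "-date" }) %>
        <script type="text/javascript">
            $(document).ready(function() {
                $(".<%= propertyName %>-date").datepicker({
                    showOn: 'button',
                    buttonImage: '<%= Url.Content("~/images/i_calendar.gif") %>',
                    buttonImageOnly: true
                });
            });
        </script>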

    Read the article

  • Declaration of arrays before "normal" variables in C?

    - by bjarkef
    Hi, we are currently developing an application for an MSP430 MCU and are running into some weird problems. We discovered that declaring arrays within a scope after the declaration of "normal" variables sometimes causes what seems to be undefined behaviour. Like this:

        void foo(int a, int *b);

        int main(void)
        {
            int x = 2;
            int arr[5];

            foo(x, arr);
            return 0;
        }

    foo is sometimes passed a pointer as its second argument that does not point to the arr array. We verify this by single-stepping through the program and seeing that the value of the arr variable in main's scope is not the same as the value of the b pointer in foo's scope. And no, it is not really reproducible; we have just observed this behaviour once in a while. Changing the example seems to solve the problem, like this:

        void foo(int a, int *b);

        int main(void)
        {
            int arr[5];
            int x = 2;

            foo(x, arr);
            return 0;
        }

    Does anybody have any input or hints as to why we see this behaviour, or similar experiences? The MSP430 programming guide specifies that code should conform to the ANSI C89 spec, so I was wondering whether it says that arrays have to be declared before non-array variables. Any input on this would be appreciated.

    Read the article

  • Text Parsing - My Parser Skipping commands

    - by The.Anti.9
    I'm trying to parse text formatting. I want to mark inline code with backticks (`), much like SO does. The rule is supposed to be that if you want to use a backtick inside an inline code element, you should use double backticks around the inline code, like this:

        `` mark inline code with backticks ( ` ) ``

    My parser seems to skip over the double backticks completely for some reason. Here is the code for the function that does the inline-code parsing:

        private string ParseInlineCode(string input)
        {
            for (int i = 0; i < input.Length; i++)
            {
                if (input[i] == '`' && input[i - 1] != '\\')
                {
                    if (input[i + 1] == '`')
                    {
                        string str = ReadToCharacter('`', i + 2, input);
                        while (input[i + str.Length + 2] != '`')
                        {
                            str += ReadToCharacter('`', i + str.Length + 3, input);
                        }
                        string tbr = "``" + str + "``";
                        str = str.Replace("&", "&amp;");
                        str = str.Replace("<", "&lt;");
                        str = str.Replace(">", "&gt;");
                        input = input.Replace(tbr, "<code>" + str + "</code>");
                        i += str.Length + 13;
                    }
                    else
                    {
                        string str = ReadToCharacter('`', i + 1, input);
                        input = input.Replace("`" + str + "`", "<code>" + str + "</code>");
                        i += str.Length + 13;
                    }
                }
            }
            return input;
        }

    If I use single backticks around something, it is wrapped in <code> tags correctly.

    Read the article

  • Microsoft OLE DB Provider for SQL Server error '80040e14' Could not find stored procedure

    - by BBlake
    I am migrating a classic ASP web app to new servers. The database back end is moving from SQL Server 2000 to SQL Server 2008, and the app is moving from Windows 2000 x86 to Windows 2003 R2 x64. I am getting the above error on every single stored procedure call within the application. I have verified:

    - Yes, the SQL user is set up, using the correct username and password.
    - Yes, the SQL user has execute permissions on the stored procedures in the database.
    - Yes, I have updated the TypeLib references to the new UUID.
    - Yes, I have logged into the database via SSMS with the SQL user ID; it can see and execute the stored procedures just fine in SSMS, but not from the web app.
    - Yes, the SQL user has the database set as its default database.

    The most frustrating thing is that it works fine on the DEV server, but not on the production server. I have gone through every IIS setting 5 or 6 times and the web app is set up precisely the same in both environments. The only difference is the database server name in the connection string (DEV vs. prod). EDIT: I have also tried pointing the prod web box at the dev database server and got the same error, so I'm fairly sure the issue isn't on the database side.

    Read the article

  • Map element position in data file to class property

    - by Augusto
    I need to read and write files following a format provided by a third-party specification. The specification itself is pretty simple: it gives the position and the size of each piece of data saved in the file. For example:

        Position  Size  Description
        --------------------------------------------------
        0001      10    Device serial number
        0011      02    Hour
        0013      02    Minute
        0015      02    Second
        0017      02    Day
        0019      02    Month
        0021      02    Year

    The list is very long - it has about 400 elements - but many of them can be combined. For example, hour, minute, second, day, month and year can be combined into a single DateTime object. I've split the elements into about 4 categories and created separate classes for holding the data, so instead of one big structure representing the whole file I have several smaller classes. I've also created separate classes for reading and writing the data. The problem is: how do I map the positions in the file to the object properties, so that I don't need to repeat the position values in both the reading and the writing class? I could use custom attributes and retrieve them via reflection, but since the code will run on devices with little memory and a slow processor, it would be nice to find another way. My current read code looks like this:

        public void Read()
        {
            DataFile dataFile = new DataFile();
            // the arguments are: position, size
            dataFile.SerialNumber = ReadLong(1, 10);
            // ...
        }

    Any ideas on this one? Thanks!
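
    One reflection-free option is a field map built once as plain data: each entry couples a position and size with a small delegate that assigns the value into the object, so the positions live in exactly one place and the reader just iterates the map. A sketch with invented names (IRecordReader, Read, and the DataFile fields stand in for the existing low-level reader and model classes):

        using System;
        using System.Collections.Generic;

        public class DataFile { public long SerialNumber; public int Hour; public int Minute; }

        // Stand-in for whatever actually reads `size` characters starting at `position`.
        public interface IRecordReader { string Read(int position, int size); }

        public static class DataFileLayout
        {
            // The layout table: one lambda per element in the specification.
            private static readonly List<Action<IRecordReader, DataFile>> Fields =
                new List<Action<IRecordReader, DataFile>>
            {
                (r, d) => d.SerialNumber = long.Parse(r.Read(1, 10)),
                (r, d) => d.Hour         = int.Parse(r.Read(11, 2)),
                (r, d) => d.Minute       = int.Parse(r.Read(13, 2)),
                // ... and so on for the remaining elements ...
            };

            public static DataFile Read(IRecordReader reader)
            {
                var dataFile = new DataFile();
                foreach (var field in Fields)
                    field(reader, dataFile);
                return dataFile;
            }
        }

    A mirror-image list of write delegates (or a single entry type holding both a getter and a setter) covers the writing side from the same position data.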

    Read the article
