Search Results

Search found 20963 results on 839 pages for 'jquery grid'.

  • Set DetailsView as selected row of GridView

    - by Nix
    I am afraid this is a brain fart question, but I have searched around and have not been able to find the answer. I am creating a GridView/DetailsView page. I have a grid that displays a bunch of rows; when a row is selected, it uses a DetailsView to allow for Insert/Update. My question is: what is the best way to link these? I do not want to reach out to the web service again; all the data I need is in the selected grid view row. I basically have two separate data sources that share the same "DataObjectTypeName": the first data source retrieves the data, and the other does the CRUD. What is the best way to transfer the selected GridView row to the DetailsView? Am I going to have to manually handle the Insert/Update events and call the data source myself? <asp:GridView ID="gvDetails" runat="server" DataKeyNames="ID, Code" DataSourceID="odsSearchData" > <Columns> <asp:BoundField DataField="RowA" HeaderText="A" SortExpression="RowA" /> <asp:BoundField DataField="RowB" HeaderText="B" SortExpression="RowB" /> <asp:BoundField DataField="RowC" HeaderText="C" SortExpression="RowC" /> ....Code... <asp:DetailsView ID="dvDetails" runat="server" DataKeyNames="ID, Code" DataSourceID="odsCRUD" GridLines="None" DefaultMode="Edit" AutoGenerateRows="false" Visible="false" Width="100%"> <Fields> <asp:BoundField DataField="RowA" HeaderText="A" SortExpression="RowA" /> <asp:BoundField DataField="RowB" HeaderText="B" SortExpression="RowB" /> <asp:BoundField DataField="RowC" HeaderText="C" SortExpression="RowC" /> ...
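    One possible way to link the two, sketched below as an assumption rather than a confirmed answer from this post: handle the GridView's SelectedIndexChanged event and feed the selected row's data keys to the CRUD data source, so the DetailsView binds to the same record. Note this still issues a select through odsCRUD; avoiding any second fetch entirely would mean manually handling the DetailsView's inserting/updating events instead, as the asker suspects.

        // Sketch only. Assumes odsCRUD declares SelectParameters for "ID" and
        // "Code", and that OnSelectedIndexChanged="gvDetails_SelectedIndexChanged"
        // is wired up on the GridView in the markup above.
        protected void gvDetails_SelectedIndexChanged(object sender, EventArgs e)
        {
            // DataKeyNames="ID, Code" on the GridView makes both values available.
            DataKey keys = gvDetails.SelectedDataKey;
            odsCRUD.SelectParameters["ID"].DefaultValue = keys["ID"].ToString();
            odsCRUD.SelectParameters["Code"].DefaultValue = keys["Code"].ToString();
            dvDetails.Visible = true;
            dvDetails.DataBind();
        }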

  • In Silverlight, what structures, aside of the ListBox, can be used for binding?

    - by Aidenn
    I need to simply provide the content of a property to a custom User Control in Silverlight. My control is something like this: <UserControl x:Class="SilverlightApplication.Header" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignWidth="300" d:DesignHeight="120"> <Grid x:Name="Header_Layout"> <StackPanel x:Name="hiHeaderContent" Width="Auto" Margin="73,8,8,8"> <TextBlock x:Name="User:" Text="{Binding name}" /> </StackPanel> </Grid> I use this User Control from another control, where I try to pass the parameter "name" to the previous UserControl ("Header"). I don't need to create a "ListBox", as I will only have one header, so I try to avoid doing: <ListBox x:Name="HeaderListBox" Grid.Row="0"> <ListBox.ItemTemplate> <DataTemplate> <SilverlightApplication:Header/> </DataTemplate> </ListBox.ItemTemplate> </ListBox> in order to send the "User" account using: HeaderListBox.ItemsSource = name; Is there any other structure I can use instead of the ListBox to pass the parameter just once? It won't be a list, it's just a header... Thank you!
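    Since a single header needs no items control at all, one alternative is sketched below, under the assumption that the Header instance in the parent XAML is given x:Name="HeaderControl" (a hypothetical name): set the user control's DataContext to any object exposing a "name" property, and the existing {Binding name} resolves against it.

        // Hypothetical model class; any object with a public "name"
        // property satisfies the {Binding name} inside Header.
        public class UserInfo
        {
            public string name { get; set; }
        }

        // In the parent control's code-behind, instead of
        // HeaderListBox.ItemsSource = name:
        HeaderControl.DataContext = new UserInfo { name = "Aidenn" };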

  • JavaScript: supposed to execute functions sequentially, not actually doing so?

    - by AP257
    I'm seeing a lot of answers on StackOverflow that say that JavaScript executes code sequentially, but I can actually see my own JavaScript not doing so. From the following code: function centre_map(lat, lng, zoom_level) { alert('centre_map'); map = new GMap2(document.getElementById('map_canvas')); var latlng = new GLatLng(lat, lng); map.setCenter(latlng, zoom_level); } function add_markers_within_bounds() { alert('add_markers_within_bounds'); // add numerous BLUE markers within map bounds using MarkerClusterer } function add_marker(lat, lng, place_name, grid, county) { alert('add_marker'); // add one ordinary RED Google Maps marker } centre_map('{{lat}}', '{{lng}}', 12); add_markers_within_bounds('{{grid}}', '{{place_name}}'); add_marker('{{lat}}', '{{lng}}', '{{place_name}}', '{{grid}}', '{{county}}'); I get the following sequence of events: 'centre_map' alert; 'add_markers_within_bounds' alert; 'add_marker' alert; individual RED marker appears on map (i.e. add_marker renders); multiple BLUE markers appear on map (i.e. add_markers_within_bounds renders). Why doesn't add_markers_within_bounds complete before add_marker gets under way, and how can I make it do so? I know that one way might be to call add_marker from within add_markers_within_bounds, but for various reasons I'd rather keep it as a separate function.

  • Form doesn't resize smoothly with a timer event

    - by BDotA
    I have a grid control at the bottom of my form, and it can be shown or hidden if the user wants to show/hide it. So one way was to just use AutoSize on the form and change the Visible property of that grid to true or false... But I thought, let's make it a little cooler! I wanted the form to resize a little more slowly, like a garage door! So I dropped a Timer on the form and started increasing the height of the form little by little while the timer ticks. So, something like this when the user says show/hide the grid: timer1.Enabled = true; timer1.Start(); and something like this in the timer's Tick event: this.Height = this.Height + 5; if(this.Height - 10 > ErrorsGrid.Bottom) timer1.Stop(); It kind of works, but still not perfectly. For example, it lags at the very beginning: it stops for about a second and then starts moving... So, with this idea in mind, what alterations do you suggest to make this look and work better?
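    One way to smooth the motion is sketched below for WinForms; the easing step and interval values are assumptions, not a definitive fix. Precompute the target height, shorten the timer interval from the 100 ms default (a common cause of the initial stutter), and move a fraction of the remaining distance on each tick so the form decelerates like a garage door.

        private int targetHeight;

        private void ShowGrid()
        {
            targetHeight = ErrorsGrid.Bottom + 10;  // assumed target, matching the original stop check
            timer1.Interval = 15;                   // ~60 ticks/second instead of the 100 ms default
            timer1.Start();                         // timer1.Tick is wired to timer1_Tick below
        }

        private void timer1_Tick(object sender, EventArgs e)
        {
            int remaining = targetHeight - this.Height;
            if (remaining <= 0)
            {
                timer1.Stop();
                return;
            }
            // Step a quarter of the remaining distance (at least 2 px) per
            // tick, so the movement starts fast and eases to a stop.
            this.Height += Math.Max(2, remaining / 4);
        }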

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and gets frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

    I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's no longer a matter of 'the slowness of the application is caused by the O/R mapper'. Perhaps query generation can be optimized a bit here, row materialization a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones.

    The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
    If I solely look at the Linq query, the code consuming the resultset of the 10 rows, and the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule you have to understand is based on the old saying "Penny wise, pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of that same total time T. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you optimize that part away completely. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

    One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software as possible, you will inevitably be faced with the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get, plus you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page takes several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task, the user or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
    This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

    Start with the code you wrote which starts the task, analyze it and track the path it follows through your application. What does the code do along the way? Verify whether it's correct or not, and analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet; we're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

    Reviewing the code you wrote is a good way to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code and how many times that code was called by other code, and thus they reveal where the hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

    As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
    You navigate to the particular part which is slow, start profiling in the profiler, perform the actions which are considered slow in your application, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

    O/R mapper and RDBMS profiling

    .NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and at the software making it possible to consume the database in your application: the O/R mapper. To understand how much the O/R mapper and the database each contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers, like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor offers one. For example, for SQL Server the profiler is shipped with SQL Server, and for Oracle it's built into the RDBMS; there are also 3rd party tools.

    Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on, and how long they took. Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also include the time it took to transport data across the network. This is important, because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

    Interpret

    After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take part, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important.
    The parts which are irrelevant (i.e. which don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now first have to look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just one query executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data.

    Interpreting means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed.

    Fix

    After interpreting the analysis results, you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results.

    After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
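    For the caching compromise mentioned in the fix step, here is a minimal, framework-agnostic sketch (an illustration only, not LLBLGen Pro functionality): memoize a fetch result for a short time window, trading memory for fewer identical round trips.

        using System;
        using System.Collections.Generic;

        // Time-boxed result cache; not thread-safe, illustration only.
        public class QueryCache<TKey, TResult>
        {
            private readonly Dictionary<TKey, KeyValuePair<DateTime, TResult>> _cache =
                new Dictionary<TKey, KeyValuePair<DateTime, TResult>>();
            private readonly TimeSpan _ttl;

            public QueryCache(TimeSpan ttl)
            {
                _ttl = ttl;
            }

            public TResult GetOrFetch(TKey key, Func<TResult> fetch)
            {
                KeyValuePair<DateTime, TResult> entry;
                if (_cache.TryGetValue(key, out entry) && DateTime.UtcNow - entry.Key < _ttl)
                {
                    return entry.Value;   // served from memory: no query executed
                }
                TResult result = fetch(); // the real, expensive fetch
                _cache[key] = new KeyValuePair<DateTime, TResult>(DateTime.UtcNow, result);
                return result;
            }
        }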
    Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to leave the strict OO path and execute queries directly onto the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

    Most common performance problems with O/R mappers

    Below is an incomplete list of common performance problems related to data access / O/R mappers / RDBMS code. It will help you fix the hotspots you found in the interpretation step.

    - SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid, with a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each Order row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this: use eager loading with a prefetch path to fetch the customers together with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

    - Prefetch paths using many path nodes, or sorting, or limiting: an eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, the amount of data fetched in a child node can be bigger than you think. Also consider that data in every node is merged on the client into the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

    - Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

    - Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure to switch server-side paging on on the datasource control used; paging on the grid alone is not enough, as this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep in mind that analyzing queries for paging can lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will then do DISTINCT filtering on the client. This is a little slower, but it does perform the paging functionality on the data-reader, so it won't fetch all rows even if the query suggests it does.

    - Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

    - Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro to do the data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse the data in memory to calculate a value.

    - Using .ToList() constructs inside Linq queries: you might use .ToList() somewhere in a Linq query, which makes the query run partially in memory (see the sketch after this list). Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and do the filtering in memory, as the Linq query is defined on an IEnumerable<T>, not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

    - Fetching all entities to delete into memory first: to delete a set of entities, it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly, to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity was actually removed from the DB and thus can be removed from the cache.

    - Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.
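    The .ToList() pitfall above is easiest to see side by side. A short sketch reusing the article's own example, where metaData.Customers stands in for any IQueryable<T> source:

        // In memory: ToList() materializes every customer first, and the
        // filter then runs over an IEnumerable<T> on the client.
        var inMemory = from c in metaData.Customers.ToList()
                       where c.Country == "Norway"
                       select c;

        // On the server: the query stays an IQueryable<T>, the provider
        // translates the filter to SQL, and only matching rows are fetched.
        var onServer = from c in metaData.Customers
                       where c.Country == "Norway"
                       select c;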
    Conclusion

    Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built.
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  

  • Upgrading Redmine, activerecord-mysql2-adapter not recognized

    - by David Kaczynski
    For upgrading Redmine from 1.0.1 to 2.1.2, I need to execute the command: rake db:migrate RAILS_ENV=production However, doing so produces the following error: rake aborted! Please install the mysql2 adapter: gem install activerecord-mysql2-adapter (mysql2 is not part of the bundle. Add it to Gemfile.) I have run gem install activerecord-mysql2-adapter, but I still get the same error when I try to run the rake ... command. How do I get my RoR app to recognize that I have the mysql2 adapter installed already? Or is there something wrong with my activerecord-mysql2-adapter installation? Results of sudo bundle install: Using rake (10.0.0) Using i18n (0.6.1) Using multi_json (1.3.7) Using activesupport (3.2.8) Using builder (3.0.0) Using activemodel (3.2.8) Using erubis (2.7.0) Using journey (1.0.4) Using rack (1.4.1) Using rack-cache (1.2) Using rack-test (0.6.2) Using hike (1.2.1) Using tilt (1.3.3) Using sprockets (2.1.3) Using actionpack (3.2.8) Using mime-types (1.19) Using polyglot (0.3.3) Using treetop (1.4.12) Using mail (2.4.4) Using actionmailer (3.2.8) Using arel (3.0.2) Using tzinfo (0.3.35) Using activerecord (3.2.8) Using activeresource (3.2.8) Using coderay (1.0.8) Using fastercsv (1.5.5) Using rack-ssl (1.3.2) Using json (1.7.5) Using rdoc (3.12) Using thor (0.16.0) Using railties (3.2.8) Using jquery-rails (2.0.3) Using metaclass (0.0.1) Using mocha (0.12.3) Using mysql (2.8.1) Using net-ldap (0.3.1) Using pg (0.14.1) Using ruby-openid (2.1.8) Using rack-openid (1.3.1) Using bundler (1.2.1) Using rails (3.2.8) Using rmagick (2.13.1) Using shoulda (2.11.3) Using sqlite3 (1.3.6) Using yard (0.8.3) Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed. Results of sudo find / -name "*mysql2*": /var/lib/gems/1.8/doc/mysql2-0.3.11 /var/lib/gems/1.8/doc/activerecord-3.2.9/ri/ActiveRecord/Base/mysql2_connection-c.ri /var/lib/gems/1.8/doc/activerecord-mysql2-adapter-0.0.3 /var/lib/gems/1.8/doc/activerecord-mysql2-adapter-0.0.3/ri/ActiveRecord/Base/em_mysql2_connection-c.ri /var/lib/gems/1.8/doc/activerecord-mysql2-adapter-0.0.3/ri/ActiveRecord/Base/mysql2_connection-c.ri /var/lib/gems/1.8/gems/mysql2-0.3.11 /var/lib/gems/1.8/gems/mysql2-0.3.11/spec/mysql2 /var/lib/gems/1.8/gems/mysql2-0.3.11/mysql2.gemspec /var/lib/gems/1.8/gems/mysql2-0.3.11/lib/mysql2.rb /var/lib/gems/1.8/gems/mysql2-0.3.11/lib/mysql2 /var/lib/gems/1.8/gems/mysql2-0.3.11/lib/mysql2/mysql2.so /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2 /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2.so /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2_ext.c /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2_ext.h /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2_ext.o /var/lib/gems/1.8/gems/activerecord-3.2.9/lib/active_record/connection_adapters/mysql2_adapter.rb /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3 /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/activerecord-mysql2-adapter.gemspec /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/arel/engines/sql/compilers/mysql2_compiler.rb /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/activerecord-mysql2-adapter.rb /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/activerecord-mysql2-adapter /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/active_record/connection_adapters/em_mysql2_adapter.rb /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/active_record/connection_adapters/mysql2_adapter.rb
/var/lib/gems/1.8/gems/activerecord-3.2.8/lib/active_record/connection_adapters/mysql2_adapter.rb /var/lib/gems/1.8/cache/mysql2-0.3.11.gem /var/lib/gems/1.8/cache/activerecord-mysql2-adapter-0.0.3.gem /var/lib/gems/1.8/specifications/activerecord-mysql2-adapter-0.0.3.gemspec /var/lib/gems/1.8/specifications/mysql2-0.3.11.gemspec Contents of /usr/share/redmine/Gemfile: source 'http://rubygems.org' gem 'rails', '3.2.8' gem "jquery-rails", "~> 2.0.2" gem "i18n", "~> 0.6.0" gem "coderay", "~> 1.0.6" gem "fastercsv", "~> 1.5.0", :platforms => [:mri_18, :mingw_18, :jruby] gem "builder", "3.0.0" # Optional gem for LDAP authentication group :ldap do gem "net-ldap", "~> 0.3.1" end # Optional gem for OpenID authentication group :openid do gem "ruby-openid", "~> 2.1.4", :require => "openid" gem "rack-openid" end # Optional gem for exporting the gantt to a PNG file, not supported with jruby platforms :mri, :mingw do group :rmagick do # RMagick 2 supports ruby 1.9 # RMagick 1 would be fine for ruby 1.8 but Bundler does not support # different requirements for the same gem on different platforms gem "rmagick", ">= 2.0.0" end end # Database gems platforms :mri, :mingw do group :postgresql do gem "pg", ">= 0.11.0" end group :sqlite do gem "sqlite3" end end platforms :mri_18, :mingw_18 do group :mysql do gem "mysql" end end platforms :mri_19, :mingw_19 do group :mysql do gem "mysql2", "~> 0.3.11" end end platforms :jruby do gem "jruby-openssl" group :mysql do gem "activerecord-jdbcmysql-adapter" end group :postgresql do gem "activerecord-jdbcpostgresql-adapter" end group :sqlite do gem "activerecord-jdbcsqlite3-adapter" end end group :development do gem "rdoc", ">= 2.4.2" gem "yard" end group :test do gem "shoulda", "~> 2.11" # Shoulda does not work nice on Ruby 1.9.3 and seems to need test-unit explicitely. gem "test-unit", :platforms => [:mri_19] gem "mocha", "0.12.3" end local_gemfile = File.join(File.dirname(__FILE__), "Gemfile.local") if File.exists?(local_gemfile) puts "Loading Gemfile.local ..." if $DEBUG # `ruby -d` or `bundle -v` instance_eval File.read(local_gemfile) end # Load plugins' Gemfiles Dir.glob File.expand_path("../plugins/*/Gemfile", __FILE__) do |file| puts "Loading #{file} ..." 
if $DEBUG # `ruby -d` or `bundle -v` instance_eval File.read(file) end Contents of /usr/share/redmine/Gemfile.lock: GEM remote: http://rubygems.org/ specs: actionmailer (3.2.8) actionpack (= 3.2.8) mail (~> 2.4.4) actionpack (3.2.8) activemodel (= 3.2.8) activesupport (= 3.2.8) builder (~> 3.0.0) erubis (~> 2.7.0) journey (~> 1.0.4) rack (~> 1.4.0) rack-cache (~> 1.2) rack-test (~> 0.6.1) sprockets (~> 2.1.3) activemodel (3.2.8) activesupport (= 3.2.8) builder (~> 3.0.0) activerecord (3.2.8) activemodel (= 3.2.8) activesupport (= 3.2.8) arel (~> 3.0.2) tzinfo (~> 0.3.29) activeresource (3.2.8) activemodel (= 3.2.8) activesupport (= 3.2.8) activesupport (3.2.8) i18n (~> 0.6) multi_json (~> 1.0) arel (3.0.2) builder (3.0.0) coderay (1.0.8) erubis (2.7.0) fastercsv (1.5.5) hike (1.2.1) i18n (0.6.1) journey (1.0.4) jquery-rails (2.0.3) railties (>= 3.1.0, < 5.0) thor (~> 0.14) json (1.7.5) mail (2.4.4) i18n (>= 0.4.0) mime-types (~> 1.16) treetop (~> 1.4.8) metaclass (0.0.1) mime-types (1.19) mocha (0.12.3) metaclass (~> 0.0.1) multi_json (1.3.7) mysql (2.8.1) mysql2 (0.3.11) net-ldap (0.3.1) pg (0.14.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack (>= 0.4) rack-openid (1.3.1) rack (>= 1.1.0) ruby-openid (>= 2.1.8) rack-ssl (1.3.2) rack rack-test (0.6.2) rack (>= 1.0) rails (3.2.8) actionmailer (= 3.2.8) actionpack (= 3.2.8) activerecord (= 3.2.8) activeresource (= 3.2.8) activesupport (= 3.2.8) bundler (~> 1.0) railties (= 3.2.8) railties (3.2.8) actionpack (= 3.2.8) activesupport (= 3.2.8) rack-ssl (~> 1.3.2) rake (>= 0.8.7) rdoc (~> 3.4) thor (>= 0.14.6, < 2.0) rake (10.0.0) rdoc (3.12) json (~> 1.4) rmagick (2.13.1) ruby-openid (2.1.8) shoulda (2.11.3) sprockets (2.1.3) hike (~> 1.2) rack (~> 1.0) tilt (~> 1.1, != 1.3.0) sqlite3 (1.3.6) test-unit (2.5.2) thor (0.16.0) tilt (1.3.3) treetop (1.4.12) polyglot polyglot (>= 0.3.1) tzinfo (0.3.35) yard (0.8.3) PLATFORMS ruby DEPENDENCIES activerecord-jdbcmysql-adapter activerecord-jdbcpostgresql-adapter activerecord-jdbcsqlite3-adapter builder (= 3.0.0) coderay (~> 1.0.6) fastercsv (~> 1.5.0) i18n (~> 0.6.0) jquery-rails (~> 2.0.2) jruby-openssl mocha (= 0.12.3) mysql mysql2 (~> 0.3.11) net-ldap (~> 0.3.1) pg (>= 0.11.0) rack-openid rails (= 3.2.8) rdoc (>= 2.4.2) rmagick (>= 2.0.0) ruby-openid (~> 2.1.4) shoulda (~> 2.11) sqlite3 test-unit yard Results of gem list: actionmailer (3.2.9, 3.2.8) actionpack (3.2.9, 3.2.8) activemodel (3.2.9, 3.2.8) activerecord (3.2.9, 3.2.8) activerecord-mysql2-adapter (0.0.3) activeresource (3.2.9, 3.2.8) activesupport (3.2.9, 3.2.8) arel (3.0.2) builder (3.0.0) bundler (1.2.1) coderay (1.0.8) erubis (2.7.0) fastercsv (1.5.5) hike (1.2.1) i18n (0.6.1) journey (1.0.4) jquery-rails (2.0.3) json (1.7.5) mail (2.4.4) metaclass (0.0.1) mime-types (1.19) mocha (0.12.3) multi_json (1.3.7) mysql (2.8.1) mysql2 (0.3.11) net-ldap (0.3.1) pg (0.14.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack-openid (1.3.1) rack-ssl (1.3.2) rack-test (0.6.2) rails (3.2.9, 3.2.8) railties (3.2.9, 3.2.8) rake (10.0.0) rdoc (3.12) rmagick (2.13.1) ruby-openid (2.1.8) shoulda (2.11.3) sprockets (2.2.1, 2.1.3) sqlite3 (1.3.6) thor (0.16.0) tilt (1.3.3) treetop (1.4.12) tzinfo (0.3.35) yard (0.8.3) Results of 'bundle show`: Gems included by the bundle: * actionmailer (3.2.8) * actionpack (3.2.8) * activemodel (3.2.8) * activerecord (3.2.8) * activeresource (3.2.8) * activesupport (3.2.8) * arel (3.0.2) * builder (3.0.0) * bundler (1.2.1) * coderay (1.0.8) * erubis (2.7.0) * fastercsv (1.5.5) * hike (1.2.1) * i18n (0.6.1) * journey 
(1.0.4) * jquery-rails (2.0.3) * json (1.7.5) * mail (2.4.4) * metaclass (0.0.1) * mime-types (1.19) * mocha (0.12.3) * multi_json (1.3.7) * mysql (2.8.1) * net-ldap (0.3.1) * pg (0.14.1) * polyglot (0.3.3) * rack (1.4.1) * rack-cache (1.2) * rack-openid (1.3.1) * rack-ssl (1.3.2) * rack-test (0.6.2) * rails (3.2.8) * railties (3.2.8) * rake (10.0.0) * rdoc (3.12) * rmagick (2.13.1) * ruby-openid (2.1.8) * shoulda (2.11.3) * sprockets (2.1.3) * sqlite3 (1.3.6) * thor (0.16.0) * tilt (1.3.3) * treetop (1.4.12) * tzinfo (0.3.35) * yard (0.8.3)

  • Is it worthwhile to block malicious crawlers via iptables?

    - by EarthMind
    I periodically check my server logs, and I notice a lot of crawlers searching for the location of phpmyadmin, zencart, roundcube, administrator sections and other sensitive data. There are also crawlers under the name "Morfeus Fucking Scanner" or "Morfeus Strikes Again" searching for vulnerabilities in my PHP scripts, and crawlers that perform strange (XSS?) GET requests such as: GET /static/)self.html(selector?jQuery( GET /static/]||!jQuery.support.htmlSerialize&&[1, GET /static/);display=elem.css( GET /static/.*. GET /static/);jQuery.removeData(elem, Until now I've always been storing these IPs manually to block them using iptables. But as these requests are only performed a maximum number of times from the same IP, I'm having doubts whether blocking them provides any security-related advantage. I'd like to know if it does anyone any good to block these crawlers in the firewall, and if so, whether there's a (not too complex) way of doing this automatically. And if it's wasted effort, maybe because these requests come from new IPs after a while, I'd appreciate it if anyone could elaborate on this and maybe provide suggestions for more efficient ways of denying/restricting malicious crawler access. FYI: I'm also already blocking w00tw00t.at.ISC.SANS.DFind:) crawls using these instructions: http://spamcleaner.org/en/misc/w00tw00t.html

  • VSFTPD - FTP over TLS - Upload stops after exactly 82k?

    - by Redsandro
    I installed a VSFTP daemon on a CentOS server, using an RSA certificate for logging in using explicit TLS. Now, I cannot upload more than 82k. With files under that limit, there is no problem; the FTP works like a charm. But as soon as a file reaches 82k with FileZilla (81,952 bytes to be exact), the transfer stops, and the FTP client hangs until the timeout is reached. FTP client console: 15:10:21 Command: STOR jquery-1.7.2.min.js 15:10:21 Response: 150 Ok to send data. 15:11:21 Error: Connection timed out 15:11:21 Error: File transfer failed after transferring 82 KB in 60 seconds /var/log/vsftpd.log FTP command: Client "x.x.x.x", "STOR jquery-1.7.2.min.js" FTP response: Client "x.x.x.x", "150 Ok to send data." OK UPLOAD: Client "x.x.x.x", "jquery-1.7.2.min.js", 81952 bytes, 1.32Kbyte/sec FTP response: Client "x.x.x.x", "226 File receive OK." // NOT okay, file is bigger // No mention of error here I cannot find relevant info about this problem, apart from a possible problem with trans_chunk_size (not mentioned in the default config), but I tried different sizes and it had no impact on the problem: trans_chunk_size=4096 trans_chunk_size=8192 trans_chunk_size=9999 Of course, after every configuration change, I restarted the server: /etc/init.d/vsftpd restart What else can cause this? It's not the latest version, but it's the latest update within the repositories that has been deemed fit for enterprise usage: Package info: $ yum info vsftpd Loaded plugins: fastestmirror Installed Packages Name : vsftpd Arch : x86_64 Version : 2.0.5 Release : 24.el5_8.1 Size : 286 k Repo : installed Summary : vsftpd - Very Secure Ftp Daemon URL : http://vsftpd.beasts.org/ License : GPL Description: vsftpd is a Very Secure FTP daemon. It was written completely from scratch.

  • WPF Templates error - "Provide value on 'System.Windows.Baml2006.TypeConverterMarkupExtension' threw an exception"

    - by jasonk
    I've just started experimenting with WPF templates vs. styles and I'm not sure what I'm doing wrong. The goal below is to alternate the colors of the options in the menu. The code works fine with just the first ControlTemplate ("MenuChoiceEven"), but when I copy and paste/rename it for the second segment of "MenuChoiceOdd" I get the following error: Provide value on 'System.Windows.Baml2006.TypeConverterMarkupExtension' threw an exception. Sample of the code: <Window x:Class="WpfApplication1.Template_Testing" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Template_Testing" Height="300" Width="300"> <Grid> <Grid.Resources> <ControlTemplate x:Key="MenuChoiceEven"> <Border BorderThickness="1" BorderBrush="#FF4A5D80"> <TextBlock Height="Auto" HorizontalAlignment="Stretch" Margin="0" Width="Auto" FontSize="14" Foreground="SlateGray" TextAlignment="Left" AllowDrop="True" Text="{Binding Path=Content, RelativeSource={RelativeSource TemplatedParent}}"> <TextBlock.Background> <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0"> <GradientStop Color="White" Offset="0" /> <GradientStop Color="#FFC2CCDB" Offset="1" /> </LinearGradientBrush> </TextBlock.Background> </TextBlock> </Border> </ControlTemplate> <ControlTemplate x:Key="MenuChoiceOdd"> <Border BorderThickness="1" BorderBrush="#FF4A5D80"> <TextBlock Height="Auto" HorizontalAlignment="Stretch" Margin="0" Width="Auto" FontSize="14" Foreground="SlateGray" TextAlignment="Left" AllowDrop="True" Text="{Binding Path=Content, RelativeSource={RelativeSource TemplatedParent}}"> <TextBlock.Background> <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0"> <GradientStop Color="White" Offset="0" /> <GradientStop Color="##FFCBCBCB" Offset="1" /> </LinearGradientBrush> </TextBlock.Background> </TextBlock> </Border> </ControlTemplate> </Grid.Resources> <Border BorderBrush="SlateGray" BorderThickness="2" Margin="10" CornerRadius="10" Background="LightSteelBlue" Width="200"> <StackPanel Margin="4"> <TextBlock Height="Auto" HorizontalAlignment="Stretch" Margin="2,2,2,0" Name="MenuHeaderTextBlock" Text="TextBlock" Width="Auto" FontSize="16" Foreground="PaleGoldenrod" TextAlignment="Left" Padding="10" FontWeight="Bold"><TextBlock.Background><LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0"><GradientStop Color="LightSlateGray" Offset="0" /><GradientStop Color="DarkSlateGray" Offset="1" /></LinearGradientBrush></TextBlock.Background></TextBlock> <StackPanel Height="Auto" HorizontalAlignment="Stretch" Margin="2,0,2,0" Name="MenuChoicesStackPanel" VerticalAlignment="Top" Width="Auto"> <Button Template="{StaticResource MenuChoiceEven}" Content="Test Even menu element" /> <Button Template="{StaticResource MenuChoiceOdd}" Content="Test odd menu element" /> </StackPanel> </StackPanel> </Border> </Grid> </Window> What am I doing wrong?

  • Silverlight 3 Dynamic DataGrid RowStyle Ignored

    - by antoinne85
    I subclassed the standard DataGrid into SpecialDataGrid so I could override the KeyDown/KeyUp events. Other than that, SpecialDataGrid is exactly the same as DataGrid. At run-time I dynamically create a bunch of these SpecialDataGrids. When a user clicks a row in the grid it highlights, which is fine, but when that grid loses focus, it leaves a residual gray highlight on the last-selected row, which is not fine. I've heavily edited the RowStyle and CellStyle I'm applying to these Grids to more or less remove all formatting. I even added a static SpecialDataGrid to the app with test data so I could see if the RowStyle was somehow incorrect, applying the same RowStyle and CellStyle that I'm applying to the dynamically generated one (you'll see it in the code below). What I saw was that the "test grid" showed up exactly as I wanted, and the real grid is ignoring part of the RowStyle! Has anyone run into this issue or have any ideas of how to correct it? Some source and images follow. Creating the SpecialDataGrid: //Set up a datagrid. SpecialDataGrid radio_datagrid = new SpecialDataGrid(); radio_datagrid.ItemsSource = radios; radio_datagrid.AutoGenerateColumns = false; radio_datagrid.HeadersVisibility = DataGridHeadersVisibility.None; radio_datagrid.BorderThickness = new Thickness(0); radio_datagrid.HorizontalAlignment = HorizontalAlignment.Stretch; radio_datagrid.IsReadOnly = true; radio_datagrid.MouseLeftButtonUp += new MouseButtonEventHandler(option_datagrid_MouseLeftButtonUp); radio_datagrid.KeyDown += new KeyEventHandler(radio_datagrid_KeyDown); radio_datagrid.KeyUp += new KeyEventHandler(radio_datagrid_KeyUp); //Radio column. DataGridTemplateColumn temp_col = new DataGridTemplateColumn(); temp_col.CellTemplate = (DataTemplate)this.Resources["RadioColumnTemplate"]; temp_col.Width = new DataGridLength(20); radio_datagrid.Columns.Add(temp_col); //Description column. DataGridTextColumn txt_col = new DataGridTextColumn(); txt_col.Binding = new Binding("optionlabel"); txt_col.Width = new DataGridLength(350); radio_datagrid.Columns.Add(txt_col); //Product code column. txt_col = new DataGridTextColumn(); txt_col.Binding = new Binding("optioncode"); txt_col.Width = new DataGridLength(80); radio_datagrid.Columns.Add(txt_col); //Price column. txt_col = new DataGridTextColumn(); txt_col.Binding = new Binding("optionprice"); txt_col.Width = new DataGridLength(80); radio_datagrid.Columns.Add(txt_col); //View column. temp_col = new DataGridTemplateColumn(); temp_col.CellTemplate = (DataTemplate)this.Resources["HyperlinkButtonColumnTemplate"]; temp_col.Width = new DataGridLength(30); radio_datagrid.Columns.Add(temp_col); radio_datagrid.RowStyle = (Style)this.Resources["StyleDataGridRowNoAlternating"]; radio_datagrid.CellStyle = (Style)this.Resources["Style_DataGridCell_NoHighlight"]; Example Image: The lower DataGrid appears that way regardless of what you do to it. No highlighting of any sort and certainly no residual highlights. Any idea what's keeping this from being applied to the first?
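    A workaround worth trying, sketched below as an assumption rather than a confirmed fix: if the residual highlight comes from the grid keeping its last SelectedItem while unfocused, clearing the selection whenever one of the dynamically created grids loses focus should remove it.

        // Added where the SpecialDataGrid is configured, next to the
        // other event subscriptions:
        radio_datagrid.LostFocus += new RoutedEventHandler(radio_datagrid_LostFocus);

        private void radio_datagrid_LostFocus(object sender, RoutedEventArgs e)
        {
            // Dropping the selection removes the leftover gray row.
            ((SpecialDataGrid)sender).SelectedItem = null;
        }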

  • Treeview - Hierarchical Data Template - Binding does not update on source change?

    - by ClearsTheScreen
    Greetings! I ran into this problem in my project (Silverlight 3 with C#): I have a TreeView which is data bound to, well, a tree. This TreeView has a HierarchicalDataTamplate in a resource dictionary, that defines various controls. Now I want to hide (Visibility.Collapse) some items depending on wether a node has children or not. Other items shall be visible under the same condition. It works like charm when I first bind the source tree to the TreeView, but when I change the source tree, the visibility in the treeview does not change. XAML - page: <controls:TreeView x:Name="SankeyTreeView" ItemContainerStyle="{StaticResource expandedTreeViewItemStyle}" ItemTemplate="{StaticResource SankeyTreeTemplate}"> <controls:TreeViewItem IsExpanded="True"> <controls:TreeViewItem.HeaderTemplate> <DataTemplate> <TextBlock Text="This is just for loading and will be replaced directly after the data becomes available..."/> </DataTemplate> </controls:TreeViewItem.HeaderTemplate> </controls:TreeViewItem> </controls:TreeView> XAML - ResourceDictionary <!-- Each node in the tree is structurally identical, hence only one Hierarchical Data Template that'll use itself on the children. --> <Data:HierarchicalDataTemplate x:Key="SankeyTreeTemplate" ItemsSource="{Binding Children}"> <Grid Height="24"> <TextBlock x:Name="TextBlockName" Text="{Binding Path=Value.name, Mode=TwoWay}" VerticalAlignment="Center" Foreground="Black"/> <TextBox x:Name="TextBoxFlow" Text="{Binding Path=Value.flow, Mode=TwoWay}" Grid.Column="1" Visibility="{Binding Children, Converter={StaticResource BoxConverter}, ConverterParameter=\{box\}}"/> <TextBlock x:Name="TextBlockThroughput" Text="{Binding Path=Value.throughput, Mode=TwoWay}" Grid.Column="1" Visibility="{Binding Children, Converter={StaticResource BoxConverter}, ConverterParameter=\{block\}}"/> <Button x:Name="ButtonAddNode"/> <Button x:Name="ButtonDeleteNode"/> <Button x:Name="ButtonEditNode"/> </Grid> </Data:HierarchicalDataTemplate> Now, as you can see, the TextBoxFlow and the TextBlockThroughput share the same space. What I aim at: The "Throughput" value of a node is how much of something 'flows' through this node from its children. It can't be changed directly, so I want to display a text block. Only leaf nodes have a TextBox to let someone enter the 'flow' that is generated in this leaf node. (I.E.: Node.Throughput = Node.Flow + Sum(Children.Throughput), where Node.Flow = 0 for each non-leaf.) What the BoxConverter (silly name -.-) does: public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { if ((value as NodeList<TreeItem>).Count > 1) // Node has Children? { if ((parameter as String) == "{box}") { return Visibility.Collapsed; } else ((parameter as String) == "{block}") { return Visibility.Visible; } } else { /* * As above, just with Collapsed and Visible switched */ } } The structure of the tree that is bound to the TreeView is essentially stolen from Dan Vanderboom (a bit too much to dump the whole code here), except that I here of course use an ObservableCollection for the children and the value items implement INotifyPropertyChanged. I would be very grateful if someone could explain to me, why inserting items into the underlying tree does not update the visibility for box and block. Thank you in advance!

  • How do I use Sketchflow sample data for a ListBoxItem Template at design time?

    - by Boris Nikolaevich
    I am using Expression Blend 4 and Visual Studio 2010 to create a Sketchflow prototype. I have a Sample Data collection and a ListBox that is bound to it. This displays as I would expect both at design time and at run time. However, the ListBoxItem template is just complex enough that I wanted to pull it out into its own XAML file. Even though the items still render as expected in the main ListBox where the template is used, when I open the template itself, all of the databound controls are empty. If I add a DataContext to the template, I can see and work with the populated objects while in the template, but then that local DataContext overrides the DataContext set on the listbox. A bit of code will illustrate. Start by creating a Sketchflow project (I am using Silverlight, but it should work the same for WPF), then add a project data source called SampleDataSource. Add a collection called ListData, with a single String property called Title. Here is the (scaled down) code for the main Sketchflow screen, which we'll call Main.xaml: <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:local="clr-namespace:DemoScreens" mc:Ignorable="d" x:Class="DemoScreens.Main" Width="800" Height="600"> <UserControl.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="ProjectDataSources.xaml"/> </ResourceDictionary.MergedDictionaries> <DataTemplate x:Key="ListBoxItemTemplate"> <local:DemoListBoxItemTemplate d:IsPrototypingComposition="True" Margin="0,0,5,0" Width="748"/> </DataTemplate> </ResourceDictionary> </UserControl.Resources> <Grid x:Name="LayoutRoot" Background="#5c87b2" DataContext="{Binding Source={StaticResource SampleDataSource}}"> <ListBox Background="White" x:Name="DemoList" Style="{StaticResource ListBox-Sketch}" Margin="20,100,20,20" ItemTemplate="{StaticResource ListBoxItemTemplate}" ItemsSource="{Binding ListData}" ScrollViewer.HorizontalScrollBarVisibility="Disabled"/> </Grid> </UserControl> You can see that it references the DemoListBoxItemTemplate, which is defined in its own DemoListBoxItemTemplate.xaml: <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:local="clr-namespace:DemoScreens" mc:Ignorable="d" x:Class="DemoScreens.DemoListBoxItemTemplate"> <Grid x:Name="LayoutRoot"> <TextBlock Text="{Binding Title}" Style="{StaticResource BasicTextBlock-Sketch}" Width="150"/> </Grid> </UserControl> Obviously, this is way simpler than my actual listbox, but it should be enough to illustrate my problem. When you open Main.xaml in the Expression designer, the list box is populated with sample data. But when you open DemoListBoxItemTemplate.xaml, there is no data context and therefore no data to display—which makes it more difficult to identify controls visually. How can I have sample data displayed when I am working with the template, while still allowing the larger set of sample data to be used for the ListBox itself?

  • TwoWay Binding With ItemsControl

    - by Andrew
    I'm trying to write a user control that has an ItemsControl, the ItemsTemplate of which contains a TextBox that will allow for TwoWay binding. However, I must be making a mistake somewhere in my code, because the binding only appears to work as if Mode=OneWay. This is a pretty simplified excerpt from my project, but it still contains the problem: <UserControl x:Class="ItemsControlTest.UserControl1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Height="300" Width="300"> <Grid> <StackPanel> <ItemsControl ItemsSource="{Binding Path=.}" x:Name="myItemsControl"> <ItemsControl.ItemTemplate> <DataTemplate> <TextBox Text="{Binding Mode=TwoWay, UpdateSourceTrigger=LostFocus, Path=.}" /> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> <Button Click="Button_Click" Content="Click Here To Change Focus From ItemsControl" /> </StackPanel> </Grid> </UserControl> Here's the code behind for the above control: using System; using System.Windows; using System.Windows.Controls; using System.Collections.ObjectModel; namespace ItemsControlTest { /// <summary> /// Interaction logic for UserControl1.xaml /// </summary> public partial class UserControl1 : UserControl { public ObservableCollection<string> MyCollection { get { return (ObservableCollection<string>)GetValue(MyCollectionProperty); } set { SetValue(MyCollectionProperty, value); } } // Using a DependencyProperty as the backing store for MyCollection. This enables animation, styling, binding, etc... public static readonly DependencyProperty MyCollectionProperty = DependencyProperty.Register("MyCollection", typeof(ObservableCollection<string>), typeof(UserControl1), new UIPropertyMetadata(new ObservableCollection<string>())); public UserControl1() { for (int i = 0; i < 6; i++) MyCollection.Add("String " + i.ToString()); InitializeComponent(); myItemsControl.DataContext = this.MyCollection; } private void Button_Click(object sender, RoutedEventArgs e) { // Insert a string after the third element of MyCollection MyCollection.Insert(3, "Inserted Item"); // Display contents of MyCollection in a MessageBox string str = ""; foreach (string s in MyCollection) str += s + Environment.NewLine; MessageBox.Show(str); } } } And finally, here's the xaml for the main window: <Window x:Class="ItemsControlTest.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:src="clr-namespace:ItemsControlTest" Title="Window1" Height="300" Width="300"> <Grid> <src:UserControl1 /> </Grid> </Window> Well, that's everything. I'm not sure why editing the TextBox.Text properties in the window does not seem to update the source property for the binding in the code behind, namely MyCollection. Clicking on the button pretty much causes the problem to stare me in the face;) Please help me understand where I'm going wrong. Thanx! Andrew
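    The collection element type is the likely culprit here: System.String is immutable, and a TwoWay binding with Path=. has no settable property to write back into, so edits are silently lost. A minimal sketch of the usual fix (the wrapper class is hypothetical): hold mutable items and bind the TextBox to a real property.

        using System.ComponentModel;

        public class EditableString : INotifyPropertyChanged
        {
            private string _value;

            public string Value
            {
                get { return _value; }
                set
                {
                    if (_value == value) return;
                    _value = value;
                    if (PropertyChanged != null)
                        PropertyChanged(this, new PropertyChangedEventArgs("Value"));
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;
        }

        // MyCollection becomes ObservableCollection<EditableString>, and the
        // ItemTemplate binds the property rather than the item itself:
        // <TextBox Text="{Binding Value, Mode=TwoWay, UpdateSourceTrigger=LostFocus}" />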

    Read the article

  • In WPF, Selecting ItemContainerStyle based on data bound content

    - by Bart Roozendaal
In #WPF you have ItemTemplateSelectors. But, can you also select an ItemContainerStyle based on the datatype of a bound object? I am data-binding a ScatterView. I want to set some properties of the generated ScatterViewItems based on the object in their DataContext. A mechanism similar to ItemTemplateSelector for styles would be great. Is that at all possible? I am now binding to properties in the objects that I am displaying to get the effect, but that feels like overhead and too complex (and most importantly, something that our UX designers can't do by themselves). This is the XAML that I am using now. Your help is greatly appreciated. <s:ScatterView x:Name="topicsViewer"> <s:ScatterView.ItemTemplateSelector> <local:TopicViewerDataTemplateSelector> <DataTemplate DataType="{x:Type mvc:S7VideoTopic}"> <Grid> <ContentPresenter Content="{Binding MediaElement}" /> <s:SurfaceButton Visibility="{Binding MailToVisible}" x:Name="mailto" Tag="{Binding Titel}" Click="mailto_Click" HorizontalAlignment="Right" VerticalAlignment="Top" Background="Transparent" Width="62" Height="36"> <Image Source="/Resources/MailTo.png" /> </s:SurfaceButton> <StackPanel Orientation="Horizontal" VerticalAlignment="Bottom" HorizontalAlignment="Center" Height="32"> <s:SurfaceButton Tag="{Binding MediaElement}" x:Name="btnPlay" Click="btnPlay_Click"> <Image Source="/Resources/control_play.png" /> </s:SurfaceButton> <s:SurfaceButton Tag="{Binding MediaElement}" x:Name="btnPause" Click="btnPause_Click"> <Image Source="/Resources/control_pause.png" /> </s:SurfaceButton> <s:SurfaceButton Tag="{Binding MediaElement}" x:Name="btnStop" Click="btnStop_Click"> <Image Source="/Resources/control_stop.png" /> </s:SurfaceButton> </StackPanel> </Grid> </DataTemplate> <DataTemplate DataType="{x:Type mvc:S7ImageTopic}"> <Grid> <ContentPresenter Content="{Binding Resource}" /> <s:SurfaceButton Visibility="{Binding MailToVisible}" x:Name="mailto" Tag="{Binding Titel}" Click="mailto_Click" HorizontalAlignment="Right" VerticalAlignment="Top" Background="Transparent" Width="62" Height="36"> <Image Source="/Resources/MailTo.png" /> </s:SurfaceButton> </Grid> </DataTemplate> <DataTemplate DataType="{x:Type local:Kassa}"> <ContentPresenter Content="{Binding}" Width="300" Height="355" /> </DataTemplate> </local:TopicViewerDataTemplateSelector> </s:ScatterView.ItemTemplateSelector> <s:ScatterView.ItemContainerStyle> <Style TargetType="s:ScatterViewItem"> <Setter Property="MinWidth" Value="200" /> <Setter Property="MinHeight" Value="150" /> <Setter Property="MaxWidth" Value="800" /> <Setter Property="MaxHeight" Value="700" /> <Setter Property="Width" Value="{Binding DefaultWidth}" /> <Setter Property="Height" Value="{Binding DefaultHeight}" /> <Setter Property="s:ScatterViewItem.CanMove" Value="{Binding CanMove}" /> <Setter Property="s:ScatterViewItem.CanScale" Value="{Binding CanScale}" /> <Setter Property="s:ScatterViewItem.CanRotate" Value="{Binding CanRotate}" /> <Setter Property="Background" Value="Transparent" /> </Style> </s:ScatterView.ItemContainerStyle> </s:ScatterView> Bart Roozendaal, Sevensteps
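WPF's ItemsControl does expose an ItemContainerStyleSelector property alongside ItemTemplateSelector, and since ScatterView derives from ItemsControl it should be inherited there too; treat the following as a sketch under that assumption (the selector class and style property names are made up for illustration, while S7VideoTopic and S7ImageTopic come from the post):

using System.Windows;
using System.Windows.Controls;

// A StyleSelector that picks a container style based on the bound item's type.
public class TopicContainerStyleSelector : StyleSelector
{
    public Style VideoStyle { get; set; }   // assigned from XAML
    public Style ImageStyle { get; set; }

    public override Style SelectStyle(object item, DependencyObject container)
    {
        if (item is S7VideoTopic) return VideoStyle;
        if (item is S7ImageTopic) return ImageStyle;
        return base.SelectStyle(item, container);
    }
}

// Usage sketch, replacing the single ItemContainerStyle:
// <s:ScatterView ItemContainerStyleSelector="{StaticResource topicStyles}" ... />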

    Read the article

  • Dependency Properties and Data Context in Silverlight 3

    - by Noam
Hello, I am working with Silverlight 3 beta, and am having an issue. I have a page that has a user control that I wrote on it. The user control has a dependency property on it. If the user control does not define a data context (hence using the parent's data context), all works well. But if the user control has its own data context, the dependency property's OnPropertyChanged method never gets called. Here is a sample: My Main Page: <UserControl x:Class="TestDepProp.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:app="clr-namespace:TestDepProp" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Width="400" Height="100"> <Grid x:Name="LayoutRoot" Background="White"> <Border BorderBrush="Blue" BorderThickness="3" CornerRadius="3"> <StackPanel Orientation="Horizontal"> <StackPanel Orientation="Vertical"> <TextBlock Text="Enter text here:" /> <TextBox x:Name="entryBlock" Text="{Binding Data, Mode=TwoWay}"/> <Button Content="Go!" Click="Button_Click" /> <TextBlock Text="{Binding Data}" /> </StackPanel> <Border BorderBrush="Blue" BorderThickness="3" CornerRadius="3" Margin="5"> <app:TestControl PropOnControl="{Binding Data}" /> </Border> </StackPanel> </Border> </Grid> </UserControl> Main Page code: using System.Windows; using System.Windows.Controls; namespace TestDepProp { public partial class MainPage : UserControl { public MainPage() { InitializeComponent(); MainPageData data = new MainPageData(); this.DataContext = data; } private void Button_Click(object sender, RoutedEventArgs e) { int i = 1; i++; } } } Main Page's data context: using System.ComponentModel; namespace TestDepProp { public class MainPageData:INotifyPropertyChanged { string _data; public string Data { get { return _data; } set { _data = value; if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("Data")); } } public MainPageData() { Data = "Initial Value"; } #region INotifyPropertyChanged Members public event PropertyChangedEventHandler PropertyChanged; #endregion } } Control XAML: <UserControl x:Class="TestDepProp.TestControl" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:app="clr-namespace:TestDepProp" > <Grid x:Name="LayoutRoot" Background="White"> <StackPanel Orientation="Vertical" Margin="10" > <TextBlock Text="This should change:" /> <TextBlock x:Name="ControlValue" Text="Not Set" /> </StackPanel> </Grid> </UserControl> Control code: using System.Windows; using System.Windows.Controls; namespace TestDepProp { public partial class TestControl : UserControl { public TestControl() { InitializeComponent(); // Comment out next line for DP to work DataContext = new MyDataContext(); } #region PropOnControl Dependency Property public string PropOnControl { get { return (string)GetValue(PropOnControlProperty); } set { SetValue(PropOnControlProperty, value); } } public static readonly DependencyProperty PropOnControlProperty = DependencyProperty.Register("PropOnControl", typeof(string), typeof(TestControl), new PropertyMetadata(OnPropOnControlPropertyChanged)); private static void OnPropOnControlPropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { TestControl _TestControl = d as TestControl; if (_TestControl != null) { _TestControl.ControlValue.Text = e.NewValue.ToString(); } } #endregion PropOnControl Dependency Property } } Control's data context: using System.ComponentModel; namespace TestDepProp { public class MyDataContext : INotifyPropertyChanged { #region INotifyPropertyChanged Members public event PropertyChangedEventHandler PropertyChanged; #endregion } } To try it out, type something in the text box, and hit the Go button. Comment out the data context in the control's code to see that it starts to work. Hope someone has an idea as to what is going on.
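The behaviour described is consistent with how the binding in MainPage resolves: PropOnControl="{Binding Data}" is evaluated against TestControl's own DataContext, so once the constructor replaces that context with MyDataContext (which has no Data property), the binding silently finds nothing and the change callback never fires. A sketch of the usual workaround, assuming the intent is to keep an internal context without breaking externally supplied bindings:

public TestControl()
{
    InitializeComponent();
    // Set the context on the root element *inside* the control instead of on
    // the control itself. TestControl keeps inheriting MainPage's context, so
    // PropOnControl="{Binding Data}" still resolves against MainPageData,
    // while everything inside LayoutRoot binds against MyDataContext.
    LayoutRoot.DataContext = new MyDataContext();
}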

    Read the article

  • how to access generated groupname for asp radiobutton

    - by rap-uvic
Hi, I need to access the radio button group name in my jQuery. However, the name that ASP.NET renders for a RadioButton differs from the GroupName I set. Example: <asp:RadioButton runat="server" GroupName="payment" ID="creditcard" Checked="true" value="creditcard" /> will generate: <input type="radio" checked="checked" value="creditcard" name="ctl00$ContentPlaceHolder1$payment" id="ctl00_ContentPlaceHolder1_creditcard"> I can't work with <%= creditcard.GroupName %> in jQuery. Is there a way I can get the generated group name, or the rendered name attribute, for it?
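A sketch of one way around it, reading the rendered name attribute at runtime instead of reconstructing it (ClientID is the standard ASP.NET server-control property; the selector usage around it is illustrative):

// Look up the rendered element by its client id, then take its name attribute,
// which holds the mangled group name ("ctl00$ContentPlaceHolder1$payment").
var groupName = jQuery('#<%= creditcard.ClientID %>').attr('name');
// Select every radio button in the same group:
var paymentGroup = jQuery('input[name="' + groupName + '"]');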

    Read the article

  • sql: Group by x,y,z; return grouped by x,y with lowest f(z)

    - by Sai Emrys
    This is for http://cssfingerprint.com I collect timing stats about how fast the different methods I use perform on different browsers, etc., so that I can optimize the scraping speed. Separately, I have a report about what each method returns for a handful of URLs with known-correct values, so that I can tell which methods are bogus on which browsers. (Each is different, alas.) The related tables look like this: CREATE TABLE `browser_tests` ( `id` int(11) NOT NULL AUTO_INCREMENT, `bogus` tinyint(1) DEFAULT NULL, `result` tinyint(1) DEFAULT NULL, `method` varchar(255) DEFAULT NULL, `url` varchar(255) DEFAULT NULL, `os` varchar(255) DEFAULT NULL, `browser` varchar(255) DEFAULT NULL, `version` varchar(255) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=33784 DEFAULT CHARSET=latin1 CREATE TABLE `method_timings` ( `id` int(11) NOT NULL AUTO_INCREMENT, `method` varchar(255) DEFAULT NULL, `batch_size` int(11) DEFAULT NULL, `timing` int(11) DEFAULT NULL, `os` varchar(255) DEFAULT NULL, `browser` varchar(255) DEFAULT NULL, `version` varchar(255) DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=28849 DEFAULT CHARSET=latin1 (user_agent is broken down pre-insert into browser, version, and os from a small list of recognized values using regex; I keep the original user-agent string just in case.) I have a query like this that tells me the average timing for every non-bogus browser / version / method tuple: select c, avg(bogus) as bog, timing, method, browser, version from browser_tests as b inner join ( select count(*) as c, round(avg(timing)) as timing, method, browser, version from method_timings group by browser, version, method having c > 10 order by browser, version, timing ) as t using (browser, version, method) group by browser, version, method having bog < 1 order by browser, version, timing; Which returns something like: c bog tim method browser version 88 0.8333 184 reuse_insert Chrome 4.0.249.89 18 0.0000 238 mass_insert_width Chrome 4.0.249.89 70 0.0400 246 mass_insert Chrome 4.0.249.89 70 0.0400 327 mass_noinsert Chrome 4.0.249.89 88 0.0556 367 reuse_reinsert Chrome 4.0.249.89 88 0.0556 383 jquery Chrome 4.0.249.89 88 0.0556 863 full_reinsert Chrome 4.0.249.89 187 0.0000 105 jquery Chrome 5.0.307.11 187 0.8806 109 reuse_insert Chrome 5.0.307.11 123 0.0000 110 mass_insert_width Chrome 5.0.307.11 176 0.0000 231 mass_noinsert Chrome 5.0.307.11 176 0.0000 237 mass_insert Chrome 5.0.307.11 187 0.0000 314 reuse_reinsert Chrome 5.0.307.11 187 0.0000 372 full_reinsert Chrome 5.0.307.11 12 0.7500 82 reuse_insert Chrome 5.0.335.0 12 0.2500 102 jquery Chrome 5.0.335.0 [...] I want to modify this query to return only the browser/version/method with the lowest timing - i.e. something like: 88 0.8333 184 reuse_insert Chrome 4.0.249.89 187 0.0000 105 jquery Chrome 5.0.307.11 12 0.7500 82 reuse_insert Chrome 5.0.335.0 [...] How can I do this, while still returning the method that goes with that lowest timing? I could filter it app-side, but I'd rather do this in mysql since it'd work better with my caching.
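One way to get there in MySQL, sketched and untested against the tables above: join the aggregated result back to a per-browser/version minimum. MySQL of this vintage cannot reference a derived table twice, so the inner query is repeated here (a view or temporary table would avoid that); note also that ties on timing would return more than one row per browser/version.

select q.c, q.bog, q.timing, q.method, q.browser, q.version
from (
  /* the full browser/version/method query above */
) as q
inner join (
  -- lowest average timing per browser/version
  select browser, version, min(timing) as timing
  from (
    /* the full browser/version/method query above, repeated */
  ) as q2
  group by browser, version
) as best using (browser, version, timing)
order by q.browser, q.version;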

    Read the article

  • Algorithms or OO stuff or new technology

    - by Prashant
I am trying to learn new stuff about jQuery, HTML, and ASP.NET MVC. I see two schools of thought: those who use OO concepts a lot and stress a more object-oriented approach, and those who rely heavily on algorithms and say a particular problem should take O(n), etc. I am not sure where to spend more time. Should I spend more time learning OO concepts, learning new technologies like jQuery, or learning algorithms like the travelling salesman problem?

    Read the article

  • Iframe Height

    - by Saravanan
I am using jQuery to control the height of an iframe. jQuery("iframe",top.document).contents().height(); It works when the iframe height increases, but when the height decreases it keeps returning the old value. Anybody know why this is happening?
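A plausible explanation, offered as an assumption rather than a verified one: the document's reported height never shrinks below the height of the frame it is rendered in, so while the iframe stays tall the old value keeps coming back. A common workaround (same-origin frames only) is to collapse the frame before measuring:

var $frame = jQuery("iframe", top.document);
$frame.height(1);                       // collapse so the document can shrink
var h = $frame.contents().height();     // now reflects the actual content height
$frame.height(h);                       // resize the frame to fit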

    Read the article

  • How can I inject Javascript (including Prototype.js) in other sites without cluttering the global namespace?

    - by Daniel Magliola
I'm currently on a project that is a big site that uses the Prototype library, and there is already a humongous amount of Javascript code. We're now working on a piece of code that will get "injected" into other people's sites (picture people adding a <script> tag in their sites) which will then run our code and add a bunch of DOM elements and functionality to their site. This will have new pieces of code, and will also reuse a lot of the code that we use on our main site. The problem I have is that it's of course not cool to just add a <script> that will include Prototype in people's pages. If we do that in a page that's already using ANY framework, we're guaranteed to screw everything up. jQuery gives us the option to "rename" the $ object, so it could handle this situation decently, except obviously for the fact that we're not using jQuery, so we'd have to migrate everything. Right now I'm contemplating a number of ugly choices, and I'm not sure what's best... Rewrite everything to use jQuery, with a renamed $ object everywhere. Creating a "new" Prototype library with only the subset we'd be using in "injected" code, and renaming $ to something else. Then again I'd have to adapt the parts of my code that would be shared somehow. Not using a library at all in injected code, to keep it as clean as possible, and rewriting the shared code to use no library at all. This would obviously degenerate into us creating our own Frankenstein of a library, which is probably the worst case scenario ever. I'm wondering what you guys think I could do, and also whether there's some magic option that would solve all my problems... For example, do you think I could use something like Caja / Cajita to sandbox my own code and isolate it from the rest of the site, and have Prototype inside of there? Or am I completely missing the point with that? I also read once about a technique for bookmarklets, where you add your code like this: (function() { /* your code */ })(); And then your code is all inside your anonymous function and you haven't touched the global namespace at all. Do you think I could make one file containing: (function() { /* Full Code of the Prototype file here */ /* All my code that will run in the "other" site */ InitializeStuff_CreateDOMElements_AttachEventHandlers(); })(); Would that work? Would it accomplish the objective of not cluttering the global namespace, and not killing the functionality on a site that uses jQuery, for example? Or is Prototype too complex somehow to isolate it like that? (NOTE: I think I know that that would create closures everywhere and that's slower, but I don't care too much about performance, my code is not doing anything that complex)
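The wrapper is worth sketching, with one caveat stated as an assumption about Prototype's internals rather than verified against a specific version: top-level var and function declarations in the pasted file become locals of the closure, so $, $$ and friends stay private, but Prototype also extends shared objects such as Array.prototype, String.prototype and Element, and those modifications escape any closure and can still break the host page.

(function() {
    /* Paste the Prototype source here: its top-level declarations
       become locals of this closure instead of globals. */

    // One private namespace object for all the shared code (name is hypothetical):
    var MySite = {};
    MySite.init = function() {
        // create DOM elements, attach event handlers, etc.
    };
    MySite.init();
})();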

    Read the article

  • ajax and jsp integration

    - by kawtousse
I want to capture the responseXML that I have built in my JSP. What should I do? After that I will transform it into HTML. I know this is annoying and we could do it with a framework or a library like jQuery, but I am doing it with plain Ajax. I also have problems with jQuery and JSP/servlets, since I must use a JSON service. Why does it seem so complicated to me?
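A minimal plain-Ajax sketch; "data.jsp" and the element names are placeholders. The JSP must send Content-Type text/xml (for example via <%@ page contentType="text/xml" %>), otherwise xhr.responseXML comes back null.

var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
    if (xhr.readyState === 4 && xhr.status === 200) {
        var doc = xhr.responseXML;                    // a parsed XML Document
        var items = doc.getElementsByTagName("item"); // walk it and build HTML
        var html = "";
        for (var i = 0; i < items.length; i++) {
            html += "<li>" + items[i].firstChild.nodeValue + "</li>";
        }
        document.getElementById("result").innerHTML = "<ul>" + html + "</ul>";
    }
};
xhr.open("GET", "data.jsp", true);
xhr.send(null);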

    Read the article

< Previous Page | 649 650 651 652 653 654 655 656 657 658 659 660  | Next Page >