Search Results

Search found 30218 results on 1209 pages for 'edit in place'.


  • problem with uninitialized constant

    - by VinTem
    Hi, I have the following controller: class ActiveUsersController < ApplicationController def edit end end And my routes.rb is like this: map.resources :active_users When I try to access the controller using the URL http://localhost:3000/active_users/COo8e45RqQAHr6CqSCoI/edit I get the following error: NameError in Active usersController#edit: uninitialized constant ActiveUsersController. RAILS_ROOT: /Users/vintem/Documents/Projetos/Pessoal/bugfreela. Trace: /Users/vintem/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:443:in `load_missing_constant', dependencies.rb:80:in `const_missing', dependencies.rb:92:in `const_missing', inflector.rb:361:in `constantize', inflector.rb:360:in `each', inflector.rb:360:in `constantize', core_ext/string/inflections.rb:162:in `constantize', then /Users/vintem/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/routing/route_set.rb:443:in `recognize' and route_set.rb:436:in `call'. Can anyone help me? Thanks
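
    A minimal sketch of the first thing worth checking (a guess from the trace, not a confirmed diagnosis): Rails 2.3 autoloads ActiveUsersController from a file whose name matches the class, so the constant stays uninitialized if the file name or the class name differs even slightly.

        # app/controllers/active_users_controller.rb  -- file name must match the class name exactly
        class ActiveUsersController < ApplicationController
          def edit
            # @active_user = ActiveUser.find(params[:id])  # hypothetical model lookup
          end
        end

        # config/routes.rb
        ActionController::Routing::Routes.draw do |map|
          map.resources :active_users
        end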

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: A recent customer survey reveals the deleterious effects of data fragmentation. by Trevor Naidoo, December 2010   Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results. 1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization. 2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact. 3. 
Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology. 4. Approximately 50 percent of respondents expend manual effort cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be spelled out in inches or entered using the inch symbol ("). These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses where we don't reside, and using channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: Customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues. 5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but also to share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system. 
Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting. Characteristics of Stellar MDM: When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics: (1) enterprise-grade MDM performance; (2) complete technology that can be rapidly deployed and addresses multiple business issues; (3) end-to-end MDM process management with data quality monitoring and assurance; (4) pre-built, business-relevant MDM applications with data stores and workflows. These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers. Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle. 

    Read the article

  • What are people's opinions vis-a-vis my choice of authorization plugins?

    - by brad
    I'm slowly but surely putting together my first rails app (first web-app of any kind in fact - I'm not really a programmer) and it's time to set up a user registration/login system. The nature of my app is such that each user will be completely separated from each other user (except for admin roles). When users log in they will have their own unique index page looking at only their data which they and no-one else can ever see or edit. However, I may later want to add a role for a user to be able to view and edit several other users' data (e.g. a group of users may want to allow their secretary to access and edit their data but their secretary would not need any data of their own). My plan is to use authlogic to create the login system and declarative authorization to control permissions but before I embark on this fairly major and crucial task I thought I would canvass a few opinions as to whether this combo was appropriate for the tasks I envisage or whether there would be a better/simpler/faster/cheaper/awesomer option.
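
    For what it's worth, a rough sketch of how the per-user restriction might look in declarative_authorization's config/authorization_rules.rb (the model name, role names and actions are assumptions, not taken from the question):

        authorization do
          role :admin do
            has_permission_on :records, :to => [:index, :show, :new, :create, :edit, :update, :destroy]
          end

          role :user do
            # a user may only see and edit rows that carry his or her own user_id
            has_permission_on :records, :to => [:index, :show, :edit, :update] do
              if_attribute :user_id => is { user.id }
            end
          end
        end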

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject. Morning Coffee When I was a DBA, the first thing I did when I sat down at my desk at work was to check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no recourse but to write our own PowerShell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here. "But, we have a cluster...we don't need backups" Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent device. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise. Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called Recovery Time Objective, or RTO. 
Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes. A backup is nothing more than an untested restore Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs. Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN, and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files on the network (slow) or pull drives out of a dead server (been there, done that, it’s also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
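
    As a concrete illustration of the advice above, a rough T-SQL sketch (database name, paths, logical file names and schedule are placeholders, and the trace flags only apply on the builds the post mentions):

        -- frequent transaction log backups keep the potential data loss (RPO) small
        BACKUP LOG [SalesDB]
            TO DISK = N'R:\Backups\SalesDB_log.trn'
            WITH COMPRESSION, CHECKSUM;

        -- lighter-weight check on the production box when full checks are offloaded elsewhere
        DBCC TRACEON (2562, 2549, -1);            -- speeds up PHYSICAL_ONLY on supported builds
        DBCC CHECKDB (N'SalesDB') WITH PHYSICAL_ONLY;

        -- the full consistency check runs on the server where the backup was restored
        RESTORE DATABASE [SalesDB_Check] FROM DISK = N'R:\Backups\SalesDB_full.bak'
            WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB_Check.mdf',
                 MOVE N'SalesDB_log' TO N'L:\Log\SalesDB_Check.ldf',
                 REPLACE;
        DBCC CHECKDB (N'SalesDB_Check');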

    Read the article

  • Localization with separate Language folders within Views

    - by Adrian
    I'm trying to have specific folders for each language in Views. (I know this isn't the best way of doing it but it has to be this way for now) e.g. /Views/EN/User/Edit.aspx /Views/US/User/Edit.aspx These would both use the same controller and model but have different Views for each language. In my Global.asax.cs I have routes.MapRoute( "Default", // Route name "{language}/{controller}/{action}/{id}", // URL with parameters new { language = "en", controller = "Logon", action = "Index", id = UrlParameter.Optional }, // Parameter defaults new { language = @"en|us" } // validation ); This works OK but always points to the same View. If I put the path to the Language folder it works: return View("~/Views/EN/User/Edit.aspx"); but clearly this isn't a very nice way to do it. Is there any way to get MVC to look in the correct language folder? Thanks, and again I know this isn't the best way of doing Localization but I can't use resource files.
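
    One possible direction (a sketch only, with a made-up class name, and assuming the {language} route value from the question): a small custom view engine that looks in the language folder first and falls back to the normal locations.

        // using System.Web.Mvc;
        // Global.asax.cs, Application_Start: ViewEngines.Engines.Insert(0, new LanguageFolderViewEngine());
        public class LanguageFolderViewEngine : WebFormViewEngine
        {
            public override ViewEngineResult FindView(ControllerContext controllerContext,
                string viewName, string masterName, bool useCache)
            {
                string language = (controllerContext.RouteData.Values["language"] as string ?? "en").ToUpperInvariant();
                string controller = (string)controllerContext.RouteData.Values["controller"];
                string path = string.Format("~/Views/{0}/{1}/{2}.aspx", language, controller, viewName);

                // use the language-specific view when it exists, otherwise fall back to the default lookup
                if (FileExists(controllerContext, path))
                    return new ViewEngineResult(CreateView(controllerContext, path, null), this);

                return base.FindView(controllerContext, viewName, masterName, useCache);
            }
        }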

    Read the article

  • How to fire an event in the code-behind when a div's visibility changes?

    - by Vibin Jith
    As part of my web project, I have designed a div tag like a window form, as shown in the figure. I just want to fill details into the textbox when the user clicks the edit label. The div is invisible at first; when the user clicks on the edit label, the form div gets faded in (visible). At this point an event should fire in the code-behind. But I am not getting any events in the code-behind like "visibility changed" or anything similar. Where can I get this event? Simply put, I want to display the appropriate company name in the textbox in the div when the user clicks the edit label in each row.
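
    Server controls only raise events during a postback, so one workaround (a sketch with invented control IDs, not code from the question) is to trigger a postback from the jQuery fadeIn callback, for example by clicking a hidden server-side button whose Click handler fills the textbox:

        // in the .aspx page; assumes <asp:Button ID="btnLoadDetails" runat="server" style="display:none"
        //                              OnClick="btnLoadDetails_Click" /> and a form div with id="formDiv"
        $('.edit-label').click(function () {
            $('#formDiv').fadeIn(400, function () {
                // fire the server-side Click event once the div is visible
                $('#<%= btnLoadDetails.ClientID %>').click();
            });
        });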

    Read the article

  • PHP multiuser login class or script

    - by FFish
    I am looking for a simple but secure login script with MySQL and PHP: sessions, MD5, that I can use with my existing database. Cookies to store the password + password recovery by email. Change login/pass. I do not need registration; I register the user myself with a temp login/pass. table agents agent1 agent2 table albums album1, owner: agent1 album2, owner: agent1 album3, owner: agent2 ... login.php agent1 logs in and has access to his albums: - album1 - album2 agent1 can edit his albums: edit.php?ref=album1 but NOT edit.php?ref=album3 by changing the ?ref variable
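
    Whatever login class is chosen, the album check in edit.php boils down to comparing the album's owner with the logged-in agent. A rough sketch (the PDO connection, session key and column names are guesses, not taken from the question):

        <?php
        session_start();
        if (empty($_SESSION['agent'])) {          // not logged in yet
            header('Location: login.php');
            exit;
        }

        $ref  = isset($_GET['ref']) ? $_GET['ref'] : '';
        $stmt = $pdo->prepare('SELECT owner FROM albums WHERE album = ?');  // $pdo: existing PDO connection
        $stmt->execute(array($ref));
        $owner = $stmt->fetchColumn();

        if ($owner === false || $owner !== $_SESSION['agent']) {
            header('HTTP/1.1 403 Forbidden');     // agent1 cannot open edit.php?ref=album3
            exit('You do not own this album.');
        }
        // ... the agent owns the album, show the edit form ...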

    Read the article

  • Simulated click on "add more value" button of multi value cck field causes whole content form to submit

    - by ninja
    Hi, I have a multi value cck field in my cck content type. I want to simulate a click on "add another item" using jQuery, which is like $('#edit-field-supp-quan-field-supp-quan-add-more').trigger('click'); but it causes the whole content form to submit instead of adding an extra multi value cck field. Manual clicks are working perfectly. Can anyone tell me why the behavior of manual clicks and simulated clicks is different? Thanks ----Update ---- This is the code I was using:- $('#edit-field-freightamount-0-value').click(function(){ alert('hello'); $('#edit-field-supp-quan-field-supp-quan-add-more').trigger('click'); //$('.form-submit ahah-processed').trigger('click'); }); I actually intended to call this from inside some other function but I just wanted to test it before that. So I wrote this dummy function, which is: if I click inside a textfield it should simulate a click on "add more item". How do we prevent the default action of the click?
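
    To answer the last question only (a sketch, not an explanation of the AHAH behaviour): jQuery passes the handler an event object, and calling preventDefault() on it (or returning false from the handler) stops the browser's default action for that click:

        $('#edit-field-freightamount-0-value').click(function (event) {
            event.preventDefault();   // stop the default click action (return false; would also work)
            alert('hello');
            $('#edit-field-supp-quan-field-supp-quan-add-more').trigger('click');
        });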

    Read the article

  • Separate functionality depending on Role in ASP.NET MVC

    - by Andrew Bullock
    I'm looking for an elegant pattern to solve this problem: I have several user roles in my system, and for many of my controller actions, I need to deal with slightly different data. For example, take /Users/Edit/1. This allows a Moderator to edit a user's email address, but Administrators to edit a user's email address and password. I'd like a design for separating the two different bits of action code for the GET and the POST. Solutions I've come up with so far are: (1) a switch inside each method, however this doesn't really help when I want different model arguments on the POST :( (2) a custom controller factory which chooses a UsersController_ForModerators or UsersController_ForAdmins instead of just UsersController, based on the controller name and current user role; (3) a custom action invoker which chooses the Edit_ForModerators method in a similar way to the above; (4) an IUsersController with a different implementation registered in my IoC container as a named instance based on Role; (5) an implementation of the controller built at runtime using Castle DynamicProxy, with the methods mapped to those from role-based implementations. I'm preferring the named IoC instance route at the moment, as it means all my URLs/routing will work seamlessly. Ideas? Suggestions?
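
    A variant of the "custom action invoker" idea that stays inside the stock framework (a sketch; the attribute name, role names and action signatures are mine, not the poster's): an ActionMethodSelectorAttribute can route the shared /Users/Edit/1 URL to a role-specific method, which also allows different model arguments on the POST.

        // using System.Reflection; using System.Web.Mvc;
        public class OnlyForRoleAttribute : ActionMethodSelectorAttribute
        {
            private readonly string _role;
            public OnlyForRoleAttribute(string role) { _role = role; }

            public override bool IsValidForRequest(ControllerContext controllerContext, MethodInfo methodInfo)
            {
                // the decorated method is only a candidate when the current user holds the given role
                return controllerContext.HttpContext.User.IsInRole(_role);
            }
        }

        public class UsersController : Controller
        {
            [OnlyForRole("Moderator"), ActionName("Edit")]
            public ActionResult Edit_ForModerators(int id, string email) { /* ... */ return View(); }

            [OnlyForRole("Admin"), ActionName("Edit")]
            public ActionResult Edit_ForAdmins(int id, string email, string password) { /* ... */ return View(); }
        }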

    Read the article

  • Pure HTML + JavaScript client side templating

    - by Dev er dev
    I want to achieve something similar to the Java Tiles framework using only client-side technologies (no server-side includes). I would like to have one page, e.g. layout.html, which will contain the layout definition. The content placeholder in that page would be an empty #content div tag. I would like to have different content injected into that page based on the URL. Something like layout.html?content=main or layout.html?content=edit would display the page with the content replaced by main.html or edit.html. The goal is to avoid duplicating code, even for layout, and to compose pages without server-side templating. What approach would you suggest? EDIT: I don't need a full templating library, just a way to compose pages, similar to what Tiles does.
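
    A bare-bones sketch of the query-string approach (jQuery and the file names from the question are the only assumptions):

        // loaded by layout.html after the #content placeholder exists
        $(function () {
            var match = /[?&]content=([\w-]+)/.exec(window.location.search);
            var page  = match ? match[1] : 'main';     // layout.html?content=edit -> "edit", default "main"
            $('#content').load(page + '.html');        // inject main.html / edit.html into the placeholder
        });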

    Read the article

  • Update User Info with restful_authentication plugin in Rails?

    - by benoror
    Hi people, I want to give the users the ability to change their account info with the restful_authentication plugin in Rails. I added these two methods to my users controller: def edit @user = User.find(params[:id]) end def update @user = User.find(params[:id]) # Only update password when necessary params[:user].delete(:password) if params[:user][:password].blank? respond_to do |format| if @user.update_attributes(params[:user]) flash[:notice] = 'User was successfully updated.' format.html { redirect_to(@user) } format.xml { head :ok } else format.html { render :action => "edit" } format.xml { render :xml => @user.errors, :status => :unprocessable_entity } end end end Also, I copied new.html.erb to edit.html.erb. Considering that resources are already defined in routes.rb I was expecting it to work easily, but somehow when I click the save button it calls the create method instead of update, using a POST HTTP request. Immediately after that it automatically logs out of the session. Any ideas?
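
    A sketch of what I'd expect edit.html.erb to contain for the update action to be hit (Rails 2.3-style form_for; the field names are guesses). For a record loaded from the database, form_for posts to user_path(@user) with a hidden _method=put, which routes the request to update; a form copied unchanged from new.html.erb that still posts to users_path will keep calling create.

        <% form_for @user do |f| %>
          <%= f.label :email %>    <%= f.text_field :email %>
          <%= f.label :password %> <%= f.password_field :password %>
          <%= f.submit 'Save' %>
        <% end %>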

    Read the article

  • Bulk stop watching tickets on Lighthouse?

    - by T.J. Crowder
    Via the Lighthouse user interface, is there any way to bulk stop watching tickets? I have more than 150 tickets on a project I want to stop watching, and would just as soon not go into each and every one of them. I thought the bulk edit command might work, but there doesn't appear to be a watch keyword in the bulk edit stuff (which is fair enough, I'm not really editing the tickets). When I go to my profile, I can subscribe or unsubscribe to an entire project, but I'm not seeing a way to do this at the (bulk) ticket level. Looking at a list of the tickets I'm watching, I'm not seeing a way to do anything to all of them (other than the bulk edit command of course). Is there something I'm missing?

    Read the article

  • Scroll bar not maintaining its position in ListView (ASP.net)

    - by AJ
    Hi, I have a ListView inside a DIV which shows the scroll bars. At one time, let's say, 10 rows can be seen. I scroll down and click Edit on the 25th row. To my surprise, the scroll goes back to the first row (although if I go back to the 25th row, the row is in edit mode). My issue: how can I make sure that the scroll bar maintains its position at the 25th row after clicking the Edit button? Please advise. Thanks AJ .StopScroll1{ Z-INDEX: 20; POSITION: relative;left:-1px; TOP: expression(document.getElementById("divGrid1").scrollTop); }        ..... .....
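
    One thing worth trying (a sketch; the hidden field is an assumption, only the "divGrid1" id comes from the question): persist the div's scrollTop in a hidden field across the postback and restore it on load.

        <!-- <asp:HiddenField ID="hfScroll" runat="server" /> somewhere inside the form -->
        <script type="text/javascript">
            var grid = document.getElementById('divGrid1');
            var hf   = document.getElementById('<%= hfScroll.ClientID %>');

            grid.onscroll = function () { hf.value = grid.scrollTop; };                    // remember position
            window.onload = function () { grid.scrollTop = parseInt(hf.value, 10) || 0; }; // restore after postback
        </script>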

    Read the article

  • Modify action names for route collections in Symfony?

    - by James Skidmore
    When creating an sfPropelRouteCollection, how can I edit the action names that the collection will generate? For example: # Routing for "product" CRUD product: class: sfPropelRouteCollection options: model: Product module: product actions: [new, create, edit, update, delete] How can I change the actual action that is called for any of the new/create/edit/update/delete methods? I'd like for them to call "ajaxNew," "ajaxCreate," etc. so the URL would look something like "product/ajaxNew", or the action for "update" would be "ajaxUpdate". Let me know if I need to clarify further. Thanks.

    Read the article

  • Left with extra UITableViewCell after re-ordering

    - by Mark F
    After going into Edit mode, moving a cell, and leaving edit mode, I am left with one extra cell sitting on top of its duplicate cell while still in edit mode. The problem has to be somewhere in here: - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)sourceIndexPath toIndexPath:(NSIndexPath *)destinationIndexPath { NSMutableArray *array = [[fetchedResultsController fetchedObjects] mutableCopy]; id objectToMove = [[array objectAtIndex:sourceIndexPath.row] retain]; [array removeObjectAtIndex:sourceIndexPath.row]; [array insertObject:objectToMove atIndex:destinationIndexPath.row]; [objectToMove release]; for (int i=0; i<[array count]; i++) { [(NSManagedObject *)[array objectAtIndex:i] setValue:[NSNumber numberWithInt:i] forKey:@"userOrder"]; } [array release]; } Any guidance greatly appreciated!

    Read the article

  • snipMate only working on empty buffer?

    - by JesseBuesking
    I'm attempting to use snipMate with sql files, however it doesn't seem to work when editing an existing file. If I create a new empty buffer (no file; e.g. launch gvim from the start menu), and set the filetype to sql (:set ft=sql), it works. However, if I then try to open a sql file (e.g. :e c:\blah.sql) and edit it, snipMate no longer works. What gives!? Setup: gvim vim 7.3 Windows 7 snipMate 0.84 Also, I do in fact have filetype plugin on in my .vimrc file. edit Apparently if I open an empty buffer, set the filetype to sql, then save to file using w c:\blah.sql, I now have a sql file open AND snipMate continues to work. edit Here's a gist of my current .vimrc in case it helps: https://gist.github.com/3946877

    Read the article

  • New hire expectations... (Am I being unreasonable?)

    - by user295841
    I work for a very small custom software shop. We currently consist of me and my boss. My boss is an old FoxPro DOS developer and OOP makes him uncomfortable. He is planning on taking a back seat in the next few years to hopefully enjoy a “partial retirement”. I will be taking over the day to day operations and we are now desperately looking for more help. We tried Monster.com, Dice.com, and others a few years ago when we started our search. We had no success. We have tried outsourcing overseas (total disaster), hiring kids right out of college (mostly a disaster but that’s where I came from), interns (good for them, not so good for us) and hiring laid off “experienced” developers (there was a reason they were laid off). I have heard hiring practices discussed on podcasts, blogs, etc... and have tried a few. The “Fizz Buzz” test was a good one. One kid looked physically ill before he finally gave up. I think my problem is that I have grown so much as a developer since I started here that I now have a high standard. I hear/read very intelligent people's podcasts and blogs and I know that there are lots of people out there that can do the job. I don’t want to settle for less than a “good” developer. Perhaps my expectations are unreasonable. I expect any good developer (entry level or experienced) to be billable (at least paying their own wage) in under one month. I expect any good developer to be able to be productive (at least dangerous) in any language or technology with only a few days of research/training. I expect any good developer to be able to take a project from initial customer request to completion with little or no help from others. Am I being unreasonable? What constitutes a valuable developer? What should be expected of an entry level developer? What should be expected of an experienced developer? I realize that everyone is different but there has to be some sort of expectations standard, right? I have been giving the test project below to potential candidates to weed them out. Good idea? Too much? Too little? Please let me know what you think. Thanks. Project ID: T00001 Description: Order Entry System Deadline: 1 Week Scope The scope of this project is to develop a fully functional order entry system. Screen/Form design must be user friendly and promote efficient data entry and modification. User experience (Navigation, Screen/Form layouts, Look and Feel…) is at the developer’s discretion. System may be developed using any technologies that conform to the technical and system requirements. Deliverables Complete source code Database setup instructions (Scripts or restorable backup) Application installation instructions (Installer or installation procedure) Any necessary documentation Technical Requirements Server Platform – Windows XP / Windows Server 2003 / SBS Client Platform – Windows XP Web Browser (If applicable) – IE 8 Database – At developer’s discretion (Must be a relational SQL database.) Language – At developer’s discretion All data must be normalized. (+) All data must maintain referential integrity. (++) All data must be indexed for optimal performance. System must handle concurrency. System Requirements Customer Maintenance Customer records must have unique ID. Customer data will include Name, Address, Phone, etc. User must be able to perform all CRUD (Create, Read, Update, and Delete) operations on the Customer table. User must be able to enter a specific Customer ID to edit. 
User must be able to pull up a sortable/queryable search grid/utility to find a customer to edit. Validation must be performed prior to database commit. Customer record cannot be deleted if the customer has an order in the system. (++) Inventory Maintenance Part records must have unique ID. Part data will include Description, Price, UOM (Unit of Measure), etc. User must be able to perform all CRUD operations on the part table. User must be able to enter a specific Part ID to edit. User must be able to pull up a sortable/queryable search grid/utility to find a part to edit. Validation must be performed prior to database commit. Part record cannot be deleted if the part has been used in an order. (++) Order Entry Order records must have a unique auto-incrementing key (Order Number). Order data must be split into a header/detail structure. (+) Order can contain an infinite number of detail records. Order header data will include Order Number, Customer ID (++), Order Date, Order Status (Open/Closed), etc. Order detail data will include Part Number (++), Quantity, Price, etc. User must be able to perform all CRUD operations on the order tables. User must be able to enter a specific Order Number to edit. User must be able to pull up a sortable/queryable search grid/utility to find an order to edit. User must be able to print an order form from within the order entry form. Validation must be performed prior to database commit. Reports Customer Listing – All Customers in the system. Inventory Listing – All parts in the system. Open Order Listing – All open orders in system. Customer Order Listing – All orders for specific customer. All reports must include sorts and filter functions where applicable. Ex. Customer Listing by range of Customer IDs. Open Order Listing by date range.

    Read the article

  • What's wrong with this regular expression (C#)?

    - by Greezer
    I ran into a problem with my regular expressions. I'm using regular expressions to obtain data from the string below: "# DO NOT EDIT THIS MAIL BY HAND #\r\n\r\n[Feedback]:hallo\r\n\r\n# DO NOT EDIT THIS MAIL BY HAND #\r\n\r\n" So far I've got it working with: String sFeedback = Regex.Match(Message, @"\[Feedback\]\:(?<string>.*?)\r\n\r\t\n# DO NOT EDIT THIS MAIL BY HAND #").Groups[1].Value; This works except if the header is changed, therefore I want the regex to read from [feedback]: to the end of the string (symbols, ASCII, everything..). I tried: \[Feedback]:(?<string>.*?)$ The above regular expression does work in some online regular expression builders, but in my C# code it's not working and returns an empty string. Can someone help me with this regular expression? Thanks in advance
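
    A sketch of the variant I'd try (my own test-string handling, not the poster's code): in .NET, '.' does not match newlines by default, so a capture that has to run across the \r\n pairs to the end of the string needs RegexOptions.Singleline; without it the pattern never matches and the group comes back empty.

        // using System.Text.RegularExpressions;
        string message = "# DO NOT EDIT THIS MAIL BY HAND #\r\n\r\n[Feedback]:hallo\r\n\r\n# DO NOT EDIT THIS MAIL BY HAND #\r\n\r\n";

        // Singleline lets '.' cross line breaks, so the capture runs from "[Feedback]:" to the end of the string
        string sFeedback = Regex.Match(message,
                                       @"\[Feedback\]:(?<string>.*)",
                                       RegexOptions.Singleline)
                                .Groups["string"].Value;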

    Read the article

  • How do you pop a modal view and the previous navigation controller view at once?

    - by mr_kurrupt
    I haven't found anything similar to this on google or stack overflow... What I'm trying to do is pop a modal view and the previous view at the same time. For example, look at the calendars app. When you are on the 'Edit' screen and select 'Delete Event', you are taken back to the calendar view. The 'Edit' screen, which was presented modally, is popped as well as the 'Event' screen (where the user is just viewing the calendar event). The problem I am having is that I know how to pop a modal view...but from the same UIViewController subclass ('Edit' screen in this example), how do I pop a view that isn't modal? I was thinking about popping the modal view as you would normally, then posting an NSNotification to the 'Event' (for instance) screen's UIViewController subclass and telling it to pop that view as well. The other thing is that for the animation, it should be the dismissModalViewControllerAnimated animation (slide down) and not the popViewControllerAnimated animation (slide left). Thanks.

    Read the article

  • GridView take a Row

    - by GIbboK
    Hi, I use ASP.NET 4 and C#. I have a GridView, and I would like to take the Row that is in Edit Mode in my code and find a control in it. Here is my code, but it does not work; it only ever picks up the first row of the GridView. Any ideas? protected void uxManageSlotsDisplayer_RowDataBound(object sender, GridViewRowEventArgs e) { switch (e.Row.RowType) { case DataControlRowType.DataRow: // Take Row in Edit Mode DOES NOT WORK PROPERLY if (e.RowState == DataControlRowState.Edit) { Label myTest = (Label)e.Row.FindControl("uxTest"); } break; } }
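
    A sketch of the fix I'd try first (the handler name and control ids are taken from the question, the rest is my guess): DataControlRowState is a bit-flags enum, so an alternating row in edit mode has a state of Alternate | Edit and a plain equality test never matches; read the state from e.Row and test the flag bitwise.

        protected void uxManageSlotsDisplayer_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow &&
                (e.Row.RowState & DataControlRowState.Edit) == DataControlRowState.Edit)
            {
                // "uxTest" must live in the EditItemTemplate for FindControl to see it here
                Label myTest = (Label)e.Row.FindControl("uxTest");
                // ... use myTest ...
            }
        }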

    Read the article

  • In ASP.NET MVC, is it possible to register routes somewhere other than Application_Start()?

    - by joe q.
    Hi, is it possible to create and register routes after Application_Start() is called? Let's say I have a controller, PersonController. With default routing, URLs could look something like www.site.com/Person/Edit/4, with 'Person' matching the controller. Now imagine I have several users, and some may prefer we use the term 'Friends'. I would like to use the same controller, and have /Friends/Edit/4 map to the same controller/action/id. Maybe someone else prefers /Comrades/Edit/4. With the naming preferences stored in a database, is there a way that I can dynamically create these routes at some point mid-application, after the user has logged in? Thanks!
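
    A minimal sketch of one approach (the method and route names are mine): RouteTable.Routes is a plain static collection, so a route can be added after startup, for example right after login once the user's preferred term is known, as long as the shared collection is locked while it is modified. A real implementation would also avoid inserting the same alias twice.

        // using System.Web.Mvc; using System.Web.Routing;
        public static void RegisterAliasRoute(string alias)    // alias = "Friends", "Comrades", ...
        {
            RouteCollection routes = RouteTable.Routes;
            using (routes.GetWriteLock())                      // the collection is shared across requests
            {
                var route = new Route(
                    alias + "/{action}/{id}",                  // /Friends/Edit/4 -> PersonController.Edit(4)
                    new RouteValueDictionary(new { controller = "Person", action = "Index", id = UrlParameter.Optional }),
                    new MvcRouteHandler());

                routes.Insert(0, route);                       // ahead of the generic Default route
            }
        }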

    Read the article

  • How to update user info with restful_authentication plugin in Rails?

    - by benoror
    Hi people, I want to give the users the ability to change their account info with the restful_authentication plugin in Rails. I added these two methods to my controller: def edit @user = User.find(params[:id]) end def update @user = User.find(params[:id]) # Only update password when necessary params[:user].delete(:password) if params[:user][:password].blank? respond_to do |format| if @user.update_attributes(params[:user]) flash[:notice] = 'User was successfully updated.' format.html { redirect_to(@user) } format.xml { head :ok } else format.html { render :action => "edit" } format.xml { render :xml => @user.errors, :status => :unprocessable_entity } end end end Also, I copied new.html.erb to edit.html.erb. Considering that resources are already defined in routes.rb I was expecting it to work easily, but somehow when I click the save button it calls the create method instead of update, using a POST HTTP request. Any ideas?

    Read the article

  • Are there any desktop WYSIWYG editors for MediaWiki / wiki available?

    - by Eye of Hell
    Hello. MediaWiki is very good, but for programming tasks editing it via the web is not very handy: the WYSIWYG support is very limited, and pressing 'edit' + 'publish' on every small change and waiting for the page to load is kind of annoying. I have seen a lot of desktop wikis (personal wikis) that are free from such problems. The best example is 'wikidpad', which has a usage pattern of 'focus, edit wiki in-place, minimize'. This is very handy for programming work where you need to make small changes to the wiki and documentation during development, and documentation is written much more often than it is read :). But all such desktop wikis are personal - they don't have any wiki sharing (or only marginal support for it). So, maybe a desktop application exists that can connect to MediaWiki and allows you to view and edit it via a rich WYSIWYG editor? Any hints are welcome.

    Read the article

  • Passing a JavaScript variable to a helper method

    - by Brendan Vogt
    I am using ASP.NET MVC 3 and the YUI library. I created my own helper method to redirect to an edit view by passing in the item's ID from the Model, as such: window.location = '@Url.RouteUrl(Url.NewsEdit(@Model.NewsId))'; Now I am busy populating my YUI data table and would like to call my helper method as above, but I'm not sure if it is possible because I get the item's ID in JavaScript like this: var formatActionLinks = function (oCell, oRecord, oColumn, oData) { var newsId = oRecord.getData('NewsId'); oCell.innerHTML = '<a href="/News/Edit/' + newsId + '">Edit</a>'; };
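
    One way around it (a sketch; the placeholder value and the regex are mine): Razor runs on the server before any JavaScript does, so the helper can't receive a per-row value, but it can render a URL template once with a dummy ID that the YUI formatter then substitutes at runtime.

        // rendered once by Razor on the server, e.g. "/News/Edit/0"
        var editUrlTemplate = '@Url.RouteUrl(Url.NewsEdit(0))';

        var formatActionLinks = function (oCell, oRecord, oColumn, oData) {
            var newsId = oRecord.getData('NewsId');
            var url = editUrlTemplate.replace(/0$/, newsId);            // swap the dummy id for the row's id
            oCell.innerHTML = '<a href="' + url + '">Edit</a>';
        };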

    Read the article
